WO2021102676A1 - Object state acquisition method, movable platform, and storage medium - Google Patents

Object state acquisition method, movable platform, and storage medium

Info

Publication number
WO2021102676A1
Authority
WO
WIPO (PCT)
Prior art keywords
value
probability
point
point cloud
probability distribution
Prior art date
Application number
PCT/CN2019/120911
Other languages
English (en)
French (fr)
Inventor
吴显亮
陈进
李星河
Original Assignee
深圳市大疆创新科技有限公司 (SZ DJI Technology Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市大疆创新科技有限公司 (SZ DJI Technology Co., Ltd.)
Priority to CN201980041121.1A (CN112313536B)
Priority to PCT/CN2019/120911
Publication of WO2021102676A1

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S13/00 Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S13/86 Combinations of radar systems with non-radar systems, e.g. sonar, direction finder
    • G01S13/865 Combination of radar systems with lidar systems
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S13/00 Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S13/86 Combinations of radar systems with non-radar systems, e.g. sonar, direction finder
    • G01S13/862 Combination of radar systems with sonar systems
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S15/00 Systems using the reflection or reradiation of acoustic waves, e.g. sonar systems
    • G01S15/86 Combinations of sonar systems with lidar systems; Combinations of sonar systems with systems not using wave reflection
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/66 Tracking systems using electromagnetic waves other than radio waves
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F30/00 Computer-aided design [CAD]
    • G06F30/10 Geometric CAD
    • G06F30/15 Vehicle, aircraft or watercraft design
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10028 Range image; Depth image; 3D point clouds

Definitions

  • The embodiments of the present application relate to target tracking technology, and in particular to a method for acquiring an object state, a movable platform, and a storage medium.
  • Target tracking is an important research direction for movable platforms such as unmanned aerial vehicles or unmanned vehicles operating in dynamic environments.
  • The question is how a movable platform such as a drone or an unmanned vehicle can effectively fuse observation information in a dynamic environment and use it to update the state information of multiple objects online and in real time, including the position, orientation, speed, and other states of each dynamic object, as well as the association information of each target across different time steps (that is, whether observations at different times belong to the same target), so as to achieve multi-target tracking.
  • The movable platform can predict the state of the object, preprocess the data collected by the sensors, and update the state of the object with the preprocessed data to obtain an optimal state estimate.
  • However, this process relies too heavily on the preprocessed data: original data collected by the sensors is often lost during preprocessing, so the finally determined object state is inaccurate.
  • In view of this, the embodiments of the present application provide a method for acquiring the state of an object, a movable platform, and a storage medium, which ensure that the obtained target probability distribution is more accurate.
  • In a first aspect, this application provides a method for acquiring the state of an object, applied to a movable platform.
  • The movable platform is equipped with multiple sensors, and the sensors are used to collect data on the environment in which the movable platform is located.
  • The method includes: acquiring the initial probability distribution of the motion state of an object in the environment.
  • The initial probability distribution is the fusion result of data collected by the multiple sensors.
  • The initial probability distribution includes the probability value of each value point corresponding to the motion state; the probability value of each value point is updated according to the data collected by a target sensor; and the target probability distribution of the motion state is determined according to the updated probability value of each value point.
  • the present application provides a movable platform equipped with multiple sensors.
  • the sensors are used to collect data on the environment in which the movable platform is located.
  • The movable platform includes: an acquisition module, an update module, and a first determination module.
  • The acquisition module is used to acquire the initial probability distribution of the motion state of an object in the environment, where the initial probability distribution is the fusion result of the data collected by the multiple sensors and includes the probability value of each value point corresponding to the motion state;
  • the update module is used to update the probability value of each value point according to the data collected by the target sensor;
  • the first determining module is used to determine the target probability distribution of the motion state according to the updated probability value of each value point.
  • The present application further provides a movable platform equipped with multiple sensors, the sensors being used to collect data on the environment where the movable platform is located, and the movable platform including a processor configured to: obtain the initial probability distribution of the motion state of an object in the environment.
  • the initial probability distribution is the fusion result of the data collected by multiple sensors.
  • The initial probability distribution includes the probability value of each value point corresponding to the motion state; the probability value of each value point is updated according to the data collected by the target sensor; and the target probability distribution of the motion state is determined according to the updated probability value of each value point.
  • the present application provides a computer-readable storage medium, the computer-readable storage medium includes computer instructions, and the computer instructions are used to implement the method described in the first aspect.
  • the present application provides a method for acquiring the state of an object, a movable platform, and a storage medium.
  • the movable platform is equipped with a plurality of sensors, and the sensors are used for data collection of the environment in which the movable platform is located.
  • The method includes: acquiring the initial probability distribution of the motion state of objects in the environment.
  • the initial probability distribution is the fusion result of the data collected by multiple sensors.
  • The initial probability distribution includes the probability value of each value point corresponding to the motion state; the probability value of each value point is updated according to the data collected by the target sensor; and the target probability distribution of the motion state is determined according to the updated probability value of each value point.
  • This method can be called a post-processing method, which can overcome the problem of the loss of the original data collected by the sensor in the process of obtaining the initial probability distribution of the motion state, thereby ensuring that the obtained target probability distribution is more accurate.
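As an illustration of this post-processing idea, the update can be sketched as probability values on a grid of value points, each multiplied by a likelihood computed from raw sensor data and then renormalized. The value points, the Gaussian-shaped prior and likelihood, and all numbers below are hypothetical choices for the sketch, not prescribed by the application:

```python
import numpy as np

def update_value_points(values, prior_probs, likelihood_fn):
    """Update the probability value of each value point using raw sensor data,
    then renormalize so the updated values again form a distribution."""
    posterior = prior_probs * np.array([likelihood_fn(v) for v in values])
    return posterior / posterior.sum()

# Hypothetical speed value points around a fused estimate of 5.0 m/s.
values = np.arange(4.5, 5.51, 0.1)
prior = np.exp(-0.5 * ((values - 5.0) / 0.3) ** 2)
prior /= prior.sum()                      # fused initial probability distribution

# Illustrative likelihood from raw laser data that favours 5.2 m/s.
post = update_value_points(values, prior,
                           lambda v: np.exp(-0.5 * ((v - 5.2) / 0.2) ** 2))
```

The posterior peak lies between the fused prior's peak and the value favoured by the raw data, which is the kind of correction the post-processing step is meant to supply.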
  • Figure 1 is an application scenario diagram provided by an embodiment of the application
  • FIG. 2 is a flowchart of a method for acquiring the state of an object according to an embodiment of the application
  • FIG. 3 is a flowchart of a method for updating the probability value of each value point provided by an embodiment of the application.
  • FIG. 4 is a schematic diagram of the first likelihood function provided by an embodiment of the application.
  • FIG. 5 is a schematic diagram of the likelihood function of the speed deviation provided by an embodiment of the application.
  • FIG. 6 is a flowchart of a method for determining each value point provided by an embodiment of the application.
  • FIG. 7 is a flowchart of a method for generating point cloud clusters provided by an embodiment of the application.
  • FIG. 8 is a flowchart of an object state acquisition method provided by another embodiment of this application.
  • FIG. 9 is a flowchart of an object state acquisition method provided by still another embodiment of this application.
  • FIG. 10 is a schematic diagram of a movable platform provided by an embodiment of this application.
  • FIG. 11 is a schematic diagram of a movable platform provided by an embodiment of this application.
  • This state information includes the position and orientation of dynamic objects.
  • Algorithms such as the Kalman filter and the particle filter are state estimation techniques that recursively use online observations to estimate a set of time-series states.
  • Such a filter includes a system model and an observation model.
  • The system model is mainly used to constrain the state relationship between different time steps.
  • During actual operation, the system model is mainly used to predict the state at the next time step.
  • The observation model is mainly used to constrain the relationship between the observation and the state: after the state at the current time is predicted, the observation model and the corresponding observation can be used to update the state.
  • When the system model and the observation model are both linear and the uncertainty is Gaussian, the Kalman filter is the optimal state estimation method.
  • If the system model or observation model does not satisfy a linear relationship, the extended Kalman filter or unscented Kalman filter needs to be used. If the uncertainty is non-Gaussian, particle filter technology can be used for sampling-based estimation.
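As a minimal illustration of this predict/update cycle, a one-dimensional Kalman filter with a random-walk system model can be sketched as follows; the noise variances `q` and `r` and the observation sequence are illustrative assumptions, not values from the application:

```python
def kalman_step(x, P, z, q=0.01, r=0.1):
    """One predict/update cycle of a 1-D Kalman filter.

    x, P -- current state estimate and its variance
    z    -- new observation; q, r -- process/observation noise variances
    """
    # Predict: the system model constrains the state across time steps.
    x_pred, P_pred = x, P + q
    # Update: the observation model relates the observation to the state.
    K = P_pred / (P_pred + r)          # Kalman gain
    x_new = x_pred + K * (z - x_pred)
    P_new = (1.0 - K) * P_pred
    return x_new, P_new

# Three consistent observations pull the estimate toward 1.0 and shrink P.
x, P = 0.0, 1.0
for z in (1.0, 1.0, 1.0):
    x, P = kalman_step(x, P, z)
```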
  • the state to be estimated is the combination of the states of each target, where the state of each object can include information such as position, velocity, orientation, angular velocity, acceleration, etc.
  • The states of the objects are independent of each other, so multiple filters can be used to estimate the state of each object individually.
  • The premise of using a filter for each object is a clear correspondence between observations and objects, that is, each observation is indeed an observation of that object rather than of another object.
  • If this correspondence is unknown, data association technology is needed to associate each observation with the object being estimated in the system, after which filtering technology can be used for state estimation.
  • common association algorithms include Hungarian allocation, multi-hypothesis tracking, and joint probabilistic data association technology.
  • For vehicle-like objects, the vehicle body dynamics model can be used for system modeling.
  • the observation data source used can be image, laser, millimeter wave radar, ultrasound and other sensor information.
  • For images and laser data, a common approach is to use vision or point cloud processing technology for preprocessing to obtain two-dimensional or three-dimensional detection frames, then use these frames as observations to update state information, under the assumption that these observations follow a Gaussian distribution.
  • the standard extended Kalman filtering technique can be used for state estimation.
  • Alternatively, the raw measurements can be used directly as observations without any preprocessing. Such observations often fail to satisfy, even approximately, the Gaussian distribution assumption and are therefore difficult to process.
  • In that case, sampling-based update technologies such as particle filtering can be used, but determining the number of particles and the phenomenon of particle degeneracy have largely restricted the wide industrial application of this technology, even though techniques such as resampling alleviate the particle degeneracy problem.
  • In addition, the need to perform random sampling in a multi-dimensional state space brings an excessive computational burden, which affects the cost of practical applications.
  • Moreover, the filtering algorithm relies too heavily on the preprocessed detection results, and the loss of original data information in the preprocessing step often makes the final filtering result unreliable.
  • A multi-target tracking algorithm usually requires a detection algorithm, but missed detections and false detections introduce further unreliable factors relative to the original sensor data, causing the final result to be abnormal and largely inconsistent with the original data.
  • the problem of data association needs to be considered.
  • Most common data association technologies use the Hungarian allocation algorithm.
  • The algorithm is simple and effective, but it makes only a hard allocation for the observations of the current frame: if a frame is allocated incorrectly, the misallocated observation cannot update the state of the object it actually corresponds to, and at the same time the wrong allocation cannot be recovered.
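The hard allocation can be viewed as a minimum-cost assignment of current-frame observations to tracked objects. The Hungarian algorithm solves this in polynomial time; the exhaustive search below is a dependency-free stand-in that yields the same optimal assignment for small cases, and the cost matrix is a made-up example:

```python
from itertools import permutations

def best_assignment(cost):
    """Return p with p[i] = track assigned to observation i, minimizing total cost.

    Brute force over all permutations; the Hungarian algorithm computes the
    same optimum in O(n^3) for real workloads.
    """
    n = len(cost)
    return list(min(permutations(range(n)),
                    key=lambda p: sum(cost[i][p[i]] for i in range(n))))

# Observation 0 lies near track 1 and observation 1 near track 0.
cost = [[5.0, 1.0],
        [1.0, 5.0]]
assignment = best_assignment(cost)
```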
  • In the related technology, the movable platform can predict the state of the object, preprocess the data collected by the sensors, and update the state of the object with the preprocessed data to obtain an optimal state estimate.
  • However, this process relies too heavily on the preprocessed data: original data collected by the sensors is often lost during preprocessing, so the finally determined object state is inaccurate.
  • In view of this, this application provides a method for acquiring the state of an object, a movable platform, and a storage medium.
  • FIG. 1 is an application scenario diagram provided by an embodiment of the application.
  • The movable platform 11 is equipped with multiple sensors, and each sensor is used to collect data on the environment in which the movable platform 11 is located.
  • The environment where the movable platform 11 is located also includes at least one object 12. The movable platform 11 in this application may be a drone, an unmanned vehicle, etc., and the sensors may be lidar sensors, binocular vision sensors, millimeter wave radars, ultrasonic sensors, etc.
  • The above multiple sensors can be of the same type, for example all lidar sensors, or of different types, for example a lidar sensor and a millimeter wave radar.
  • Different sensors collect different data from the environment in which the movable platform is located; for example, lidar sensors, binocular vision sensors, and millimeter wave radars collect point cloud data, while ultrasonic sensors collect ultrasonic signals.
  • the movable platform can predict the motion state of an object (that is, any object in the environment where the mobile platform is located), and update the motion state in combination with the data collected by the sensor.
  • the movable platform can fuse data collected by multiple sensors through a certain algorithm to predict the motion state of the object.
  • the aforementioned algorithm may be Kalman filter, single-step particle filter, brute force search or neural network, etc., which is not limited in this application.
  • The data measured by a sensor may not be completely accurate. Therefore, after fusing the data collected by multiple sensors, the obtained motion state of the object can be understood as a random variable that conforms to a certain probability distribution, such as a Gaussian or non-Gaussian distribution, whether linear or nonlinear; this application does not impose restrictions on this.
  • FIG. 2 is a flowchart of a method for acquiring an object state according to an embodiment of the application.
  • The execution subject of the method may be part or all of the movable platform, for example, a processor of the movable platform.
  • the movable platform is equipped with multiple sensors. The sensors are used to collect data on the environment in which the movable platform is located.
  • the following describes the method for acquiring the state of the object with the movable platform as the execution subject, as shown in Figure 2. The method includes the following steps:
  • Step S201 Obtain the initial probability distribution of the motion state of the object in the environment.
  • Step S202 Update the probability value of each value point according to the data collected by the target sensor.
  • Step S203 Determine the target probability distribution of the motion state according to the updated probability value of each value point.
  • the initial probability distribution is the fusion result of data collected by multiple sensors, and the initial probability distribution includes the probability value of each value point corresponding to the motion state.
  • the motion state of the object includes at least one of the following: the position parameter, the orientation parameter, the velocity parameter, and the acceleration parameter of the object. That is, the motion state of the object can be any one of the object's position parameter, orientation parameter, speed parameter, and acceleration parameter. At this time, the probability distribution that the motion state conforms to is the probability distribution corresponding to any one of the parameters.
  • Alternatively, the motion state of the object may be a combination of at least two of the object's position, orientation, speed, and acceleration parameters. In this case, the probability distribution that the motion state conforms to is the probability distribution corresponding to the combined parameters.
  • For example, if the motion state is the combination of the position parameter and the orientation parameter, the probability distribution that the motion state conforms to is the probability distribution corresponding to the position parameter and the orientation parameter.
  • When the motion state of the object is a single parameter, the spatial dimension of the probability distribution can be reduced, which reduces the amount of calculation performed by the movable platform.
  • the mobile platform can select one sensor among multiple sensors as the target sensor, and update the probability value of each value point through the data collected by the target sensor.
  • The movable platform may randomly select one of the multiple sensors as the target sensor, or may select the sensor with the highest accuracy as the target sensor. For example, when the movable platform fuses the point cloud data collected by the lidar sensor, binocular vision sensor, and millimeter wave radar, the initial probability distribution of the position parameter of the object is obtained.
  • The movable platform then uses the lidar sensor as the target sensor and updates the probability value of each value point in the initial probability distribution with the point cloud data collected by the lidar sensor.
  • the movable platform can select multiple target sensors.
  • Specifically, the movable platform first updates the probability value of each value point in the initial probability distribution based on the data collected by one target sensor to obtain updated probability values; it then updates these values according to the data collected by the next target sensor, and so on, until the probability value of each value point has been updated with the data collected by all the target sensors.
  • The speed parameter can be obtained directly by the lidar sensor, or from the difference in the position and orientation parameters between the current frame and the previous frame; similarly, the acceleration parameter can be obtained from the difference in the position and orientation parameters between the current frame and the previous frame.
  • The position parameter and orientation parameter of the current frame can be determined from the point cloud data collected by the lidar sensor or millimeter wave radar in the current frame.
  • Likewise, the position parameter and orientation parameter of the previous frame can be determined from the point cloud data collected by the lidar sensor or millimeter wave radar in the previous frame.
  • the data collected by the target sensor is also the data collected in the current frame and the previous frame.
  • the initial probability distribution corresponding to the speed parameter is obtained based on the point cloud data collected by multiple sensors in the current frame and the previous frame, so the data collected by the target sensor is also the point cloud data collected by the target sensor in the current frame and the previous frame .
  • the position parameter and the orientation parameter of the current frame can be determined by the point cloud data collected by the lidar sensor or the millimeter wave radar in the current frame. Therefore, if the initial probability distribution is obtained based on the data of multiple sensors in the current frame, the data collected by the target sensor is also the data collected in the current frame. For example, the initial probability distribution corresponding to the position parameter is obtained based on the point cloud data of multiple sensors in the current frame, so the data collected by the target sensor is also the point cloud data collected by the target sensor in the current frame.
  • Specifically, the movable platform may select at least one target value point whose probability value is greater than a preset threshold, and obtain the target probability distribution of the motion state from the probability values of the selected points. For example, the movable platform selects multiple target value points whose probability values are greater than the preset threshold, uses the average of their probability values as the mean of the target probability distribution, and uses the variance of their probability values as the variance of the updated probability distribution.
  • Alternatively, the movable platform can select one target value point whose probability value is greater than the preset threshold and sample within a preset radius of that point to obtain multiple additional target value points; the average of the probability values of all target value points is then used as the mean of the target probability distribution, and the variance of those probability values as its variance.
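A sketch of this selection step. Because the translated text is ambiguous about whether the statistics are taken over the probability values themselves or over the selected value points, this version assumes a probability-weighted mean and variance over the value points whose probability exceeds the threshold; all numbers are made up:

```python
import numpy as np

def target_distribution(values, probs, threshold):
    """Mean and variance of the target probability distribution, computed from
    the value points whose updated probability value exceeds the threshold."""
    mask = probs > threshold
    v, w = values[mask], probs[mask]
    w = w / w.sum()                            # renormalize surviving weights
    mean = float((w * v).sum())
    var = float((w * (v - mean) ** 2).sum())
    return mean, var

values = np.array([1.0, 2.0, 3.0, 4.0])        # candidate value points
probs = np.array([0.05, 0.4, 0.5, 0.05])       # updated probability values
mean, var = target_distribution(values, probs, threshold=0.1)
```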
  • the movable platform may send an alarm message to remind the user that the movable platform is abnormal.
  • the alarm information can be voice alarm information or text alarm information, or alarm information formed by flashing a warning light, which is not limited in this application.
  • The target probability distribution determined by the movable platform can also be used as a prior for the next frame.
  • the movable platform can update the probability value of each value point according to the data collected by the target sensor, and determine the target probability distribution of the motion state according to the updated probability value of each value point.
  • This method can be called a post-processing method, which can overcome the problem of the loss of the original data collected by the sensor in the process of obtaining the initial probability distribution of the motion state, thereby ensuring that the obtained target probability distribution is more accurate.
  • the technical solution of the present application is also applicable to situations where the motion state does not conform to the Gaussian distribution, or the motion state has a relatively large degree of nonlinearity.
  • The following is a detailed description of the above step S202:
  • FIG. 3 is a flowchart of a method for updating the probability value of each value point provided by an embodiment of the application. As shown in FIG. 3, the method includes the following steps:
  • Step S301 For any value point, determine the posterior probability of the value point according to the data collected by the target sensor.
  • the movable platform can determine the likelihood probability of the value point according to the data collected by the target sensor, and calculate the product of the probability value of the value point and the likelihood probability to obtain the posterior probability of the value point.
  • The likelihood probability of a value point is the probability of obtaining the data collected by the target sensor given that value point; the posterior probability of a value point is the probability of the value point given the data collected by the target sensor.
  • In another possible implementation, the movable platform can determine the likelihood probability of the value point according to the data collected by the target sensor, calculate the product of the probability value of the value point and the likelihood probability, and then calculate the quotient of the product and a normalization factor to obtain the posterior probability of the value point. For example, assuming that the probability value of a value point x_i is f_i(x_i) and its likelihood probability is f(z_i | x_i), where z_i denotes the data collected by the target sensor, the posterior probability of the value point is f(x_i | z_i) = f(z_i | x_i) f_i(x_i) / η, where η is the normalization factor.
  • For different types of target sensors, the movable platform determines the likelihood probability of a value point in different ways.
  • When the target sensor is a laser (lidar) sensor, the object is represented by a point cloud cluster.
  • The movable platform obtains multiple first likelihood probabilities of the value point for the point cloud cluster, where a first likelihood probability is the probability of collecting the position of one point cloud particle given the value point.
  • The movable platform then accumulates the multiple first likelihood probabilities, namely f(z_i | x_i) = ∏_{k=0}^{m} f(z_{i,k} | x_i), to obtain the likelihood probability of the value point.
  • Here each z_{i,k} represents the position of the k-th point cloud particle in the point cloud cluster, f(z_{i,k} | x_i) represents a first likelihood probability, that is, the probability of collecting z_{i,k} given the value point x_i, and m+1 is the number of point cloud particles in the point cloud cluster.
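A minimal sketch of this accumulation. The translation says only that the first likelihood probabilities are "accumulated"; multiplication of the per-particle terms (summed in log space for numerical stability) is assumed here, and `per_point_likelihood` is a hypothetical callable standing in for f(z_{i,k} | x_i):

```python
import math

def value_point_likelihood(point_cloud, per_point_likelihood, x):
    """Likelihood of value point x given a point cloud cluster: accumulate the
    first likelihood of every point cloud particle z_{i,k}."""
    log_l = sum(math.log(per_point_likelihood(z, x)) for z in point_cloud)
    return math.exp(log_l)

# With a constant per-particle likelihood of 0.5 and three particles,
# the accumulated likelihood is 0.5 ** 3.
like = value_point_likelihood([1.0, 2.0, 3.0], lambda z, x: 0.5, 0.0)
```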
  • Fig. 4 is a schematic diagram of the first likelihood function g(x) provided by an embodiment of the application.
  • As shown in Fig. 4, when the actual laser range measurement x equals the ideal distance between the movable platform and the object (that is, 20), the corresponding first likelihood probability is the largest, which is 0.4.
  • If x < 20, the ideal distance between the movable platform and the object is greater than the actual laser range measurement; this situation may be caused by an occluding object, so such x is given a constant first likelihood probability.
  • If x > 20, the ideal distance between the movable platform and the object is less than the actual laser range measurement; this situation violates physical common sense, so such x is given a first likelihood probability close to or equal to 0.
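The piecewise shape just described might be sketched as follows; the peak of 0.4 and the ideal distance of 20 come from Fig. 4, while the tolerance band `tol` and the constant occlusion probability `occlusion_p` are illustrative assumptions:

```python
def first_likelihood(x, ideal=20.0, peak=0.4, tol=0.5, occlusion_p=0.05):
    """Piecewise first likelihood g(x) for an actual laser range measurement x.

    ideal -- range implied by the value point (20 in Fig. 4)
    peak  -- maximum likelihood (0.4 in Fig. 4)
    """
    if abs(x - ideal) <= tol:
        return peak          # measurement matches the ideal distance
    if x < ideal:
        return occlusion_p   # shorter range: possibly an occluding object
    return 0.0               # longer range contradicts physical common sense
```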
  • When the target sensor is a binocular vision sensor, it can obtain multiple images of the object, and the movable platform processes these images to obtain a point cloud cluster that represents the object. On this basis, the movable platform can sample sparse points from the point cloud cluster and, for these sparse points, use the above method of determining the likelihood probability for a laser target sensor to obtain the likelihood probability of each sparse point.
  • When the target sensor is a millimeter-wave radar, the radar obtains point cloud data representing the object, and the movable platform can use the radial velocity to evaluate the velocity of the object: the velocity component of the object pointing toward the millimeter-wave radar is compared with the corresponding radial velocity measured by the radar to calculate the likelihood of the value point.
  • Fig. 5 is a schematic diagram of the likelihood function of the velocity deviation provided by an embodiment of this application. As shown in Fig. 5, when the velocity deviation is 0 m/s, the corresponding likelihood probability is at its maximum of 0.8.
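The bell-shaped curve of Fig. 5 might be sketched as a Gaussian-shaped function of the velocity deviation; the peak of 0.8 at zero deviation comes from Fig. 5, while the Gaussian shape and the width `sigma` are illustrative assumptions:

```python
import math

def radial_velocity_likelihood(deviation, peak=0.8, sigma=0.5):
    """Likelihood of a radial-velocity deviation (m/s): largest at zero
    deviation, falling off smoothly as the deviation grows."""
    return peak * math.exp(-0.5 * (deviation / sigma) ** 2)
```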
  • Step S302 Update the probability value of each value point according to the posterior probability of each value point.
  • In one implementation, the movable platform uses the posterior probability of each value point as the new probability value of that value point.
  • Alternatively, the movable platform calculates the average of the posterior probability of each value point and the current probability value of that value point to obtain the new probability value of the value point.
  • the movable platform determines the posterior probability of the value point according to the data collected by the target sensor.
  • the movable platform can update the probability value of each value point according to the posterior probability of each value point. It can be seen that, in the present application, the probability value of each value point is updated in combination with the original data collected by the sensor. This method can overcome the loss of the original data collected by the sensor in the process of obtaining the initial probability distribution of the motion state, thereby ensuring that the obtained target probability distribution is more accurate.
  • the technical solution of the present application is also applicable to situations where the motion state does not conform to the Gaussian distribution, or the motion state has a relatively large degree of nonlinearity.
• The movable platform can introduce a vehicle body dynamics model when determining the likelihood probability, so that the obtained posterior probability conforms to the vehicle body motion model, and the updated probability value of each value point is more accurate.
  • FIG. 6 is a flowchart of a method for determining each value point provided by an embodiment of the application. As shown in FIG. 6, the method includes the following steps:
• Step S601 Set the value range by taking the value point with the largest probability value in the initial probability distribution as the center and the fusion accuracy value of the target sensor as the radius.
• Step S602 Determine each value point at equal intervals in the value range.
• For example, if the value point with the largest probability value is 5 and the fusion accuracy value of the target sensor is 0.5, the value range obtained is [4.5, 5.5].
  • the movable platform divides [4.5, 5.5] at equal intervals. For example, if the interval is set to 0.1, the values determined by the movable platform in [4.5, 5.5] are: 4.5, 4.6, 4.7, 4.8, 4.9, 5.0, 5.1, 5.2, 5.3, 5.4, 5.5.
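The range construction and equal-interval division of steps S601 and S602 can be sketched as follows (the function name and the rounding guard against floating-point drift are illustrative):

```python
def equal_interval_value_points(center, radius, step):
    """Set the value range [center - radius, center + radius], where
    center is the value point with the largest probability value in the
    initial probability distribution and radius is the fusion accuracy
    value of the target sensor, then take value points at equal
    intervals of `step` across the range."""
    count = int(round(2 * radius / step))
    return [round(center - radius + i * step, 10) for i in range(count + 1)]
```

With the example above, `equal_interval_value_points(5.0, 0.5, 0.1)` yields the eleven value points 4.5, 4.6, ..., 5.5.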
• Option 2: The movable platform determines each value point in the value range of the corresponding motion state according to the probability density of the initial probability distribution, where the greater the probability density, the smaller the interval between a value point and its adjacent value points.
• For example, in a multi-dimensional state space the Gaussian distribution is an ellipsoid, and the corresponding sampling density of the value points maintains a proportional relationship with the spatial probability density; that is, the greater the probability density, the smaller the interval between a value point and its adjacent value points, so that the value points obtained basically conform to the initial probability distribution.
• The movable platform can sample at equal intervals or perform deterministic sampling based on the Gaussian distribution instead of random sampling, so that the use of random sampling is avoided as much as possible and more accurate value points are obtained.
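Deterministic, density-proportional placement of value points can be illustrated by taking equally spaced quantiles of an assumed one-dimensional Gaussian initial distribution; the spacing between adjacent points then shrinks automatically where the probability density is larger, with no random sampling involved:

```python
from statistics import NormalDist

def density_proportional_value_points(mean, std, n):
    """Place n value points at equally spaced quantiles of an assumed
    Gaussian initial distribution. Equal quantile spacing makes the
    sampling density of the value points proportional to the probability
    density, so adjacent points lie closer together where the density is
    greater -- deterministic, with no random sampling."""
    dist = NormalDist(mean, std)
    return [dist.inv_cdf((i + 0.5) / n) for i in range(n)]
```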
• When the data collected by the target sensor from the environment is point cloud data, the object is represented by a point cloud cluster, each value point is the position of a point cloud particle in the point cloud cluster, and the probability value of each value point is the probability value of that point cloud particle's position.
  • FIG. 7 is a flowchart of a method for generating point cloud clusters provided by an embodiment of the application. As shown in FIG. 7, the method includes the following steps:
  • Step S701 Determine point cloud particles to be inspected in the first point cloud cluster of the object, where the point cloud particles to be inspected are point cloud particles with a probability value greater than a first preset threshold.
• Step S702 Detect whether there are point cloud particles in the second point cloud clusters corresponding to other objects whose distance from the point cloud particles to be inspected is less than a preset distance.
• Step S703 If there are point cloud particles in the second point cloud cluster whose distance from the point cloud particles to be inspected is less than the preset distance, the movable platform calculates the joint probability of the first point cloud cluster and the second point cloud cluster according to the probability distribution of the point cloud particles in the first point cloud cluster and the probability distribution of the point cloud particles in the second point cloud cluster.
  • Step S704 Generate a new point cloud cluster according to the point cloud particles whose joint probability is greater than the second preset threshold.
  • the new point cloud cluster may still correspond to the object, or a new target object may be re-determined based on the new point cloud cluster.
  • the above-mentioned first preset threshold may be set according to actual conditions, for example: the first preset threshold may be 0.6, 0.8, etc.
  • the foregoing preset distance may also be set according to actual conditions, for example: the preset distance is 10cm, 20cm, etc.
• The foregoing second preset threshold may also be set according to actual conditions, for example: the second preset threshold may be 0.6, 0.8, etc. This application does not limit how to set the first preset threshold, the second preset threshold, and the preset distance.
• The movable platform can calculate the product of the probability value of a point cloud particle to be inspected and the probability value of the corresponding first point cloud particle to obtain their joint probability. Since there may be multiple point cloud particles to be inspected in the first point cloud cluster, the movable platform can calculate the joint probability of each point cloud particle to be inspected and its corresponding first point cloud particle to obtain the joint probability of the first point cloud cluster and the second point cloud cluster.
• The movable platform may combine the point cloud particles with a joint probability greater than the second preset threshold to form a new point cloud cluster corresponding to the object, perform a local optimal estimation on the new point cloud cluster through the joint probability, and inversely deduce the probability value of each point cloud particle of the new point cloud cluster in the first point cloud cluster, thereby updating the probability distribution of the object.
  • the point cloud particles to be inspected in step S701 refer to the point cloud particles whose probability value is greater than the first preset threshold in the target probability distribution.
  • the probability distribution of the point cloud particles in the first point cloud cluster in step S703 refers to the target probability distribution of the point cloud particles in the first point cloud cluster.
• Accordingly, the movable platform inversely deduces the probability value of each point cloud particle of the new point cloud cluster in the first point cloud cluster, thereby updating the target probability distribution of the object.
  • the above steps S701 to S704 can be executed before step S203.
  • the point cloud particles to be inspected in step S701 refer to the point cloud particles whose probability value is greater than the first preset threshold in the initial probability distribution.
  • the probability distribution of the point cloud particles in the first point cloud cluster in step S703 refers to the initial probability distribution of the point cloud particles in the first point cloud cluster.
  • the movable platform reversely deduces the probability value of each point cloud particle in the new point cloud cluster in the first point cloud cluster, thereby updating the initial probability distribution of the object.
• For example, when the lidar sensor or millimeter-wave radar collects the point cloud data of a truck, the front of the truck and the truck body may be recognized as two objects: the front of the truck is represented by a first point cloud cluster, and the truck body is represented by a second point cloud cluster.
• The movable platform can determine the point cloud particles to be inspected in the first point cloud cluster of the object and detect the first point cloud particles whose distance from the point cloud particles to be inspected is less than the preset distance; it can be seen that these first point cloud particles should be the point cloud particles at the junction of the truck front and the truck body.
  • the movable platform calculates the product of the probability value of each point cloud particle to be inspected and the corresponding first point cloud particle to obtain the joint probability of the first point cloud cluster and the second point cloud cluster.
• The movable platform can combine such point cloud particles into a new point cloud cluster.
• The movable platform determines the joint probability distribution of the new point cloud cluster based on the joint probability of these discrete point cloud particles, performs a local optimal estimation based on the joint probability distribution, and, based on the local optimal estimation, inversely deduces the probability value of each point cloud particle of the new point cloud cluster in the first point cloud cluster, thereby updating the probability distribution of the truck front.
• When an object conflicts with other objects, that is, when there are point cloud particles with a joint probability greater than the second preset threshold, the movable platform can generate a new point cloud cluster from those point cloud particles. Based on this, the movable platform estimates the joint probability distribution of the new point cloud cluster, uses the joint probability to perform a local optimal estimation of the new point cloud cluster, and inversely deduces the probability value of the point cloud particles of the new point cloud cluster in the first point cloud cluster to update the probability distribution of the object, so that the second point cloud clusters corresponding to other objects no longer contain point cloud particles whose distance from the point cloud particles to be inspected is less than the preset distance, thereby resolving the conflict between the object and the other objects.
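Steps S701 to S704 can be sketched as follows for two-dimensional point cloud particles; the threshold values and the representation of a cluster as (position, probability) pairs are assumptions for illustration:

```python
def merge_conflicting_clusters(first, second, first_threshold=0.6,
                               preset_distance=0.2, joint_threshold=0.6):
    """Sketch of steps S701-S704. `first` and `second` are point cloud
    clusters given as lists of ((x, y), probability) pairs.

    A particle of the first cluster is "to be inspected" when its
    probability exceeds the first preset threshold. For each such
    particle, find particles of the second cluster closer than the
    preset distance, take the product of the two probability values as
    the joint probability, and keep the pairs whose joint probability
    exceeds the second preset threshold as the new point cloud cluster.
    """
    def dist(a, b):
        return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

    new_cluster = []
    for pos1, p1 in first:
        if p1 <= first_threshold:
            continue  # not a point cloud particle to be inspected
        for pos2, p2 in second:
            if dist(pos1, pos2) < preset_distance and p1 * p2 > joint_threshold:
                new_cluster.append((pos1, pos2, p1 * p2))
    return new_cluster
```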
  • FIG. 8 is a flowchart of an object state acquisition method provided by another embodiment of the application. As shown in FIG. 8, after step S203, the object state acquisition method further includes the following steps:
  • Step S801 Determine whether the initial probability distribution and the target probability distribution meet the consistency condition.
  • Step S802 If the initial probability distribution and the target probability distribution do not meet the consistency condition, the mobile platform pushes alarm information to prompt the user that the mobile platform is abnormal.
  • a chi-square test is used to determine whether the initial probability distribution and the target probability distribution meet the consistency condition.
• Specifically, the movable platform selects at least one first discrete point in the initial probability distribution, selects second discrete points in the target probability distribution corresponding one-to-one to the first discrete points, calculates the difference between the probability values of each first discrete point and its corresponding second discrete point to obtain a probability difference, and sums all the probability differences to obtain a summation result. If the summation result is greater than the preset result, the movable platform determines that the initial probability distribution and the target probability distribution do not meet the consistency condition; otherwise, it determines that they satisfy the consistency condition.
  • the mobile platform can push alarm information.
• The alarm information can be voice alarm information, text alarm information, or alarm information formed by flashing warning lights; this application does not restrict this.
• The movable platform pushes alarm information to remind the user that there is an abnormality in the movable platform, thereby improving the reliability of the movable platform.
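The discrete-point consistency check described above can be sketched as follows; the preset result of 0.5 and the use of absolute differences are assumptions, and the embodiment may equally use a standard chi-square statistic:

```python
def distributions_consistent(initial_probs, target_probs, preset_result=0.5):
    """Consistency check between the initial and target probability
    distributions, evaluated at one-to-one corresponding discrete
    points: sum the differences between the probability values and
    compare against the preset result. An abnormality is flagged (False)
    when the summation result exceeds the preset result.
    """
    summation = sum(abs(p0 - p1) for p0, p1 in zip(initial_probs, target_probs))
    return summation <= preset_result
```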
  • FIG. 9 is a flowchart of an object state acquisition method provided by still another embodiment of this application. As shown in FIG. 9, after step S203, the object state acquisition method further includes the following steps:
  • Step S901 Determine the absolute value of the motion state of the object according to the value point of the motion state of the movable platform and the value point in the target probability distribution of the motion state of the corresponding object.
• The movable platform can obtain its own motion estimation information (ego-motion) through an Inertial Measurement Unit (IMU), a Global Positioning System (GPS), wheel odometry, visual odometry, and the like.
• The movable platform can sum the value point of its own motion state with the corresponding value point in the target probability distribution of the object's motion state to obtain the absolute value of the object's motion state.
• For example, the position parameter of the movable platform can be understood as a random variable that conforms to a certain probability distribution, so the movable platform can select from that distribution the position parameter corresponding to a position parameter in the target probability distribution, and sum the corresponding position parameters to obtain the absolute position parameter of the object.
• When the absolute state of the movable platform itself is unobservable, the absolute position and speed of the other object are also unobservable; in this case, only the relative position and speed can be estimated.
• A filter or positioning algorithm can be used to estimate the state of the vehicle itself. It can fuse sensors such as the IMU, GPS, wheel odometer, high-precision map, vision, laser, and even millimeter-wave radar for positioning. Once this information is obtained, the absolute state estimate of the object can be obtained through coordinate conversion, so that the dynamic model and the observation model can be introduced more naturally.
  • the absolute value of the motion state of the object can be determined according to the value point of the motion state of the movable platform and the value point in the target probability distribution of the motion state of the corresponding object.
  • the data collected by sensors are usually relative data.
  • the motion parameters of the movable platform itself can be used to calculate the absolute value, thereby reducing the jump in the value of the motion state of the detected object.
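The conversion from relative to absolute motion state amounts to summing the platform's own motion estimate with the corresponding relative value point, shown here for a two-dimensional velocity (a minimal sketch; positions are handled the same way):

```python
def absolute_motion_state(ego_state, relative_state):
    """Sum the movable platform's own motion state (from fused
    IMU/GPS/odometry data, i.e. ego-motion) with the corresponding value
    point of the object's target probability distribution, which the
    sensors observe relative to the platform, to obtain the absolute
    value of the object's motion state.
    """
    return tuple(e + r for e, r in zip(ego_state, relative_state))
```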
  • FIG. 10 is a schematic diagram of a movable platform provided by an embodiment of the application.
• The movable platform is equipped with multiple sensors, which collect data on the environment in which the movable platform is located. The movable platform includes:
  • the acquiring module 1001 is used to acquire the initial probability distribution of the motion state of the object in the environment.
  • the initial probability distribution is the fusion result of data collected by multiple sensors.
  • the initial probability distribution includes the probability value of each value point corresponding to the motion state.
  • the update module 1002 is used to update the probability value of each value point according to the data collected by the target sensor.
  • the first determining module 1003 is configured to determine the target probability distribution of the motion state according to the updated probability value of each value point.
  • the update module 1002 includes: a determination sub-module and an update sub-module, where the determination sub-module is used to determine the posterior probability of the value point according to the data collected by the target sensor for any value point, The posterior probability is the probability of obtaining the value point under the conditions of the data collected by the target sensor.
  • the update sub-module is used to update the probability value of each value point according to the posterior probability of each value point.
• The update submodule is specifically used to: determine the likelihood probability of the value point according to the data collected by the target sensor, where the likelihood probability of the value point is the probability of collecting the data collected by the target sensor under the condition that the value point is obtained; and calculate the product of the probability value of the value point and the likelihood probability to obtain the posterior probability of the value point.
  • the movable platform further includes: a setting module 1004 and a second determining module 1005.
  • the setting module 1004 is used to set the value range by taking the value point with the largest probability value in the initial probability distribution as the center and the fusion accuracy value of the target sensor as the radius.
• The second determining module 1005 is used to determine each value point at equal intervals in the value range.
• The movable platform further includes: a third determining module 1006, configured to determine each value point in the value range of the corresponding motion state according to the probability density of the initial probability distribution, where the greater the probability density, the smaller the interval between a value point and its adjacent value points.
• When the data collected by the target sensor from the environment is point cloud data, the object is represented by a point cloud cluster, each value point is the position of a point cloud particle in the point cloud cluster, and the probability value of each value point is the probability value of that point cloud particle's position.
  • the movable platform also includes:
  • the fourth determining module 1007 is configured to determine the point cloud particles to be inspected in the first point cloud cluster of the object, where the point cloud particles to be inspected are point cloud particles with a probability value greater than the first preset threshold.
  • the detection module 1008 is configured to detect whether there are point cloud particles whose distance from the point cloud particles to be detected is less than a preset distance in the second point cloud clusters corresponding to other objects.
• The calculation module 1009 is configured to: if there are point cloud particles in the second point cloud cluster whose distance from the point cloud particles to be inspected is less than the preset distance, calculate the joint probability of the first point cloud cluster and the second point cloud cluster according to the probability distribution of the point cloud particles in the first point cloud cluster and the probability distribution of the point cloud particles in the second point cloud cluster.
  • the generating module 1010 is configured to generate a new point cloud cluster according to the point cloud particles whose joint probability is greater than the second preset threshold.
  • the initial probability distribution is obtained based on data collected by multiple sensors in the current frame and the previous frame.
  • the update module 1002 is specifically configured to update the probability value of the value point according to the data collected by the target sensor in the current frame and the previous frame.
  • the movable platform also includes:
  • the judging module 1011 is used for judging whether the initial probability distribution and the target probability distribution meet the consistency condition after the update module updates the probability value of each value point according to the data collected by the target sensor to obtain the target probability distribution of the motion state.
  • the push module 1012 is configured to push alarm information if the initial probability distribution and the target probability distribution do not meet the consistency condition, to prompt the user that there is an abnormality in the mobile platform.
  • the judging module 1011 is specifically configured to judge whether the initial probability distribution and the target probability distribution meet the consistency condition through a chi-square test.
• The movable platform further includes: a fifth determining module 1013, configured to determine, after the first determining module determines the target probability distribution of the motion state according to the updated probability value of each value point, the absolute value of the motion state of the object according to the value point of the motion state of the movable platform and the value point in the target probability distribution of the motion state of the corresponding object.
  • the multiple sensors include at least one of the following: a lidar sensor, a binocular vision sensor, a millimeter wave radar, and an ultrasonic sensor.
  • the motion state includes at least one of the following: a position parameter, an orientation parameter, a velocity parameter, and an acceleration parameter of the object.
  • the present application provides a movable platform that can execute the above-mentioned object state acquisition method.
• For the implementation principle, please refer to the method embodiments, which will not be repeated here.
  • FIG. 11 is a schematic diagram of a movable platform provided by an embodiment of the application.
  • the movable platform includes a plurality of sensors 1101, and the sensors are used to collect data on the environment in which the movable platform is located.
  • two sensors 1101 and one processor 1102 are taken as an example.
  • the processor 1102 is configured to: obtain the initial probability distribution of the motion state of the object in the environment, the initial probability distribution is the fusion result of the data collected by multiple sensors, and the initial probability distribution includes the probability value of each value point corresponding to the motion state;
• update the probability value of each value point according to the data collected by the target sensor; and determine the target probability distribution of the motion state according to the updated probability value of each value point.
• The processor 1102 is specifically configured to: for any value point, determine the posterior probability of the value point according to the data collected by the target sensor, where the posterior probability of the value point is the probability of obtaining the value point under the condition that the data collected by the target sensor is collected; and update the probability value of each value point according to the posterior probability of each value point.
• The processor 1102 is specifically configured to: determine the likelihood probability of the value point according to the data collected by the target sensor, where the likelihood probability of the value point is the probability of collecting the data collected by the target sensor under the condition that the value point is obtained; and calculate the product of the probability value of the value point and the likelihood probability to obtain the posterior probability of the value point.
• The processor 1102 is further configured to: set the value range by taking the value point with the largest probability value in the initial probability distribution as the center and the fusion accuracy value of the target sensor as the radius; and determine each value point at equal intervals in the value range.
• The processor 1102 is further configured to: determine each value point in the value range of the corresponding motion state according to the probability density of the initial probability distribution, where the greater the probability density, the smaller the interval between a value point and its adjacent value points.
• When the data collected by the target sensor from the environment is point cloud data, the object is represented by a point cloud cluster, each value point is the position of a point cloud particle in the point cloud cluster, and the probability value of each value point is the probability value of that point cloud particle's position.
• The processor 1102 is further configured to: determine the point cloud particles to be inspected in the first point cloud cluster of the object, where the point cloud particles to be inspected are point cloud particles with a probability value greater than the first preset threshold; detect whether there are point cloud particles in the second point cloud clusters corresponding to other objects whose distance from the point cloud particles to be inspected is less than the preset distance; if so, calculate the joint probability of the first point cloud cluster and the second point cloud cluster according to the probability distributions of the point cloud particles in the two clusters; and generate a new point cloud cluster according to the point cloud particles whose joint probability is greater than the second preset threshold.
  • the initial probability distribution is obtained based on data collected by multiple sensors in the current frame and the previous frame; the processor 1102 is specifically configured to: update the probability of the value point according to the data collected by the target sensor in the current frame and the previous frame value.
  • the processor 1102 is further configured to: after updating the probability value of each value point according to the data collected by the target sensor to obtain the target probability distribution of the motion state, determine whether the initial probability distribution and the target probability distribution meet the consistency condition; If the initial probability distribution and the target probability distribution do not meet the consistency condition, an alarm message is pushed to remind the user that there is an abnormality in the mobile platform.
  • the processor 1102 is specifically configured to determine whether the initial probability distribution and the target probability distribution satisfy the consistency condition through a chi-square test.
  • the processor 1102 is further configured to: after determining the target probability distribution of the motion state according to the updated probability value of each value point, according to the value point of the motion state of the movable platform and the motion state of the corresponding object The value point in the target probability distribution of, determines the absolute value of the object's motion state.
  • the multiple sensors include at least one of the following: a lidar sensor, a binocular vision sensor, a millimeter wave radar, and an ultrasonic sensor.
  • the motion state includes at least one of the following: a position parameter, an orientation parameter, a velocity parameter, and an acceleration parameter of the object.
• The processor involved in this application may be a motor control unit (MCU), a central processing unit (CPU), or another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), etc.
  • the general-purpose processor may be a microprocessor or the processor may also be any conventional processor or the like.
  • the steps of the method disclosed in combination with the embodiments of the present application may be directly embodied as being executed and completed by a hardware processor, or executed and completed by a combination of hardware and software modules in the processor.
  • a person of ordinary skill in the art can understand that all or part of the steps in the foregoing method embodiments can be implemented by a program instructing relevant hardware.
  • the aforementioned program can be stored in a computer readable storage medium. When the program is executed, it executes the steps including the foregoing method embodiments; and the foregoing storage medium includes: ROM, RAM, magnetic disk, or optical disk and other media that can store program codes.
  • the present application also provides a computer program product, which includes computer instructions, and the computer instructions are used to implement the steps of the foregoing method embodiments.

Abstract

An object state acquisition method, a movable platform (11), and a storage medium. The movable platform (11) carries multiple sensors that collect data on the environment in which the movable platform (11) is located. The method includes: acquiring an initial probability distribution of the motion state of an object (12) in the environment (S201), where the initial probability distribution is the result of fusing the data collected by the multiple sensors and includes the probability value of each value point corresponding to the motion state; updating the probability value of each value point according to data collected by a target sensor (S202); and determining a target probability distribution of the motion state according to the updated probability value of each value point (S203). This method overcomes the loss of the raw sensor data during the process of obtaining the initial probability distribution of the motion state, thereby ensuring that the obtained target probability distribution is more accurate.

Description

Object State Acquisition Method, Movable Platform, and Storage Medium

Technical Field

The embodiments of this application relate to target tracking technology, and in particular to an object state acquisition method, a movable platform, and a storage medium.
Background

At present, target tracking is an important research direction in the dynamic environments in which movable platforms such as unmanned aerial vehicles and unmanned vehicles operate.

The question is how a movable platform such as an unmanned aerial vehicle or an unmanned vehicle can, in a dynamic environment, effectively fuse observation information and use it to update the state information of multiple objects online and in real time. This state information includes the position, orientation, velocity, and other states of dynamic objects, as well as the association information of each target across different time series (i.e., whether observations at different times belong to the same target), so as to achieve multi-target tracking.

In the prior art, a movable platform can predict the state of an object, preprocess the data collected by the sensors, and update the state of the object with the preprocessed data to obtain the optimal state of the object. This process relies excessively on the preprocessed data, which often causes the raw data collected by the sensors to be lost during preprocessing, so that the finally determined state of the object is inaccurate.
Summary

The embodiments of this application provide an object state acquisition method, a movable platform, and a storage medium, thereby ensuring that the obtained target probability distribution is more accurate.

In a first aspect, this application provides an object state acquisition method. A movable platform carries multiple sensors, and the sensors collect data on the environment in which the movable platform is located. The method includes: acquiring an initial probability distribution of the motion state of an object in the environment, where the initial probability distribution is the result of fusing the data collected by the multiple sensors and includes the probability value of each value point corresponding to the motion state; updating the probability value of each value point according to data collected by a target sensor; and determining a target probability distribution of the motion state according to the updated probability value of each value point.

In a second aspect, this application provides a movable platform carrying multiple sensors that collect data on the environment in which the movable platform is located. The movable platform includes: an acquiring module, an update module, and a first determining module. The acquiring module is used to acquire the initial probability distribution of the motion state of an object in the environment, where the initial probability distribution is the result of fusing the data collected by the multiple sensors and includes the probability value of each value point corresponding to the motion state; the update module is used to update the probability value of each value point according to data collected by a target sensor; and the first determining module is used to determine the target probability distribution of the motion state according to the updated probability value of each value point.

In a third aspect, this application provides a movable platform carrying multiple sensors that collect data on the environment in which the movable platform is located. The movable platform includes a processor configured to: acquire the initial probability distribution of the motion state of an object in the environment, where the initial probability distribution is the result of fusing the data collected by the multiple sensors and includes the probability value of each value point corresponding to the motion state; update the probability value of each value point according to data collected by a target sensor; and determine the target probability distribution of the motion state according to the updated probability value of each value point.

In a fourth aspect, this application provides a computer-readable storage medium including computer instructions, where the computer instructions are used to implement the method described in the first aspect.

This application provides an object state acquisition method, a movable platform, and a storage medium. The movable platform carries multiple sensors that collect data on the environment in which the movable platform is located. The method includes: acquiring an initial probability distribution of the motion state of an object in the environment, where the initial probability distribution is the result of fusing the data collected by the multiple sensors and includes the probability value of each value point corresponding to the motion state; updating the probability value of each value point according to data collected by a target sensor; and determining the target probability distribution of the motion state according to the updated probability value of each value point. This method may be called a post-processing method; it overcomes the loss of the raw data collected by the sensors during the process of obtaining the initial probability distribution of the motion state, thereby ensuring that the obtained target probability distribution is more accurate.
Brief Description of the Drawings

To explain the technical solutions in the embodiments of this application more clearly, the drawings needed for describing the embodiments are briefly introduced below. Obviously, the drawings described below illustrate only some embodiments of this application; for a person of ordinary skill in the art, other drawings can be obtained from them without creative effort.

Fig. 1 is an application scenario diagram provided by an embodiment of this application;

Fig. 2 is a flowchart of an object state acquisition method provided by an embodiment of this application;

Fig. 3 is a flowchart of a method for updating the probability value of each value point provided by an embodiment of this application;

Fig. 4 is a schematic diagram of the first likelihood function g(x) provided by an embodiment of this application;

Fig. 5 is a schematic diagram of the likelihood function of the velocity deviation provided by an embodiment of this application;

Fig. 6 is a flowchart of a method for determining each value point provided by an embodiment of this application;

Fig. 7 is a flowchart of a method for generating point cloud clusters provided by an embodiment of this application;

Fig. 8 is a flowchart of an object state acquisition method provided by another embodiment of this application;

Fig. 9 is a flowchart of an object state acquisition method provided by still another embodiment of this application;

Fig. 10 is a schematic diagram of a movable platform provided by an embodiment of this application;

Fig. 11 is a schematic diagram of a movable platform provided by an embodiment of this application.
Detailed Description

To make the objectives, technical solutions, and advantages of the embodiments of this application clearer, the technical solutions in the embodiments of this application are described clearly and completely below with reference to the drawings. Obviously, the described embodiments are only some rather than all of the embodiments of this application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of this application without creative effort fall within the protection scope of this application.
The problem addressed is how movable platforms such as unmanned aerial vehicles and unmanned vehicles can, in a dynamic environment, effectively fuse observation information and use it to update the state information of multiple objects online and in real time, where this state information includes the position, orientation, velocity, and other states of dynamic objects, as well as the association information of each target across different time series (i.e., whether observations at different times belong to the same target), so as to achieve multi-target tracking.
Algorithms such as Kalman filtering and particle filtering are state estimation techniques that use online observations recursively to estimate a set of time-series states. They generally include a system model and an observation model. The system model mainly constrains the relationship between states at different times and, in actual operation, is mainly used for temporal prediction; the observation model mainly constrains the relationship between observations and states. After a state prediction for the current time is obtained, the observation model and the corresponding observation can be used to update the state estimate. When both the system model and the observation model are linear, and the uncertainty in both conforms to a zero-mean Gaussian distribution with known variance, the Kalman filter is the optimal method of state estimation. When the system model or the observation model does not satisfy a linear relationship, extended Kalman filtering or unscented Kalman filtering must be used; if the uncertainty is not purely Gaussian, particle filtering can be used for sampling-based estimation.
For the multi-target tracking problem, the state to be estimated is the joint state of every target, where the state of each object may include position, velocity, orientation, angular velocity, acceleration, and other information. Usually, the state of each object is assumed to be independent, so that many filters can each estimate the state of one object. However, in multi-target tracking, unlike single-target tracking, using a filter for each object presupposes a clear correspondence between observations and objects, i.e., that an observation is indeed an observation of that object and not of another. If this relationship is unknown, data association techniques are needed to associate every observation with the objects being estimated in the system before filtering can be used for state estimation. Common association algorithms include Hungarian assignment, multiple hypothesis tracking, and joint probabilistic data association.
After the association information is obtained, the next step is to define the system model and the observation model. For the system model, if the estimated object belongs to a specific category, such as a vehicle, a vehicle body dynamics model can be used for modeling. For the observation model, the observation data source can be sensor information such as images, laser, millimeter-wave radar, or ultrasound. For images and laser, the common approach is to preprocess with vision or point cloud processing techniques to obtain two-dimensional or three-dimensional detection boxes, and then use these boxes as observations to update the state information, while assuming that these observations have a Gaussian distribution, so that standard extended Kalman filtering can be used for state estimation. Of course, the raw observations can also be used directly as observation results without any preprocessing, but such observations often fail to satisfy, or only approximately satisfy, the Gaussian distribution assumption and are difficult to handle.
For multi-target tracking, sampling-based update techniques such as particle filtering can be used, but determining the number of particles and the phenomenon of particle degeneracy greatly restrict the wide industrial application of this technique, even though techniques such as resampling have alleviated particle degeneracy. Moreover, the need for random sampling in a multi-dimensional state space imposes an excessive computational burden and thus a cost impact on practical applications.
At present, most approaches choose to first preprocess with vision or point cloud processing techniques to obtain high-quality detection results, and then use these detection results as observations. The detection results are mostly obtained in the form of three-dimensional or two-dimensional boxes, which can directly serve as observations to update the state of an object. However, results obtained through algorithmic preprocessing often cannot provide an accurate description of their uncertainty; in most cases they do not fit a single Gaussian distribution, so filtering based on the strong assumption of a Gaussian distribution often makes the filtering system inaccurate or even unstable.
Another defect of the above methods is that the filtering algorithm relies excessively on the preprocessed detection results, which often causes the raw data information to be lost in the preprocessing step, making the final filtering result unreliable. For example, multi-target tracking algorithms usually need a detection algorithm, but missed and false detections introduce additional unreliable factors into the original sensor data, causing the final result to be abnormal and significantly inconsistent with the raw data.
On the other hand, the multi-target tracking problem needs to consider data association. Most current data association techniques use the Hungarian assignment algorithm, which is simple and effective but makes a hard assignment only for the observations of the current frame. If a frame is assigned incorrectly, the misassigned observation cannot promptly update the state of the object it really corresponds to, and the wrong assignment cannot be recovered.
As described above, in the prior art, a movable platform can predict the state of an object, preprocess the data collected by the sensors, and update the state of the object with the preprocessed data to obtain the optimal state of the object. This process relies excessively on the preprocessed data, which often causes the raw data collected by the sensors to be lost during preprocessing, so that the finally determined state of the object is inaccurate.
To solve the above technical problems, this application provides an object state acquisition method, a movable platform, and a storage medium.
示例性地,本申请可适用于如下应用场景:图1为本申请一实施例提供的应用场景图,如图1所示,可移动平台11搭载多个传感器,每个传感器用于对可移动平台11所处的环境进行数据采集,可移动平台11所处的环境还包括:至少一个物体12,其中,本申请中的可移动平台11可以是无人机、无人车辆等,传感器可以是激光雷达传感器、双目视觉传感器,毫米波雷达,超声传感器等,上述多个传感器可以是同类型的传感器,如都是激光雷达传感器,或者可以是不同类型的传感器,比如是激光雷达传感器和毫米波雷达。进一步地,不同的传感器对可移动平台所处的环境采集的数据也不尽相同。比如:激光雷达传感器、双目视觉传感器,毫米波雷达采集的数据是点云数据,超声传感器采集的数据是超声波信号。
本申请的主旨思想是:可移动平台可以对物体(即可移动平台所处的环境中的任一个物体)的运动状态进行预测,并结合传感器采集的数据对运动状态进行更新。其中,可移动平台可以通过一定的算法对多个传感器采集的数据进行融合,以预测物体的运动状态。上述算法可以是卡尔曼滤波、单步粒子滤波、暴力搜索或神经网络等算法,本申请对此不做限制。
需要说明的是,由于传感器的工艺不可能达到完美,或者其他不能被人为预测到或者控制到的因素和噪声等的存在,导致传感器测量得到的数据不可能是完全准确的。因此,对多个传感器采集的数据进行融合,所得到的物体的运动状态可以被理解为一个随机变量,该随机变量符合一定的概率分布,如高斯分布、正态分布、线性分布、非线性分布、非高斯分布等,本申请对此不做限制。
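本申请不限定具体的融合算法；作为示意，下面给出一个两传感器高斯测量的逆方差加权融合的 Python 草图（其中的测量值与方差均为示例性假设），用以说明"融合结果可视为符合某一概率分布的随机变量"这一思想：

```python
# 逆方差加权融合：两个传感器对同一位置参数的高斯测量 (均值 mean, 方差 var)
# 融合后仍是高斯分布，且方差小于任一单独测量的方差
def fuse_gaussian(mean_a, var_a, mean_b, var_b):
    # 权重与方差成反比：越精确的传感器权重越大
    w_a = 1.0 / var_a
    w_b = 1.0 / var_b
    fused_var = 1.0 / (w_a + w_b)
    fused_mean = fused_var * (w_a * mean_a + w_b * mean_b)
    return fused_mean, fused_var

# 示例性假设：激光雷达测得 5.0m（方差 0.04），双目视觉测得 5.2m（方差 0.16）
mean, var = fuse_gaussian(5.0, 0.04, 5.2, 0.16)
```

融合后的均值更靠近方差较小（更精确）的那个传感器的测量值，这与后文"选择精度最高的传感器作为目标传感器"的思路是一致的。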
下面对本申请技术方案进行详细说明:
图2为本申请一实施例提供的一种物体状态获取方法的流程图,该方法的执行主体可以是可移动平台的部分或者全部,所谓可移动平台的部分可以是可移动平台的处理器。如上所述,可移动平台搭载多个传感器,传感器用于对可移动平台所处的环境进行数据采集,下面以可移动平台为执行主体对物体状态获取方法进行说明,如图2所示,该方法包括如下步骤:
步骤S201:获取环境中物体的运动状态的初始概率分布。
步骤S202:根据目标传感器采集的数据更新各个取值点的概率值。
步骤S203:根据各个取值点更新后的概率值,确定运动状态的目标概率分布。
其中,初始概率分布是对多个传感器采集的数据的融合结果,初始概率分布包括运动状态对应的各个取值点的概率值。
在本申请中，物体的运动状态包括以下至少一项：物体的位置参数、朝向参数、速度参数、加速度参数。即物体的运动状态可以是物体的位置参数、朝向参数、速度参数、加速度参数中任一项，这时运动状态符合的概率分布是该任一项参数对应的概率分布。或者，物体的运动状态可以是物体的位置参数、朝向参数、速度参数、加速度参数中至少两项参数的组合。这时运动状态符合的概率分布也是该组合参数对应的概率分布。比如：当运动状态包括：物体的位置参数和朝向参数时，运动状态符合的概率分布指的是位置参数和朝向参数联立对应的概率分布。当物体的运动状态是一个参数时，可以降低概率分布的空间维度，并且降低可移动平台的计算量。
在可移动平台获取到上述初始概率分布之后，可移动平台可以在多个传感器中选择一个传感器作为目标传感器，并通过目标传感器采集的数据更新各个取值点的概率值。其中，可移动平台可以在多个传感器中随机选择一个传感器作为目标传感器，或者，可以选择一个精度最高的传感器作为目标传感器。例如：可移动平台对激光雷达传感器、双目视觉传感器、毫米波雷达采集的点云数据进行融合，得到物体的位置参数的初始概率分布。进一步地，假设激光雷达传感器的精度高于双目视觉传感器和毫米波雷达的精度，则可移动平台将该激光雷达传感器作为目标传感器，通过其采集的点云数据对初始概率分布中各个取值点的概率值进行更新。
可选的,可移动平台可以选择多个目标传感器,这种情况下,可移动平台先根据一个目标传感器采集的数据对初始概率分布中各个取值点的概率值进行更新,得到更新后的概率值,进一步地,可移动平台再根据下一个目标传感器采集的数据对更新后的概率值进行更新,直至通过所有的目标传感器采集的数据将各个取值点的概率值更新完毕为止。
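上述"先用一个目标传感器更新、再用下一个目标传感器继续更新"的流程，可以概括为如下示意性 Python 草图，其中以取值点的下标代表取值点，似然函数 lik 仅为示例性假设：

```python
def sequential_update(probs, observations, lik):
    """probs: 各取值点的概率值 (以下标 0..n-1 示意取值点);
    observations: 多个目标传感器各自采集的数据;
    lik(z, x): 在取值点 x 的得到条件下, 采集到数据 z 的似然概率"""
    for z in observations:
        probs = [p * lik(z, x) for x, p in enumerate(probs)]
        mu = sum(probs)                  # 归一化因子
        probs = [p / mu for p in probs]  # 本轮更新后的概率值, 作为下一轮的输入
    return probs

# 示例性假设的似然: 观测 z 越接近取值点 x, 似然越大
lik = lambda z, x: 1.0 / (1.0 + abs(z - x))
updated = sequential_update([0.25, 0.25, 0.25, 0.25], [2, 2], lik)
```

两个目标传感器的数据均指向取值点 2，逐次更新后该取值点的概率值被显著抬高。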
需要说明的是，当物体的运动状态包括：速度参数和/或加速度参数时，速度参数可以由激光雷达传感器直接获得，也可以通过对当前帧和上一帧的位置参数以及朝向参数差分得到；同样地，加速度参数可以通过对当前帧和上一帧的速度参数差分得到。而当前帧的位置参数和朝向参数可以通过激光雷达传感器或者毫米波雷达在当前帧采集的点云数据确定，上一帧的位置参数和朝向参数可以通过激光雷达传感器或者毫米波雷达在上一帧采集的点云数据确定。因此，若初始概率分布是基于多个传感器在当前帧和上一帧采集的数据得到的，则目标传感器采集的数据也是当前帧和上一帧采集的数据。例如：速度参数对应的初始概率分布是基于多个传感器在当前帧和上一帧采集的点云数据得到的，因此目标传感器采集的数据也是目标传感器在当前帧和上一帧采集的点云数据。
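以速度参数为例，"通过当前帧和上一帧的位置参数差分得到速度参数"可以写成如下草图（位置数值与帧间隔均为示例性假设）：

```python
def velocity_by_difference(pos_prev, pos_curr, dt):
    """用相邻两帧的位置参数差分近似速度参数 (单位: m/s)"""
    return tuple((c - p) / dt for p, c in zip(pos_prev, pos_curr))

# 示例: 上一帧位置 (0.0, 0.0)m, 当前帧位置 (1.0, 0.5)m, 帧间隔 0.1s
v = velocity_by_difference((0.0, 0.0), (1.0, 0.5), 0.1)
```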
当物体的运动状态为位置参数或者朝向参数时，由于当前帧的位置参数和朝向参数可以通过激光雷达传感器或者毫米波雷达在当前帧采集的点云数据确定，因此，若初始概率分布是基于多个传感器在当前帧采集的数据得到的，则目标传感器采集的数据也是当前帧采集的数据。例如：位置参数对应的初始概率分布是基于多个传感器在当前帧采集的点云数据得到的，因此目标传感器采集的数据也是目标传感器在当前帧采集的点云数据。
可选的，在可移动平台对各个取值点的概率值更新之后，可移动平台可以选择概率值大于预设阈值的至少一个目标取值点，并根据至少一个目标取值点的概率值，得到运动状态的目标概率分布。例如：可移动平台选择概率值大于预设阈值的多个目标取值点，并将多个目标取值点的概率值的平均值作为目标概率分布的均值，将多个目标取值点的概率值的方差作为目标概率分布的方差。或者，可移动平台可以选择概率值大于预设阈值的一个目标取值点，在该目标取值点的预设半径内进行采样，得到多个其他目标取值点，将所有目标取值点的概率值的平均值作为目标概率分布的均值，将所有目标取值点的概率值的方差作为目标概率分布的方差。
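上述"选取概率值大于预设阈值的目标取值点，并以其概率值的均值与方差确定目标概率分布"的做法，可以写成如下草图（阈值与概率值均为示例性假设；选不出目标取值点时返回 None，对应报警分支）：

```python
def target_distribution(probs, threshold):
    """probs: 各取值点更新后的概率值; 返回 (均值, 方差), 无法选出目标取值点时返回 None"""
    selected = [p for p in probs if p > threshold]
    if not selected:
        return None                      # 此时可移动平台应推送报警信息
    mean = sum(selected) / len(selected)
    var = sum((p - mean) ** 2 for p in selected) / len(selected)
    return mean, var

result = target_distribution([0.1, 0.3, 0.4, 0.2], 0.15)
```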
当可移动平台无法选择概率值大于预设阈值的至少一个目标取值点时，可移动平台可以发送报警信息，以提示用户可移动平台存在异常。该报警信息可以是语音报警信息或者文字报警信息，又或者是通过警示灯闪烁所形成的报警信息，本申请对此不做限制。
可选的,可移动平台确定得到的目标概率分布还可以作为下一帧的先验来利用。
在本申请中,可移动平台可以根据目标传感器采集的数据更新各个取值点的概率值,并根据各个取值点更新后的概率值,确定运动状态的目标概率分布。该方法可以被称为一种后处理方法,通过这种后处理方法可以克服传感器采集的原始数据在运动状态的初始概率分布获得过程中损失的问题,从而可以保证获取的目标概率分布更加准确。同时,本申请技术方案还适用于运动状态不符合高斯分布,或者,运动状态具有较大的非线性度的情况。
下面针对上述步骤S202进行详细说明:
图3为本申请一实施例提供的更新各个取值点的概率值的方法流程图,如图3所示,该方法包括如下步骤:
步骤S301:针对任一个取值点,根据目标传感器采集的数据,确定取值点的后验概率。
可移动平台可以根据目标传感器采集的数据确定取值点的似然概率，并计算取值点的概率值和似然概率的乘积，得到取值点的后验概率。取值点的似然概率是在取值点的得到条件下，目标传感器采集的数据的采集概率；取值点的后验概率是目标传感器采集的数据的采集条件下，得到取值点的概率。例如：假设某取值点x_i的概率值为f(x_i)，其似然概率为f(z_i|x_i)，z_i表示目标传感器采集的数据，那么根据贝叶斯定理，该取值点的后验概率f(x_i|z_i)=f(z_i|x_i)f(x_i)。
或者，可移动平台可以根据目标传感器采集的数据确定取值点的似然概率，并计算取值点的概率值和似然概率的乘积，得到乘积结果，并计算该乘积结果与归一化因子的商，以得到取值点的后验概率。例如：假设某取值点x_i的概率值为f(x_i)，其似然概率为f(z_i|x_i)，z_i表示目标传感器采集的数据，那么该取值点的后验概率f(x_i|z_i)=f(z_i|x_i)f(x_i)/μ，其中μ为归一化因子。
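上述带归一化因子 μ 的后验计算，可以写成如下示意性 Python 草图（先验与似然的数值均为示例性假设）：

```python
def posterior(priors, likelihoods):
    """priors[i] = f(x_i): 各取值点的概率值;
    likelihoods[i] = f(z|x_i): 各取值点的似然概率;
    返回按 f(x_i|z) = f(z|x_i)f(x_i)/μ 归一化后的后验概率列表"""
    products = [p * l for p, l in zip(priors, likelihoods)]
    mu = sum(products)                   # 归一化因子 μ
    return [q / mu for q in products]

post = posterior([0.2, 0.5, 0.3], [0.1, 0.6, 0.3])
```

当 μ 取各取值点"先验×似然"之和时，更新后的概率值自动归一化为 1。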
需要说明的是,针对不同的目标传感器采集的数据,可移动平台确定取值点的似然概率的方式也不尽相同。
例如：若目标传感器为激光传感器，则物体通过点云簇表示，且可移动平台将获取点云簇中的取值点的多个第一似然概率，其中，第一似然概率是在取值点的得到条件下，一个点云粒子的位置的采集概率。进一步地，可移动平台对多个第一似然概率进行累计，即

f(z_i|x_i) = ∏_{k=0}^{m} f(z_{i,k}|x_i)

以得到取值点的似然概率。其中，每个z_{i,k}表示点云簇中的第k个点云粒子的位置，f(z_{i,k}|x_i)表示一个第一似然概率，即在取值点x_i的得到条件下，z_{i,k}的采集概率，m+1为点云簇中点云粒子的个数。假设可移动平台根据z_{i,k}确定目标物体与可移动平台的距离为r_{i,k}，这样可以定义上述的取值点的第一似然概率为g(r_{i,k})=f(z_{i,k}|x_i)。
图4为本申请一实施例提供的第一似然函数g(x)的示意图,如图4所示,当x=20m时,其对应的第一似然概率最大,其为0.4。当0<x<20时,可移动平台与物体的理想距离(即20)远于实际激光测距(x),这种情况可能是由于物体被遮挡造成的,因而这类x被赋予的第一似然概率可以是一个恒定概率,如果x>20,说明可移动平台与物体的理想距离小于实际激光测距,这种情况违背物理常识,因而这类x被赋予的第一似然概率接近于0,或者等于0。
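图4所描述的第一似然函数可以用如下分段函数近似（峰值 0.4、遮挡恒定概率 0.05、峰宽 1m 等数值均为示例性假设，并非对图4的精确拟合）：

```python
def g(x, ideal=20.0, peak=0.4, occluded=0.05, width=1.0):
    """激光测距 x 的第一似然 (示意):
    x 超过理想距离 ideal 违背物理常识 → 0;
    x 远小于 ideal 视为可能被遮挡 → 恒定概率 occluded;
    x 接近 ideal → 以线性峰近似, 在 x = ideal 处取最大值 peak"""
    if x > ideal:
        return 0.0
    if x <= ideal - width:
        return occluded
    return max(occluded, peak * (1.0 - (ideal - x) / width))

vals = [g(10.0), g(20.0), g(25.0)]
```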
若目标传感器为双目视觉传感器,则双目视觉传感器可以获取物体的多幅图像,可移动平台获取这些图像,并对这些图像进行处理,以得到用来表示物体的点云簇。基于此,可移动平台可以在点云簇中采样稀疏点,针对这些稀疏点,可移动平台采用上述当目标传感器为激光传感器时,确定取值点的似然概率的方式,得到各个稀疏点的似然概率。
若目标传感器为毫米波雷达，则毫米波雷达可以获取用来表示物体的点云数据，可移动平台可以利用径向速度来评估物体的速度，即将取值点对应的物体指向毫米波雷达方向的速度分量与毫米波雷达观测到的径向速度进行对比，来计算取值点的似然概率。图5为本申请一实施例提供的速度偏差的似然函数的示意图，如图5所示，当速度偏差为0m/s时，对应的似然概率最大，为0.8。
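类似地，图5所示的速度偏差似然可以用一个在偏差为 0 处取峰值的函数近似（峰值 0.8 与衰减宽度 sigma 均为示例性假设）：

```python
import math

def radial_velocity_likelihood(v_radial_obs, v_radial_pred, sigma=0.5, peak=0.8):
    """速度偏差 (毫米波雷达观测的径向速度 - 取值点对应的径向速度分量)
    越接近 0, 似然概率越大; 此处以高斯形衰减示意"""
    dev = v_radial_obs - v_radial_pred
    return peak * math.exp(-0.5 * (dev / sigma) ** 2)

l0 = radial_velocity_likelihood(3.0, 3.0)   # 偏差为 0 m/s
l1 = radial_velocity_likelihood(3.0, 4.0)   # 偏差为 1 m/s
```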
步骤S302:根据各个取值点的后验概率,更新各个取值点的概率值。
可选的，可移动平台将每个取值点的后验概率作为该取值点新的概率值。或者，可移动平台计算每个取值点的后验概率和该取值点原概率值的平均值，以得到该取值点新的概率值。
在本申请中，可移动平台针对任一个取值点，根据目标传感器采集的数据，确定取值点的后验概率。可移动平台可以根据各个取值点的后验概率，更新各个取值点的概率值。由此可知，本申请在更新各个取值点的概率值时结合了传感器采集的原始数据。通过这种方法可以克服传感器采集的原始数据在运动状态的初始概率分布获得过程中损失的问题，从而可以保证获取的目标概率分布更加准确。同时，本申请技术方案还适用于运动状态不符合高斯分布，或者，运动状态具有较大的非线性度的情况。需要说明的是，当物体的运动状态包括：速度参数和/或加速度参数时，可移动平台在确定似然概率时可以引入车体动力学模型，从而使得到的后验概率符合车体运动模型，进而得到的更新后的取值点的概率值更加准确。
下面对如何确定上述各个取值点进行说明:
可选方式一:图6为本申请一实施例提供的确定各个取值点的方法流程图,如图6所示,该方法包括如下步骤:
步骤S601:以初始概率分布中,概率值最大的取值点为中心,以目标传感器的融合精度值为半径,设置取值范围。
步骤S602:在取值范围中等间隔确定各个取值点。
以物体的运动状态为速度参数为例，假设在初始概率分布中，概率值最大的取值点为5m/s，速度参数对应的融合精度值为0.5，那么得到的取值范围为[4.5, 5.5]。进一步地，可移动平台对[4.5, 5.5]进行等间隔划分，比如：设置间隔为0.1，则可移动平台在[4.5, 5.5]中确定的各个取值点分别为：4.5、4.6、4.7、4.8、4.9、5.0、5.1、5.2、5.3、5.4、5.5。
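上例中取值范围与等间隔取值点的构造，可以写成如下草图（间隔 0.1 为示例性假设）：

```python
def build_value_points(center, radius, step):
    """以概率值最大的取值点 center 为中心、融合精度值 radius 为半径,
    在 [center - radius, center + radius] 内等间隔确定各个取值点"""
    n = round(2 * radius / step)
    # round(..., 10) 用于消除浮点累加误差, 使取值点为整洁的十进制数
    return [round(center - radius + i * step, 10) for i in range(n + 1)]

points = build_value_points(5.0, 0.5, 0.1)   # 对应正文中 4.5 ~ 5.5 的 11 个取值点
```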
可选方式二:可移动平台根据初始概率分布的概率密度在对应运动状态的值域内确定各个取值点,其中,概率密度越大的取值点与相邻取值点之间的间隔越小。
例如:假设物体的运动状态符合高斯分布,且该运动状态为该物体的位置参数、朝向参数、速度参数、加速度参数中至少两项参数的组合,则该高斯分布为一个椭球体,且对取值点的采样密度和空间分布概率密度保持正比关系,即概率密度越大的取值点与相邻取值点之间的间隔越小,这样得到的各个取值点基本能够保证符合初始概率分布。
在本申请中,可移动平台可以等间隔采样或者基于高斯分布进行确定性采样,而非随机性采样,从而可以尽量避免随机采样的使用频率,进而可以获取较为精准的取值点。
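其中"概率密度越大处取值点越密"的确定性采样，可以通过在累积分布函数（CDF）上等分位反查实现；下面以一维高斯分布为例给出示意性草图（均值、方差与取值点个数均为示例性假设）：

```python
import statistics

def density_proportional_points(mean, std, n):
    """在 CDF 上取 n 个等分位点后反查取值:
    概率密度大的区域分位点密集, 密度小的尾部分位点稀疏"""
    dist = statistics.NormalDist(mean, std)
    return [dist.inv_cdf((i + 0.5) / n) for i in range(n)]

pts = density_proportional_points(0.0, 1.0, 9)
# 分布中心附近相邻取值点的间隔, 小于尾部相邻取值点的间隔
center_gap = pts[5] - pts[4]
tail_gap = pts[1] - pts[0]
```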
可选的，目标传感器对环境采集的数据为点云数据，物体通过点云簇表示，基于此，各个取值点为点云簇中各个点云粒子的位置，各个取值点的概率值为各个点云粒子的位置的概率值。考虑到传感器的工艺不可能达到完美，或者其他不能被人为预测或者控制的因素和噪声等的存在，传感器测量得到的数据不可能是完全准确的。因此，上述物体对应的第一点云簇和其他物体对应的第二点云簇可能会存在冲突，即存在一些点云粒子既属于第一点云簇，也属于第二点云簇。基于此，为了解决这种冲突，下面说明生成点云簇的方法：
图7为本申请一实施例提供的生成点云簇的方法流程图,如图7所示,该方法包括如下步骤:
步骤S701:在物体的第一点云簇中确定待检点云粒子,待检点云粒子为概率值大于第一预设阈值的点云粒子。
步骤S702:检测其他物体对应的第二点云簇中是否存在与待检测点云粒子的距离小于预设距离的点云粒子。
步骤S703:若第二点云簇中存在与待检测点云粒子的距离小于预设距离的点云粒子,则可移动平台根据第一点云簇中的点云粒子的概率分布和第二点云簇的点云粒子的概率分布,计算第一点云簇和第二点云簇的联合概率。
步骤S704:根据联合概率大于第二预设阈值的点云粒子生成新的点云簇。
值得说明的是,该新的点云簇有可能仍然对应所述物体,也可以基于新的点云簇重新确定出新的目标物。
其中，上述第一预设阈值可以根据实际情况设置，比如：第一预设阈值可以为0.6、0.8等。上述预设距离也可以根据实际情况设置，比如：预设距离为10cm、20cm等。上述第二预设阈值也可以根据实际情况设置，比如：第二预设阈值可以为0.6、0.8等。本申请对如何设置第一预设阈值、第二预设阈值和预设距离不做限制。
可选的,假设将与待检测点云粒子的距离小于预设距离的点云粒子称为该待检测点云粒子对应的第一点云粒子,该可移动平台可以计算该待检测点云粒子的概率值与第一点云粒子的概率值的乘积,以得到该待检测点云粒子的概率值与第一点云粒子的联合概率。由于第一点云簇中可能存在多个待检测点云粒子,可移动平台可以计算每个待检测点云粒子与其对应的第一点云粒子的联合概率,以得到第一点云簇和第二点云簇的联合概率。
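上述联合概率的计算可以概括为如下一维简化的示意性草图（距离阈值、概率阈值与粒子数据均为示例性假设）：

```python
def joint_probability(cluster_a, cluster_b, max_dist, prob_threshold):
    """cluster_*: [(位置, 概率值), ...] (一维位置简化示意);
    对第一点云簇中概率值大于阈值的待检点云粒子,
    在第二点云簇中查找距离小于 max_dist 的第一点云粒子,
    二者概率值之积即为该待检粒子的联合概率"""
    joints = []
    for pos_a, p_a in cluster_a:
        if p_a <= prob_threshold:
            continue                      # 仅对待检点云粒子计算
        for pos_b, p_b in cluster_b:
            if abs(pos_a - pos_b) < max_dist:
                joints.append((pos_a, p_a * p_b))
    return joints

res = joint_probability([(1.0, 0.9), (2.0, 0.3)], [(1.05, 0.8), (5.0, 0.7)], 0.1, 0.6)
```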
可选的,可移动平台可以将联合概率大于第二预设阈值的点云粒子组成物体对应的新的点云簇,并通过联合概率对新的点云簇进行局部最优估计,反推新的点云簇中各个点云粒子在第一点云簇的概率值,从而更新物体的概率分布。
需要说明的是，上述步骤S701至步骤S704可以在步骤S203之后执行，因此，步骤S701中的待检点云粒子指的是在目标概率分布中，概率值大于第一预设阈值的点云粒子。步骤S703中第一点云簇中的点云粒子的概率分布指的是第一点云簇中的点云粒子的目标概率分布。相应的，可移动平台反推新的点云簇中各个点云粒子在第一点云簇的概率值，从而更新物体的目标概率分布。或者，上述步骤S701至步骤S704可以在步骤S203之前执行，因此，步骤S701中的待检点云粒子指的是在初始概率分布中，概率值大于第一预设阈值的点云粒子。步骤S703中第一点云簇中的点云粒子的概率分布指的是第一点云簇中的点云粒子的初始概率分布。相应的，可移动平台反推新的点云簇中各个点云粒子在第一点云簇的概率值，从而更新物体的初始概率分布。
下面结合实例对上述步骤S701至步骤S704进行说明:
当激光雷达传感器或者毫米波雷达采集一辆卡车的点云数据时，由于卡车的车头和车身之间存在较大的间隙，因此，车头和车身可能会被识别为两个物体，即车头用第一点云簇表示，车身用第二点云簇表示。基于此，可移动平台可以在物体的第一点云簇中确定待检点云粒子，并检测与待检点云粒子距离小于预设距离的第一点云粒子，由此可知，该第一点云粒子应该是车头和车身的接合处的点云粒子。进一步地，可移动平台计算每个待检点云粒子与对应的第一点云粒子的概率值的乘积，得到第一点云簇和第二点云簇的联合概率，这时如果某个点云粒子的联合概率大于第二预设阈值，说明该点云粒子既属于第一点云簇，也属于第二点云簇，基于此，可移动平台可以将这类点云粒子组合成新的点云簇。进一步地，可移动平台通过这些离散的点云粒子的联合概率确定新的点云簇的联合概率分布，并基于该联合概率分布进行局部最优估计，通过该局部最优估计，反推新的点云簇中各个点云粒子在第一点云簇的概率值，从而更新车头的概率分布。
在本申请中，当物体和其他物体存在冲突时，即存在联合概率大于第二预设阈值的点云粒子，可移动平台可以根据联合概率大于第二预设阈值的点云粒子生成新的点云簇。基于此，可移动平台确定新的点云簇的联合概率分布，并通过联合概率对新的点云簇进行局部最优估计，反推新的点云簇中各个点云粒子在第一点云簇的概率值，以更新物体的概率分布，使得其他物体对应的第二点云簇中不存在与待检测点云粒子的距离小于预设距离的点云粒子，从而解决物体和其他物体之间的冲突。
示例性地,图8为本申请另一实施例提供的一种物体状态获取方法的流程图,如图8所示,在步骤S203之后,物体状态获取方法还包括如下步骤:
步骤S801:判断初始概率分布和目标概率分布是否满足一致性条件。
步骤S802:若初始概率分布和目标概率分布不满足一致性条件,则可移动平台推送报警信息,以提示用户可移动平台存在异常。
可选的，通过卡方检验方式判断初始概率分布和目标概率分布是否满足一致性条件。或者，可移动平台在初始概率分布中选择至少一个第一离散点，并在目标概率分布中选择与至少一个第一离散点一一对应的第二离散点，计算每个第一离散点与第二离散点的概率值之差，得到概率差值，对所有的概率差值求和，得到求和结果。若该求和结果大于预设结果，则可移动平台确定初始概率分布和目标概率分布不满足一致性条件，否则，确定初始概率分布和目标概率分布满足一致性条件。
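其中第二种一致性判断方式（对应离散点的概率差求和并与预设结果比较）可以写成如下草图（此处对概率差取绝对值后求和，预设结果 0.5 为示例性假设）：

```python
def is_consistent(initial_probs, target_probs, max_total_diff):
    """在初始概率分布与目标概率分布上取一一对应的离散点概率值,
    概率差绝对值求和后与预设结果比较, 超出则判定不满足一致性条件"""
    total = sum(abs(p - q) for p, q in zip(initial_probs, target_probs))
    return total <= max_total_diff

ok = is_consistent([0.2, 0.5, 0.3], [0.25, 0.45, 0.3], 0.5)    # 两分布接近
bad = is_consistent([0.2, 0.5, 0.3], [0.9, 0.05, 0.05], 0.5)   # 两分布相差较远
```

bad 为 False 时即对应正文中"推送报警信息"的分支。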
进一步地，若初始概率分布和目标概率分布不满足一致性条件，则可移动平台推送报警信息，该报警信息可以是语音报警信息或者文字报警信息，又或者是通过警示灯闪烁所形成的报警信息，本申请对此不做限制。
在本申请中,若初始概率分布和目标概率分布不满足一致性条件,说明初始概率分布和目标概率分布相差较远,这种情况下,可移动平台推送报警信息,以提示用户可移动平台存在异常,从而提高可移动平台的可靠性。
示例性地,图9为本申请再一实施例提供的一种物体状态获取方法的流程图,如图9所示,在步骤S203之后,物体状态获取方法还包括如下步骤:
步骤S901:根据可移动平台的运动状态的取值点,和对应物体的运动状态的目标概率分布中的取值点,确定物体的运动状态的绝对取值。
其中，可移动平台可以通过惯性测量单元（Inertial Measurement Unit，IMU）、全球定位系统（Global Positioning System，GPS）、轮子编码器的里程计（wheel odometry）和视觉里程计（visual odometry）等获得自身运动估计信息（ego-motion），即可移动平台的运动状态的取值点，而物体的运动状态的目标概率分布中的取值点实际是一个相对取值，因此，可移动平台可以将自身运动状态的取值点和物体的运动状态的目标概率分布中对应的取值点求和，以得到物体的运动状态的绝对取值。
例如：可移动平台可以通过自身的GPS获取自己的位置参数。由于可移动平台的GPS也会存在误差，因此可以将可移动平台的位置参数理解为随机变量，该随机变量符合一定的概率分布。基于此，可移动平台可以在该概率分布中选择与目标概率分布中的位置参数对应的位置参数，并对对应的位置参数求和，以得到物体绝对的位置参数。
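上述"自身运动状态取值与相对取值相加得到绝对取值"的关系，可以用如下草图表达（速度数值均为示例性假设）：

```python
def absolute_state(ego_value, relative_value):
    """可移动平台自身运动状态的取值 + 物体相对运动状态的取值 = 物体的绝对取值"""
    return tuple(e + r for e, r in zip(ego_value, relative_value))

# 示例: 本车速度 (10, 0) m/s, 目标物体相对速度 (-3, 1) m/s
abs_v = absolute_state((10.0, 0.0), (-3.0, 1.0))
```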
在无人机或者无人车应用中,如果无人机或者无人车自身的状态未知,这样对方物体的绝对位置和速度也是不可观的,这种情况下只能估计相对的位置和速度,当然,也可以维护一个针对本车状态估计的滤波器或者定位算法,可以去融合IMU,GPS,轮子里程计,高精度地图,视觉,激光甚至是毫米波雷达等传感器进行定位,当得到这些信息后可以通过坐标转换得到物体的绝对状态估计,从而更加自然的引入动力学模型和观测模型。
在本申请中,可以根据可移动平台的运动状态的取值点,和对应物体的运动状态的目标概率分布中的取值点,确定物体的运动状态的绝对取值。需要说明的是,通常传感器采集的数据都是相对数据,对于速度参数来说,因相对速度变化较大,可能会发生前后帧的跳变,不利于通过传感器采集的速度参数来更新概率值,所以可以利用可移动平台本身的运动参数,计算绝对取值,进而减少检测到的物体的运动状态取值的跳变。
图10为本申请一实施例提供的一种可移动平台的示意图,可移动平台搭载多个传感器,传感器用于对可移动平台所处的环境进行数据采集,如图10所示,该可移动平台包括:
获取模块1001,用于获取环境中物体的运动状态的初始概率分布,初始概率分布是对多个传感器采集的数据的融合结果,初始概率分布包括运动状态对应的各个取值点的概率值。
更新模块1002,用于根据目标传感器采集的数据更新各个取值点的概率值。
第一确定模块1003,用于根据各个取值点更新后的概率值,确定运动状态的目标概率分布。
可选的,更新模块1002包括:确定子模块和更新子模块,其中确定子模块用于针对任一个取值点,根据目标传感器采集的数据,确定取值点的后验概率,取值点的后验概率是目标传感器采集的数据的采集条件下,得到取值点的概率。更新子模块用于根据各个取值点的后验概率,更新各个取值点的概率值。
可选的,更新子模块具体用于:根据目标传感器采集的数据确定取值点的似然概率,取值点的似然概率是在取值点的得到条件下,目标传感器采集 的数据的采集概率。计算取值点的概率值和似然概率的乘积,得到取值点的后验概率。
可选的,可移动平台还包括:设置模块1004和第二确定模块1005。其中设置模块1004用于以初始概率分布中,概率值最大的取值点为中心,以目标传感器的融合精度值为半径,设置取值范围。第二确定模块1005用于在取值范围中等间隔确定各个取值点。
可选的,可移动平台还包括:第三确定模块1006,用于根据初始概率分布的概率密度在对应运动状态的值域内确定各个取值点,其中,概率密度越大的取值点与相邻取值点之间的间隔越小。
可选的,目标传感器对环境采集的数据为点云数据,物体通过点云簇表示,各个取值点为点云簇中各个点云粒子的位置,各个取值点的概率值为各个点云粒子的位置的概率值。
可选的,可移动平台还包括:
第四确定模块1007,用于在物体的第一点云簇中确定待检点云粒子,待检点云粒子为概率值大于第一预设阈值的点云粒子。
检测模块1008,用于检测其他物体对应的第二点云簇中是否存在与待检测点云粒子的距离小于预设距离的点云粒子。
计算模块1009,用于若第二点云簇中存在与待检测点云粒子的距离小于预设距离的点云粒子,则根据第一点云簇中的点云粒子的概率分布和第二点云簇的点云粒子的概率分布,计算第一点云簇和第二点云簇的联合概率。
生成模块1010,用于根据联合概率大于第二预设阈值的点云粒子生成新的点云簇。
可选的,初始概率分布是基于多个传感器在当前帧和上一帧采集数据得到的。更新模块1002具体用于:根据目标传感器在当前帧和上一帧采集的数据更新取值点的概率值。
可选的,可移动平台还包括:
判断模块1011,用于在更新模块根据目标传感器采集的数据更新各个取值点的概率值,得到运动状态的目标概率分布之后,判断初始概率分布和目标概率分布是否满足一致性条件。
推送模块1012,用于若初始概率分布和目标概率分布不满足一致性条件, 则推送报警信息,以提示用户可移动平台存在异常。
可选的,判断模块1011具体用于:通过卡方检验方式判断初始概率分布和目标概率分布是否满足一致性条件。
可选的,可移动平台还包括:第五确定模块1013,用于在第一确定模块根据各个取值点更新后的概率值,确定运动状态的目标概率分布之后,根据可移动平台的运动状态的取值点,和对应物体的运动状态的目标概率分布中的取值点,确定物体的运动状态的绝对取值。
可选的,多个传感器包括以下至少一种:激光雷达传感器、双目视觉传感器,毫米波雷达,超声传感器。
可选的,运动状态包括以下至少一项:物体的位置参数、朝向参数、速度参数、加速度参数。
综上,本申请提供一种可移动平台,该可移动平台可以执行上述的物体状态获取方法,其内容和效果可参考方法实施例部分,对此不再赘述。
图11为本申请一实施例提供的一种可移动平台的示意图,如图11所示,该可移动平台包括:多个传感器1101,传感器用于对可移动平台所处的环境进行数据采集,至少一个处理器1102以及与至少一个处理器通信连接的存储器1103。其中图11中以包括两个传感器1101和一个处理器1102为例。
处理器1102用于:获取环境中物体的运动状态的初始概率分布,初始概率分布是对多个传感器采集的数据的融合结果,初始概率分布包括运动状态对应的各个取值点的概率值;根据目标传感器采集的数据更新各个取值点的概率值;根据各个取值点更新后的概率值,确定运动状态的目标概率分布。
可选的,处理器1102具体用于:针对任一个取值点,根据目标传感器采集的数据,确定取值点的后验概率,取值点的后验概率是目标传感器采集的数据的采集条件下,得到取值点的概率;根据各个取值点的后验概率,更新各个取值点的概率值。
可选的,处理器1102具体用于:根据目标传感器采集的数据确定取值点的似然概率,取值点的似然概率是在取值点的得到条件下,目标传感器采集的数据的采集概率;计算取值点的概率值和似然概率的乘积,得到取值点的后验概率。
可选的,处理器1102还用于:以初始概率分布中,概率值最大的取值点为中心,以目标传感器的融合精度值为半径,设置取值范围;在取值范围中等间隔确定各个取值点。
可选的,处理器1102还用于:根据初始概率分布的概率密度在对应运动状态的值域内确定各个取值点,其中,概率密度越大的取值点与相邻取值点之间的间隔越小。
可选的,目标传感器对环境采集的数据为点云数据,物体通过点云簇表示,各个取值点为点云簇中各个点云粒子的位置,各个取值点的概率值为各个点云粒子的位置的概率值。
可选的,处理器1102还用于:在物体的第一点云簇中确定待检点云粒子,待检点云粒子为概率值大于第一预设阈值的点云粒子;检测其他物体对应的第二点云簇中是否存在与待检测点云粒子的距离小于预设距离的点云粒子;若第二点云簇中存在与待检测点云粒子的距离小于预设距离的点云粒子,则根据第一点云簇中的点云粒子的概率分布和第二点云簇的点云粒子的概率分布,计算第一点云簇和第二点云簇的联合概率;根据联合概率大于第二预设阈值的点云粒子生成新的点云簇。
可选的,初始概率分布是基于多个传感器在当前帧和上一帧采集数据得到的;处理器1102具体用于:根据目标传感器在当前帧和上一帧采集的数据更新取值点的概率值。
可选的,处理器1102还用于:在根据目标传感器采集的数据更新各个取值点的概率值,得到运动状态的目标概率分布之后,判断初始概率分布和目标概率分布是否满足一致性条件;若初始概率分布和目标概率分布不满足一致性条件,则推送报警信息,以提示用户可移动平台存在异常。
可选的,处理器1102具体用于:通过卡方检验方式判断初始概率分布和目标概率分布是否满足一致性条件。
可选的,处理器1102还用于:在根据各个取值点更新后的概率值,确定运动状态的目标概率分布之后,根据可移动平台的运动状态的取值点,和对应物体的运动状态的目标概率分布中的取值点,确定物体的运动状态的绝对取值。
可选的，多个传感器包括以下至少一种：激光雷达传感器、双目视觉传感器，毫米波雷达，超声传感器。
可选的,运动状态包括以下至少一项:物体的位置参数、朝向参数、速度参数、加速度参数。
需要说明的是，本申请涉及的处理器可以是电机控制器（Motor Control Unit，简称：MCU）、中央处理单元（Central Processing Unit，简称：CPU），还可以是其他通用处理器、数字信号处理器（Digital Signal Processor，简称：DSP）、专用集成电路（Application Specific Integrated Circuit，简称：ASIC）等。通用处理器可以是微处理器或者该处理器也可以是任何常规的处理器等。结合本申请实施例所公开的方法的步骤可以直接体现为硬件处理器执行完成，或者用处理器中的硬件及软件模块组合执行完成。
本领域普通技术人员可以理解:实现上述各方法实施例的全部或部分步骤可以通过程序指令相关的硬件来完成。前述的程序可以存储于一计算机可读取存储介质中。该程序在执行时,执行包括上述各方法实施例的步骤;而前述的存储介质包括:ROM、RAM、磁碟或者光盘等各种可以存储程序代码的介质。
本申请还提供一种计算机程序产品,该程序产品包括计算机指令,计算机指令用于实现上述各方法实施例的步骤。其内容和效果可参考方法实施例部分,对此不再赘述。
最后应说明的是:以上各实施例仅用以说明本申请的技术方案,而非对其限制;尽管参照前述各实施例对本申请进行了详细的说明,本领域的普通技术人员应当理解:其依然可以对前述各实施例所记载的技术方案进行修改,或者对其中部分或者全部技术特征进行等同替换;而这些修改或者替换,并不使相应技术方案的本质脱离本申请各实施例技术方案的范围。

Claims (27)

  1. 一种物体状态获取方法,其特征在于,可移动平台搭载多个传感器,所述传感器用于对可移动平台所处的环境进行数据采集,所述方法包括:
    获取所述环境中物体的运动状态的初始概率分布,所述初始概率分布是对多个所述传感器采集的数据的融合结果,所述初始概率分布包括所述运动状态对应的各个取值点的概率值;
    根据目标传感器采集的数据更新所述各个取值点的概率值;
    根据所述各个取值点更新后的概率值,确定所述运动状态的目标概率分布。
  2. 根据权利要求1所述的方法,其特征在于,所述根据目标传感器采集的数据更新所述各个取值点的概率值,包括:
    针对任一个所述取值点,根据目标传感器采集的数据,确定所述取值点的后验概率,所述取值点的所述后验概率是所述目标传感器采集的数据的采集条件下,得到所述取值点的概率;
    根据所述各个取值点的后验概率,更新所述各个取值点的概率值。
  3. 根据权利要求2所述的方法,其特征在于,所述根据目标传感器采集的数据,确定所述取值点的后验概率,包括:
    根据目标传感器采集的数据确定所述取值点的似然概率,所述取值点的似然概率是在所述取值点的得到条件下,所述目标传感器采集的数据的采集概率;
    计算所述取值点的概率值和所述似然概率的乘积,得到所述取值点的后验概率。
  4. 根据权利要求1所述的方法,其特征在于,还包括:
    以所述初始概率分布中,概率值最大的取值点为中心,以所述目标传感器的融合精度值为半径,设置取值范围;
    在所述取值范围中等间隔确定所述各个取值点。
  5. 根据权利要求1所述的方法,其特征在于,还包括:
    根据所述初始概率分布的概率密度在对应所述运动状态的值域内确定所述各个取值点,其中,概率密度越大的取值点与相邻取值点之间的间隔越小。
  6. 根据权利要求1-5任一项所述的方法,其特征在于,
    所述目标传感器对所述环境采集的数据为点云数据,所述物体通过点云簇表示,所述各个取值点为所述点云簇中各个点云粒子的位置,所述各个取值点的概率值为所述各个点云粒子的位置的概率值。
  7. 根据权利要求6所述的方法,其特征在于,还包括:
    在所述物体的第一点云簇中确定待检点云粒子,所述待检点云粒子为概率值大于第一预设阈值的点云粒子;
    检测其他物体对应的第二点云簇中是否存在与所述待检测点云粒子的距离小于预设距离的点云粒子;
    若所述第二点云簇中存在与所述待检测点云粒子的距离小于预设距离的点云粒子,则根据第一点云簇中的点云粒子的概率分布和所述第二点云簇的点云粒子的概率分布,计算所述第一点云簇和第二点云簇的联合概率;
    根据所述联合概率大于第二预设阈值的点云粒子生成新的点云簇。
  8. 根据权利要求1-5任一项所述的方法,其特征在于,
    所述初始概率分布是基于多个所述传感器在当前帧和上一帧采集数据得到的;
    所述根据目标传感器采集的数据更新所述各个取值点的概率值,包括:
    根据所述目标传感器在当前帧和上一帧采集的数据更新所述取值点的概率值。
  9. 根据权利要求1-5任一项所述的方法,其特征在于,所述根据所述各个取值点更新后的概率值,确定所述运动状态的目标概率分布之后,还包括:
    判断所述初始概率分布和所述目标概率分布是否满足一致性条件;
    若所述初始概率分布和所述目标概率分布不满足一致性条件，则推送报警信息，以提示用户所述可移动平台存在异常。
  10. 根据权利要求9所述的方法,其特征在于,所述判断所述初始概率分布和所述目标概率分布是否满足一致性条件,包括:
    通过卡方检验方式判断所述初始概率分布和所述目标概率分布是否满足一致性条件。
  11. 根据权利要求1-5任一项所述的方法,其特征在于,所述根据所述各个取值点更新后的概率值,确定所述运动状态的目标概率分布之后,还包括:
    根据所述可移动平台的运动状态的取值点,和对应所述物体的运动状态的所述目标概率分布中的取值点,确定所述物体的运动状态的绝对取值。
  12. 根据权利要求1-5任一项所述的方法,其特征在于,多个所述传感器包括以下至少一种:激光雷达传感器、双目视觉传感器,毫米波雷达,超声传感器。
  13. 根据权利要求1-5任一项所述的方法,其特征在于,所述运动状态包括以下至少一项:所述物体的位置参数、朝向参数、速度参数、加速度参数。
  14. 一种可移动平台,其特征在于,可移动平台搭载多个传感器,所述传感器用于对可移动平台所处的环境进行数据采集,所述可移动平台包括:处理器,用于:
    获取所述环境中物体的运动状态的初始概率分布,所述初始概率分布是对多个所述传感器采集的数据的融合结果,所述初始概率分布包括所述运动状态对应的各个取值点的概率值;
    根据目标传感器采集的数据更新所述各个取值点的概率值;
    根据所述各个取值点更新后的概率值,确定所述运动状态的目标概率分布。
  15. 根据权利要求14所述的可移动平台,其特征在于,所述处理器具体用于:
    针对任一个所述取值点,根据目标传感器采集的数据,确定所述取值点的后验概率,所述取值点的所述后验概率是所述目标传感器采集的数据的采集条件下,得到所述取值点的概率;
    根据所述各个取值点的后验概率,更新所述各个取值点的概率值。
  16. 根据权利要求15所述的可移动平台,其特征在于,所述处理器具体用于:
    根据目标传感器采集的数据确定所述取值点的似然概率,所述取值点的似然概率是在所述取值点的得到条件下,所述目标传感器采集的数据的采集概率;
    计算所述取值点的概率值和所述似然概率的乘积,得到所述取值点的后验概率。
  17. 根据权利要求14所述的可移动平台,其特征在于,所述处理器还用于:
    以所述初始概率分布中,概率值最大的取值点为中心,以所述目标传感器的融合精度值为半径,设置取值范围;
    在所述取值范围中等间隔确定所述各个取值点。
  18. 根据权利要求14所述的可移动平台,其特征在于,所述处理器还用于:
    根据所述初始概率分布的概率密度在对应所述运动状态的值域内确定所述各个取值点,其中,概率密度越大的取值点与相邻取值点之间的间隔越小。
  19. 根据权利要求14-18任一项所述的可移动平台,其特征在于,
    所述目标传感器对所述环境采集的数据为点云数据，所述物体通过点云簇表示，所述各个取值点为所述点云簇中各个点云粒子的位置，所述各个取值点的概率值为所述各个点云粒子的位置的概率值。
  20. 根据权利要求19所述的可移动平台,其特征在于,所述处理器还用于:
    在所述物体的第一点云簇中确定待检点云粒子,所述待检点云粒子为概率值大于第一预设阈值的点云粒子;
    检测其他物体对应的第二点云簇中是否存在与所述待检测点云粒子的距离小于预设距离的点云粒子;
    若所述第二点云簇中存在与所述待检测点云粒子的距离小于预设距离的点云粒子,则根据第一点云簇中的点云粒子的概率分布和所述第二点云簇的点云粒子的概率分布,计算所述第一点云簇和第二点云簇的联合概率;
    根据所述联合概率大于第二预设阈值的点云粒子生成新的点云簇。
  21. 根据权利要求14-18任一项所述的可移动平台,其特征在于,
    所述初始概率分布是基于多个所述传感器在当前帧和上一帧采集数据得到的;
    所述处理器具体用于:
    根据所述目标传感器在当前帧和上一帧采集的数据更新所述取值点的概率值。
  22. 根据权利要求14-18任一项所述的可移动平台，其特征在于，所述处理器还用于：
    在根据目标传感器采集的数据更新所述各个取值点的概率值,得到所述运动状态的目标概率分布之后,判断所述初始概率分布和所述目标概率分布是否满足一致性条件;
    若所述初始概率分布和所述目标概率分布不满足一致性条件,则推送报警信息,以提示用户所述可移动平台存在异常。
  23. 根据权利要求22所述的可移动平台,其特征在于,所述处理器具体用于:
    通过卡方检验方式判断所述初始概率分布和所述目标概率分布是否满足一致性条件。
  24. 根据权利要求14-18任一项所述的可移动平台,其特征在于,所述处理器还用于:
    在根据所述各个取值点更新后的概率值,确定所述运动状态的目标概率分布之后,根据所述可移动平台的运动状态的取值点,和对应所述物体的运动状态的所述目标概率分布中的取值点,确定所述物体的运动状态的绝对取值。
  25. 根据权利要求14-18任一项所述的可移动平台,其特征在于,多个所述传感器包括以下至少一种:激光雷达传感器、双目视觉传感器,毫米波雷达,超声传感器。
  26. 根据权利要求14-18任一项所述的可移动平台,其特征在于,所述运动状态包括以下至少一项:所述物体的位置参数、朝向参数、速度参数、加速度参数。
  27. 一种计算机可读存储介质,其特征在于,所述计算机可读存储介质包括计算机指令,所述计算机指令用于实现如权利要求1-13中任一项所述的方法。
PCT/CN2019/120911 2019-11-26 2019-11-26 物体状态获取方法、可移动平台及存储介质 WO2021102676A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201980041121.1A CN112313536B (zh) 2019-11-26 2019-11-26 物体状态获取方法、可移动平台及存储介质
PCT/CN2019/120911 WO2021102676A1 (zh) 2019-11-26 2019-11-26 物体状态获取方法、可移动平台及存储介质

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2019/120911 WO2021102676A1 (zh) 2019-11-26 2019-11-26 物体状态获取方法、可移动平台及存储介质

Publications (1)

Publication Number Publication Date
WO2021102676A1 true WO2021102676A1 (zh) 2021-06-03

Family ID: 74336330

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/120911 WO2021102676A1 (zh) 2019-11-26 2019-11-26 物体状态获取方法、可移动平台及存储介质

Country Status (2)

Country Link
CN (1) CN112313536B (zh)
WO (1) WO2021102676A1 (zh)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113115253B (zh) * 2021-03-19 2022-08-23 西北大学 动态阻挡下毫米波无人机高度和密度部署估计方法及系统
CN113052907B (zh) * 2021-04-12 2023-08-15 深圳大学 一种动态环境移动机器人的定位方法
CN113997989B (zh) * 2021-11-29 2024-03-29 中国人民解放军国防科技大学 磁浮列车单点悬浮系统安全检测方法、装置、设备及介质

Citations (11)

Publication number Priority date Publication date Assignee Title
US20040039498A1 (en) * 2002-08-23 2004-02-26 Mark Ollis System and method for the creation of a terrain density model
CN103472850A (zh) * 2013-09-29 2013-12-25 合肥工业大学 一种基于高斯分布预测的多无人机协同搜索方法
CN105425820A (zh) * 2016-01-05 2016-03-23 合肥工业大学 一种针对具有感知能力的运动目标的多无人机协同搜索方法
CN105678076A (zh) * 2016-01-07 2016-06-15 福州华鹰重工机械有限公司 点云测量数据质量评估优化的方法及装置
CN105700555A (zh) * 2016-03-14 2016-06-22 北京航空航天大学 一种基于势博弈的多无人机协同搜索方法
US20180012370A1 (en) * 2016-07-06 2018-01-11 Qualcomm Incorporated Systems and methods for mapping an environment
CN108509918A (zh) * 2018-04-03 2018-09-07 中国人民解放军国防科技大学 融合激光点云与图像的目标检测与跟踪方法
CN108717540A (zh) * 2018-08-03 2018-10-30 浙江梧斯源通信科技股份有限公司 基于2d激光雷达区分行人和车辆的方法及装置
CN108764168A (zh) * 2018-05-31 2018-11-06 合肥工业大学 用于成像卫星在多障碍物海面搜索移动目标的方法及系统
CN109523129A (zh) * 2018-10-22 2019-03-26 吉林大学 一种无人车多传感器信息实时融合的方法
CN110389595A (zh) * 2019-06-17 2019-10-29 中国工程物理研究院电子工程研究所 双属性概率图优化的无人机集群协同目标搜索方法

Family Cites Families (5)

Publication number Priority date Publication date Assignee Title
CN102147468B (zh) * 2011-01-07 2013-02-27 西安电子科技大学 基于贝叶斯理论的多传感器检测跟踪联合处理方法
CN105717505B (zh) * 2016-02-17 2018-06-01 国家电网公司 利用传感网进行多目标跟踪的数据关联方法
WO2018119912A1 (zh) * 2016-12-29 2018-07-05 深圳大学 基于并行模糊高斯和粒子滤波的目标跟踪方法及装置
CN109118500B (zh) * 2018-07-16 2022-05-10 重庆大学产业技术研究院 一种基于图像的三维激光扫描点云数据的分割方法
CN109996205B (zh) * 2019-04-12 2021-12-07 成都工业学院 传感器数据融合方法、装置、电子设备及存储介质


Also Published As

Publication number Publication date
CN112313536B (zh) 2024-04-05
CN112313536A (zh) 2021-02-02

Similar Documents

Publication Publication Date Title
EP3384360B1 (en) Simultaneous mapping and planning by a robot
US8452535B2 (en) Systems and methods for precise sub-lane vehicle positioning
CN1940591B (zh) 使用传感器融合进行目标跟踪的系统和方法
JP2021523443A (ja) Lidarデータと画像データの関連付け
CN109446886B (zh) 基于无人车的障碍物检测方法、装置、设备以及存储介质
CN110286389B (zh) 一种用于障碍物识别的栅格管理方法
EP3875907B1 (en) Method, apparatus, computing device and computer-readable storage medium for positioning
KR20210111180A (ko) 위치 추적 방법, 장치, 컴퓨팅 기기 및 컴퓨터 판독 가능한 저장 매체
WO2021102676A1 (zh) 物体状态获取方法、可移动平台及存储介质
US11506502B2 (en) Robust localization
CN111201448B (zh) 用于产生反演传感器模型的方法和设备以及用于识别障碍物的方法
CN110632617B (zh) 一种激光雷达点云数据处理的方法及装置
US9002513B2 (en) Estimating apparatus, estimating method, and computer product
WO2018180338A1 (ja) 情報処理装置、サーバ装置、制御方法、プログラム及び記憶媒体
JP2017068700A (ja) 物体検出装置、物体検出方法、及びプログラム
CN110341621B (zh) 一种障碍物检测方法及装置
Baig et al. A robust motion detection technique for dynamic environment monitoring: A framework for grid-based monitoring of the dynamic environment
Schütz et al. Occupancy grid map-based extended object tracking
CN110426714B (zh) 一种障碍物识别方法
CN114528941A (zh) 传感器数据融合方法、装置、电子设备及存储介质
JP2024012160A (ja) 目標状態推定方法、装置、電子機器及び媒体
US11521027B2 (en) Method and device for fusion of measurements from different information sources
CN114282776A (zh) 车路协同评估自动驾驶安全性的方法、装置、设备和介质
CN113511194A (zh) 一种纵向避撞预警方法及相关装置
WO2023123325A1 (zh) 一种状态估计方法和装置

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19954433

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19954433

Country of ref document: EP

Kind code of ref document: A1