CN112313536B - Object state acquisition method, movable platform and storage medium - Google Patents


Publication number
CN112313536B
CN112313536B (application CN201980041121.1A)
Authority
CN
China
Prior art keywords
value
probability
point
point cloud
probability distribution
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201980041121.1A
Other languages
Chinese (zh)
Other versions
CN112313536A (en)
Inventor
吴显亮
陈进
李星河
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Zhuoyu Technology Co ltd
Original Assignee
SZ DJI Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by SZ DJI Technology Co Ltd
Publication of CN112313536A
Application granted
Publication of CN112313536B
Legal status: Active
Anticipated expiration

Classifications

    • G01S13/86: Combinations of radar systems with non-radar systems, e.g. sonar, direction finder
    • G01S13/865: Combination of radar systems with lidar systems
    • G01S13/862: Combination of radar systems with sonar systems
    • G01S15/86: Combinations of sonar systems with lidar systems; combinations of sonar systems with systems not using wave reflection
    • G01S17/66: Tracking systems using electromagnetic waves other than radio waves
    • G06F30/15: Vehicle, aircraft or watercraft design
    • G06T7/20: Analysis of motion
    • G06T2207/10028: Range image; depth image; 3D point clouds


Abstract

An object state acquisition method, a movable platform (11), and a storage medium. The movable platform (11) carries a plurality of sensors that collect data about the environment in which the platform is located. The method comprises: acquiring an initial probability distribution of the motion state of an object (12) in the environment (S201), the initial probability distribution being a fusion result of the data collected by the plurality of sensors and comprising a probability value for each value point of the motion state; updating the probability value of each value point according to data collected by a target sensor (S202); and determining a target probability distribution of the motion state according to the updated probability values of the value points (S203). Because the probability values are updated directly against sensor data, the method compensates for raw sensor data lost while obtaining the initial probability distribution of the motion state, thereby ensuring that the acquired target probability distribution is more accurate.

Description

Object state acquisition method, movable platform and storage medium
Technical Field
Embodiments of the present application relate to target tracking technologies, and in particular to an object state acquisition method, a movable platform, and a storage medium.
Background
At present, target tracking is an important research direction for the dynamic environments in which movable platforms such as unmanned aerial vehicles and unmanned vehicles operate.
A movable platform such as an unmanned aerial vehicle or an unmanned vehicle effectively fuses the observations made in its dynamic environment and uses them to update, online and in real time, the state information of multiple objects. The state information includes the position, orientation, and velocity of each dynamic object, together with the association information of each target across time (i.e., whether observations at different moments belong to the same target), thereby achieving multi-target tracking.
In the prior art, the movable platform predicts the state of an object, preprocesses the data collected by its sensors, and updates the state with the preprocessed data to obtain an optimal estimate. This process relies too heavily on the preprocessed data: raw sensor data is often lost during preprocessing, so the finally determined state of the object is inaccurate.
Disclosure of Invention
The embodiments of the present application provide an object state acquisition method, a movable platform, and a storage medium that ensure the acquired target probability distribution is more accurate.
In a first aspect, the present application provides an object state acquisition method. A movable platform carries a plurality of sensors for collecting data about the environment in which the movable platform is located. The method includes: acquiring an initial probability distribution of the motion state of an object in the environment, where the initial probability distribution is a fusion result of the data collected by the plurality of sensors and includes a probability value for each value point of the motion state; updating the probability value of each value point according to data collected by a target sensor; and determining a target probability distribution of the motion state according to the updated probability values of the value points.
In a second aspect, the present application provides a movable platform carrying a plurality of sensors for collecting data about the environment in which the movable platform is located, the movable platform comprising an acquisition module, an updating module, and a first determining module. The acquisition module is configured to acquire an initial probability distribution of the motion state of an object in the environment, where the initial probability distribution is a fusion result of the data collected by the plurality of sensors and includes a probability value for each value point of the motion state. The updating module is configured to update the probability value of each value point according to data collected by a target sensor. The first determining module is configured to determine a target probability distribution of the motion state according to the updated probability values of the value points.
In a third aspect, the present application provides a movable platform carrying a plurality of sensors for collecting data about the environment in which the movable platform is located, the movable platform comprising a processor configured to: acquire an initial probability distribution of the motion state of an object in the environment, where the initial probability distribution is a fusion result of the data collected by the plurality of sensors and includes a probability value for each value point of the motion state; update the probability value of each value point according to data collected by a target sensor; and determine a target probability distribution of the motion state according to the updated probability values of the value points.
In a fourth aspect, the present application provides a computer-readable storage medium comprising computer instructions for implementing the method of the first aspect.
The method of the present application can be called a post-processing method: because the probability values are updated directly against the data collected by the target sensor, it compensates for the raw sensor data lost while obtaining the initial probability distribution of the motion state, thereby ensuring that the acquired target probability distribution is more accurate.
Drawings
To illustrate the technical solutions of the embodiments of the present application more clearly, the drawings needed for describing the embodiments are briefly introduced below. The drawings described below depict only some embodiments of the present application; other drawings may be derived from them by a person skilled in the art without inventive effort.
Fig. 1 is an application scenario diagram provided in an embodiment of the present application;
Fig. 2 is a flowchart of an object state acquisition method according to an embodiment of the present application;
Fig. 3 is a flowchart of a method for updating the probability value of each value point according to an embodiment of the present application;
Fig. 4 is a schematic diagram of a first likelihood function g(x) according to an embodiment of the present application;
Fig. 5 is a schematic diagram of a likelihood function of speed deviation according to an embodiment of the present application;
Fig. 6 is a flowchart of a method for determining each value point according to an embodiment of the present application;
Fig. 7 is a flowchart of a method for generating a point cloud cluster according to an embodiment of the present application;
Fig. 8 is a flowchart of an object state acquisition method according to another embodiment of the present application;
Fig. 9 is a flowchart of an object state acquisition method according to still another embodiment of the present application;
Fig. 10 is a schematic diagram of a movable platform according to an embodiment of the present application;
Fig. 11 is a schematic diagram of a movable platform according to an embodiment of the present application.
Detailed Description
To make the objects, technical solutions, and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments are described below clearly and completely with reference to the drawings. The described embodiments are some, but not all, of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments herein without inventive effort fall within the scope of the present application.
This application addresses how a movable platform such as an unmanned aerial vehicle or an unmanned vehicle can effectively fuse the observations made in its dynamic environment and use them to update, online and in real time, the state information of multiple objects. The state information includes the position, orientation, and velocity of each dynamic object, together with the association information of each target across time (i.e., whether observations at different moments belong to the same target), thereby achieving multi-target tracking.
Kalman filtering and particle filtering are state estimation techniques that recursively estimate a time series of states from online observations. A system model constrains the relationship between states at different times and is mainly used for prediction; an observation model constrains the relationship between observations and states, so that once a state prediction for the current time is obtained, the state can be updated using the observation model and the corresponding observation. When both the system model and the observation model are linear, and the uncertainty in each follows a zero-mean Gaussian distribution with known variance, Kalman filtering is the optimal state estimation method. When the system model or the observation model is nonlinear, extended Kalman filtering or unscented Kalman filtering is needed; and if the uncertainty does not have Gaussian characteristics, particle filtering can be used to sample and estimate the state.
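As an illustration of the predict/update recursion described above (this sketch is illustrative only, not the method claimed in this application; all values and variable names are assumptions), a minimal one-dimensional Kalman filter with an identity system model and scalar Gaussian noise can be written as:

```python
def kalman_step(x, p, z, q=0.01, r=0.1):
    """One predict/update cycle of a 1-D Kalman filter.

    x, p : prior state estimate and its variance
    z    : new observation
    q, r : process and observation noise variances (assumed known)
    """
    # Predict: identity system model, so only the uncertainty grows.
    x_pred, p_pred = x, p + q
    # Update: blend prediction and observation using the Kalman gain.
    k = p_pred / (p_pred + r)
    x_new = x_pred + k * (z - x_pred)
    p_new = (1.0 - k) * p_pred
    return x_new, p_new

# Feeding in noisy observations of a constant true state near 1.0:
x, p = 0.0, 1.0
for z in [1.2, 0.9, 1.1, 1.0]:
    x, p = kalman_step(x, p, z)
```

The estimate moves toward the observations while the variance p shrinks, which is the recursive behaviour the paragraph describes; the extended and unscented variants replace the linear predict/update steps with linearized or sampled ones.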
For the multi-target tracking problem, the state to be estimated is the combination of the states of all targets, where each object's state may include position, velocity, orientation, angular velocity, acceleration, and so on. The states of the objects are generally assumed to be mutually independent, so a separate filter can be used to estimate each object's state individually. Unlike single-target tracking, however, using one filter per object presupposes an unambiguous correspondence between observations and objects, i.e., that an observation truly belongs to that object and not to another. If this correspondence is unknown, a data association technique must first associate each observation with an estimated object before filtering can be applied. Common association algorithms include Hungarian assignment, multi-hypothesis tracking, and joint probabilistic data association.
After the association information is obtained, a system model and an observation model are defined. For the system model, if the estimated object belongs to a particular class, such as a vehicle, it can be modeled with a vehicle dynamics model. For the observation model, the observations may come from images, lidar, millimeter-wave radar, ultrasonic sensors, and so on. For images and lidar, the common practice is to preprocess them with vision or point-cloud processing techniques to obtain two-dimensional or three-dimensional detection boxes, use these boxes as observations to update the state information, and assume the observations are Gaussian, so that standard extended Kalman filtering can be applied. Alternatively, the raw observations can be used directly without any preprocessing, but raw observations rarely satisfy, even approximately, the Gaussian distribution assumption and are therefore difficult to process.
For the multi-target tracking problem, sampling-based update techniques such as particle filtering are available, but the choice of the number of particles and the phenomenon of particle degeneracy largely restrict their widespread industrial use, even though resampling alleviates some of these problems. Moreover, random sampling in a multi-dimensional state space imposes an excessive computational load, which makes the approach costly in practical applications.
At present, the more common route is to preprocess with vision or point-cloud processing techniques to obtain high-quality detection results, and then use these results, mostly in the form of three-dimensional or two-dimensional boxes, as observations to update the object's state. However, detection results produced by preprocessing algorithms seldom come with an accurate uncertainty description and in most cases do not follow a single Gaussian distribution, so filtering under a strong Gaussian distribution assumption often makes the filtering system inaccurate or even unstable.
Another drawback of this approach is that the filtering algorithm depends too heavily on the preprocessed detection results: raw data information is often lost during preprocessing, making the final filtering result unreliable. For example, a multi-target tracking algorithm usually relies on a detection algorithm, but missed detections and false detections introduce unreliability that is absent from the raw sensor data, leading to abnormal final results that are grossly inconsistent with the raw data.
On the other hand, multi-target tracking must also solve the data association problem, and most common data association techniques use the Hungarian assignment algorithm. The algorithm is simple and effective, but it performs only a hard assignment on the current frame's observations: if a frame is assigned incorrectly, the misassigned observation cannot update the state of the object it truly corresponds to, and the error cannot be recovered.
As described above, in the prior art the movable platform predicts the state of an object, preprocesses the data collected by its sensors, and updates the state with the preprocessed data to obtain an optimal estimate. This process relies too heavily on the preprocessed data: raw sensor data is often lost during preprocessing, so the finally determined state of the object is inaccurate.
In order to solve the above technical problems, the present application provides an object state acquiring method, a movable platform and a storage medium.
Illustratively, the present application is applicable to the following scenario. Fig. 1 is an application scenario diagram provided in an embodiment of the present application. As shown in Fig. 1, a movable platform 11 carries a plurality of sensors, each of which collects data about the environment in which the movable platform 11 is located; the environment further contains at least one object 12. The movable platform 11 may be an unmanned aerial vehicle, an unmanned vehicle, or the like. The sensors may be lidar sensors, binocular vision sensors, millimeter-wave radars, ultrasonic sensors, and so on; the plurality of sensors may be of the same type (e.g., all lidar sensors) or of different types (e.g., a lidar sensor and a millimeter-wave radar). Further, different sensors collect different kinds of data about the environment: for example, the data collected by a lidar sensor, a binocular vision sensor, or a millimeter-wave radar are point cloud data, whereas the data collected by an ultrasonic sensor are ultrasonic signals.
The main idea of the present application is as follows: the movable platform predicts the motion state of an object (i.e., any object in the environment in which the platform is located) and updates that state with the data collected by its sensors. The movable platform may fuse the data collected by the plurality of sensors with some algorithm to predict the object's motion state; the algorithm may be a Kalman filter, a single-step particle filter, a brute-force search, or a neural network, which this application does not limit.
It should be noted that a sensor's manufacturing process may be imperfect, and other factors and noise cannot be predicted or controlled, so the data measured by a sensor are not perfectly accurate. The motion state of the object obtained by fusing the data collected by the plurality of sensors can therefore be understood as a random variable that follows some probability distribution, such as a Gaussian (normal) distribution, a linear distribution, a nonlinear distribution, or a non-Gaussian distribution, which this application does not limit.
The technical solutions of the present application are described in detail below:
Fig. 2 is a flowchart of an object state acquisition method according to an embodiment of the present application. The method may be executed by part or all of a movable platform; the part may be the platform's processor. As described above, the movable platform carries a plurality of sensors for collecting data about the environment in which it is located. The method is described below with the movable platform as the executing subject. As shown in Fig. 2, the method includes the following steps:
step S201: an initial probability distribution of the motion state of an object in the environment is obtained.
Step S202: and updating the probability value of each value point according to the data acquired by the target sensor.
Step S203: and determining target probability distribution of the motion state according to the updated probability value of each value point.
The initial probability distribution is a fusion result of data acquired by a plurality of sensors, and comprises probability values of each value point corresponding to a motion state.
In the present application, the motion state of the object includes at least one of the following: a position parameter, an orientation parameter, a velocity parameter, and an acceleration parameter of the object. That is, the motion state may be any one of these parameters, in which case the probability distribution of the motion state is the probability distribution of that single parameter. Alternatively, the motion state may be a combination of at least two of them, in which case the probability distribution of the motion state is the joint probability distribution of the combined parameters. For example, when the motion state includes the position parameter and the orientation parameter of the object, the probability distribution of the motion state is the distribution corresponding to both parameters simultaneously. When the motion state is a single parameter, the dimensionality of the probability distribution's state space is reduced, which reduces the movable platform's computational load.
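A brief numerical sketch of the computational point made above: if every state dimension is discretized into the same number of value points, the total number of points grows exponentially with the number of combined parameters, so estimating a single parameter keeps the load low (the per-axis count below is invented for illustration):

```python
# Hypothetical discretization: 50 value points per state dimension.
points_per_axis = 50

def grid_size(dims):
    """Total number of value points in a dims-dimensional discretized state space."""
    return points_per_axis ** dims

# One parameter (e.g. orientation only) versus combined multi-parameter states.
sizes = {d: grid_size(d) for d in (1, 2, 4)}
```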
After acquiring the initial probability distribution, the movable platform selects one of the plurality of sensors as the target sensor and updates the probability value of each value point with the data collected by that sensor. The movable platform may pick the target sensor at random, or select the sensor with the highest accuracy. For example, suppose the movable platform obtains the initial probability distribution of the object's position parameter by fusing point cloud data collected by a lidar sensor, a binocular vision sensor, and a millimeter-wave radar. If the accuracy of the lidar sensor is higher than that of the binocular vision sensor and the millimeter-wave radar, the movable platform takes the lidar sensor as the target sensor and updates the probability value of each value point in the initial probability distribution with the point cloud data the lidar sensor collected.
Optionally, the movable platform may select a plurality of target sensors. In that case, the movable platform updates the probability value of each value point in the initial probability distribution with the data collected by one target sensor to obtain updated probability values, then updates those values again with the data collected by the next target sensor, and so on until the probability values of the value points have been updated with the data collected by all of the target sensors.
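The sequential multi-sensor update described above can be sketched as repeatedly multiplying the current probability values by each target sensor's likelihood and renormalizing; the grid, sensor readings, and Gaussian likelihood below are placeholders, not the actual models of this application:

```python
import math

# Discretized motion state: 101 value points for a scalar state (e.g. speed, m/s).
value_points = [i * 0.1 for i in range(101)]

# Initial probability distribution (the fusion result); a flat prior as a placeholder.
probs = [1.0 / len(value_points)] * len(value_points)

def likelihood(x, measured, sigma):
    """Placeholder Gaussian likelihood of value point x given one sensor reading."""
    return math.exp(-0.5 * ((x - measured) / sigma) ** 2)

# Update with each target sensor's data in turn (readings and sigmas are made up).
for measured, sigma in [(4.8, 1.0), (5.2, 0.5)]:
    probs = [p * likelihood(x, measured, sigma) for p, x in zip(probs, value_points)]
    total = sum(probs)
    probs = [p / total for p in probs]   # renormalize so it remains a distribution

best = value_points[max(range(len(probs)), key=probs.__getitem__)]
```

After both updates the mass concentrates near the precision-weighted combination of the two readings, illustrating how each successive sensor sharpens the distribution.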
It should be noted that when the motion state of the object includes the velocity parameter and/or the acceleration parameter, the velocity parameter can be obtained directly from the lidar sensor, or by differencing the position and orientation parameters of the current frame and the previous frame; the acceleration parameter can likewise be obtained by differencing the velocity parameters of adjacent frames. The position and orientation parameters of the current frame can be determined from the point cloud data collected in the current frame by the lidar sensor or the millimeter-wave radar, and those of the previous frame from the point cloud data collected in the previous frame. Thus, if the initial probability distribution is based on data collected by the plurality of sensors in the current frame and the previous frame, the data collected by the target sensor are likewise the data it collected in the current frame and the previous frame. For example, if the initial probability distribution of the velocity parameter is obtained from point cloud data collected by the plurality of sensors in the current and previous frames, the data collected by the target sensor are also the point cloud data it collected in the current and previous frames.
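The frame-differencing described above can be sketched as follows; the poses, timestamps, and previous velocity are invented for illustration:

```python
# Hypothetical per-frame position estimates (x, y in metres) and timestamps (s),
# e.g. derived from point clouds of the previous and current frames.
prev_pos, prev_t = (2.0, 1.0), 0.0
curr_pos, curr_t = (2.6, 1.8), 0.1

dt = curr_t - prev_t
# Velocity by differencing position between adjacent frames.
velocity = tuple((c - p) / dt for c, p in zip(curr_pos, prev_pos))

# Acceleration can likewise be obtained by differencing velocities of
# adjacent frames (the previous velocity here is invented).
prev_velocity = (5.0, 7.0)
acceleration = tuple((v - pv) / dt for v, pv in zip(velocity, prev_velocity))
```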
When the motion state of the object is the position parameter or the orientation parameter, the parameter for the current frame can be determined from the point cloud data collected in the current frame by the lidar sensor or the millimeter-wave radar. Thus, if the initial probability distribution is based on data collected by the plurality of sensors in the current frame, the data collected by the target sensor are likewise the data it collected in the current frame. For example, if the initial probability distribution of the position parameter is obtained from the point cloud data of the plurality of sensors in the current frame, the data collected by the target sensor are also the point cloud data it collected in the current frame.
Optionally, after updating the probability value of each value point, the movable platform may select at least one target value point whose probability value is greater than a preset threshold and obtain the target probability distribution of the motion state from the probability values of the selected target value points. For example, the movable platform may select a plurality of target value points whose probability values exceed the preset threshold, take the mean of the probability values of these target value points as the mean of the target probability distribution, and take the variance of their probability values as the variance of the updated distribution. Alternatively, the movable platform may select one target value point whose probability value exceeds the preset threshold, sample within a preset radius around it to obtain several further target value points, and take the mean of the probability values of all the target value points as the mean of the target probability distribution and the variance of their probability values as the variance of the updated distribution.
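One plausible reading of the selection scheme above, with invented numbers: keep the value points whose probability exceeds the preset threshold and summarize them with a mean and variance (taken here over the selected value points weighted by their probabilities; the translated text is ambiguous on whether the points or their probability values are averaged):

```python
# Hypothetical value points and updated probability values.
points = [0.0, 1.0, 2.0, 3.0, 4.0]
probs = [0.02, 0.05, 0.40, 0.45, 0.08]
threshold = 0.1   # preset threshold (assumed)

# Select the target value points whose probability exceeds the threshold.
selected = [(x, p) for x, p in zip(points, probs) if p > threshold]
if not selected:
    # Corresponds to the alarm case described below: no point exceeds the threshold.
    raise RuntimeError("no value point above threshold")

# Summarize the selected points with a probability-weighted mean and variance.
w = sum(p for _, p in selected)
mean = sum(x * p for x, p in selected) / w
var = sum(p * (x - mean) ** 2 for x, p in selected) / w
```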
When the movable platform cannot select at least one target value point whose probability value is greater than the preset threshold, the movable platform can send alarm information to prompt the user that the movable platform is abnormal. The alarm information can be voice alarm information, text alarm information, or an alarm given by a flashing warning lamp, which is not limited in this application.
Optionally, the target probability distribution determined by the movable platform may also be used as the prior for the next frame.
In the application, the movable platform can update the probability value of each value point according to the data acquired by the target sensor, and determine the target probability distribution of the motion state according to the updated probability value of each value point. The method can be called as a post-processing method, and the problem that the original data acquired by the sensor is lost in the initial probability distribution acquisition process of the motion state can be solved by the post-processing method, so that the acquired target probability distribution can be ensured to be more accurate. Meanwhile, the technical scheme is also suitable for the situation that the motion state does not accord with Gaussian distribution or has larger nonlinearity.
Step S202 above is described in detail below:
fig. 3 is a flowchart of a method for updating probability values of each value point according to an embodiment of the present application, as shown in fig. 3, the method includes the following steps:
step S301: and determining posterior probability of any value point according to the data acquired by the target sensor.
The movable platform can determine the likelihood probability of the value point according to the data acquired by the target sensor, and calculate the product of the value point's probability value and its likelihood probability to obtain the posterior probability of the value point. The likelihood probability of a value point is the probability of acquiring the target sensor's data given that value point; the posterior probability of a value point is the probability of the value point given the data acquired by the target sensor. For example, suppose a value point x_i has probability value f_i(x_i) and likelihood probability f_i(z_i|x_i), where z_i denotes the data acquired by the target sensor; then by Bayes' theorem the posterior probability of the value point is f_i(x_i|z_i) = f_i(z_i|x_i)·f_i(x_i).
Alternatively, the movable platform can determine the likelihood probability of the value point according to the data acquired by the target sensor, calculate the product of the value point's probability value and its likelihood probability, and divide the product by a normalization factor to obtain the posterior probability of the value point. For example, suppose a value point x_i has probability value f_i(x_i) and likelihood probability f_i(z_i|x_i), where z_i denotes the data acquired by the target sensor; then the posterior probability of the value point is f_i(x_i|z_i) = f_i(z_i|x_i)·f_i(x_i)/μ, where μ is the normalization factor.
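The normalized Bayesian update above can be sketched over a discrete set of value points; the normalization factor μ is simply the sum of the unnormalized products, so the updated probabilities sum to one:

```python
def posterior(prior, likelihood):
    """Bayesian update of discrete value-point probabilities:
    f(x_i|z_i) = f(z_i|x_i) * f(x_i) / mu, with mu chosen so the
    posterior sums to 1 over all value points."""
    unnorm = [l * p for l, p in zip(likelihood, prior)]
    mu = sum(unnorm)  # normalization factor
    return [u / mu for u in unnorm]
```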
It should be noted that, for the data collected by different target sensors, the likelihood probability of the movable platform for determining the value point is also different.
For example, if the target sensor is a laser sensor, the object is represented by a point cloud cluster, and the movable platform obtains a plurality of first likelihood probabilities of the value point over the point cloud cluster, where a first likelihood probability is the probability of acquiring the position of one point cloud particle given the value point. The movable platform then combines the first likelihood probabilities by taking their product, i.e. f(z_i|x_i) = ∏_{k=0}^{m} f(z_{i,k}|x_i), to obtain the likelihood probability of the value point. Here each z_{i,k} denotes the position of the k-th point cloud particle in the point cloud cluster, f(z_{i,k}|x_i) is a first likelihood probability, i.e. the probability of acquiring z_{i,k} given the value point x_i, and m+1 is the number of point cloud particles in the point cloud cluster. Assuming the movable platform determines from z_{i,k} that the distance between the object and the movable platform is r_{i,k}, the first likelihood probability of the value point can be defined as g(r_{i,k}) = f(z_{i,k}|x_i).
Fig. 4 is a schematic diagram of a first likelihood function g(x) according to an embodiment of the present application. As shown in Fig. 4, when x = 20 m the corresponding first likelihood probability reaches its maximum, 0.4. When 0 < x < 20, the ideal distance between the movable platform and the object (i.e. 20) exceeds the actual laser range x, which may be caused by the object being occluded, so the first likelihood probability for such x may be a small constant. When x > 20, the ideal distance between the movable platform and the object would be smaller than the actual laser range, which contradicts physical knowledge, so the first likelihood probability for such x is close to 0, or equal to 0.
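A function with the shape described for Fig. 4 can be sketched as below. Only the peak value (0.4 at the ideal distance of 20 m) comes from the text; the constant `floor` for occluded shorter-than-expected ranges and the width `sigma` are illustrative assumptions:

```python
import math

def g(x, ideal=20.0, peak=0.4, floor=0.05, sigma=1.0):
    """Sketch of a first-likelihood function g(x): peaks at the ideal
    distance, drops to a small constant when the measured range is
    shorter (possible occlusion), and is 0 when the measurement exceeds
    the ideal distance (physically implausible)."""
    if x > ideal:
        return 0.0  # range beyond the ideal distance contradicts physics
    # bell curve near the ideal distance, constant floor for occlusion
    return max(floor, peak * math.exp(-((x - ideal) ** 2) / (2 * sigma ** 2)))
```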
If the target sensor is a binocular vision sensor, it may acquire a plurality of images of the object; the movable platform may acquire and process these images to obtain a point cloud cluster representing the object. On this basis, the movable platform can sample sparse points in the point cloud cluster and, for these sparse points, obtain the likelihood probability of each sparse point in the same manner used to determine the likelihood probability of a value point when the target sensor is a laser sensor.
If the target sensor is a millimeter-wave radar, the radar may acquire point cloud data representing the object, and the movable platform may evaluate the speed of the object by means of the radial speed, that is, comparing the component of the object's velocity directed toward the millimeter-wave radar with the speed measured by the radar, to calculate the likelihood function of the value point. Fig. 5 is a schematic diagram of the likelihood function of the speed deviation provided in an embodiment of the present application; as shown in Fig. 5, when the speed deviation is 0 m/s the corresponding likelihood probability reaches its maximum, 0.8.
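The radial-speed comparison above can be sketched as follows. Only the peak (0.8 at zero deviation, per Fig. 5) comes from the text; the deviation width `sigma` and the 2-D vector representation are illustrative assumptions:

```python
import math

def radial_speed_likelihood(candidate_velocity, radar_direction,
                            measured_radial, peak=0.8, sigma=0.5):
    """Project a candidate object velocity onto the unit vector toward
    the millimeter-wave radar, compare with the measured radial speed,
    and score the deviation with a bell curve peaking at 0.8."""
    vx, vy = candidate_velocity
    dx, dy = radar_direction  # unit vector pointing toward the radar
    radial = vx * dx + vy * dy
    dev = radial - measured_radial
    return peak * math.exp(-dev * dev / (2 * sigma ** 2))
```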
Step S302: and updating the probability value of each value point according to the posterior probability of each value point.
Optionally, the movable platform takes the posterior probability of each value point as the new probability value of that value point. Alternatively, the movable platform calculates the average of the posterior probability and the current probability value of each value point to obtain the new probability value of the value point.
In the present application, for any value point, the movable platform determines the posterior probability of the value point according to the data acquired by the target sensor, and then updates the probability value of each value point according to its posterior probability. In this way, the update of each probability value incorporates the raw data acquired by the sensor, which solves the problem that the raw sensor data is lost in the process of obtaining the initial probability distribution of the motion state, thereby ensuring that the obtained target probability distribution is more accurate. Meanwhile, the technical scheme is also suitable for the case where the motion state does not conform to a Gaussian distribution or is strongly nonlinear. It should be noted that when the motion state of the object includes a speed parameter and/or an acceleration parameter, the movable platform can introduce a vehicle body dynamics model when determining the likelihood probability, so that the obtained posterior probability conforms to the vehicle body motion model and the updated probability values are more accurate.
How to determine the above-mentioned individual value points is described below:
alternative one: fig. 6 is a flowchart of a method for determining each value point according to an embodiment of the present application, as shown in fig. 6, the method includes the following steps:
step S601: and setting a value range by taking a value point with the maximum probability value in the initial probability distribution as a center and taking a fusion precision value of the target sensor as a radius.
Step S602: and determining each value point at equal intervals in the value range.
Taking the case where the motion state of the object is a speed parameter as an example, assume that in the initial probability distribution the value point with the maximum probability value is 5 m/s and the fusion precision value corresponding to the speed parameter is 0.5; the resulting value range is [4.5, 5.5]. The movable platform then partitions [4.5, 5.5] at equal intervals, e.g. with the interval set to 0.1, so that the value points determined by the movable platform in [4.5, 5.5] are: 4.5, 4.6, 4.7, 4.8, 4.9, 5.0, 5.1, 5.2, 5.3, 5.4, 5.5.
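The equidistant partitioning of alternative one can be sketched directly; rounding is used only to suppress floating-point noise in the generated points:

```python
def value_points(center, radius, step):
    """Equidistant value points in [center - radius, center + radius]:
    the range is centered on the highest-probability point and its
    half-width is the target sensor's fusion precision value."""
    n = round(2 * radius / step)
    return [round(center - radius + k * step, 10) for k in range(n + 1)]
```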
Alternative two: and determining each value point in a value range of the corresponding motion state by the movable platform according to the probability density of the initial probability distribution, wherein the larger the probability density is, the smaller the interval between the value point and the adjacent value point is.
For example, assume the motion state of the object conforms to a Gaussian distribution and is a combination of at least two of the position, orientation, speed, and acceleration parameters of the object; the Gaussian distribution is then an ellipsoid, and the sampling density of the value points is proportional to the spatial probability density, i.e. the larger the probability density, the smaller the interval between a value point and its neighbors, so that the obtained value points essentially conform to the initial probability distribution.
In the present application, the movable platform can sample at equal intervals or perform deterministic sampling based on the Gaussian distribution, rather than random sampling; random sampling can thus be avoided as much as possible, and more accurate value points obtained.
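Deterministic density-proportional sampling, as in alternative two, can be sketched in one dimension by placing points at equal probability mass through the inverse Gaussian CDF, so the spacing shrinks where the density is larger; extending this to the ellipsoid case per axis is the assumption here:

```python
from statistics import NormalDist

def gaussian_value_points(mean, std, n):
    """Deterministic (non-random) value points for a 1-D Gaussian:
    n points at equal probability mass via the inverse CDF, giving
    smaller spacing where the probability density is higher."""
    dist = NormalDist(mean, std)
    return [dist.inv_cdf((k + 0.5) / n) for k in range(n)]
```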
Optionally, the data collected by the target sensor for the environment is point cloud data and the object is represented by a point cloud cluster; in that case each value point is the position of a point cloud particle in the point cloud cluster, and the probability value of each value point is the probability value of that particle's position. Since the sensor's manufacturing process may be imperfect, and other factors and noise cannot be predicted or controlled, the data measured by the sensor may not be entirely accurate. As a result, the first point cloud cluster corresponding to the object and a second point cloud cluster corresponding to another object may conflict, that is, some point cloud particles belong to both the first and the second point cloud cluster. To resolve such conflicts, a method of generating a point cloud cluster is described below:
Fig. 7 is a flowchart of a method for generating a point cloud cluster according to an embodiment of the present application, as shown in fig. 7, where the method includes the following steps:
step S701: and determining point cloud particles to be detected in a first point cloud cluster of the object, wherein the point cloud particles to be detected are point cloud particles with probability values larger than a first preset threshold value.
Step S702: and detecting whether point cloud particles with the distance smaller than a preset distance from the point cloud particles to be detected exist in the second point cloud clusters corresponding to other objects.
Step S703: if the second point cloud cluster has the point cloud particles with the distance from the point cloud particles to be detected being smaller than the preset distance, the movable platform calculates the joint probability of the first point cloud cluster and the second point cloud cluster according to the probability distribution of the point cloud particles in the first point cloud cluster and the probability distribution of the point cloud particles of the second point cloud cluster.
Step S704: and generating a new point cloud cluster according to the point cloud particles with the joint probability larger than a second preset threshold value.
It should be noted that the new point cloud cluster may still correspond to the object, and a new target object may be redetermined based on the new point cloud cluster.
The first preset threshold may be set according to the actual situation, for example 0.6, 0.8, etc. The preset distance can likewise be set according to the actual situation, for example 10 cm, 20 cm, etc., and the second preset threshold may also be set according to the actual situation, for example 0.6, 0.8, etc. The present application does not limit how the first preset threshold, the second preset threshold, and the preset distance are set.
Optionally, if a point cloud particle whose distance to a point cloud particle to be detected is smaller than the preset distance is called the first point cloud particle corresponding to that particle to be detected, the movable platform may calculate the product of the probability value of the particle to be detected and the probability value of its first point cloud particle to obtain their joint probability. Because a plurality of particles to be detected may exist in the first point cloud cluster, the movable platform may calculate the joint probability of each particle to be detected and its corresponding first point cloud particle to obtain the joint probability of the first point cloud cluster and the second point cloud cluster.
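The pairing-and-product step of steps S701 to S703 can be sketched as follows; particles are (position, probability) pairs with 2-D positions, and all names are illustrative:

```python
import math

def joint_probabilities(cluster_a, cluster_b, max_dist, prob_threshold):
    """For each particle of the first cluster whose probability exceeds
    `prob_threshold` (the particle to be detected), find particles of the
    second cluster closer than `max_dist` and multiply the two probability
    values to obtain a joint probability."""
    joints = []
    for pa, qa in cluster_a:
        if qa <= prob_threshold:
            continue  # not a particle to be detected
        for pb, qb in cluster_b:
            if math.dist(pa, pb) < max_dist:
                joints.append((pa, pb, qa * qb))
    return joints
```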
Optionally, the movable platform may form the point cloud particles with the joint probability greater than the second preset threshold value into a new point cloud cluster corresponding to the object, and perform local optimal estimation on the new point cloud cluster through the joint probability, and reversely calculate the probability value of each point cloud particle in the new point cloud cluster in the first point cloud cluster, so as to update the probability distribution of the object.
It should be noted that, the steps S701 to S704 may be performed after the step S203, and thus, the point cloud particles to be detected in the step S701 refer to point cloud particles whose probability value is greater than the first preset threshold value in the target probability distribution. The probability distribution of the point cloud particles in the first point cloud cluster in step S703 refers to the target probability distribution of the point cloud particles in the first point cloud cluster. Correspondingly, the movable platform reversely pushes the probability value of each point cloud particle in the new point cloud cluster in the first point cloud cluster, so that the target probability distribution of the object is updated. Alternatively, the above steps S701 to S704 may be performed before step S203, and thus the point cloud particles to be inspected in step S701 refer to point cloud particles having a probability value greater than a first preset threshold in the initial probability distribution. The probability distribution of the point cloud particles in the first point cloud cluster in step S703 refers to the initial probability distribution of the point cloud particles in the first point cloud cluster. Correspondingly, the movable platform reversely pushes the probability value of each point cloud particle in the new point cloud cluster in the first point cloud cluster, so that the initial probability distribution of the object is updated.
The above steps S701 to S704 are described below with reference to examples:
when a laser radar sensor or a millimeter-wave radar collects point cloud data of a truck, the truck head and the truck body may be identified as two objects because of the large gap between them, i.e. the head is represented by a first point cloud cluster and the body by a second point cloud cluster. The movable platform can determine the point cloud particles to be detected in the first point cloud cluster and detect the first point cloud particles whose distance to the particles to be detected is smaller than the preset distance; these first point cloud particles are the particles at the junction of the head and the body. The movable platform then calculates the product of the probability value of each particle to be detected and that of its corresponding first point cloud particle to obtain the joint probability of the first and second point cloud clusters. If the joint probability of a point cloud particle is greater than the second preset threshold, the particle belongs to both the first and the second point cloud cluster, and the movable platform can combine such particles into a new point cloud cluster. On this basis, the movable platform determines the joint probability distribution of the new cluster from the joint probabilities of the discrete particles, performs a locally optimal estimation based on it, and back-calculates the probability value of each particle of the new cluster within the first point cloud cluster, thereby updating the probability distribution of the truck head.
In the present application, when there is a conflict between the object and another object, i.e. there exist point cloud particles whose joint probability is greater than the second preset threshold, the movable platform may generate a new point cloud cluster from those particles. The movable platform then performs a locally optimal estimation on the new point cloud cluster through the joint probability and back-calculates the probability value of each point cloud particle of the new cluster within the first point cloud cluster, so as to update the probability distribution of the object. As a result, no point cloud particle in the second point cloud cluster corresponding to the other object remains within the preset distance of a point cloud particle to be detected, and the conflict between the object and the other object is thereby resolved.
Fig. 8 is a flowchart of an object state obtaining method according to another embodiment of the present application, as shown in fig. 8, after step S203, the object state obtaining method further includes the following steps:
step S801: and judging whether the initial probability distribution and the target probability distribution meet the consistency condition.
Step S802: if the initial probability distribution and the target probability distribution do not meet the consistency condition, the movable platform pushes alarm information to prompt a user that the movable platform is abnormal.
Optionally, whether the initial probability distribution and the target probability distribution satisfy the consistency condition is judged by a chi-square test. Alternatively, the movable platform selects at least one first discrete point in the initial probability distribution and the corresponding second discrete point(s) in the target probability distribution, calculates the difference between the probability values of each first discrete point and its second discrete point to obtain the probability differences, and sums all the probability differences to obtain a total. If the total is greater than a preset value, the movable platform determines that the initial probability distribution and the target probability distribution do not satisfy the consistency condition; otherwise, it determines that they do.
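The second consistency test described above can be sketched as below; taking absolute differences is an assumption (the text only says "difference"), and the chi-square test is the other option mentioned:

```python
def consistent(initial_probs, target_probs, max_total_diff):
    """Sum the absolute probability differences at corresponding
    discrete points of the two distributions and compare the total
    against a preset limit."""
    total = sum(abs(p - q) for p, q in zip(initial_probs, target_probs))
    return total <= max_total_diff
```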
Further, if the initial probability distribution and the target probability distribution do not satisfy the consistency condition, the movable platform pushes the alarm information, which can be voice alarm information, text alarm information, or an alarm given by a flashing warning lamp.
In the application, if the initial probability distribution and the target probability distribution do not meet the consistency condition, the initial probability distribution and the target probability distribution are far apart, and in this case, the movable platform pushes alarm information to prompt a user that the movable platform is abnormal, so that the reliability of the movable platform is improved.
Illustratively, fig. 9 is a flowchart of an object state acquiring method according to still another embodiment of the present application, as shown in fig. 9, after step S203, the object state acquiring method further includes the following steps:
step S901: and determining the absolute value of the motion state of the object according to the value point of the motion state of the movable platform and the value point in the target probability distribution of the motion state of the corresponding object.
The movable platform can obtain motion estimation information (ego-motion) through an inertial measurement unit (Inertial Measurement Unit, IMU), a global positioning system (Global Positioning System, GPS), a wheel encoder odometer (wheel odometer), a visual odometer (visual odometer), and the like. A value point in the target probability distribution of the motion state of the object is in fact a value relative to the motion state of the movable platform, so the movable platform can sum the value point of its own motion state with the corresponding value point in the target probability distribution of the object's motion state to obtain the absolute value of the motion state of the object.
For example, the position parameter of the object in its motion state can be obtained via the GPS of the movable platform. Since the GPS also has errors, the position parameter of the movable platform can be understood as a random variable conforming to a certain probability distribution; the movable platform can therefore select, from that distribution, the position parameter corresponding to a position parameter in the target probability distribution and sum the two to obtain the absolute position parameter of the object.
In unmanned aerial vehicle or unmanned vehicle applications, if the vehicle's own state is unknown, the absolute position and speed of the other object are also unobservable, and in that case only relative position and speed can be estimated. Of course, a filter or positioning algorithm can also be maintained for estimating the vehicle's own state, fusing sensors such as the IMU, GPS, wheel odometer, high-precision map, vision, laser, and even millimeter-wave radar for localization. Once this information is obtained, an absolute state estimate of the object can be obtained through coordinate transformation, and a dynamics model and an observation model can be introduced more naturally.
In the present application, the absolute value of the motion state of the object can be determined from the value point of the motion state of the movable platform and the corresponding value point in the target probability distribution of the motion state of the object. It should be noted that the data collected by the sensors are generally relative data; for the speed parameter in particular, the relative speed varies greatly, so jumps may occur between consecutive frames, which is unfavorable for updating the probability values from the speed parameter collected by the sensor. The absolute value can therefore be calculated using the motion parameters of the movable platform, which reduces jumps in the motion state values of the detected object.
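The summation in step S901 can be sketched as follows; representing the motion state as a tuple of components (e.g. position and speed) is an illustrative assumption:

```python
def absolute_state(ego_value, relative_value):
    """The sensors observe the object's motion state relative to the
    movable platform, so the absolute value is the sum of the platform's
    own state (from IMU/GPS/odometry fusion) and the relative value from
    the target probability distribution."""
    return tuple(e + r for e, r in zip(ego_value, relative_value))
```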
Fig. 10 is a schematic diagram of a movable platform according to an embodiment of the present application, where the movable platform carries a plurality of sensors, and the sensors are used for collecting data of an environment where the movable platform is located, as shown in fig. 10, and the movable platform includes:
the obtaining module 1001 is configured to obtain an initial probability distribution of a motion state of an object in the environment, where the initial probability distribution is a fusion result of data collected by a plurality of sensors, and the initial probability distribution includes probability values of each value point corresponding to the motion state.
And an updating module 1002, configured to update probability values of the value points according to the data acquired by the target sensor.
The first determining module 1003 is configured to determine a target probability distribution of the motion state according to the updated probability values of the value points.
Optionally, the updating module 1002 includes a determining sub-module and an updating sub-module. The determining sub-module is used for determining, for any value point, the posterior probability of the value point according to the data acquired by the target sensor, where the posterior probability of the value point is the probability of the value point given the data acquired by the target sensor. The updating sub-module is used for updating the probability value of each value point according to the posterior probability of each value point.
Optionally, the determining sub-module is specifically configured to: determine the likelihood probability of the value point according to the data acquired by the target sensor, where the likelihood probability of the value point is the probability of acquiring the target sensor's data given the value point; and calculate the product of the probability value and the likelihood probability of the value point to obtain the posterior probability of the value point.
Optionally, the movable platform further comprises: a setting module 1004 and a second determining module 1005. The setting module 1004 is configured to set a value range with a value point with a maximum probability value in the initial probability distribution as a center and a fusion precision value of the target sensor as a radius. The second determining module 1005 is configured to determine each value point at equal intervals in the value range.
Optionally, the movable platform further comprises: a third determining module 1006, configured to determine each value point in the value range corresponding to the motion state according to the probability density of the initial probability distribution, where the interval between the value point with the larger probability density and the adjacent value point is smaller.
Optionally, the data collected by the target sensor for the environment is point cloud data, the object is represented by a point cloud cluster, each value point is the position of each point cloud particle in the point cloud cluster, and the probability value of each value point is the probability value of the position of each point cloud particle.
Optionally, the movable platform further comprises:
a fourth determining module 1007 is configured to determine, in a first point cloud cluster of the object, point cloud particles to be detected, where the point cloud particles to be detected are point cloud particles with a probability value greater than a first preset threshold.
The detection module 1008 is configured to detect whether there are point cloud particles with a distance smaller than a preset distance from the point cloud particles to be detected in the second point cloud cluster corresponding to the other object.
The calculating module 1009 is configured to calculate, if there are point cloud particles in the second point cloud cluster, whose distance from the point cloud particle to be detected is smaller than the preset distance, a joint probability of the first point cloud cluster and the second point cloud cluster according to a probability distribution of the point cloud particles in the first point cloud cluster and a probability distribution of the point cloud particles in the second point cloud cluster.
And a generating module 1010, configured to generate a new point cloud cluster according to the point cloud particles with the joint probability greater than the second preset threshold.
Optionally, the initial probability distribution is based on a plurality of sensors collecting data at the current frame and the previous frame. The update module 1002 is specifically configured to: and updating the probability value of the value point according to the data acquired by the target sensor in the current frame and the last frame.
Optionally, the movable platform further comprises:
the judging module 1011 is configured to judge whether the initial probability distribution and the target probability distribution satisfy the consistency condition after the updating module updates the probability value of each value point according to the data acquired by the target sensor to obtain the target probability distribution of the motion state.
And the pushing module 1012 is configured to push the alarm information to prompt the user that the mobile platform is abnormal if the initial probability distribution and the target probability distribution do not meet the consistency condition.
Optionally, the judging module 1011 is specifically configured to: and judging whether the initial probability distribution and the target probability distribution meet the consistency condition or not through a chi-square test mode.
Optionally, the movable platform further comprises: and a fifth determining module 1013, configured to determine, after the first determining module determines the target probability distribution of the motion state according to the updated probability values of the respective value points, an absolute value of the motion state of the object according to the value point of the motion state of the movable platform and the value point in the target probability distribution of the motion state of the corresponding object.
Optionally, the plurality of sensors includes at least one of: laser radar sensor, binocular vision sensor, millimeter wave radar, ultrasonic sensor.
Optionally, the motion state includes at least one of: position parameters, orientation parameters, velocity parameters, acceleration parameters of the object.
In summary, the present application provides a movable platform that can execute the above object state acquisition method; for its content and effects, reference may be made to the method embodiments, which are not repeated here.
Fig. 11 is a schematic diagram of a movable platform according to an embodiment of the present application. As shown in fig. 11, the movable platform includes: a plurality of sensors 1101 for collecting data on the environment in which the movable platform is located, at least one processor 1102, and a memory 1103 communicatively coupled to the at least one processor. In fig. 11, two sensors 1101 and one processor 1102 are illustrated.
The processor 1102 is configured to: acquiring initial probability distribution of a motion state of an object in an environment, wherein the initial probability distribution is a fusion result of data acquired by a plurality of sensors, and comprises probability values of each value point corresponding to the motion state; updating probability values of all the value points according to the data acquired by the target sensor; and determining target probability distribution of the motion state according to the updated probability value of each value point.
Optionally, the processor 1102 is specifically configured to: for any value point, determine the posterior probability of the value point according to the data collected by the target sensor, wherein the posterior probability of the value point is the probability of the value point given the data collected by the target sensor; and update the probability value of each value point according to the posterior probability of each value point.
Optionally, the processor 1102 is specifically configured to: determine the likelihood probability of the value point according to the data collected by the target sensor, wherein the likelihood probability of the value point is the probability of collecting the data given the value point; and calculate the product of the probability value of the value point and the likelihood probability to obtain the posterior probability of the value point.
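The discrete Bayesian update described above, in which each value point's prior probability is multiplied by the likelihood of the sensor data and the result is renormalized, can be sketched roughly as follows. The Gaussian likelihood model, the velocity value points, and all numbers are illustrative assumptions only, not taken from this application.

```python
import math

def update_distribution(value_points, priors, measurement, sensor_sigma):
    """Multiply each value point's prior by the sensor likelihood, then renormalize."""
    posteriors = []
    for x, p in zip(value_points, priors):
        # Likelihood: probability of observing `measurement` given state x,
        # modeled here as a Gaussian around x (an assumed sensor model).
        likelihood = math.exp(-0.5 * ((measurement - x) / sensor_sigma) ** 2)
        posteriors.append(p * likelihood)
    total = sum(posteriors)
    return [q / total for q in posteriors]

# Example: velocity value points with a uniform prior, one sensor reading.
points = [9.0, 9.5, 10.0, 10.5, 11.0]
prior = [0.2] * 5
posterior = update_distribution(points, prior, measurement=10.1, sensor_sigma=0.5)
```

After the update, the probability mass concentrates around the value point closest to the measurement, which is the behavior the posterior-times-likelihood step is designed to produce.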
Optionally, the processor 1102 is further configured to: set a value range centered on the value point with the maximum probability value in the initial probability distribution, with the fusion precision value of the target sensor as the radius; and determine the value points at equal intervals within the value range.
Optionally, the processor 1102 is further configured to: determine the value points within the value range of the corresponding motion state according to the probability density of the initial probability distribution, wherein the larger the probability density at a value point, the smaller the interval between that value point and its adjacent value points.
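The equal-interval sampling scheme above can be sketched as follows; the dictionary representation of the distribution and the default number of points are assumptions for illustration, not specified by this application.

```python
def equal_interval_points(initial_dist, fusion_precision, num_points=11):
    """Sample value points at equal intervals in [center - r, center + r].

    initial_dist: {value_point: probability} for the initial distribution.
    The center is the value point with the maximum probability, and the
    radius r is the fusion precision value of the target sensor.
    """
    center = max(initial_dist, key=initial_dist.get)
    r = fusion_precision
    step = 2.0 * r / (num_points - 1)
    return [center - r + i * step for i in range(num_points)]

# Example: the initial distribution peaks at 10.0, sensor precision is 1.0.
pts = equal_interval_points({9.0: 0.2, 10.0: 0.6, 11.0: 0.2}, 1.0, num_points=5)
```

The density-weighted variant in the following paragraph would instead place points non-uniformly, spacing them more tightly where the initial probability density is high; the sketch above covers only the equal-interval case.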
Optionally, the data collected by the target sensor for the environment are point cloud data, the object is represented by a point cloud cluster, each value point is the position of a point cloud particle in the point cloud cluster, and the probability value of each value point is the probability value of the position of the corresponding point cloud particle.
Optionally, the processor 1102 is further configured to: determine the point cloud particles to be detected in a first point cloud cluster of the object, wherein the point cloud particles to be detected are point cloud particles whose probability values are larger than a first preset threshold; detect whether a second point cloud cluster corresponding to another object contains point cloud particles whose distance from the point cloud particles to be detected is smaller than a preset distance; if so, calculate the joint probability of the first point cloud cluster and the second point cloud cluster according to the probability distributions of the point cloud particles in the two clusters; and generate a new point cloud cluster from the point cloud particles whose joint probability is larger than a second preset threshold.
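The cluster-merging procedure above (threshold, proximity check, joint probability, second threshold) might be sketched as below. Treating the particle probabilities as independent so that the joint probability is their product is an illustrative assumption; this application does not specify the joint model, and the particle format is invented for the example.

```python
import math

def merge_clusters(cluster_a, cluster_b, p_min, d_max, p_joint_min):
    """cluster_a / cluster_b: lists of (x, y, prob) point cloud particles.

    For each particle in cluster_a whose probability exceeds the first
    preset threshold p_min, look for particles in cluster_b closer than
    the preset distance d_max; compute a joint probability (here the
    product, under an independence assumption) and keep the pairs whose
    joint probability exceeds the second preset threshold p_joint_min.
    """
    merged = []
    for xa, ya, pa in cluster_a:
        if pa <= p_min:                      # first preset threshold
            continue
        for xb, yb, pb in cluster_b:
            if math.hypot(xa - xb, ya - yb) >= d_max:
                continue                     # not close enough to merge
            p_joint = pa * pb                # assumed joint model
            if p_joint > p_joint_min:        # second preset threshold
                merged.append(((xa + xb) / 2, (ya + yb) / 2, p_joint))
    return merged

# Example: only the overlapping high-probability particles get merged.
merged = merge_clusters(
    [(0.0, 0.0, 0.9), (5.0, 5.0, 0.2)],
    [(0.1, 0.0, 0.8), (9.0, 9.0, 0.9)],
    p_min=0.5, d_max=0.5, p_joint_min=0.5,
)
```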
Optionally, the initial probability distribution is obtained based on data collected by the plurality of sensors in the current frame and the previous frame; the processor 1102 is specifically configured to: update the probability value of each value point according to the data collected by the target sensor in the current frame and the previous frame.
Optionally, the processor 1102 is further configured to: after updating the probability value of each value point according to the data collected by the target sensor to obtain the target probability distribution of the motion state, judge whether the initial probability distribution and the target probability distribution satisfy a consistency condition; and if they do not, push alarm information to prompt the user that the movable platform is abnormal.
Optionally, the processor 1102 is specifically configured to: judge whether the initial probability distribution and the target probability distribution satisfy the consistency condition by means of a chi-square test.
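A minimal sketch of such a consistency check is shown below. A chi-square test normally operates on counts; here the Pearson statistic is computed directly on the discrete probability values purely for illustration, with the initial distribution as the expected values, and `threshold` standing in for the critical value at a chosen significance level. None of this is specified by the application.

```python
def chi_square_consistent(p_initial, p_target, threshold):
    """Return True if the target distribution is consistent with the
    initial one under a Pearson chi-square statistic below `threshold`."""
    stat = sum(
        (pt - pi) ** 2 / pi
        for pi, pt in zip(p_initial, p_target)
        if pi > 0  # skip zero-probability expected values
    )
    return stat < threshold

# Identical distributions give statistic 0 (consistent); a distribution
# that has shifted sharply exceeds a small threshold (inconsistent), and
# on the movable platform the latter case would trigger the alarm push.
ok = chi_square_consistent([0.25] * 4, [0.25] * 4, threshold=0.5)
bad = chi_square_consistent([0.25] * 4, [0.7, 0.1, 0.1, 0.1], threshold=0.5)
```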
Optionally, the processor 1102 is further configured to: after the target probability distribution of the motion state is determined according to the updated probability value of each value point, the absolute value of the motion state of the object is determined according to the value point of the motion state of the movable platform and the value point in the target probability distribution of the motion state of the corresponding object.
Optionally, the plurality of sensors includes at least one of: a laser radar (lidar) sensor, a binocular vision sensor, a millimeter-wave radar, and an ultrasonic sensor.
Optionally, the motion state includes at least one of: a position parameter, an orientation parameter, a velocity parameter, and an acceleration parameter of the object.
The processor referred to in the present application may be a microcontroller unit (MCU), a central processing unit (CPU) or another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), or the like. A general-purpose processor may be a microprocessor, or any conventional processor. The steps of the methods disclosed in the embodiments of the present application may be executed directly by a hardware processor, or by a combination of hardware and software modules in the processor.
Those of ordinary skill in the art will appreciate that all or part of the steps of the method embodiments described above may be performed by hardware driven by program instructions. The program may be stored in a computer-readable storage medium; when executed, it performs the steps of the method embodiments described above. The storage medium includes various media that can store program code, such as a ROM, a RAM, a magnetic disk, or an optical disc.
The present application also provides a computer program product comprising computer instructions for implementing the steps of the method embodiments described above. For its content and effects, reference may be made to the method embodiments, which are not repeated here.
Finally, it should be noted that the above embodiments are only intended to illustrate, not to limit, the technical solutions of the present application. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the technical solutions described in the foregoing embodiments may still be modified, or some or all of their technical features may be replaced by equivalents, without such modifications or substitutions departing from the scope of the technical solutions of the embodiments of the present application.

Claims (25)

1. An object state acquisition method, wherein a movable platform carries a plurality of sensors, the sensors being configured to collect data on an environment in which the movable platform is located, the method comprising:
acquiring an initial probability distribution of a motion state of an object in the environment, wherein the initial probability distribution is a fusion result of data collected by the plurality of sensors and comprises a probability value of each value point corresponding to the motion state;
updating the probability value of each value point according to data collected by a target sensor;
determining a target probability distribution of the motion state according to the updated probability value of each value point;
the method further comprises the steps of:
setting a value range by taking a value point with the maximum probability value in the initial probability distribution as a center and taking a fusion precision value of the target sensor as a radius;
and determining the value points at equal intervals in the value range.
2. The method of claim 1, wherein updating the probability value for each of the plurality of value points based on the data collected by the target sensor comprises:
for any value point, determining a posterior probability of the value point according to the data collected by the target sensor, wherein the posterior probability of the value point is the probability of the value point given the data collected by the target sensor;
and updating the probability value of each value point according to the posterior probability of each value point.
3. The method of claim 2, wherein determining the posterior probability of the value point based on the data collected by the target sensor comprises:
determining a likelihood probability of the value point according to the data collected by the target sensor, wherein the likelihood probability of the value point is the probability of collecting the data given the value point;
and calculating the product of the probability value of the value point and the likelihood probability to obtain the posterior probability of the value point.
4. The method as recited in claim 1, further comprising:
and determining each value point in a value range corresponding to the motion state according to the probability density of the initial probability distribution, wherein the larger the probability density is, the smaller the interval between the value point and the adjacent value point is.
5. The method according to any one of claims 1 to 4, wherein
the data collected by the target sensor for the environment are point cloud data, the object is represented by a point cloud cluster, each value point is the position of each point cloud particle in the point cloud cluster, and the probability value of each value point is the probability value of the position of each point cloud particle.
6. The method as recited in claim 5, further comprising:
determining point cloud particles to be detected in a first point cloud cluster of the object, wherein the point cloud particles to be detected are point cloud particles with probability values larger than a first preset threshold value;
detecting whether point cloud particles whose distance from the point cloud particles to be detected is smaller than a preset distance exist in a second point cloud cluster corresponding to another object;
if the second point cloud cluster contains point cloud particles with the distance from the point cloud particles to be detected being smaller than the preset distance, calculating the joint probability of the first point cloud cluster and the second point cloud cluster according to the probability distribution of the point cloud particles in the first point cloud cluster and the probability distribution of the point cloud particles in the second point cloud cluster;
and generating a new point cloud cluster according to the point cloud particles with the joint probability larger than a second preset threshold value.
7. The method according to any one of claims 1 to 4, wherein
the initial probability distribution is obtained based on the data acquired by a plurality of sensors in the current frame and the previous frame;
the updating the probability value of each value point according to the data collected by the target sensor comprises the following steps:
and updating the probability value of each value point according to the data collected by the target sensor in the current frame and the previous frame.
8. The method according to any one of claims 1-4, wherein after determining the target probability distribution of the motion state according to the updated probability values of the respective value points, further comprising:
Judging whether the initial probability distribution and the target probability distribution meet a consistency condition;
and if the initial probability distribution and the target probability distribution do not meet the consistency condition, pushing alarm information to prompt a user that the movable platform is abnormal.
9. The method of claim 8, wherein said determining whether said initial probability distribution and said target probability distribution satisfy a consistency condition comprises:
and judging whether the initial probability distribution and the target probability distribution satisfy the consistency condition by means of a chi-square test.
10. The method according to any one of claims 1-4, wherein after determining the target probability distribution of the motion state according to the updated probability values of the respective value points, further comprising:
and determining the absolute value of the motion state of the object according to the value point of the motion state of the movable platform and the value point in the target probability distribution corresponding to the motion state of the object.
11. The method of any one of claims 1-4, wherein the plurality of sensors comprises at least one of: a laser radar (lidar) sensor, a binocular vision sensor, a millimeter-wave radar, and an ultrasonic sensor.
12. The method of any one of claims 1-4, wherein the motion state comprises at least one of: the position parameter, the orientation parameter, the speed parameter and the acceleration parameter of the object.
13. A mobile platform, wherein the mobile platform carries a plurality of sensors configured to collect data on the environment in which the mobile platform is located, the mobile platform comprising a processor configured to:
acquiring initial probability distribution of a motion state of an object in the environment, wherein the initial probability distribution is a fusion result of data acquired by a plurality of sensors, and comprises probability values of each value point corresponding to the motion state;
updating the probability value of each value point according to the data acquired by the target sensor;
determining target probability distribution of the motion state according to the updated probability value of each value point; the processor is further configured to:
setting a value range by taking a value point with the maximum probability value in the initial probability distribution as a center and taking a fusion precision value of the target sensor as a radius;
and determining the value points at equal intervals in the value range.
14. The mobile platform of claim 13, wherein the processor is specifically configured to:
for any value point, determining a posterior probability of the value point according to the data collected by the target sensor, wherein the posterior probability of the value point is the probability of the value point given the data collected by the target sensor;
and updating the probability value of each value point according to the posterior probability of each value point.
15. The mobile platform of claim 14, wherein the processor is specifically configured to:
determining a likelihood probability of the value point according to the data collected by the target sensor, wherein the likelihood probability of the value point is the probability of collecting the data given the value point;
and calculating the product of the probability value of the value point and the likelihood probability to obtain the posterior probability of the value point.
16. The mobile platform of claim 13, wherein the processor is further configured to:
and determining each value point in a value range corresponding to the motion state according to the probability density of the initial probability distribution, wherein the larger the probability density is, the smaller the interval between the value point and the adjacent value point is.
17. The mobile platform of any one of claims 13-16, wherein,
the data collected by the target sensor for the environment are point cloud data, the object is represented by a point cloud cluster, each value point is the position of each point cloud particle in the point cloud cluster, and the probability value of each value point is the probability value of the position of each point cloud particle.
18. The mobile platform of claim 17, wherein the processor is further configured to:
determining point cloud particles to be detected in a first point cloud cluster of the object, wherein the point cloud particles to be detected are point cloud particles with probability values larger than a first preset threshold value;
detecting whether point cloud particles with the distance smaller than a preset distance from the point cloud particles to be detected exist in a second point cloud cluster corresponding to other objects;
if the second point cloud cluster contains point cloud particles with the distance from the point cloud particles to be detected being smaller than the preset distance, calculating the joint probability of the first point cloud cluster and the second point cloud cluster according to the probability distribution of the point cloud particles in the first point cloud cluster and the probability distribution of the point cloud particles in the second point cloud cluster;
and generating a new point cloud cluster according to the point cloud particles with the joint probability larger than a second preset threshold value.
19. The mobile platform of any one of claims 13-16, wherein,
the initial probability distribution is obtained based on the data acquired by a plurality of sensors in the current frame and the previous frame;
the processor is specifically configured to:
and updating the probability value of each value point according to the data collected by the target sensor in the current frame and the previous frame.
20. The mobile platform of any one of claims 13-16, wherein the processor is further configured to:
after updating the probability value of each value point according to the data acquired by the target sensor to obtain the target probability distribution of the motion state, judging whether the initial probability distribution and the target probability distribution meet the consistency condition;
and if the initial probability distribution and the target probability distribution do not meet the consistency condition, pushing alarm information to prompt a user that the movable platform is abnormal.
21. The mobile platform of claim 20, wherein the processor is specifically configured to:
and judging whether the initial probability distribution and the target probability distribution satisfy the consistency condition by means of a chi-square test.
22. The mobile platform of any one of claims 13-16, wherein the processor is further configured to:
after determining the target probability distribution of the motion state according to the updated probability values of the value points, determine the absolute value of the motion state of the object according to the value point of the motion state of the mobile platform and the value point in the target probability distribution corresponding to the motion state of the object.
23. The mobile platform of any one of claims 13-16, wherein the plurality of sensors comprises at least one of: a laser radar (lidar) sensor, a binocular vision sensor, a millimeter-wave radar, and an ultrasonic sensor.
24. The mobile platform of any one of claims 13-16, wherein the motion state comprises at least one of: the position parameter, the orientation parameter, the speed parameter and the acceleration parameter of the object.
25. A computer readable storage medium comprising computer instructions for implementing the method of any one of claims 1-12.
CN201980041121.1A 2019-11-26 2019-11-26 Object state acquisition method, movable platform and storage medium Active CN112313536B (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2019/120911 WO2021102676A1 (en) 2019-11-26 2019-11-26 Object state acquisition method, mobile platform and storage medium

Publications (2)

Publication Number Publication Date
CN112313536A CN112313536A (en) 2021-02-02
CN112313536B true CN112313536B (en) 2024-04-05

Family

ID=74336330

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201980041121.1A Active CN112313536B (en) 2019-11-26 2019-11-26 Object state acquisition method, movable platform and storage medium

Country Status (2)

Country Link
CN (1) CN112313536B (en)
WO (1) WO2021102676A1 (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113115253B (en) * 2021-03-19 2022-08-23 西北大学 Method and system for estimating height and density deployment of millimeter wave unmanned aerial vehicle under dynamic blocking
CN113052907B (en) * 2021-04-12 2023-08-15 深圳大学 Positioning method of mobile robot in dynamic environment
CN113997989B (en) * 2021-11-29 2024-03-29 中国人民解放军国防科技大学 Safety detection method, device, equipment and medium for single-point suspension system of maglev train

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102147468A (en) * 2011-01-07 2011-08-10 西安电子科技大学 Bayesian theory-based multi-sensor detecting and tracking combined processing method
CN103472850A (en) * 2013-09-29 2013-12-25 合肥工业大学 Multi-unmanned aerial vehicle collaborative search method based on Gaussian distribution prediction
CN105717505A (en) * 2016-02-17 2016-06-29 国家电网公司 Data association method for utilizing sensing network to carry out multi-target tracking
WO2018119912A1 (en) * 2016-12-29 2018-07-05 深圳大学 Target tracking method and device based on parallel fuzzy gaussian and particle filter
CN109118500A (en) * 2018-07-16 2019-01-01 重庆大学产业技术研究院 A kind of dividing method of the Point Cloud Data from Three Dimension Laser Scanning based on image
CN109996205A (en) * 2019-04-12 2019-07-09 成都工业学院 Data Fusion of Sensor method, apparatus, electronic equipment and storage medium

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6728608B2 (en) * 2002-08-23 2004-04-27 Applied Perception, Inc. System and method for the creation of a terrain density model
CN105425820B (en) * 2016-01-05 2016-12-28 合肥工业大学 A kind of multiple no-manned plane collaboratively searching method for the moving target with perception
CN105678076B (en) * 2016-01-07 2018-06-22 福州华鹰重工机械有限公司 The method and device of point cloud measurement data quality evaluation optimization
CN105700555B (en) * 2016-03-14 2018-04-27 北京航空航天大学 A kind of multiple no-manned plane collaboratively searching method based on gesture game
US9996944B2 (en) * 2016-07-06 2018-06-12 Qualcomm Incorporated Systems and methods for mapping an environment
CN108509918B (en) * 2018-04-03 2021-01-08 中国人民解放军国防科技大学 Target detection and tracking method fusing laser point cloud and image
CN108764168B (en) * 2018-05-31 2020-02-07 合肥工业大学 Method and system for searching moving target on multi-obstacle sea surface by imaging satellite
CN108717540B (en) * 2018-08-03 2024-02-06 浙江梧斯源通信科技股份有限公司 Method and device for distinguishing pedestrians and vehicles based on 2D laser radar
CN109523129B (en) * 2018-10-22 2021-08-13 吉林大学 Method for fusing information of multiple sensors of unmanned vehicle in real time
CN110389595B (en) * 2019-06-17 2022-04-19 中国工程物理研究院电子工程研究所 Dual-attribute probability map optimized unmanned aerial vehicle cluster cooperative target searching method


Also Published As

Publication number Publication date
WO2021102676A1 (en) 2021-06-03
CN112313536A (en) 2021-02-02

Similar Documents

Publication Publication Date Title
CN112313536B (en) Object state acquisition method, movable platform and storage medium
Aeberhard et al. High-level sensor data fusion architecture for vehicle surround environment perception
EP2657644B1 (en) Positioning apparatus and positioning method
CN111222568B (en) Vehicle networking data fusion method and device
KR101711964B1 (en) Free space map construction method, free space map construction system, foreground/background extraction method using the free space map, and foreground/background extraction system using the free space map
US20150036887A1 (en) Method of determining a ground plane on the basis of a depth image
KR101628155B1 (en) Method for detecting and tracking unidentified multiple dynamic object in real time using Connected Component Labeling
CN111742326A (en) Lane line detection method, electronic device, and storage medium
CN114450691A (en) Robust positioning
CN106291498B (en) A kind of detecting and tracking combined optimization method based on particle filter
CN106080397B (en) Self-adaption cruise system and mobile unit
CN111580116A (en) Method for evaluating target detection performance of vehicle-mounted system and electronic equipment
Baig et al. A robust motion detection technique for dynamic environment monitoring: A framework for grid-based monitoring of the dynamic environment
CN111007880B (en) Extended target tracking method based on automobile radar
CN110426714B (en) Obstacle identification method
CN113838125A (en) Target position determining method and device, electronic equipment and storage medium
CN111612818A (en) Novel binocular vision multi-target tracking method and system
EP2913999A1 (en) Disparity value deriving device, equipment control system, movable apparatus, robot, disparity value deriving method, and computer-readable storage medium
Rabe et al. Ego-lane estimation for downtown lane-level navigation
CN112965076B (en) Multi-radar positioning system and method for robot
CN114035187A (en) Perception fusion method of automatic driving system
CN117765508A (en) Method, device and equipment for detecting non-running area of vehicle
CN112630798B (en) Method and apparatus for estimating ground
CN115327529A (en) 3D target detection and tracking method fusing millimeter wave radar and laser radar
CN112344966B (en) Positioning failure detection method and device, storage medium and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20240515

Address after: Building 3, Xunmei Science and Technology Plaza, No. 8 Keyuan Road, Science and Technology Park Community, Yuehai Street, Nanshan District, Shenzhen City, Guangdong Province, 518057, 1634

Patentee after: Shenzhen Zhuoyu Technology Co.,Ltd.

Country or region after: China

Address before: 518057 Shenzhen Nanshan High-tech Zone, Shenzhen, Guangdong Province, 6/F, Shenzhen Industry, Education and Research Building, Hong Kong University of Science and Technology, No. 9 Yuexingdao, South District, Nanshan District, Shenzhen City, Guangdong Province

Patentee before: SZ DJI TECHNOLOGY Co.,Ltd.

Country or region before: China
