CN112433228B - Multi-laser radar decision-level fusion method and device for pedestrian detection - Google Patents

Multi-laser radar decision-level fusion method and device for pedestrian detection

Info

Publication number
CN112433228B
Authority
CN
China
Prior art keywords
detection, pedestrian, score, probability, negative
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110005340.3A
Other languages
Chinese (zh)
Other versions
CN112433228A (en)
Inventor
叶磊
吴涛
胡骏
丁凯
李健
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
National University of Defense Technology
Original Assignee
National University of Defense Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by National University of Defense Technology filed Critical National University of Defense Technology
Priority to CN202110005340.3A
Publication of CN112433228A
Application granted
Publication of CN112433228B
Legal status: Active
Anticipated expiration

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88Lidar systems specially adapted for specific applications
    • G01S17/93Lidar systems specially adapted for specific applications for anti-collision purposes
    • G01S17/931Lidar systems specially adapted for specific applications for anti-collision purposes of land vehicles
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2411Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • G06F18/24155Bayesian classification
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/56Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07Target detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Electromagnetism (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Probability & Statistics with Applications (AREA)
  • Traffic Control Systems (AREA)
  • Optical Radar Systems And Details Thereof (AREA)

Abstract

The application relates to a multi-laser radar decision-level fusion method and device for pedestrian detection, a computer device, and a storage medium. The method comprises the following steps: performing pedestrian detection, through a trained AdaBoost algorithm, on the point cloud data of a detection target acquired by each laser radar on the unmanned vehicle, to obtain a pedestrian detection score for each individual laser radar; performing decision-level fusion, through the Bayesian rule, on the detection results of each radar pair formed by combining two laser radars on the unmanned vehicle, to obtain that radar pair's pedestrian detection result; and then obtaining the final pedestrian detection result from the pedestrian detection results of all radar pairs among the multiple laser radars. Because each laser radar first makes its decision independently and the decisions of the multiple laser radars are fused afterwards, data-level and feature-level fusion of multiple sensors is avoided and the laser radars' data acquisition need not be fully synchronized, so the method requires little computation and places low timing requirements on the raw laser radar data.

Description

Multi-laser radar decision-level fusion method and device for pedestrian detection
Technical Field
The present application relates to the field of unmanned driving technologies, and in particular, to a method and an apparatus for multi-lidar decision-level fusion for pedestrian detection, a computer device, and a storage medium.
Background
As an emerging technology combining artificial intelligence and automation, unmanned driving has gradually become an important driving force in upgrading the automobile industry and bringing robot technology into ordinary households. At present, in order to improve detection accuracy and reduce detection blind areas, research on unmanned vehicles widely adopts multiple multi-line laser radars for pedestrian detection.
Conventional lidar data fusion methods include data-level fusion and feature-level fusion. Data-level fusion directly fuses the raw data collected by the sensors, while feature-level fusion first extracts features (such as shape features and motion features) from each sensor's data and then fuses them into a comprehensive feature. Because little data processing takes place before fusion, these traditional methods preserve the original information of the data to a greater extent, but they also suffer from drawbacks such as a large data-processing load and poor real-time algorithm performance.
Disclosure of Invention
In view of the above, there is a need to provide a multi-lidar decision-level fusion method, apparatus, computer device and storage medium for pedestrian detection, which can reduce the amount of computation and improve the real-time performance of the algorithm.
A multi-lidar decision-level fusion method for pedestrian detection, the method comprising:
acquiring a radar pair of a plurality of laser radars on the unmanned vehicle in a combined mode; the radar pair includes: a first laser radar and a second laser radar;
respectively acquiring first point cloud data acquired by the first laser radar aiming at a detection target and second point cloud data acquired by the second laser radar aiming at the detection target, and performing pedestrian detection on the first point cloud data and the second point cloud data through a trained AdaBoost model to respectively obtain a first detection score of the first laser radar and a second detection score of the second laser radar; the AdaBoost model is obtained by training a training sample; the information of the training sample comprises a sample score; the training samples comprise positive samples and negative samples; the sample scores of the positive and negative samples both approximately conform to a Gaussian distribution;
calculating a first positive sample conditional probability of the AdaBoost model outputting the first detection score when the detection target is a pedestrian and a second positive sample conditional probability of the AdaBoost model outputting the second detection score when the detection target is a pedestrian according to a Gaussian distribution formula corresponding to the positive samples; calculating a first negative sample conditional probability of the AdaBoost model outputting the first detection score when the detection target is not a pedestrian and a second negative sample conditional probability of the AdaBoost model outputting the second detection score when the detection target is not a pedestrian according to a Gaussian distribution formula corresponding to the negative sample;
according to the first positive sample conditional probability, the second positive sample conditional probability and the positive prior probability that the detection target is a pedestrian, carrying out decision-level fusion on output information of the multi-laser radar through a Bayes rule to obtain a positive judgment probability that the detection target is a pedestrian under the conditions of a first detection score and a second detection score; according to the first negative sample condition probability, the second negative sample condition probability and the negative prior probability that the detection target is not a pedestrian, carrying out decision-level fusion on output information of the multi-laser radar through a Bayes rule to obtain a negative judgment probability that the detection target is not a pedestrian under the conditions of a first detection score and a second detection score; the positive prior probability and the negative prior probability are obtained according to a preset initial value or a positive judgment probability and a negative judgment probability obtained in the last time sequence;
obtaining a pedestrian detection result of the radar pair through decision-level fusion according to the positive judgment probability and the negative judgment probability;
and fusing corresponding pedestrian detection results of all radars combined by the multiple laser radars, and outputting pedestrian detection results fused at decision levels of the multiple laser radars.
In one embodiment, the method further comprises the following steps: acquiring point cloud data acquired by a laser radar on an unmanned vehicle on a detection target, and generating a plurality of original candidate frames for pedestrian detection through traversing search according to central point cloud density characteristics of the point cloud data by a sliding window algorithm;
according to the point cloud position characteristics in the original candidate frame, scoring the original candidate frame through a single classification support vector machine, and determining the best candidate frame of the detection target according to the score of the original candidate frame;
extracting the combination characteristics of the point cloud data in the optimal candidate frame, and realizing pedestrian detection through an AdaBoost model for pedestrian detection according to the combination characteristics;
and obtaining a first detection score through the first point cloud data of the first laser radar, and obtaining a second detection score through the second point cloud data of the second laser radar.
In one embodiment, the method further comprises the following steps: acquiring point cloud data collected by a single laser radar on the unmanned vehicle, and generating a plurality of original candidate frames for pedestrian detection through a traversing search with a sliding window algorithm according to the central point cloud density feature of the point cloud data; the central point cloud density feature is:

$$\rho_{ij} = \frac{n_{ij}}{N}, \qquad i, j = 1, \dots, K$$

where $\rho_{ij}$ represents the central point cloud density feature; $N$ represents the total number of points in the sliding window; $n_{ij}$ represents the number of points falling into grid $(i, j)$; and $K$ is the number of grids into which the sliding window is divided in each of the horizontal and vertical directions.
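A minimal sketch of the central point cloud density feature just described: the sliding window is split into K × K grids and each grid's point count is normalized by the total point count N. The exact aggregation in the patent's figure is not reproduced; the normalized K × K density map below is an assumed reading of the stated definitions.

```python
import numpy as np

def density_feature(points_xy, window_min, window_size, K=4):
    """points_xy: (N, 2) array of window-local x/y coordinates."""
    N = len(points_xy)
    cell = window_size / K
    idx = np.floor((points_xy - window_min) / cell).astype(int)
    idx = np.clip(idx, 0, K - 1)       # points on the far edge -> last grid
    counts = np.zeros((K, K))
    for i, j in idx:
        counts[i, j] += 1              # n_ij: points falling into grid (i, j)
    return counts / N                  # rho_ij = n_ij / N

pts = np.array([[0.1, 0.1], [0.9, 0.9], [0.55, 0.55], [0.6, 0.5]])
rho = density_feature(pts, window_min=0.0, window_size=1.0, K=2)
print(rho)   # each entry is the fraction of window points in that grid
```

Sliding this window over the ground plane and thresholding the central density would yield the original candidate frames.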
In one embodiment, the method further comprises the following steps: and according to the point cloud density distribution characteristics and the point cloud height difference distribution characteristics in the original candidate frame, scoring the original candidate frame through a single classification support vector machine, and determining the best candidate frame of the detection target according to the score of the original candidate frame.
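The candidate scoring step above can be sketched with scikit-learn's `OneClassSVM` standing in for the patent's single-classification support vector machine. The 6-dimensional feature vectors below are an assumed stand-in for the point cloud density distribution and height-difference distribution features; the model is trained only on pedestrian-like boxes, so its decision value ranks how pedestrian-like each original candidate frame is.

```python
# Hypothetical sketch: one-class SVM scoring of original candidate frames,
# keeping the best-scoring frame as the best candidate.
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(1)

# Train on features of known pedestrian-like candidate boxes only.
pedestrian_box_features = rng.normal(0.0, 1.0, (300, 6))
ocsvm = OneClassSVM(kernel="rbf", gamma="scale", nu=0.1)
ocsvm.fit(pedestrian_box_features)

# Score each original candidate box; higher = more pedestrian-like.
candidate_features = np.vstack([rng.normal(0.0, 1.0, 6),    # plausible box
                                rng.normal(5.0, 1.0, 6)])   # implausible box
scores = ocsvm.decision_function(candidate_features)
best = int(np.argmax(scores))
print(scores, best)
```

Only the single best frame per target is passed on to feature extraction, which keeps the AdaBoost stage cheap.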
In one embodiment, the method further comprises the following steps: extracting the combination characteristics of the point cloud data in the optimal candidate frame, and realizing pedestrian detection through an AdaBoost model for pedestrian detection according to the combination characteristics; the combined features are point number, distance from a mass center to the unmanned vehicle, maximum height difference, point cloud three-dimensional covariance matrix, three-dimensional covariance matrix eigenvalue, inertia tensor and rotation projection statistical features.
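Most of the combined features listed above are straightforward point cloud statistics. The sketch below computes the point count, centroid-to-vehicle distance, maximum height difference, 3D covariance matrix and its eigenvalues, and an inertia tensor (unit point masses assumed, vehicle assumed at the lidar origin); the rotation-projection statistical feature is omitted as its construction is not described here.

```python
import numpy as np

def combined_features(points):
    """points: (N, 3) array of x/y/z coordinates inside the best candidate box."""
    n = len(points)                                   # number of points
    centroid = points.mean(axis=0)
    dist = np.linalg.norm(centroid)                   # centroid -> vehicle (origin)
    height_diff = points[:, 2].max() - points[:, 2].min()
    cov = np.cov(points.T)                            # 3x3 point cloud covariance
    eigvals = np.sort(np.linalg.eigvalsh(cov))        # covariance eigenvalues
    p = points - centroid
    S = np.einsum('ni,nj->ij', p, p)                  # scatter matrix sum r r^T
    inertia = S.trace() * np.eye(3) - S               # inertia tensor, unit masses
    return n, dist, height_diff, cov, eigvals, inertia

pts = np.array([[0.0, 2.0, 0.0], [0.2, 2.1, 1.7], [-0.1, 1.9, 0.9], [0.1, 2.0, 0.4]])
n, dist, hdiff, cov, eigvals, inertia = combined_features(pts)
print(n, round(dist, 2), round(hdiff, 2))
```

Concatenating these statistics into one vector gives the input of the AdaBoost pedestrian detector.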
In one embodiment, the method further comprises the following steps: obtaining a positive sample mean value and a positive sample variance of Gaussian distribution corresponding to the positive sample according to the positive sample, and obtaining a negative sample mean value and a negative sample variance of Gaussian distribution corresponding to the negative sample according to the negative sample;
calculating, according to the Gaussian distribution formula corresponding to the positive samples, a first positive sample conditional probability that the AdaBoost model outputs the first detection score when the detection target is a pedestrian, and a second positive sample conditional probability that the AdaBoost model outputs the second detection score when the detection target is a pedestrian:

$$P(s_1 \mid y = 1) = \frac{1}{\sqrt{2\pi}\,\sigma_p}\exp\!\left(-\frac{(s_1 - \mu_p)^2}{2\sigma_p^2}\right)$$

$$P(s_2 \mid y = 1) = \frac{1}{\sqrt{2\pi}\,\sigma_p}\exp\!\left(-\frac{(s_2 - \mu_p)^2}{2\sigma_p^2}\right)$$

where $y = 1$ indicates that the detection target is a pedestrian; $s_1$ represents the first detection score; $s_2$ represents the second detection score; $P(s_1 \mid y = 1)$ represents the first positive sample conditional probability; $P(s_2 \mid y = 1)$ represents the second positive sample conditional probability; $\sigma_p^2$ represents the positive sample variance; and $\mu_p$ represents the positive sample mean;
calculating, according to the Gaussian distribution formula corresponding to the negative samples, a first negative sample conditional probability that the AdaBoost model outputs the first detection score when the detection target is not a pedestrian, and a second negative sample conditional probability that the AdaBoost model outputs the second detection score when the detection target is not a pedestrian:

$$P(s_1 \mid y = 0) = \frac{1}{\sqrt{2\pi}\,\sigma_n}\exp\!\left(-\frac{(s_1 - \mu_n)^2}{2\sigma_n^2}\right)$$

$$P(s_2 \mid y = 0) = \frac{1}{\sqrt{2\pi}\,\sigma_n}\exp\!\left(-\frac{(s_2 - \mu_n)^2}{2\sigma_n^2}\right)$$

where $y = 0$ indicates that the detection target is not a pedestrian; $P(s_1 \mid y = 0)$ represents the first negative sample conditional probability; $P(s_2 \mid y = 0)$ represents the second negative sample conditional probability; $\sigma_n^2$ represents the negative sample variance; and $\mu_n$ represents the negative sample mean.
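The two Gaussian score models above can be sketched directly. The means and variances below are illustrative values, not the patent's fitted statistics; in practice they would be estimated from the positive- and negative-sample scores of the training set.

```python
# Sketch: converting each lidar's detection score into the positive- and
# negative-sample conditional probabilities via the fitted Gaussians.
import math

def gaussian_pdf(s, mu, var):
    return math.exp(-(s - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

mu_pos, var_pos = 0.6, 0.04     # positive-sample mean / variance (assumed)
mu_neg, var_neg = -0.5, 0.09    # negative-sample mean / variance (assumed)

s1, s2 = 0.55, 0.40             # first / second detection scores (assumed)

p_s1_ped = gaussian_pdf(s1, mu_pos, var_pos)   # first positive conditional prob.
p_s2_ped = gaussian_pdf(s2, mu_pos, var_pos)   # second positive conditional prob.
p_s1_neg = gaussian_pdf(s1, mu_neg, var_neg)   # first negative conditional prob.
p_s2_neg = gaussian_pdf(s2, mu_neg, var_neg)   # second negative conditional prob.
print(p_s1_ped, p_s1_neg)
```

Scores near the positive-sample mean yield a much larger positive conditional probability than negative, which is what the Bayesian fusion step exploits.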
In one embodiment, the method further comprises the following steps: according to the first positive sample conditional probability, the second positive sample conditional probability and the positive prior probability that the detection target is a pedestrian, carrying out decision-level fusion on the output information of the multiple laser radars through the Bayesian rule to obtain the positive judgment probability that the detection target is a pedestrian given the first detection score and the second detection score:

$$P(y = 1 \mid s_1, s_2) = \frac{P(s_1 \mid y = 1)\,P(s_2 \mid y = 1)\,P(y = 1)}{P(s_1)\,P(s_2)}$$

where $P(y = 1 \mid s_1, s_2)$ represents the positive judgment probability; $P(y = 1)$ represents the positive prior probability; and $P(s_1)$ and $P(s_2)$, the probability terms of the Bayesian rule for the first and second detection scores respectively, are obtained by presetting;

according to the first negative sample conditional probability, the second negative sample conditional probability and the negative prior probability that the detection target is not a pedestrian, carrying out decision-level fusion on the output information of the multiple laser radars through the Bayesian rule to obtain the negative judgment probability that the detection target is not a pedestrian given the first detection score and the second detection score:

$$P(y = 0 \mid s_1, s_2) = \frac{P(s_1 \mid y = 0)\,P(s_2 \mid y = 0)\,P(y = 0)}{P(s_1)\,P(s_2)}$$

where $P(y = 0 \mid s_1, s_2)$ represents the negative judgment probability and $P(y = 0)$ represents the negative prior probability.
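The Bayesian fusion step above can be sketched as follows. One assumption is made for the example: rather than presetting the evidence terms for the two detection scores, the two unnormalized posteriors are normalized to sum to one, which is an equivalent reading of the rule for a two-class problem; all numeric inputs are illustrative.

```python
# Sketch: decision-level fusion of one radar pair via the Bayesian rule,
# assuming conditional independence of the two lidars' scores.
def fuse(p_s1_ped, p_s2_ped, p_s1_neg, p_s2_neg, prior_ped):
    prior_neg = 1.0 - prior_ped
    num_ped = p_s1_ped * p_s2_ped * prior_ped    # numerator for y = pedestrian
    num_neg = p_s1_neg * p_s2_neg * prior_neg    # numerator for y = not pedestrian
    evidence = num_ped + num_neg
    return num_ped / evidence, num_neg / evidence  # positive / negative judgment prob.

# Both lidars moderately favor "pedestrian"; uninformative initial prior 0.5.
pos, neg = fuse(p_s1_ped=1.9, p_s2_ped=1.2, p_s1_neg=0.03, p_s2_neg=0.25,
                prior_ped=0.5)
print(round(pos, 4), round(neg, 4))

# At the next time step, the previous judgment probabilities become the priors.
pos2, neg2 = fuse(1.9, 1.2, 0.03, 0.25, prior_ped=pos)
print(round(pos2, 6))
```

Feeding each time step's judgment probabilities back in as the next priors is what the text means by the priors being "obtained in the last time sequence": evidence accumulates across frames.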
A multi-lidar decision-level fusion apparatus for pedestrian detection, the apparatus comprising:
the radar pair acquisition module is used for acquiring radar pairs of multiple laser radars on the unmanned vehicle in a combined mode; the radar pair includes: a first laser radar and a second laser radar;
the detection score acquisition module is used for respectively acquiring first point cloud data acquired by the first laser radar aiming at a detection target and second point cloud data acquired by the second laser radar aiming at the detection target, and carrying out pedestrian detection on the first point cloud data and the second point cloud data through a trained AdaBoost model to respectively obtain a first detection score of the first laser radar and a second detection score of the second laser radar; the AdaBoost model is obtained by training a training sample; the information of the training sample comprises a sample score; the training samples comprise positive samples and negative samples; the sample scores of the positive and negative samples both approximately conform to a Gaussian distribution;
a conditional probability obtaining module, configured to calculate, according to a gaussian distribution formula corresponding to the positive sample, a first positive sample conditional probability that the AdaBoost model outputs the first detection score when the detection target is a pedestrian, and a second positive sample conditional probability that the AdaBoost model outputs the second detection score when the detection target is a pedestrian; calculating a first negative sample conditional probability of the AdaBoost model outputting the first detection score when the detection target is not a pedestrian and a second negative sample conditional probability of the AdaBoost model outputting the second detection score when the detection target is not a pedestrian according to a Gaussian distribution formula corresponding to the negative samples;
the judgment probability acquisition module is used for carrying out decision-level fusion on the output information of the multi-laser radar through a Bayesian rule according to the first positive sample conditional probability, the second positive sample conditional probability and the positive prior probability that the detection target is a pedestrian, so as to obtain a positive judgment probability that the detection target is a pedestrian under the conditions of a first detection score and a second detection score; according to the first negative sample conditional probability, the second negative sample conditional probability and the negative prior probability that the detection target is not a pedestrian, carrying out decision-level fusion on output information of the multi-laser radar through a Bayes rule to obtain a negative judgment probability that the detection target is not a pedestrian under the conditions of a first detection score and a second detection score; the positive prior probability and the negative prior probability are obtained according to a preset initial value or a positive judgment probability and a negative judgment probability obtained in the last time sequence;
the radar pair pedestrian detection result acquisition module is used for obtaining the pedestrian detection result of the radar pair through decision-level fusion according to the positive judgment probability and the negative judgment probability;
and the multi-laser radar pedestrian detection result acquisition module is used for fusing corresponding pedestrian detection results of all radars combined by the multi-laser radar and outputting the pedestrian detection results fused in a multi-laser radar decision level.
A computer device comprising a memory and a processor, the memory storing a computer program, the processor implementing the following steps when executing the computer program:
acquiring a radar pair of a plurality of laser radars on the unmanned vehicle in a combined mode; the radar pair includes: a first lidar and a second lidar;
respectively acquiring first point cloud data acquired by the first laser radar aiming at a detection target and second point cloud data acquired by the second laser radar aiming at the detection target, and performing pedestrian detection on the first point cloud data and the second point cloud data through a trained AdaBoost model to respectively obtain a first detection score of the first laser radar and a second detection score of the second laser radar; the AdaBoost model is obtained by training a training sample; the information of the training samples comprises sample scores; the training samples comprise positive samples and negative samples; the sample scores of the positive and negative samples both approximately fit a gaussian distribution;
calculating a first positive sample conditional probability of the AdaBoost model outputting the first detection score when the detection target is a pedestrian and a second positive sample conditional probability of the AdaBoost model outputting the second detection score when the detection target is a pedestrian according to a Gaussian distribution formula corresponding to the positive samples; calculating a first negative sample conditional probability of the AdaBoost model outputting the first detection score when the detection target is not a pedestrian and a second negative sample conditional probability of the AdaBoost model outputting the second detection score when the detection target is not a pedestrian according to a Gaussian distribution formula corresponding to the negative samples;
according to the first positive sample conditional probability, the second positive sample conditional probability and the positive prior probability that the detection target is a pedestrian, carrying out decision-level fusion on output information of the multi-laser radar through a Bayesian rule to obtain a positive judgment probability that the detection target is a pedestrian under the conditions of a first detection score and a second detection score; according to the first negative sample condition probability, the second negative sample condition probability and the negative prior probability that the detection target is not a pedestrian, carrying out decision-level fusion on output information of the multi-laser radar through a Bayes rule to obtain a negative judgment probability that the detection target is not a pedestrian under the conditions of a first detection score and a second detection score; the positive prior probability and the negative prior probability are obtained according to a preset initial value or a positive judgment probability and a negative judgment probability obtained in the last time sequence;
obtaining a pedestrian detection result of the radar pair decision-level fusion according to the positive judgment probability and the negative judgment probability;
and fusing corresponding pedestrian detection results of all radars combined by the multiple laser radars, and outputting pedestrian detection results fused at decision levels of the multiple laser radars.
A computer-readable storage medium, on which a computer program is stored which, when executed by a processor, carries out the steps of:
acquiring a radar pair of a plurality of laser radars on the unmanned vehicle in a combined mode; the radar pair includes: a first lidar and a second lidar;
respectively acquiring first point cloud data acquired by the first laser radar aiming at a detection target and second point cloud data acquired by the second laser radar aiming at the detection target, and performing pedestrian detection on the first point cloud data and the second point cloud data through a trained AdaBoost model to respectively obtain a first detection score of the first laser radar and a second detection score of the second laser radar; the AdaBoost model is obtained by training a training sample; the information of the training samples comprises sample scores; the training samples comprise positive samples and negative samples; the sample scores of the positive and negative samples both approximately fit a gaussian distribution;
calculating a first positive sample conditional probability of the AdaBoost model outputting the first detection score when the detection target is a pedestrian and a second positive sample conditional probability of the AdaBoost model outputting the second detection score when the detection target is a pedestrian according to a Gaussian distribution formula corresponding to the positive samples; calculating a first negative sample conditional probability of the AdaBoost model outputting the first detection score when the detection target is not a pedestrian and a second negative sample conditional probability of the AdaBoost model outputting the second detection score when the detection target is not a pedestrian according to a Gaussian distribution formula corresponding to the negative sample;
according to the first positive sample conditional probability, the second positive sample conditional probability and the positive prior probability that the detection target is a pedestrian, carrying out decision-level fusion on output information of the multi-laser radar through a Bayesian rule to obtain a positive judgment probability that the detection target is a pedestrian under the conditions of a first detection score and a second detection score; according to the first negative sample condition probability, the second negative sample condition probability and the negative prior probability that the detection target is not a pedestrian, carrying out decision-level fusion on output information of the multi-laser radar through a Bayes rule to obtain a negative judgment probability that the detection target is not a pedestrian under the conditions of a first detection score and a second detection score; the positive prior probability and the negative prior probability are obtained according to a preset initial value or a positive judgment probability and a negative judgment probability obtained in the last time sequence;
obtaining a pedestrian detection result of the radar pair through decision-level fusion according to the positive judgment probability and the negative judgment probability;
and fusing corresponding pedestrian detection results of all radars combined by the multiple laser radars, and outputting pedestrian detection results fused at decision levels of the multiple laser radars.
According to the multi-laser radar decision-level fusion method and device for pedestrian detection, the computer device and the storage medium, pedestrian detection is performed, through the trained AdaBoost algorithm, on the point cloud data of the detection target acquired by each laser radar on the unmanned vehicle, yielding a pedestrian detection score for each individual laser radar. From the statistical characteristics of the positive and negative sample scores contained in the sample information, the conditional probabilities of AdaBoost outputting a specific detection score when the detection target is a pedestrian and when it is not are obtained. Using the conditional probabilities obtained from each laser radar and the prior probability of whether the detection target is a pedestrian, decision-level fusion is performed through the Bayesian rule on the detection results of each radar pair formed by combining two laser radars on the unmanned vehicle, yielding the radar pair's pedestrian detection result; the final multi-laser radar pedestrian detection result is then obtained from the pedestrian detection results of all radar pairs. Because each laser radar first makes its decision independently and the decisions of the multiple laser radars are fused afterwards, data-level and feature-level fusion of multiple sensors is avoided and the laser radars' data acquisition need not be fully synchronized; the method therefore requires little computation, places low timing requirements on the raw laser radar data, and executes efficiently.
Drawings
FIG. 1 is a schematic flow diagram of a multi-lidar decision-level fusion method for pedestrian detection in one embodiment;
FIG. 2 is a schematic flow chart illustrating the process of obtaining a first detection score and a second detection score according to one embodiment;
FIG. 3 is a diagram illustrating the points where a detection target falls in a grid in one embodiment;
FIG. 4 is a block diagram of a multi-lidar decision-level fusion device for pedestrian detection in one embodiment;
FIG. 5 is a diagram illustrating an internal structure of a computer device according to an embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more clearly understood, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are only intended to explain the present application and are not intended to limit it.
The multi-laser radar decision-level fusion method for pedestrian detection can be applied to the following application environment. The unmanned vehicle is provided with multiple laser radars; decision-level fusion is carried out according to the Bayes rule on each radar pair formed by combining any two laser radars, and the pedestrian detection result of the detection target is then output according to the pedestrian detection results of all the radar pairs. The prior probability in the Bayes rule is obtained from a preset initial value or from the detection result of the detection target at the previous moment, and the conditional probability in the Bayes rule is calculated from the data of positive samples and negative samples that conform to a Gaussian distribution.
In one embodiment, as shown in fig. 1, there is provided a multi-lidar decision-level fusion method for pedestrian detection, comprising the steps of:
Step 102, acquiring a radar pair of the multiple laser radars on the unmanned vehicle in a combined mode; the radar pair includes: a first laser radar and a second laser radar.
In research on unmanned vehicles, multiple laser radars are generally adopted to improve detection accuracy and reduce detection blind areas. Laser radars can be arranged at the front, the left side, the right side and other positions of the vehicle, and several laser radars can be arranged in one direction. "Combination" here is the mathematical notion of an unordered selection: the radar pairs of the multiple laser radars on the unmanned vehicle are obtained by combining the radars pairwise. For example, if the unmanned vehicle has three laser radars A, B and C, there are three radar pairs AB, AC and BC.
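The pairwise enumeration above can be sketched with the standard library; the radar names A, B and C are illustrative:

```python
from itertools import combinations

def radar_pairs(lidars):
    """Enumerate all unordered radar pairs, the mathematical combinations C(n, 2)."""
    return list(combinations(lidars, 2))

# Three laser radars A, B and C yield the three radar pairs AB, AC and BC.
print(radar_pairs(["A", "B", "C"]))  # [('A', 'B'), ('A', 'C'), ('B', 'C')]
```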
Step 104, respectively acquiring first point cloud data collected by the first laser radar for a detection target and second point cloud data collected by the second laser radar for the detection target, and performing pedestrian detection on the first point cloud data and the second point cloud data through a trained AdaBoost model to obtain a first detection score of the first laser radar and a second detection score of the second laser radar, respectively; the AdaBoost model is obtained by training on training samples; the information of the training samples comprises a sample score; the training samples comprise positive samples and negative samples; the sample scores of both the positive and negative samples approximately follow a Gaussian distribution.
The AdaBoost algorithm obtains the final output of a strong classifier by weighting and summing the outputs of weak classifiers, and can be used to improve the performance of an algorithm. AdaBoost is an iterative process: a new weak classifier is added in each iteration, and its adaptivity lies in increasing the weights of the samples misclassified in the current iteration while decreasing the weights of the correctly classified samples. When designing each weak classifier, the algorithm chooses the threshold that minimizes the error rate, so the error rate decreases continuously, and iteration ends when the error rate reaches the set condition.
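As a rough sketch of this weighting-and-iteration scheme (not the patent's actual classifier, feature set, or round count), a minimal AdaBoost with one-dimensional decision stumps can be written as:

```python
import numpy as np

def train_adaboost(X, y, n_rounds=10):
    """Minimal AdaBoost with decision stumps; labels y are in {-1, +1}."""
    n, d = X.shape
    w = np.full(n, 1.0 / n)          # sample weights, uniform at the start
    stumps = []
    for _ in range(n_rounds):
        best = None
        for j in range(d):           # choose the stump minimizing weighted error
            for thr in np.unique(X[:, j]):
                for sign in (1, -1):
                    pred = np.where(sign * (X[:, j] - thr) >= 0, 1, -1)
                    err = w[pred != y].sum()
                    if best is None or err < best[0]:
                        best = (err, j, thr, sign, pred)
        err, j, thr, sign, pred = best
        err = min(max(err, 1e-10), 1 - 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)
        w *= np.exp(-alpha * y * pred)   # raise weights of misclassified samples
        w /= w.sum()
        stumps.append((alpha, j, thr, sign))
    return stumps

def adaboost_score(stumps, x):
    """Strong-classifier output: weighted sum of the weak classifiers' outputs."""
    return sum(a * (1 if s * (x[j] - t) >= 0 else -1) for a, j, t, s in stumps)
```

The signed score returned by `adaboost_score` plays the role of the detection score that the later steps model with Gaussian distributions.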
A positive sample in the training samples is a sample for which the detection score output by the AdaBoost model is higher than or equal to the set threshold, so that the detection target is judged to be a pedestrian; a negative sample is one for which the detection score output by the AdaBoost model is lower than the set threshold, so that the detection target is judged not to be a pedestrian. Statistically, for both positive and negative samples, the sample scores output by the AdaBoost model approximately follow a Gaussian distribution.
Step 106, calculating a first positive sample conditional probability of outputting a first detection score by the AdaBoost model when the detection target is a pedestrian and a second positive sample conditional probability of outputting a second detection score by the AdaBoost model when the detection target is a pedestrian according to a Gaussian distribution formula corresponding to the positive samples; and calculating a first negative sample conditional probability of the AdaBoost model outputting a first detection score when the detection target is not the pedestrian and a second negative sample conditional probability of the AdaBoost model outputting a second detection score when the detection target is not the pedestrian according to a Gaussian distribution formula corresponding to the negative samples.
Since the sample scores of the positive and negative samples approximately follow Gaussian distributions, the corresponding Gaussian parameters, including the sample mean and the sample variance, can be calculated by statistical analysis; the conditional probability of a specified detection score when the detection target is a pedestrian or a non-pedestrian can then be obtained from the Gaussian distribution formula.
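This statistical step can be sketched as follows; the score values below are illustrative, not measured data:

```python
import math

def gaussian_params(scores):
    """Sample mean and (biased) sample variance of a set of AdaBoost scores."""
    n = len(scores)
    mu = sum(scores) / n
    var = sum((s - mu) ** 2 for s in scores) / n
    return mu, var

def conditional_prob(score, mu, var):
    """Gaussian density: conditional probability of a specified detection score."""
    return math.exp(-(score - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

# Fit one Gaussian to positive-sample scores and one to negative-sample scores,
# then evaluate P(score | pedestrian) and P(score | not pedestrian).
mu_pos, var_pos = gaussian_params([2.1, 2.5, 1.9, 2.3])
mu_neg, var_neg = gaussian_params([-1.2, -0.8, -1.5, -0.9])
p_pos = conditional_prob(2.2, mu_pos, var_pos)
p_neg = conditional_prob(2.2, mu_neg, var_neg)
```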
Step 108, according to the first positive sample conditional probability, the second positive sample conditional probability and the positive prior probability that the detection target is a pedestrian, performing decision-level fusion on the output information of the multiple laser radars through the Bayes rule to obtain the positive judgment probability that the detection target is a pedestrian under the conditions of the first detection score and the second detection score; according to the first negative sample conditional probability, the second negative sample conditional probability and the negative prior probability that the detection target is not a pedestrian, performing decision-level fusion on the output information of the multiple laser radars through the Bayes rule to obtain the negative judgment probability that the detection target is not a pedestrian under the conditions of the first detection score and the second detection score; the positive prior probability and the negative prior probability are obtained from a preset initial value or from the positive judgment probability and negative judgment probability obtained at the previous time instant.
Let $X$ and $Y$ represent the detection-score outputs of the two sub-lidars, let $\mathrm{ped}=1$ be the label that the detection target is a pedestrian, and $\mathrm{ped}=0$ the label that it is not. The Bayes rule can be written as:

$$P(\mathrm{ped}=1 \mid X, Y) = \frac{P(X, Y \mid \mathrm{ped}=1)\,P(\mathrm{ped}=1)}{P(X, Y)}$$

Since each sub-lidar detects independently, $X$ and $Y$ are independent, and the Bayes rule becomes:

$$P(\mathrm{ped}=1 \mid X, Y) = \frac{P(X \mid \mathrm{ped}=1)\,P(Y \mid \mathrm{ped}=1)\,P(\mathrm{ped}=1)}{P(X, Y)}$$

Here $P(\mathrm{ped}=1)$ and $P(\mathrm{ped}=0)$ are the prior probabilities. At the first instant of operation the prior is a preset initial value, such as 0.5; thereafter the probability obtained at the previous instant, $P(\mathrm{ped}=1 \mid X, Y)$ or $P(\mathrm{ped}=0 \mid X, Y)$, is used as the prior for the next instant; specifically, $P(\mathrm{ped}=1 \mid X, Y)$ is carried over as $P(\mathrm{ped}=1)$, and $P(\mathrm{ped}=0 \mid X, Y)$ as $P(\mathrm{ped}=0)$. $P(X \mid \mathrm{ped}=1)$ and $P(Y \mid \mathrm{ped}=1)$ are the conditional probabilities, obtained from step 106, of the specified detection scores given that the detection target is a pedestrian. Because the calculation formulas of $P(\mathrm{ped}=1 \mid X, Y)$ and $P(\mathrm{ped}=0 \mid X, Y)$ share the same denominator $P(X, Y)$, whether the detection target is a pedestrian can be determined by comparing only the two numerators, so $P(X, Y)$ need not be computed explicitly. In implementation, $P(X, Y)$ can be set to a constant, and the resulting $P(\mathrm{ped}=1 \mid X, Y)$ and $P(\mathrm{ped}=0 \mid X, Y)$ are taken as the prior probabilities of pedestrian detection at the next instant.
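The Bayes-rule fusion of one radar pair can be sketched as follows; the Gaussian parameters and score values are illustrative placeholders, not trained values:

```python
import math

def gauss(x, mu, var):
    """Gaussian density used for the class-conditional score probabilities."""
    return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def fuse_pair(x, y, prior_ped, pos, neg):
    """Decision-level fusion of one radar pair via the Bayes rule.

    x, y: detection scores of the two sub-lidars; pos/neg: (mean, variance)
    of the positive- and negative-sample score distributions.  P(X, Y) is
    treated as a constant, so the two posteriors are simply normalized and
    can be reused as the priors at the next time instant.
    """
    num_ped = gauss(x, *pos) * gauss(y, *pos) * prior_ped
    num_not = gauss(x, *neg) * gauss(y, *neg) * (1.0 - prior_ped)
    z = num_ped + num_not
    return num_ped / z, num_not / z

# First instant: preset prior 0.5; afterwards the returned posterior is reused.
p_ped, p_not = fuse_pair(2.2, 2.4, 0.5, pos=(2.0, 0.25), neg=(-1.0, 0.5))
is_pedestrian = p_ped >= p_not
```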
And step 110, obtaining the pedestrian detection result of the radar pair through decision-level fusion according to the positive judgment probability and the negative judgment probability.
When $P(\mathrm{ped}=1 \mid X, Y) \ge P(\mathrm{ped}=0 \mid X, Y)$, the detection target is judged to be a pedestrian; when $P(\mathrm{ped}=1 \mid X, Y) < P(\mathrm{ped}=0 \mid X, Y)$, the detection target is judged not to be a pedestrian.
And step 112, fusing the corresponding pedestrian detection results of all radar pairs combined from the multiple laser radars, and outputting the decision-level fused pedestrian detection result of the multiple laser radars.
The corresponding pedestrian detection results of all radar pairs are voted as pedestrian or non-pedestrian, and the result with the larger number of votes is the final pedestrian detection result of multi-laser radar decision-level fusion.
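A sketch of this vote; resolving a tie toward non-pedestrian is an assumption the patent does not spell out:

```python
def vote_pairs(pair_results):
    """Majority vote over the pedestrian verdicts of all radar pairs.

    pair_results holds one boolean per radar pair (True = pedestrian).
    Ties are resolved as non-pedestrian here, which is an assumption.
    """
    votes_ped = sum(1 for r in pair_results if r)
    return votes_ped > len(pair_results) - votes_ped

# Verdicts of the pairs AB, AC and BC: two of three say pedestrian.
print(vote_pairs([True, True, False]))  # True
```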
In the multi-laser radar decision-level fusion method for pedestrian detection, pedestrian detection is performed with the trained AdaBoost algorithm on the point cloud data of the detection target acquired by each laser radar on the unmanned vehicle, obtaining the pedestrian detection score of each single laser radar. According to the statistical characteristics of the positive sample scores and negative sample scores contained in the sample information, the conditional probability that AdaBoost outputs a specific detection score when the detection target is a pedestrian or a non-pedestrian is obtained. Using the conditional probabilities obtained for each single laser radar and the prior probability of whether the detection target is a pedestrian, the detection results of each radar pair, formed by combining two laser radars on the unmanned vehicle, are fused at the decision level through the Bayes rule to obtain the pedestrian detection result of that radar pair; the final multi-laser radar pedestrian detection result is then obtained from the pedestrian detection results of all the radar pairs. Because each single laser radar first makes its decision independently and the decisions of the multiple laser radars are fused afterwards, data-level or feature-level fusion of multiple sensors is avoided and the data acquisition of the laser radars is not required to be completely synchronous; the method therefore has a small computation load, low timing requirements on the raw laser radar data, and high execution efficiency.
In one embodiment, the step of obtaining the first detection score and the second detection score comprises:
Step 202, acquiring point cloud data collected by a laser radar on the unmanned vehicle for a detection target, and generating a plurality of original candidate frames for pedestrian detection through a traversal search with a sliding window algorithm according to the central point cloud density feature of the point cloud data;
Step 204, scoring the original candidate frames through a single-classification support vector machine according to the point cloud position features in the original candidate frames, and determining the best candidate frame of the detection target according to the scores of the original candidate frames;
Step 206, extracting the combined features of the point cloud data in the best candidate frame, and performing pedestrian detection through the AdaBoost model for pedestrian detection according to the combined features;
Step 208, obtaining the first detection score from the first point cloud data of the first laser radar, and the second detection score from the second point cloud data of the second laser radar.
The search space is traversed with a sliding window algorithm and candidate boxes are generated from the point cloud. To accelerate the sliding-window process and reduce false alarms, two features are used: the central point cloud density feature and the point cloud position feature. The central point cloud density feature is used to build a candidate-frame filter that quickly rejects non-pedestrian targets at an early stage, yielding the original candidate frames; one suspected target may correspond to several original candidate frames. The point cloud position features are used to train a coarse classifier that scores each candidate target, and the score is the basis for screening candidates in the subsequent Non-Maximum Suppression (NMS) stage: the highest-scoring candidate frame among the original candidate frames of a suspected target is selected as its best candidate frame, ensuring that each suspected target keeps only one best candidate frame.
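The screening step can be sketched as below; grouping candidates by a suspected-target identifier is a simplification of the NMS stage, which in practice groups candidate frames by spatial overlap:

```python
def best_candidates(candidates):
    """Keep, for each suspected target, only its highest-scoring candidate frame.

    candidates: iterable of (target_id, score, box); the sliding window may
    produce several original candidate frames for the same suspected target.
    """
    best = {}
    for target_id, score, box in candidates:
        if target_id not in best or score > best[target_id][0]:
            best[target_id] = (score, box)
    return {t: box for t, (_, box) in best.items()}

frames = [("t1", 0.4, "boxA"), ("t1", 0.9, "boxB"), ("t2", 0.7, "boxC")]
print(best_candidates(frames))  # {'t1': 'boxB', 't2': 'boxC'}
```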
In one embodiment, the method further comprises the following steps: acquiring point cloud data of a single laser radar on the unmanned vehicle, and generating a plurality of original candidate frames for pedestrian detection through a traversal search with a sliding window algorithm according to the central point cloud density feature of the point cloud data. The central point cloud density feature $f$ is computed from: $N$, the total number of points in the sliding window; $n_{ij}$, the number of points falling into grid cell $(i, j)$, as shown in FIG. 3; and $I$ and $J$, the horizontal and vertical numbers of grid cells into which the sliding window is divided. (The defining equation is reproduced only as an image in the original publication.)
In one embodiment, the method further comprises the following steps: and according to the point cloud density distribution characteristics and the point cloud height difference distribution characteristics in the original candidate frame, scoring the original candidate frame through a single classification support vector machine, and determining the optimal candidate frame of the detection target according to the score of the original candidate frame.
Assuming that most pedestrians are upright, their shape resembles a cylinder, and an ideal three-dimensional bounding box should keep the extracted target as close to its center as possible; furthermore, the extracted point cloud should be complete and should avoid containing surrounding irrelevant points. Based on these criteria, the point cloud position features, comprising the density distribution and the height-difference distribution of the point cloud, are used to assess the quality of the generated candidate targets.
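One plausible realization of these position features; the bin counts, ring layout and normalization are assumptions, not taken from the patent:

```python
import numpy as np

def position_features(points, box_center, box_size, n_bins=4):
    """Point cloud position features of a candidate frame.

    Density distribution: fraction of points per horizontal ring around the
    box centre; height-difference distribution: normalized histogram of point
    heights above the lowest point.  points: (N, 3); box_size: (dx, dy, dz).
    """
    pts = np.asarray(points, dtype=float)
    # Horizontal distance of each point from the candidate-box centre.
    r = np.linalg.norm(pts[:, :2] - np.asarray(box_center, dtype=float)[:2], axis=1)
    r_max = max(box_size[0], box_size[1]) / 2.0
    density, _ = np.histogram(r, bins=n_bins, range=(0.0, r_max))
    heights = pts[:, 2] - pts[:, 2].min()
    height_diff, _ = np.histogram(heights, bins=n_bins, range=(0.0, box_size[2]))
    n = len(pts)
    return np.concatenate([density / n, height_diff / n])
```

A one-class SVM (for example `sklearn.svm.OneClassSVM`) trained on such vectors from known pedestrians could then score each original candidate frame, in the spirit of the coarse classifier described above.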
In one embodiment, the method further comprises the following steps: extracting the combination characteristics of the point cloud data in the optimal candidate frame, and realizing pedestrian detection through an AdaBoost model for pedestrian detection according to the combination characteristics; the combined features are point number, distance from the mass center to the unmanned vehicle, maximum height difference, point cloud three-dimensional covariance matrix, three-dimensional covariance matrix eigenvalue, inertia tensor and rotating projection statistical features.
After the candidate windows that may contain a pedestrian are selected, stronger features are needed to classify the samples in order to identify pedestrian targets more accurately, so a more complex combined feature is used to describe the point cloud; the combined features are shown in Table 1:
Table 1: Features and dimensionality (the table is reproduced as an image in the original publication; it lists each of the combined features above together with its dimensionality).
In one embodiment, the method further comprises the following steps: obtaining the positive sample mean and positive sample variance of the Gaussian distribution corresponding to the positive samples according to the positive samples, and obtaining the negative sample mean and negative sample variance of the Gaussian distribution corresponding to the negative samples according to the negative samples;

calculating, according to the Gaussian distribution formula corresponding to the positive samples, the first positive sample conditional probability of the AdaBoost model outputting the first detection score when the detection target is a pedestrian, and the second positive sample conditional probability of the AdaBoost model outputting the second detection score when the detection target is a pedestrian:

$$P(X \mid \mathrm{ped}=1) = \frac{1}{\sqrt{2\pi}\,\sigma_+}\exp\left(-\frac{(X-\mu_+)^2}{2\sigma_+^2}\right)$$

$$P(Y \mid \mathrm{ped}=1) = \frac{1}{\sqrt{2\pi}\,\sigma_+}\exp\left(-\frac{(Y-\mu_+)^2}{2\sigma_+^2}\right)$$

wherein $\mathrm{ped}=1$ indicates that the detection target is a pedestrian; $X$ represents the first detection score; $Y$ represents the second detection score; $P(X \mid \mathrm{ped}=1)$ represents the first positive sample conditional probability; $P(Y \mid \mathrm{ped}=1)$ represents the second positive sample conditional probability; $\sigma_+^2$ represents the positive sample variance; and $\mu_+$ represents the positive sample mean;

calculating, according to the Gaussian distribution formula corresponding to the negative samples, the first negative sample conditional probability of the AdaBoost model outputting the first detection score when the detection target is not a pedestrian, and the second negative sample conditional probability of the AdaBoost model outputting the second detection score when the detection target is not a pedestrian:

$$P(X \mid \mathrm{ped}=0) = \frac{1}{\sqrt{2\pi}\,\sigma_-}\exp\left(-\frac{(X-\mu_-)^2}{2\sigma_-^2}\right)$$

$$P(Y \mid \mathrm{ped}=0) = \frac{1}{\sqrt{2\pi}\,\sigma_-}\exp\left(-\frac{(Y-\mu_-)^2}{2\sigma_-^2}\right)$$

wherein $\mathrm{ped}=0$ indicates that the detection target is not a pedestrian; $P(X \mid \mathrm{ped}=0)$ represents the first negative sample conditional probability; $P(Y \mid \mathrm{ped}=0)$ represents the second negative sample conditional probability; $\sigma_-^2$ represents the negative sample variance; and $\mu_-$ represents the negative sample mean.
In one embodiment, the method further comprises the following steps: performing decision-level fusion on the output information of the multiple laser radars through the Bayes rule according to the first positive sample conditional probability, the second positive sample conditional probability and the positive prior probability that the detection target is a pedestrian, the positive judgment probability that the detection target is a pedestrian under the conditions of the first detection score and the second detection score being obtained as:

$$P(\mathrm{ped}=1 \mid X, Y) = \frac{P(X \mid \mathrm{ped}=1)\,P(Y \mid \mathrm{ped}=1)\,P(\mathrm{ped}=1)}{P(X)\,P(Y)}$$

wherein $P(\mathrm{ped}=1 \mid X, Y)$ represents the positive judgment probability; $P(\mathrm{ped}=1)$ represents the positive prior probability; and $P(X)$ and $P(Y)$, the denominator terms of the Bayes rule, represent the probabilities of occurrence of the first pedestrian detection score and the second pedestrian detection score respectively, and are obtained by presetting;

performing decision-level fusion on the output information of the multiple laser radars through the Bayes rule according to the first negative sample conditional probability, the second negative sample conditional probability and the negative prior probability that the detection target is not a pedestrian, the negative judgment probability that the detection target is not a pedestrian under the conditions of the first detection score and the second detection score being obtained as:

$$P(\mathrm{ped}=0 \mid X, Y) = \frac{P(X \mid \mathrm{ped}=0)\,P(Y \mid \mathrm{ped}=0)\,P(\mathrm{ped}=0)}{P(X)\,P(Y)}$$

wherein $P(\mathrm{ped}=0 \mid X, Y)$ represents the negative judgment probability and $P(\mathrm{ped}=0)$ represents the negative prior probability.
It should be understood that, although the steps in the flowchart of FIG. 1 are shown in sequence as indicated by the arrows, the steps are not necessarily performed in that sequence. Unless explicitly stated otherwise herein, there is no strict order limitation on the execution of these steps, and they may be performed in other orders. Moreover, at least some of the steps in FIG. 1 may include multiple sub-steps or stages, which are not necessarily performed at the same moment but may be performed at different moments; their execution order is not necessarily sequential, and they may be performed in turn or alternately with other steps or with at least part of the sub-steps or stages of other steps.
In one embodiment, as shown in fig. 4, there is provided a multi-lidar decision-level fusion apparatus for pedestrian detection, comprising: a radar pair obtaining module 402, a detection score obtaining module 404, a conditional probability obtaining module 406, a judgment probability obtaining module 408, a radar-pair pedestrian detection result acquisition module 410, and a multi-lidar pedestrian detection result acquisition module 412, wherein:
a radar pair obtaining module 402, configured to obtain a radar pair of the multiple lidars on the unmanned vehicle in a combined mode; the radar pair includes: a first lidar and a second lidar;
a detection score obtaining module 404, configured to respectively obtain first point cloud data collected by the first laser radar for a detection target and second point cloud data collected by the second laser radar for the detection target, and perform pedestrian detection on the first point cloud data and the second point cloud data through the trained AdaBoost model to obtain a first detection score of the first laser radar and a second detection score of the second laser radar, respectively; the AdaBoost model is obtained by training on training samples; the information of the training samples comprises a sample score; the training samples comprise positive samples and negative samples; the sample scores of the positive and negative samples approximately follow a Gaussian distribution;
a conditional probability obtaining module 406, configured to calculate, according to the Gaussian distribution formula corresponding to the positive samples, a first positive sample conditional probability that the AdaBoost model outputs the first detection score when the detection target is a pedestrian, and a second positive sample conditional probability that the AdaBoost model outputs the second detection score when the detection target is a pedestrian; and to calculate, according to the Gaussian distribution formula corresponding to the negative samples, a first negative sample conditional probability that the AdaBoost model outputs the first detection score when the detection target is not a pedestrian, and a second negative sample conditional probability that the AdaBoost model outputs the second detection score when the detection target is not a pedestrian;
the judgment probability obtaining module 408, configured to perform decision-level fusion on the output information of the multiple laser radars through the Bayes rule according to the first positive sample conditional probability, the second positive sample conditional probability and the positive prior probability that the detection target is a pedestrian, so as to obtain the positive judgment probability that the detection target is a pedestrian under the conditions of the first detection score and the second detection score; and to perform decision-level fusion on the output information of the multiple laser radars through the Bayes rule according to the first negative sample conditional probability, the second negative sample conditional probability and the negative prior probability that the detection target is not a pedestrian, so as to obtain the negative judgment probability that the detection target is not a pedestrian under the conditions of the first detection score and the second detection score; the positive prior probability and the negative prior probability are obtained from a preset initial value or from the positive judgment probability and negative judgment probability obtained at the previous time instant;
a radar-pair pedestrian detection result acquisition module 410, configured to obtain the pedestrian detection result of the radar pair through decision-level fusion according to the positive judgment probability and the negative judgment probability;
and a multi-lidar pedestrian detection result acquisition module 412, configured to fuse the corresponding pedestrian detection results of all radar pairs combined from the multiple lidars, and output the decision-level fused pedestrian detection result of the multiple lidars.
The detection score obtaining module 404 is further configured to obtain point cloud data acquired by the laser radar on the unmanned vehicle for the detection target, and generate a plurality of original candidate frames for pedestrian detection through traversal search according to the central point cloud density feature of the point cloud data by using a sliding window algorithm; according to the point cloud position characteristics in the original candidate frame, scoring the original candidate frame through a single classification support vector machine, and determining the best candidate frame of the detection target according to the score of the original candidate frame; extracting the combination characteristics of the point cloud data in the optimal candidate frame, and realizing pedestrian detection through an AdaBoost model for pedestrian detection according to the combination characteristics; and obtaining a first detection score through the first point cloud data of the first laser radar, and obtaining a second detection score through the second point cloud data of the second laser radar.
The detection score acquisition module 404 is further configured to acquire point cloud data of a single laser radar on the unmanned vehicle, and generate a plurality of original candidate frames for pedestrian detection through a traversal search with a sliding window algorithm according to the central point cloud density feature of the point cloud data. The central point cloud density feature $f$ is computed from: $N$, the total number of points in the sliding window; $n_{ij}$, the number of points falling into grid cell $(i, j)$; and $I$ and $J$, the horizontal and vertical numbers of grid cells into which the sliding window is divided. (The defining equation is reproduced only as an image in the original publication.)
The detection score obtaining module 404 is further configured to score the original candidate frame through a single classification support vector machine according to the point cloud density distribution feature and the point cloud height difference distribution feature in the original candidate frame, and determine the best candidate frame of the detection target according to the score of the original candidate frame.
The detection score obtaining module 404 is further configured to extract a combination feature of the best candidate intra-frame point cloud data, and implement pedestrian detection through an AdaBoost model for pedestrian detection according to the combination feature; the combined features are point number, distance from a mass center to the unmanned vehicle, maximum height difference, point cloud three-dimensional covariance matrix, three-dimensional covariance matrix eigenvalue, inertia tensor and rotation projection statistical features.
The conditional probability obtaining module 406 is further configured to obtain the positive sample mean and positive sample variance of the Gaussian distribution corresponding to the positive samples according to the positive samples, and the negative sample mean and negative sample variance of the Gaussian distribution corresponding to the negative samples according to the negative samples;

to calculate, according to the Gaussian distribution formula corresponding to the positive samples, the first positive sample conditional probability of the AdaBoost model outputting the first detection score when the detection target is a pedestrian, and the second positive sample conditional probability of the AdaBoost model outputting the second detection score when the detection target is a pedestrian:

$$P(X \mid \mathrm{ped}=1) = \frac{1}{\sqrt{2\pi}\,\sigma_+}\exp\left(-\frac{(X-\mu_+)^2}{2\sigma_+^2}\right)$$

$$P(Y \mid \mathrm{ped}=1) = \frac{1}{\sqrt{2\pi}\,\sigma_+}\exp\left(-\frac{(Y-\mu_+)^2}{2\sigma_+^2}\right)$$

wherein $\mathrm{ped}=1$ indicates that the detection target is a pedestrian; $X$ represents the first detection score; $Y$ represents the second detection score; $P(X \mid \mathrm{ped}=1)$ represents the first positive sample conditional probability; $P(Y \mid \mathrm{ped}=1)$ represents the second positive sample conditional probability; $\sigma_+^2$ represents the positive sample variance; and $\mu_+$ represents the positive sample mean;

and to calculate, according to the Gaussian distribution formula corresponding to the negative samples, the first negative sample conditional probability of the AdaBoost model outputting the first detection score when the detection target is not a pedestrian, and the second negative sample conditional probability of the AdaBoost model outputting the second detection score when the detection target is not a pedestrian:

$$P(X \mid \mathrm{ped}=0) = \frac{1}{\sqrt{2\pi}\,\sigma_-}\exp\left(-\frac{(X-\mu_-)^2}{2\sigma_-^2}\right)$$

$$P(Y \mid \mathrm{ped}=0) = \frac{1}{\sqrt{2\pi}\,\sigma_-}\exp\left(-\frac{(Y-\mu_-)^2}{2\sigma_-^2}\right)$$

wherein $\mathrm{ped}=0$ indicates that the detection target is not a pedestrian; $P(X \mid \mathrm{ped}=0)$ represents the first negative sample conditional probability; $P(Y \mid \mathrm{ped}=0)$ represents the second negative sample conditional probability; $\sigma_-^2$ represents the negative sample variance; and $\mu_-$ represents the negative sample mean.
For specific definition of the multi-lidar decision-level fusion device for pedestrian detection, reference may be made to the definition of the multi-lidar decision-level fusion method for pedestrian detection, and details are not repeated here. The various modules in the multi-lidar decision-level fusion device described above for pedestrian detection may be implemented in whole or in part by software, hardware, and combinations thereof. The modules can be embedded in a hardware form or independent of a processor in the computer device, and can also be stored in a memory in the computer device in a software form, so that the processor can call and execute operations corresponding to the modules.
The judgment probability obtaining module 408 is further configured to perform decision-level fusion on the output information of the multiple lidars through Bayes' rule according to the first positive sample conditional probability, the second positive sample conditional probability and the positive prior probability that the detection target is a pedestrian, so as to obtain the positive judgment probability that the detection target is a pedestrian given the first detection score and the second detection score:

P(Ped/Score_1, Score_2) = P(Score_1/Ped)·P(Score_2/Ped)·P(Ped) / (P(Score_1)·P(Score_2))

wherein P(Ped/Score_1, Score_2) represents the positive judgment probability; P(Ped) represents the positive prior probability; P(Score_1) and P(Score_2) are the denominator terms of Bayes' rule, representing respectively the probability of the first pedestrian detection score occurring and the probability of the second pedestrian detection score occurring, and are obtained by presetting.

According to the first negative sample conditional probability, the second negative sample conditional probability and the negative prior probability that the detection target is not a pedestrian, decision-level fusion is performed on the output information of the multiple lidars through Bayes' rule, and the negative judgment probability that the detection target is not a pedestrian given the first detection score and the second detection score is obtained as:

P(¬Ped/Score_1, Score_2) = P(Score_1/¬Ped)·P(Score_2/¬Ped)·P(¬Ped) / (P(Score_1)·P(Score_2))

wherein P(¬Ped/Score_1, Score_2) represents the negative judgment probability; P(¬Ped) represents the negative prior probability.
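The Bayesian fusion above can be sketched as follows. Note one simplification: the patent presets the denominator terms P(Score_1)·P(Score_2), whereas this sketch normalises the two numerators so the posteriors sum to 1, which is equivalent for the pedestrian/non-pedestrian comparison (function names and numbers are illustrative):

```python
def fuse_decisions(lik_pos, lik_neg, prior_pos):
    """Decision-level fusion of one radar pair's likelihoods via Bayes' rule.

    lik_pos: (P(Score_1/Ped), P(Score_2/Ped)); lik_neg: the same pair for the
    not-pedestrian class. Returns (positive, negative) judgment probabilities,
    normalised so they sum to 1 (standing in for the preset denominator terms).
    """
    prior_neg = 1.0 - prior_pos
    num_pos = lik_pos[0] * lik_pos[1] * prior_pos  # Bayes numerator, pedestrian
    num_neg = lik_neg[0] * lik_neg[1] * prior_neg  # Bayes numerator, not pedestrian
    total = num_pos + num_neg
    return num_pos / total, num_neg / total

p_ped, p_not = fuse_decisions((0.7, 0.6), (0.1, 0.2), prior_pos=0.5)
is_pedestrian = p_ped > p_not  # fused pedestrian decision for this radar pair
```

The decision rule simply compares the two judgment probabilities, so any common positive factor in the denominator drops out.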
In one embodiment, a computer device is provided, which may be a terminal, and its internal structure diagram may be as shown in fig. 5. The computer device comprises a processor, a memory, a network interface, a display screen and an input device which are connected through a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program when executed by a processor implements a multi-lidar decision-level fusion method for pedestrian detection. The display screen of the computer equipment can be a liquid crystal display screen or an electronic ink display screen, and the input device of the computer equipment can be a touch layer covered on the display screen, a key, a track ball or a touch pad arranged on a shell of the computer equipment, an external keyboard, a touch pad or a mouse and the like.
It will be appreciated by those skilled in the art that the configuration shown in fig. 5 is a block diagram of only a portion of the configuration associated with the present application and does not limit the computer device to which the present application is applied; a particular computer device may include more or fewer components than shown, combine certain components, or have a different arrangement of components.
In an embodiment, a computer device is provided, comprising a memory storing a computer program and a processor implementing the steps of the above method embodiments when executing the computer program.
In an embodiment, a computer-readable storage medium is provided, on which a computer program is stored, which computer program, when being executed by a processor, carries out the steps of the above-mentioned method embodiments.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above may be implemented by a computer program instructing relevant hardware; the computer program may be stored in a non-volatile computer-readable storage medium and, when executed, may include the processes of the embodiments of the methods described above. Any reference to memory, storage, a database or another medium used in the embodiments provided herein can include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchronous link DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM), among others.
The technical features of the above embodiments can be arbitrarily combined, and for the sake of brevity, all possible combinations of the technical features in the above embodiments are not described, but should be considered as the scope of the present specification as long as there is no contradiction between the combinations of the technical features.
The above-mentioned embodiments express only several implementations of the present application, and their description is specific and detailed, but they should not therefore be construed as limiting the scope of the invention patent. It should be noted that, for a person skilled in the art, several variations and modifications can be made without departing from the concept of the present application, and these all fall within the scope of protection of the present application. Therefore, the protection scope of this patent application shall be subject to the appended claims.

Claims (8)

1. A multi-lidar decision-level fusion method for pedestrian detection, the method comprising:
acquiring a radar pair obtained by pairwise combination of a plurality of laser radars on the unmanned vehicle; the radar pair includes: a first laser radar and a second laser radar;
respectively acquiring first point cloud data acquired by the first laser radar aiming at a detection target and second point cloud data acquired by the second laser radar aiming at the detection target, and performing pedestrian detection on the first point cloud data and the second point cloud data through a trained AdaBoost model to respectively obtain a first detection score of the first laser radar and a second detection score of the second laser radar; the AdaBoost model is obtained by training a training sample; the information of the training samples comprises sample scores; the training samples comprise positive samples and negative samples; the sample scores of the positive and negative samples both approximately conform to a Gaussian distribution;
obtaining a positive sample mean value and a positive sample variance of Gaussian distribution corresponding to the positive sample according to the positive sample, and obtaining a negative sample mean value and a negative sample variance of Gaussian distribution corresponding to the negative sample according to the negative sample;
calculating a first positive sample conditional probability of the AdaBoost model outputting the first detection score when the detection target is a pedestrian according to a Gaussian distribution formula corresponding to the positive samples, and calculating a second positive sample conditional probability of the AdaBoost model outputting the second detection score when the detection target is a pedestrian according to the Gaussian distribution formula corresponding to the positive samples, wherein the conditional probabilities are:

P(Score_1/Ped) = exp(−(Score_1 − μ_pos)² / (2σ_pos²)) / (√(2π)·σ_pos)

P(Score_2/Ped) = exp(−(Score_2 − μ_pos)² / (2σ_pos²)) / (√(2π)·σ_pos)

wherein Ped represents that the detection target is a pedestrian; Score_1 represents the first detection score; Score_2 represents the second detection score; P(Score_1/Ped) represents the first positive sample conditional probability; P(Score_2/Ped) represents the second positive sample conditional probability; σ_pos represents the positive sample variance; μ_pos represents the positive sample mean;
calculating a first negative sample conditional probability of the AdaBoost model outputting the first detection score when the detection target is not a pedestrian according to a Gaussian distribution formula corresponding to the negative samples, and calculating a second negative sample conditional probability of the AdaBoost model outputting the second detection score when the detection target is not a pedestrian, wherein the conditional probabilities are:

P(Score_1/¬Ped) = exp(−(Score_1 − μ_neg)² / (2σ_neg²)) / (√(2π)·σ_neg)

P(Score_2/¬Ped) = exp(−(Score_2 − μ_neg)² / (2σ_neg²)) / (√(2π)·σ_neg)

wherein ¬Ped indicates that the detection target is not a pedestrian; P(Score_1/¬Ped) represents the first negative sample conditional probability; P(Score_2/¬Ped) represents the second negative sample conditional probability; σ_neg represents the negative sample variance; μ_neg represents the negative sample mean;
according to the first positive sample conditional probability, the second positive sample conditional probability and the positive prior probability that the detection target is a pedestrian, performing decision-level fusion on the output information of the multiple laser radars through Bayes' rule to obtain the positive judgment probability that the detection target is a pedestrian given the first detection score and the second detection score:

P(Ped/Score_1, Score_2) = P(Score_1/Ped)·P(Score_2/Ped)·P(Ped) / (P(Score_1)·P(Score_2))

wherein P(Ped/Score_1, Score_2) represents the positive judgment probability; P(Ped) represents the positive prior probability; P(Score_1) and P(Score_2) are the denominator terms of Bayes' rule, representing respectively the probability of the first pedestrian detection score occurring and the probability of the second pedestrian detection score occurring, and are obtained by presetting;
according to the first negative sample conditional probability, the second negative sample conditional probability and the negative prior probability that the detection target is not a pedestrian, performing decision-level fusion on the output information of the multiple laser radars through Bayes' rule to obtain the negative judgment probability that the detection target is not a pedestrian given the first detection score and the second detection score:

P(¬Ped/Score_1, Score_2) = P(Score_1/¬Ped)·P(Score_2/¬Ped)·P(¬Ped) / (P(Score_1)·P(Score_2))

wherein P(¬Ped/Score_1, Score_2) represents the negative judgment probability; P(¬Ped) represents the negative prior probability; the positive prior probability and the negative prior probability are obtained from a preset initial value or from the positive judgment probability and the negative judgment probability obtained at the previous time step;
obtaining a decision-level-fused pedestrian detection result of the radar pair according to the positive judgment probability and the negative judgment probability;
and fusing the corresponding pedestrian detection results of all radar pairs combined from the multiple laser radars, and outputting the multi-laser-radar decision-level-fused pedestrian detection result.
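Claim 1's prior-update rule (a preset initial value on the first frame, the previous frame's judgment probabilities afterwards) can be sketched as follows; the function name and the 0.5/0.5 initial value are illustrative assumptions:

```python
def update_priors(prev_post_pos, prev_post_neg, init=(0.5, 0.5)):
    """Priors for the current frame: a preset initial value on the first frame,
    otherwise the positive/negative judgment probabilities of the previous frame."""
    if prev_post_pos is None or prev_post_neg is None:
        return init                      # first frame: preset initial value
    return prev_post_pos, prev_post_neg  # later frames: previous posteriors

priors = update_priors(None, None)  # first frame
priors = update_priors(0.9, 0.1)    # subsequent frame, fed back from fusion
```

Feeding the posterior back as the next prior lets evidence accumulate across the time sequence for a tracked target.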
2. The method of claim 1, wherein the respectively obtaining first point cloud data collected by the first laser radar for a detection target and second point cloud data collected by the second laser radar for the detection target, and performing pedestrian detection on the first point cloud data and the second point cloud data through a trained AdaBoost model to respectively obtain a first detection score of the first laser radar and a second detection score of the second laser radar comprises:
acquiring point cloud data acquired by a laser radar on an unmanned vehicle on a detection target, and generating a plurality of original candidate frames for pedestrian detection through traversing search according to central point cloud density characteristics of the point cloud data by a sliding window algorithm;
according to the point cloud position characteristics in the original candidate frame, scoring the original candidate frame through a single classification support vector machine, and determining the best candidate frame of the detection target according to the score of the original candidate frame;
extracting the combination characteristics of the point cloud data in the optimal candidate frame, and realizing pedestrian detection through an AdaBoost model for pedestrian detection according to the combination characteristics;
and obtaining a first detection score through the first point cloud data of the first laser radar, and obtaining a second detection score through the second point cloud data of the second laser radar.
3. The method of claim 2, wherein the obtaining of the point cloud data of the lidar on the unmanned vehicle, the traversing search according to the central point cloud density feature of the point cloud data through a sliding window algorithm, and the generating of the plurality of original candidate frames for pedestrian detection comprise:
acquiring point cloud data of a single laser radar on the unmanned vehicle, and generating a plurality of original candidate frames for pedestrian detection through traversal search with a sliding window algorithm according to the central point cloud density feature of the point cloud data; the central point cloud density feature is:

Figure FDA0003856258260000031

wherein F represents the central point cloud density feature; N represents the total number of points in the sliding window; n_ij represents the number of points falling into the corresponding grid cell; i and j represent the horizontal and vertical indices of the grid cells into which the sliding window is divided.
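The per-grid counts n_ij and window total N that feed the central point cloud density feature can be computed as below. This is only an illustrative sketch: the window size, the grid resolution, and the exact way the patent combines n_ij and N (its formula is rendered as an image in the original) are assumptions.

```python
import numpy as np

def window_grid_counts(points, window_min, window_size, grid=(4, 4)):
    """Count points per grid cell (n_ij) for one sliding-window position.

    points: (N, 2) array of x, y coordinates; window_min: lower-left corner of
    the window; window_size: (width, height). Returns (counts, points in window).
    """
    rel = (np.asarray(points, dtype=float) - window_min) / np.asarray(window_size)
    inside = np.all((rel >= 0) & (rel < 1), axis=1)   # keep points in the window
    rel = rel[inside]
    i = (rel[:, 0] * grid[0]).astype(int)             # horizontal cell index
    j = (rel[:, 1] * grid[1]).astype(int)             # vertical cell index
    counts = np.zeros(grid, dtype=int)
    np.add.at(counts, (i, j), 1)                      # accumulate n_ij
    return counts, int(counts.sum())

pts = np.array([[0.1, 0.1], [0.9, 0.9], [1.5, 0.2], [0.2, 0.15]])
n_ij, n_total = window_grid_counts(pts, window_min=(0.0, 0.0), window_size=(1.0, 1.0))
```

Sliding the window over the ground plane and evaluating the density feature at each position yields the traversal search described in the claim.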
4. The method of claim 3, wherein the original candidate frame is scored by a single classification support vector machine according to the point cloud location features in the original candidate frame, and the best candidate frame of the detection target is determined according to the score of the original candidate frame, comprising:
and according to the point cloud density distribution characteristics and the point cloud height difference distribution characteristics in the original candidate frame, scoring the original candidate frame through a single classification support vector machine, and determining the best candidate frame of the detection target according to the score of the original candidate frame.
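Candidate-frame scoring with a single-classification (one-class) SVM, as in claim 4, might look like the following scikit-learn sketch; the feature vectors, kernel, and nu value are illustrative assumptions, not taken from the patent:

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
# Illustrative training set: fixed-length feature vectors (e.g. density and
# height-difference statistics) from candidate frames known to hold pedestrians.
train_features = rng.normal(loc=0.0, scale=1.0, size=(200, 4))

ocsvm = OneClassSVM(kernel="rbf", gamma="scale", nu=0.1).fit(train_features)

# Score new candidate frames: higher decision values are more pedestrian-like.
candidates = np.vstack([rng.normal(0.0, 1.0, size=(1, 4)),  # typical candidate
                        np.full((1, 4), 8.0)])              # far-off outlier
scores = ocsvm.decision_function(candidates)
best = candidates[int(np.argmax(scores))]  # keep the highest-scoring frame
```

The one-class formulation needs only positive examples, which suits candidate scoring where negatives are unbounded in variety.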
5. The method according to claim 4, wherein the step of extracting combined features of the best candidate in-frame point cloud data and realizing pedestrian detection through an AdaBoost model for pedestrian detection according to the combined features comprises the steps of:
extracting the combination characteristics of the point cloud data in the optimal candidate frame, and realizing pedestrian detection through an AdaBoost model for pedestrian detection according to the combination characteristics; the combined features are point number, distance from a mass center to the unmanned vehicle, maximum height difference, point cloud three-dimensional covariance matrix, three-dimensional covariance matrix eigenvalue, inertia tensor and rotation projection statistical features.
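Part of the combined feature list in claim 5 can be sketched as follows (point count, centroid-to-vehicle distance, maximum height difference, 3D covariance matrix and its eigenvalues); the inertia tensor and rotation-projection statistics are omitted, and all names and values are illustrative:

```python
import numpy as np

def cluster_features(points, vehicle_xy=(0.0, 0.0)):
    """Compute a subset of the combined features for one best candidate frame."""
    pts = np.asarray(points, dtype=float)   # (N, 3) points: x, y, z
    centroid = pts.mean(axis=0)
    cov = np.cov(pts, rowvar=False)         # 3x3 point cloud covariance matrix
    return {
        "n_points": len(pts),
        "centroid_dist": float(np.hypot(centroid[0] - vehicle_xy[0],
                                        centroid[1] - vehicle_xy[1])),
        "max_height_diff": float(pts[:, 2].max() - pts[:, 2].min()),
        "cov": cov,
        "cov_eigvals": np.linalg.eigvalsh(cov),  # shape descriptors
    }

feats = cluster_features([[3.0, 4.0, 0.0], [3.1, 4.0, 1.7], [2.9, 4.1, 0.9]])
```

These scalars and eigenvalues would be concatenated into the feature vector that the AdaBoost model consumes.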
6. A multi-lidar decision-level fusion apparatus for pedestrian detection, the apparatus comprising:
the radar pair acquisition module is used for acquiring radar pairs of multiple laser radars on the unmanned vehicle in a combined mode; the radar pair includes: a first laser radar and a second laser radar;
a detection score acquisition module, configured to respectively acquire first point cloud data acquired by the first laser radar with respect to a detection target and second point cloud data acquired by the second laser radar with respect to the detection target, and perform pedestrian detection on the first point cloud data and the second point cloud data through a trained AdaBoost model to respectively obtain a first detection score of the first laser radar and a second detection score of the second laser radar; the AdaBoost model is obtained by training on training samples; the information of the training samples comprises sample scores; the training samples comprise positive samples and negative samples; the sample scores of the positive and negative samples both approximately conform to a Gaussian distribution;
the conditional probability obtaining module is used for obtaining a positive sample mean value and a positive sample variance of Gaussian distribution corresponding to the positive sample according to the positive sample, and obtaining a negative sample mean value and a negative sample variance of Gaussian distribution corresponding to the negative sample according to the negative sample;
calculating a first positive sample conditional probability of the AdaBoost model outputting the first detection score when the detection target is a pedestrian according to a Gaussian distribution formula corresponding to the positive samples, and calculating a second positive sample conditional probability of the AdaBoost model outputting the second detection score when the detection target is a pedestrian according to the Gaussian distribution formula corresponding to the positive samples, wherein the conditional probabilities are:

P(Score_1/Ped) = exp(−(Score_1 − μ_pos)² / (2σ_pos²)) / (√(2π)·σ_pos)

P(Score_2/Ped) = exp(−(Score_2 − μ_pos)² / (2σ_pos²)) / (√(2π)·σ_pos)

wherein Ped represents that the detection target is a pedestrian; Score_1 represents the first detection score; Score_2 represents the second detection score; P(Score_1/Ped) represents the first positive sample conditional probability; P(Score_2/Ped) represents the second positive sample conditional probability; σ_pos represents the positive sample variance; μ_pos represents the positive sample mean;
calculating a first negative sample conditional probability of the AdaBoost model outputting the first detection score when the detection target is not a pedestrian according to a Gaussian distribution formula corresponding to the negative samples, and calculating a second negative sample conditional probability of the AdaBoost model outputting the second detection score when the detection target is not a pedestrian, wherein the conditional probabilities are:

P(Score_1/¬Ped) = exp(−(Score_1 − μ_neg)² / (2σ_neg²)) / (√(2π)·σ_neg)

P(Score_2/¬Ped) = exp(−(Score_2 − μ_neg)² / (2σ_neg²)) / (√(2π)·σ_neg)

wherein ¬Ped indicates that the detection target is not a pedestrian; P(Score_1/¬Ped) represents the first negative sample conditional probability; P(Score_2/¬Ped) represents the second negative sample conditional probability; σ_neg represents the negative sample variance; μ_neg represents the negative sample mean;
a judgment probability obtaining module, configured to perform decision-level fusion on the output information of the multiple laser radars through Bayes' rule according to the first positive sample conditional probability, the second positive sample conditional probability and the positive prior probability that the detection target is a pedestrian, to obtain the positive judgment probability that the detection target is a pedestrian given the first detection score and the second detection score:

P(Ped/Score_1, Score_2) = P(Score_1/Ped)·P(Score_2/Ped)·P(Ped) / (P(Score_1)·P(Score_2))

wherein P(Ped/Score_1, Score_2) represents the positive judgment probability; P(Ped) represents the positive prior probability; P(Score_1) and P(Score_2) are the denominator terms of Bayes' rule, representing respectively the probability of the first pedestrian detection score occurring and the probability of the second pedestrian detection score occurring, and are obtained by presetting;
according to the first negative sample conditional probability, the second negative sample conditional probability and the negative prior probability that the detection target is not a pedestrian, performing decision-level fusion on the output information of the multiple laser radars through Bayes' rule to obtain the negative judgment probability that the detection target is not a pedestrian given the first detection score and the second detection score:

P(¬Ped/Score_1, Score_2) = P(Score_1/¬Ped)·P(Score_2/¬Ped)·P(¬Ped) / (P(Score_1)·P(Score_2))

wherein P(¬Ped/Score_1, Score_2) represents the negative judgment probability; P(¬Ped) represents the negative prior probability;
a radar pair pedestrian detection result acquisition module, configured to obtain the decision-level-fused pedestrian detection result of the radar pair for the detection target according to the positive judgment probability and the negative judgment probability;
and a multi-laser-radar pedestrian detection result acquisition module, configured to fuse the corresponding pedestrian detection results of all radar pairs combined from the multiple laser radars and output the multi-laser-radar decision-level-fused pedestrian detection result.
7. A computer device comprising a memory and a processor, the memory storing a computer program, wherein the processor when executing the computer program performs the steps of the method according to any of claims 1 to 5.
8. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 5.
CN202110005340.3A 2021-01-05 2021-01-05 Multi-laser radar decision-level fusion method and device for pedestrian detection Active CN112433228B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110005340.3A CN112433228B (en) 2021-01-05 2021-01-05 Multi-laser radar decision-level fusion method and device for pedestrian detection


Publications (2)

Publication Number Publication Date
CN112433228A CN112433228A (en) 2021-03-02
CN112433228B true CN112433228B (en) 2023-02-03

Family

ID=74697156


Country Status (1)

Country Link
CN (1) CN112433228B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113504528B (en) * 2021-07-05 2022-04-29 武汉大学 Atmospheric level detection method based on multi-scale hypothesis test

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110414418A (en) * 2019-07-25 2019-11-05 电子科技大学 A kind of Approach for road detection of image-lidar image data Multiscale Fusion

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107590433A (en) * 2017-08-04 2018-01-16 湖南星云智能科技有限公司 A kind of pedestrian detection method based on millimetre-wave radar and vehicle-mounted camera
CN111257866B (en) * 2018-11-30 2022-02-11 杭州海康威视数字技术股份有限公司 Target detection method, device and system for linkage of vehicle-mounted camera and vehicle-mounted radar
CN111551957B (en) * 2020-04-01 2023-02-03 上海富洁科技有限公司 Park low-speed automatic cruise and emergency braking system based on laser radar sensing
CN111985427A (en) * 2020-08-25 2020-11-24 深圳前海微众银行股份有限公司 Living body detection method, living body detection apparatus, and readable storage medium


Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
Cascaded Sliding Window Based Real-Time 3D Region Proposal for Pedestrian Detection; Jun Hu et al.; Proceedings of the IEEE International Conference on Robotics and Biomimetics; 2019; pp. 708-713 *
A point cloud data classification method based on height differences; Ma Dongling et al.; Bulletin of Surveying and Mapping; 2018, No. 6; pp. 46-49 *
Research on pedestrian detection based on AdaBoost and Bayes algorithms; Duan Chengwei; China Masters' Theses Full-text Database, Information Science and Technology; May 2015, No. 05; pp. 4, 10-11, 15-16, 20-21, 34-38, 43-45 *
Pedestrian detection and tracking method based on likelihood-field background subtraction; Tian Lianfang et al.; Computer Engineering and Design; January 2020, No. 01; pp. 71-77 *
A feature selection method based on maximum local density margin; Lou Rui et al.; Computer Engineering and Design; March 2019, Vol. 40, No. 3; pp. 699-705 *
A multi-method fusion tracking algorithm based on Bayesian decision; Zhou Xu et al.; Computer Knowledge and Technology; July 2016, Vol. 12, No. 21; pp. 265-268 *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant