CN111310840B - Data fusion processing method, device, equipment and storage medium - Google Patents


Info

Publication number
CN111310840B
CN111310840B (application CN202010112841.7A)
Authority
CN
China
Prior art keywords
measurement result
combination
candidate
measured object
elements
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010112841.7A
Other languages
Chinese (zh)
Other versions
CN111310840A (en)
Inventor
徐铎 (Xu Duo)
Current Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202010112841.7A
Publication of CN111310840A
Application granted
Publication of CN111310840B
Legal status: Active

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/25 - Fusion techniques
    • G06F18/24 - Classification techniques
    • G06F18/243 - Classification techniques relating to the number of classes
    • G06F18/24323 - Tree-organised classifiers
    • G01 - MEASURING; TESTING
    • G01S - RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S13/00 - Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S13/86 - Combinations of radar systems with non-radar systems, e.g. sonar, direction finder
    • G01S13/865 - Combination of radar systems with lidar systems

Abstract

The embodiment of the application discloses a data fusion processing method, device, equipment and storage medium, relates to the field of data fusion, and can be used for automatic driving. The specific implementation scheme is as follows: at least two candidate measurement result combinations are determined according to the measurement results of different sensors for the measured object, where each candidate measurement result combination comprises at least one element and each element comprises at least two measurement results; a random forest model is then used to determine, from the at least two candidate combinations, a target measurement result combination whose elements are associated with the same measured object, and the elements of the target combination are taken as the fusion results. Because the measurement results associated with the same measured object are determined by the random forest model, the whole process requires neither an assumed normal distribution nor manual parameter tuning, which improves the precision and recall of the finally obtained fusion result.

Description

Data fusion processing method, device, equipment and storage medium
Technical Field
Embodiments of the present application relate to the field of data processing, in particular to the technical field of data fusion, and provide a data fusion processing method, device, equipment and storage medium that can be used for automatic driving.
Background
Sensor data fusion combines the data features of the same object acquired by different sensors to obtain high-precision information on each attribute of the object. In the process of fusing sensor data, it is necessary to judge whether the data features acquired by different sensors are associated with the same measured object, so as to ensure the accuracy of the subsequent sensor data fusion.
In the prior art, a normal distribution with a manually set standard deviation is generally used to model the measurement error distribution of the data features under each attribute, so as to judge whether different sensors have detected the same object and to complete the data fusion according to the judgment result. However, the measurement errors of sensors do not always follow a normal distribution, and a manually tuned standard deviation has limited accuracy, so the judgment of whether different sensors have detected the same object carries a large error, which seriously affects the accuracy of the subsequent fusion result.
Disclosure of Invention
The embodiment of the application discloses a data fusion processing method, device, equipment and storage medium, which can improve the precision and recall of the finally obtained fusion result.
In a first aspect, an embodiment of the present application discloses a data fusion processing method, including:
Determining at least two candidate measurement result combinations according to the measurement results of different sensors on the measured object; the candidate measurement result combination comprises at least one element; the element comprises at least two measurements;
and determining a target measurement result combination of which the elements are associated with the same measured object from the at least two candidate measurement result combinations by adopting a random forest model, and taking the elements in the target measurement result combination as fusion results.
One embodiment of the above application has the following advantages or benefits: a random forest model is used to determine, from candidate measurement result combinations formed from the measurement results of different sensors for the measured objects, a target measurement result combination whose elements are associated with the same measured object, and the elements of the target combination are used as the fusion results. Because the measurement results associated with the same measured object are determined by the random forest model, the whole process requires neither an assumed normal distribution nor manual parameter tuning, which improves the accuracy of the judgment and the precision and recall of the finally obtained fusion result.
In addition, the data fusion processing method according to the above embodiment of the present application may further have the following additional technical features:
Optionally, the number of elements in each candidate measurement result combination is equal to the number of measurement results of the first sensor; the number of candidate measurement result combinations is the quotient of a first factorial and a second factorial, where the first factorial is the factorial of the number of measurement results of the second sensor, and the second factorial is the factorial of the difference between the numbers of measurement results of the second sensor and the first sensor; the number of measurement results of the first sensor is less than or equal to the number of measurement results of the second sensor.
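The combination count described above can be written directly from the two factorials; a minimal sketch (the function name is illustrative, not taken from the patent):

```python
from math import factorial

def num_candidate_combinations(m: int, n: int) -> int:
    """Number of candidate measurement result combinations when the first sensor
    has m measurement results and the second has n (with m <= n): n!/(n-m)!."""
    assert m <= n
    return factorial(n) // factorial(n - m)

# e.g. 2 lidar measurement results matched against 3 radar measurement results:
print(num_candidate_combinations(2, 3))  # 3!/(3-2)! = 6
```

The quotient counts ordered selections of m of the second sensor's results, one per element, which matches the permutation-style enumeration described in the detailed embodiments.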
One embodiment of the above application has the following advantages or benefits: a way of determining the candidate measurement result combinations and the number of elements in each combination is provided, which guarantees the comprehensiveness of the generated candidate combinations so that the target measurement result combination can subsequently be determined more accurately.
Optionally, using the random forest model to determine, from the at least two candidate measurement result combinations, a target measurement result combination whose elements are associated with the same measured object includes:
inputting the elements of the at least two candidate measurement result combinations into the random forest model to obtain the probability that each element is associated with the same measured object;
determining the overall similarity of each of the at least two candidate measurement result combinations according to the probabilities that their elements are associated with the same measured object;
and taking the candidate measurement result combination with the highest overall similarity as the target measurement result combination.
One embodiment of the above application has the following advantages or benefits: a random forest model is used to calculate the probability that each element in each candidate measurement result combination is associated with the same measured object; the overall similarity of each combination is then determined from those probabilities, and the combination with the highest overall similarity is taken as the target measurement result combination. The whole process requires neither an assumed normal distribution nor manual parameter tuning, which improves the accuracy of the target measurement result combination.
Optionally, before determining the target measurement result combination of the same measured object associated with the element from the at least two candidate measurement result combinations by using the random forest model, the method further includes:
acquiring sample measurement result sets acquired by different sensors, adding classification labels for the sample measurement result sets, and constructing a training sample set;
And training a random forest model by adopting the training sample set.
Optionally, after determining a target measurement result combination of which elements are associated with the same measured object from the at least two candidate measurement result combinations by adopting a random forest model, taking elements in the target measurement result combination as fusion results, the method further comprises:
and optimizing and updating the random forest model according to the combination of the at least two candidate measurement results and the fusion result.
One embodiment of the above application has the following advantages or benefits: a training sample set is constructed from sample measurement results acquired by different sensors, and the random forest model is trained on a large number of training samples, which ensures the accuracy of the model's output. In addition, after each data fusion is completed, the parameters of the random forest model are further optimized and updated according to the fusion result, improving the accuracy of the probabilities the model outputs.
Optionally, the different sensors are any two of a laser radar, a millimeter wave radar and an image collector; the measurement result is a speed measurement result and/or a position measurement result.
Optionally, if the different sensors include an image collector, the measurement result further includes: a projection result;
wherein the projection result includes at least one of: label frame information of the measured object in the image acquired by the image collector; the number of points of the measured object collected by the laser radar or the millimeter wave radar; and projection data of the points of the measured object collected by the laser radar or the millimeter wave radar, projected into the image of the measured object acquired by the image collector.
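A projection result like the one above can be held in a small record type. The field names and the inside-the-frame consistency check below are illustrative assumptions, not details given by the patent:

```python
from dataclasses import dataclass

@dataclass
class ProjectionResult:
    label_frame: tuple        # (x1, y1, x2, y2) of the object's label frame in the image
    point_count: int          # number of lidar/radar points collected on the object
    projected_points: list    # those points projected into image coordinates

def fraction_in_frame(pr: ProjectionResult) -> float:
    """Illustrative consistency feature: share of projected points inside the label frame."""
    x1, y1, x2, y2 = pr.label_frame
    inside = sum(1 for (x, y) in pr.projected_points
                 if x1 <= x <= x2 and y1 <= y <= y2)
    return inside / max(len(pr.projected_points), 1)

pr = ProjectionResult(label_frame=(10, 10, 50, 40), point_count=3,
                      projected_points=[(20, 20), (30, 35), (80, 90)])
```

A feature such as `fraction_in_frame` (here 2/3) is one plausible way to turn a projection result into a scalar input for an association model.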
One embodiment of the above application has the following advantages or benefits: the types of the different sensors and the content of the measurement results are introduced. The measurement results in this embodiment cover more dimensions, and the embodiment can judge whether measurement results of different dimensions are associated with the same measured object. Judging from multi-dimensional measurement results whether the elements of a measurement result combination are associated with the same measured object further improves the accuracy of the judgment.
In a second aspect, an embodiment of the present application provides a data fusion processing apparatus, including:
the measuring result combination module is used for determining at least two candidate measuring result combinations according to measuring results of different sensors on the measured object; the candidate measurement result combination comprises at least one element; the element comprises at least two measurements;
And the fusion result determining module is used for determining a target measurement result combination of which the elements are associated with the same measured object from the at least two candidate measurement result combinations by adopting a random forest model, and taking the elements in the target measurement result combination as fusion results.
In a third aspect, an embodiment of the present application provides an electronic device, including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the data fusion processing method according to any of the embodiments of the present application.
In a fourth aspect, an embodiment of the present application provides a non-transitory computer readable storage medium storing computer instructions for causing a computer to execute the data fusion processing method according to any embodiment of the present application.
One embodiment of the above application has the following advantages or benefits: a random forest model is used to determine, from candidate measurement result combinations formed from the measurement results of different sensors for the measured objects, a target measurement result combination whose elements are associated with the same measured object, and the elements of the target combination are used as the fusion results. Because the measurement results associated with the same measured object are determined by the random forest model, the whole process requires neither an assumed normal distribution nor manual parameter tuning, which improves the accuracy of the judgment and the precision and recall of the finally obtained fusion result.
Other effects of the above alternative will be described below in connection with specific embodiments.
Drawings
The drawings are included to provide a better understanding of the present application and are not to be construed as limiting the application. Wherein:
FIG. 1 is a flow chart of a data fusion processing method according to a first embodiment of the present application;
FIG. 2 is a flow chart of a data fusion processing method according to a second embodiment of the present application;
fig. 3 is a schematic structural diagram of a data fusion processing device according to a fourth embodiment of the present application;
fig. 4 is a block diagram of an electronic device for implementing a data fusion processing method according to an embodiment of the present application.
Detailed Description
Exemplary embodiments of the present application will now be described with reference to the accompanying drawings, in which various details of the embodiments of the present application are included to facilitate understanding, and are to be considered merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the application. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
First embodiment
Fig. 1 is a flowchart of a data fusion processing method according to a first embodiment of the present application, where the present embodiment is applicable to a case where data fusion is performed on measurement results acquired by different sensors for different measured objects, the method may be performed by a data fusion processing apparatus, and the apparatus may be implemented in software and/or hardware, and is preferably configured in an electronic device. As shown in fig. 1, the method specifically includes the following steps:
S101, determining at least two candidate measurement result combinations according to measurement results of different sensors on the measured object.
In the present application, the types of sensors may include a laser radar, a millimeter wave radar, an image collector, and the like. The different sensors are at least two sensors of different types. Preferably, the present embodiment selects two different types of sensors, namely any two of a laser radar, a millimeter wave radar, and an image collector.
The measured object in the present application can be determined according to the current acquisition scene and the purpose of the fused data. For example, if the acquisition scene is a street scene and the fused data are used to help an autonomous vehicle determine information about obstacles (such as pedestrians and vehicles) on the street, the measured objects may be the obstacles on the street. There may be one or more measured objects. Because different sensors have different acquisition ranges or acquisition positions, the numbers of measured objects they detect in the same scene at the same time may or may not be equal: for example, if there are three measured objects in the current scene, the laser radar and the image collector may both detect all three, or the laser radar may detect three while the image collector detects only two.
The measurement result of the sensor on the measured object in the application can be the data characteristics of the measured object under different measurement dimensions, which are obtained by the sensor in the process of carrying out data detection on the measured object. Alternatively, the types of measurements may include, but are not limited to: speed measurements, position measurements, projection results, etc. Alternatively, the types of sensors in this embodiment are different, and the types of measurement results of the sensors on the measured object are also different. Specifically, the correspondence between the sensor type and the measurement result type of the object to be measured will be described in detail in the following embodiments.
The candidate measurement result combinations can be obtained by permuting and combining, according to a certain rule, the measurement results of at least one measured object detected by the different sensors. There are multiple candidate measurement result combinations; each candidate combination comprises at least one element, and each element comprises at least two measurement results, that is, each element combines one measurement result from each of the different sensors. Specifically, in the embodiment of the present application, each of the at least two candidate measurement result combinations may be determined from the measurement results of the different sensors as follows: select one measurement result from all measurement results of each sensor to form the first element of the candidate combination; then select one measurement result from the remaining measurement results of each sensor to form the second element, and so on, until one sensor has no measurement results left, at which point the candidate combination is complete. Permuting and combining all measurement results of the different sensors in this way ensures that the generated candidate combinations cover every possible arrangement.
Optionally, when the number of different sensors is larger, each element in a candidate measurement result combination involves more measurement results, which may affect the accuracy of judging whether those measurement results are associated with the same measured object. Therefore, to ensure the accuracy of the fusion result, the different sensors of the present application are preferably two sensors, a first sensor and a second sensor. If the number m of measurement results of the first sensor is less than or equal to the number n of measurement results of the second sensor, the number of elements in each candidate measurement result combination determined in this step equals m, and the number of candidate measurement result combinations is the quotient of a first factorial and a second factorial, i.e. n!/(n-m)!, where the first factorial is the factorial n! of the number of measurement results of the second sensor and the second factorial is the factorial (n-m)! of the difference between the numbers of measurement results of the two sensors.
By way of example, assuming that the different sensors are a laser radar (lidar) and a millimeter wave radar (radar), the measured objects are objects 1 and 2, the measurement results collected by the lidar for objects 1 and 2 are l1 and l2, and the measurement results collected by the radar for objects 1 and 2 are r1 and r2, two candidate measurement result combinations can be determined according to the method described above: (l1r1, l2r2) and (l1r2, l2r1). Each candidate measurement result combination comprises two elements; for example, the combination (l1r1, l2r2) comprises the two elements l1r1 and l2r2, and each element combines one measurement result selected from the lidar's measurement results for the measured objects with one selected from the radar's.
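The enumeration in this example can be sketched in a few lines: permuting the second sensor's measurement results and pairing them element-wise with the first sensor's is one straightforward way to generate the candidate combinations (a sketch, not the patent's implementation):

```python
from itertools import permutations
from math import factorial

lidar = ["l1", "l2"]   # first sensor,  m = 2 measurement results
radar = ["r1", "r2"]   # second sensor, n = 2 measurement results
m, n = len(lidar), len(radar)

# Each permutation of m radar results, zipped against the lidar results,
# yields one candidate measurement result combination.
combos = [tuple(zip(lidar, perm)) for perm in permutations(radar, m)]

print(combos)
# [(('l1', 'r1'), ('l2', 'r2')), (('l1', 'r2'), ('l2', 'r1'))]
assert len(combos) == factorial(n) // factorial(n - m)  # n!/(n-m)! = 2
```

With m = 2 and n = 2 this reproduces exactly the two combinations (l1r1, l2r2) and (l1r2, l2r1) from the example above.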
S102, determining a target measurement result combination of the same measured object by element association from at least two candidate measurement result combinations by adopting a random forest model, and taking elements in the target measurement result combination as fusion results.
In the present application, the random forest model may be a model constructed based on the random forest algorithm, and it can be used to judge whether each element in a candidate measurement result combination is associated with the same measured object. For example, for the element l1r1, the random forest model can judge whether l1 and r1 are associated with the same measured object, that is, whether the lidar measurement result l1 and the millimeter wave radar measurement result r1 correspond to the same object. The random forest model is obtained by training on a large amount of data in advance; the training process is described in detail in the following embodiments and is not repeated here.
Alternatively, in the present application, the random forest model may be used to analyse the elements of each candidate measurement result combination and find the candidate combination all of whose elements are associated with the same measured object, which is taken as the target measurement result combination. The specific implementation may comprise the following three substeps:
S1021, inputting elements in the combination of at least two candidate measurement results into the random forest model to obtain the probability that the elements are associated with the same measured object.
Specifically, this substep may input each element of each candidate measurement result combination into the random forest model trained in advance, so that the model analyses the input elements based on the algorithm learned during training and calculates and outputs the probability that each input element is associated with the same measured object. Illustratively, assume there are two candidate measurement result combinations, (l1r1, l2r2) and (l1r2, l2r1); the elements they contain are l1r1, l2r2, l1r2 and l2r1. Inputting these four elements into the random forest model in sequence yields: the probability that l1 and r1 in l1r1 are associated with the same object, denoted probability 1; the probability that l2 and r2 in l2r2 are associated with the same object, probability 2; the probability that l1 and r2 in l1r2 are associated with the same object, probability 3; and the probability that l2 and r1 in l2r1 are associated with the same object, probability 4.
S1022, determining the overall similarity of the at least two candidate measurement result combinations according to the probability that the elements in the at least two candidate measurement result combinations are associated with the same measured object.
Specifically, this substep may, for each candidate measurement result combination, accumulate the probabilities obtained in S1021 for the elements it contains, so as to obtain the overall similarity of that combination. Illustratively, probability 1 and probability 2 obtained in S1021 are summed to obtain the overall similarity of the candidate measurement result combination (l1r1, l2r2), i.e. overall similarity 1, and probability 3 and probability 4 are summed to obtain the overall similarity of the candidate measurement result combination (l1r2, l2r1), i.e. overall similarity 2.
And S1023, taking the candidate measurement result combination with the highest overall similarity as a target measurement result combination.
Specifically, this substep compares the overall similarities of the candidate measurement result combinations calculated in S1022 and selects the combination with the highest overall similarity as the target measurement result combination. For example, if overall similarity 1 calculated in S1022 is greater than overall similarity 2, the candidate measurement result combination (l1r1, l2r2) corresponding to overall similarity 1 is taken as the target measurement result combination.
In the present application, data fusion of the measurement results of different sensors essentially combines the data features of the same measured object acquired by different sensors to obtain high-precision information on each attribute of the measured object. Since each element of the target measurement result combination determined in this step is a combination of measurement results associated with the same measured object, once the target combination is determined, its elements can be used directly as the fusion results. For example, assuming the target measurement result combination is (l1r1, l2r2), the element l1r1 can be used as the fusion result for measured object 1 and the element l2r2 as the fusion result for measured object 2.
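Substeps S1021 to S1023 can be sketched end to end. The per-element probabilities below are hypothetical stand-ins for the random forest model's output; everything after that follows the accumulate-and-argmax procedure described above:

```python
# Hypothetical probabilities that each element's two measurement results are
# associated with the same measured object (stand-ins for S1021 model output).
probs = {
    ("l1", "r1"): 0.90, ("l2", "r2"): 0.85,  # probability 1, probability 2
    ("l1", "r2"): 0.20, ("l2", "r1"): 0.30,  # probability 3, probability 4
}
candidates = [
    (("l1", "r1"), ("l2", "r2")),
    (("l1", "r2"), ("l2", "r1")),
]

def overall_similarity(combo):
    # S1022: accumulate the probabilities of the elements in one combination.
    return sum(probs[element] for element in combo)

# S1023: the combination with the highest overall similarity is the target.
target = max(candidates, key=overall_similarity)
fusion_result = list(target)  # each element fuses the measurements of one object
print(target)  # (('l1', 'r1'), ('l2', 'r2'))
```

Here overall similarity 1 is 0.90 + 0.85 = 1.75 versus 0.50 for the other combination, so (l1r1, l2r2) is selected, matching the worked example in the text.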
In the technical scheme above, a random forest model is used to determine, from candidate measurement result combinations formed from the measurement results of different sensors for the measured objects, a target measurement result combination whose elements are associated with the same measured object, and the elements of the target combination are then used as the fusion results. Because the measurement results associated with the same measured object are determined by the random forest model, the whole process requires neither an assumed normal distribution nor manual parameter tuning, which improves the accuracy of the judgment and the precision and recall of the finally obtained fusion result.
Second embodiment
Fig. 2 is a flowchart of a data fusion processing method according to a second embodiment of the present application, which is further optimized based on the first embodiment, and specifically illustrates a training process of a random forest model. As shown in fig. 2, the method specifically includes the following steps:
S201, acquiring sample measurement result sets collected by different sensors, adding classification labels to the sample measurement result sets, and constructing a training sample set.
In the embodiment of the application, the sample measurement result set may be a set formed by a large number of measurement results acquired for a plurality of different measured objects by different sensors in advance. It should be noted that, the types of the sample measurement results collected by the different sensors are the same as the types of the measurement results collected by the different sensors for the measured object when the data fusion is actually performed subsequently.
Specifically, in this embodiment, the different sensors may be controlled in advance to collect measurement results for a number of different measured objects, and the collected results form the sample measurement result sets. To add classification labels, one measurement result is selected from the sample set of each sensor and combined; a label is then added to the combined result according to whether its measurement results belong to the same measured object: if the combined measurement results are the different sensors' measurements of the same measured object, a label of "associated with the same measured object" is added, otherwise a label of "not associated with the same measured object" is added. Each labelled combined result serves as one training sample, and permuting and combining the sample measurement results of the different sensors in this way yields a training sample set containing a large number of training samples.
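The label construction just described can be sketched with a hypothetical ground truth mapping each sample measurement result to the object it came from (all names here are illustrative, not from the patent):

```python
from itertools import product

# Hypothetical ground truth for the sample measurement result sets.
lidar_truth = {"l1": "obj1", "l2": "obj2"}
radar_truth = {"r1": "obj1", "r2": "obj2", "r3": "obj3"}

# Combine one measurement result from each sensor's sample set; the label is 1
# ("associated with the same measured object") when both came from the same
# object, otherwise 0.
training_samples = [
    ((l, r), 1 if lidar_truth[l] == radar_truth[r] else 0)
    for l, r in product(lidar_truth, radar_truth)
]
print(sum(label for _, label in training_samples))  # 2 positives out of 6 samples
```

In practice each pair (l, r) would be expanded into a feature vector (e.g. speed and position differences) before training, but the labelling logic is the same.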
S202, training a random forest model by adopting a training sample set.
In the embodiment of the application, training the random forest model with the training sample set may proceed as follows: an initial random forest model is first constructed based on a random forest algorithm, then each group of training samples in the training sample set is input in turn into the initial random forest model to train it and optimize the parameters in the model. After one training stage is completed (for example, after training for a certain period of time, or on a preset number of training sample sets), the accuracy of the trained random forest model may be checked with test samples. If the check passes — that is, the accuracy of the probabilities, output by the trained model for the input test samples, of being associated with the same measured object meets the preset requirement — the current training of the random forest model is complete. Otherwise the accuracy of the trained model does not meet the requirement, and a new training sample set must be acquired to continue training the random forest model until its accuracy reaches the preset requirement.
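The staged train-then-verify loop can be sketched with scikit-learn's `RandomForestClassifier`. The library choice, the synthetic data, and the 0.85 threshold are assumptions of this sketch; the patent does not name an implementation or a concrete "preset requirement".

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for labeled (measurement-pair, same-object) samples
X, y = make_classification(n_samples=400, n_features=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

PRESET_ACCURACY = 0.85                      # the "preset requirement"
model = RandomForestClassifier(n_estimators=50, random_state=0)
model.fit(X_train, y_train)                 # one training stage

accuracy = model.score(X_test, y_test)      # check with test samples
if accuracy < PRESET_ACCURACY:
    # Check failed: acquire a new training sample set and train again
    pass
```

`predict_proba` on this trained model then yields, for each input element, the probability of being associated with the same measured object, which is what the fusion steps below consume.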
S203, determining at least two candidate measurement result combinations according to the measurement results of different sensors on the measured object.
Wherein the candidate measurement combination includes at least one element; the element includes at least two measurements.
S204, determining a target measurement result combination of the same measured object with the element association from at least two candidate measurement result combinations by adopting a random forest model, and taking the element in the target measurement result combination as a fusion result.
And S205, optimizing and updating the random forest model according to the combination and fusion result of at least two candidate measurement results.
In order to continuously improve the accuracy of the random forest model during its use, in the embodiment of the application, each time data fusion with the random forest model is completed, the current random forest model is optimized and updated according to the fusion result. The specific process may be as follows: according to the at least two candidate measurement result combinations and the obtained fusion result, the elements in each candidate measurement result combination are reviewed manually or in another way to judge whether the probabilities, output by the current random forest model, that the elements are associated with the same measured object are accurate. Then, according to each element in the at least two candidate measurement result combinations, the probability output for each element during fusion, and whether that output probability was actually correct, each parameter in the current random forest model is optimized and updated.
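A common practical way to realize the update described above is to refit the random forest on a sample set augmented with the reviewed fusion results (a random forest is not usually updated by gradient back-propagation). The refit approach and the synthetic data below are assumptions of this sketch, not the patent's prescribed mechanism.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Existing training data and the currently deployed model
X_old = rng.normal(size=(100, 4))
y_old = rng.integers(0, 2, 100)
model = RandomForestClassifier(n_estimators=20, random_state=0)
model.fit(X_old, y_old)

# After a fusion pass, manually-reviewed element/label pairs
# become new labeled samples
X_new = rng.normal(size=(10, 4))
y_new = rng.integers(0, 2, 10)

# "Optimize and update": refit on the augmented sample set
X_all = np.vstack([X_old, X_new])
y_all = np.concatenate([y_old, y_new])
model.fit(X_all, y_all)
```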
According to the technical scheme of this embodiment, before data fusion is carried out, a training sample set is constructed from the sample measurement results collected by different sensors, and the random forest model is trained with a large number of training samples, which guarantees the accuracy of the model's output. When data fusion is carried out, the random forest model is used to determine, from the candidate measurement result combinations formed by the measurement results of the different sensors on the measured object, a target measurement result combination whose elements are associated with the same measured object, and the elements of the target measurement result combination are taken as the fusion result. After data fusion, the parameters of the random forest model are continuously optimized and updated according to the current fusion result, improving the accuracy of the probabilities the model outputs. Through training before data fusion and optimization updating after data fusion, this scheme guarantees the accuracy of the random forest model's output probabilities, avoids the low accuracy of the prior-art approach of completing data fusion with normal distributions and manual parameter adjustment, and guarantees the precision and recall of the final fusion result.
Third embodiment
The embodiment of the application explains, on the basis of the above embodiments, the types of measurement results collected by different sensors for the measured object. The types of sensors in embodiments of the present application may include, but are not limited to: laser radar, millimeter wave radar, and image collectors (e.g., cameras). In order to improve the accuracy of the subsequent judgment that measurement results of different sensors are associated with the same measured object, in the embodiment of the application the different sensors are preferably any two of a laser radar, a millimeter wave radar, and an image collector; the measurement results of the different sensors may include speed measurement results and/or position measurement results. The speed and position measurement results in the present application may include measurements in multiple directional dimensions; for example, they may include measurements in the x-coordinate direction and in the y-coordinate direction.
Optionally, if the different sensors include an image collector, the measurement results further include a projection result, where the projection result includes at least one of: labeling frame information of the measured object in the image collected by the image collector, the number of points of the measured object collected by the laser radar or the millimeter wave radar, and projection data of the points of the measured object collected by the laser radar or the millimeter wave radar into the image of the measured object collected by the image collector. The labeling frame of the measured object in the image collected by the image collector may be a bounding box that frames the measured object contained in that image.
Next, this embodiment describes the types of measurement results collected for the measured object under the following three different sensor combinations:
In the first case, if the different sensors of the present application are a laser radar and a millimeter wave radar, the measurement results collected by the two sensors on the measured object include:
1) The x coordinate of each measured object detected by the laser radar;
2) The y coordinate of each measured object detected by the laser radar;
3) The speed of each measured object detected by the laser radar in the x direction;
4) The speed of each measured object detected by the laser radar in the y direction;
5) The x coordinate of each measured object detected by the millimeter wave radar;
6) The y coordinate of each measured object detected by the millimeter wave radar;
7) The speed in the x direction of each measured object detected by the millimeter wave radar;
8) The speed in the y direction of each measured object detected by the millimeter wave radar.
In the first case, the measurement results include only speed and position measurement results; no projection result is included. Namely, 1) and 2) are the position measurement results of the laser radar on the measured object, and 3) and 4) are its speed measurement results; 5) and 6) are the position measurement results of the millimeter wave radar on the measured object, and 7) and 8) are its speed measurement results.
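For this first sensor pair, one candidate element's feature vector is simply the eight measurements 1) to 8) concatenated in order. The field names below are illustrative assumptions, not terms from the patent.

```python
def pair_features(lidar_obj, radar_obj):
    """Concatenate the eight measurements listed in 1)-8) into one
    feature vector for a single (lidar, radar) candidate element."""
    return [lidar_obj["x"], lidar_obj["y"], lidar_obj["vx"], lidar_obj["vy"],
            radar_obj["x"], radar_obj["y"], radar_obj["vx"], radar_obj["vy"]]

f = pair_features({"x": 1.0, "y": 2.0, "vx": 0.5, "vy": 0.0},
                  {"x": 1.1, "y": 2.1, "vx": 0.4, "vy": 0.1})
# f is an 8-dimensional vector ready to feed to the random forest model
```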
In the second case, if the different sensors of the present application are a laser radar and an image collector, the measurement results collected by the two sensors on the measured object include:
1) The x coordinate of each measured object detected by the laser radar;
2) The y coordinate of each measured object detected by the laser radar;
3) The speed of each measured object detected by the laser radar in the x direction;
4) The speed of each measured object detected by the laser radar in the y direction;
5) The x coordinate of each measured object detected by the image collector;
6) The y coordinate of each measured object detected by the image collector;
7) The speed in the x direction of each measured object detected by the image collector;
8) The speed in the y direction of each measured object detected by the image collector;
9) The number of points of each measured object detected by the laser radar;
10) The number of points of the measured object collected by the laser radar that fall within the labeling frame of the measured object collected by the image collector;
11) The x coordinate of the upper left corner of the labeling frame of the measured object collected by the image collector;
12) The y coordinate of the upper left corner of the labeling frame of the measured object collected by the image collector;
13) The x coordinate of the lower right corner of the labeling frame of the measured object collected by the image collector;
14) The y coordinate of the lower right corner of the labeling frame of the measured object collected by the image collector;
15) The minimum x coordinate of the two-dimensional points obtained by projecting the three-dimensional points of the measured object collected by the laser radar onto the image of the measured object collected by the image collector;
16) The minimum y coordinate of those projected two-dimensional points;
17) The maximum x coordinate of those projected two-dimensional points;
18) The maximum y coordinate of those projected two-dimensional points.
In the second case, the measurement results include not only the speed and position measurement results but also the projection result. Namely, 1) and 2) are the position measurement results of the laser radar on the measured object, and 3) and 4) are its speed measurement results; 5) and 6) are the position measurement results of the image collector on the measured object, and 7) and 8) are its speed measurement results; 9) is the number of points of the measured object collected by the laser radar; 11) to 14) are the labeling frame information of the measured object in the image collected by the image collector; 10) and 15) to 18) are the projection data of the points of the measured object collected by the laser radar into the image of the measured object collected by the image collector.
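Items 15) to 18) — the extent of the lidar points projected into the image — can be computed with a simple pinhole-camera projection. The intrinsic matrix `K` and the assumption that the points are already expressed in camera coordinates are choices of this sketch; the patent does not specify the projection model.

```python
import numpy as np

def projected_extent(points_3d, K):
    """Project lidar 3D points (N x 3, camera coordinates) into the image
    with intrinsic matrix K and return (x_min, y_min, x_max, y_max) --
    the quantities listed in items 15)-18)."""
    pts = np.asarray(points_3d, dtype=float)
    uv = K @ pts.T              # 3 x N homogeneous pixel coordinates
    uv = uv[:2] / uv[2]         # perspective divide -> 2 x N pixels
    return uv[0].min(), uv[1].min(), uv[0].max(), uv[1].max()

# Hypothetical intrinsics: focal length 500 px, principal point (320, 240)
K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])
extent = projected_extent([[0.0, 0.0, 10.0], [1.0, 0.5, 10.0]], K)
# extent -> (320.0, 240.0, 370.0, 265.0)
```

Comparing this extent with the labeling frame corners of items 11) to 14) gives the model a direct geometric cue for whether the lidar points and the image detection belong to the same measured object.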
In the third case, if the different sensors of the present application are a millimeter wave radar and an image collector, the measurement results collected by the two sensors on the measured object include:
1) The x coordinate of each measured object detected by the image collector;
2) The y coordinate of each measured object detected by the image collector;
3) The speed in the x direction of each measured object detected by the image collector;
4) The speed in the y direction of each measured object detected by the image collector;
5) The x coordinate of each measured object detected by the millimeter wave radar;
6) The y coordinate of each measured object detected by the millimeter wave radar;
7) The speed in the x direction of each measured object detected by the millimeter wave radar;
8) The speed in the y direction of each measured object detected by the millimeter wave radar;
9) The width of the labeling frame of the measured object in the image collected by the image collector;
10) The height of the labeling frame of the measured object in the image collected by the image collector;
11) The distance in the x direction between the point of the measured object collected by the millimeter wave radar, projected onto the image collected by the image collector, and the nearest labeling frame of the measured object;
12) The distance in the y direction between that projected point and the nearest labeling frame of the measured object.
In the third case, the measurement results include not only the speed and position measurement results but also the projection result. Namely, 1) and 2) are the position measurement results of the image collector on the measured object, and 3) and 4) are its speed measurement results; 5) and 6) are the position measurement results of the millimeter wave radar on the measured object, and 7) and 8) are its speed measurement results; 9) and 10) are the labeling frame information of the measured object in the image collected by the image collector; 11) and 12) are the projection data of the points of the measured object collected by the millimeter wave radar into the image of the measured object collected by the image collector.
In the three cases described above, the measurement results include position and speed measurement results, or position, speed, and projection measurement results, so the content covered is relatively comprehensive. However, in an actual data fusion process, only some of the measurement results given above may be selected; each case need not include all of the kinds of measurement results described, and the selection may be made according to the actual data fusion requirement.
Optionally, based on the above three cases, the embodiment of the present application may train one random forest model per case, select the corresponding random forest model according to the types of the different sensors, select a target measurement result combination from the at least two candidate measurement result combinations composed of the measurement results, and take the elements of the target measurement result combination as the fusion result. Alternatively, a single random forest model applicable to all three cases may be trained to perform data fusion on the measurement results of the measured object introduced in the three cases; this embodiment is not limited in this respect. The specific data fusion execution process has been described in the above embodiments and is not repeated here.
According to the technical scheme of this embodiment, the types of the different sensors and the contents of their measurement results are introduced. The dimensions covered by the measurement results in this embodiment are comprehensive, and these multi-dimensional measurement results can be applied to the data fusion processing method introduced in each embodiment. Using a random forest model to judge, from measurement results of many different dimensions, whether they are associated with the same measured object improves the accuracy of the judgment; in addition, the comprehensiveness of the fusion result obtained from the multi-dimensional measurement results is improved.
Fourth embodiment
Fig. 3 is a schematic structural diagram of a data fusion processing device according to a fourth embodiment of the present application, where the present embodiment is applicable to a case where data fusion is performed on measurement results collected by different sensors for different measured objects, and the device may implement the data fusion processing method according to any embodiment of the present application. The apparatus 300 specifically includes the following:
a measurement result combination module 301, configured to determine at least two candidate measurement result combinations according to measurement results of different sensors on the measured object; the candidate measurement result combination comprises at least one element; the element comprises at least two measurements;
The fusion result determining module 302 is configured to determine, from the at least two candidate measurement result combinations, a target measurement result combination in which elements are associated with the same measured object, using a random forest model, and use elements in the target measurement result combination as fusion results.
According to the technical scheme, a random forest model is adopted, and from candidate measurement result combinations formed by measurement results of different sensors on the measured object, target measurement result combinations of the same measured object with elements associated with the same measured object are determined, and then elements in the target measurement result combinations are used as fusion results. According to the scheme of the embodiment, the measurement results related to the same measured object are determined through the random forest model, normal distribution and manual parameter adjustment are not needed in the whole process, accuracy of the judgment result is improved, and accuracy and accurate recall rate of the finally obtained fusion result are improved.
Further, the number of elements in the candidate measurement result combination is equal to the number of measurement results of the first sensor; the number of candidate measurement result combinations is the quotient of a first factorial and a second factorial; the first factorial is the factorial of the number of measurement results of the second sensor; the second factorial is the factorial of the difference between the numbers of measurement results of the second sensor and the first sensor; wherein the number of measurement results of the first sensor is less than or equal to the number of measurement results of the second sensor.
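The quotient of the two factorials described above is the number of ordered selections of m items from n, P(n, m) = n! / (n − m)!, where n is the measurement count of the second sensor and m that of the first:

```python
from math import factorial

def candidate_combination_count(n_second, n_first):
    """Number of candidate measurement result combinations:
    n! / (n - m)!, where n is the number of measurement results of the
    second sensor and m that of the first sensor (m <= n)."""
    return factorial(n_second) // factorial(n_second - n_first)

count = candidate_combination_count(4, 2)  # 4! / 2! = 12 combinations
```

For example, pairing 2 lidar detections against 4 radar detections yields 4!/2! = 12 candidate combinations to score.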
Further, the above-mentioned fusion result determining module 302 is specifically configured to, when determining, from the at least two candidate measurement result combinations, a target measurement result combination of elements associated with the same measured object by using a random forest model:
inputting elements in the combination of the at least two candidate measurement results into a random forest model to obtain the probability that the elements are associated with the same measured object;
determining the overall similarity of the at least two candidate measurement result combinations according to the probability that elements in the at least two candidate measurement result combinations are associated with the same measured object;
and taking the candidate measurement result combination with the highest overall similarity as a target measurement result combination.
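The three steps performed by the fusion result determining module can be sketched as follows, taking the sum of per-element probabilities as the overall similarity (the patent does not fix the aggregation function, so the sum is an assumption of this sketch):

```python
def pick_target_combination(combinations, model_prob):
    """combinations: list of candidate combinations, each a list of
    elements; model_prob(element) returns the random-forest probability
    that the element's measurements belong to the same measured object.
    Returns the combination with the highest overall similarity."""
    similarities = [sum(model_prob(e) for e in combo) for combo in combinations]
    best = max(range(len(combinations)), key=similarities.__getitem__)
    return combinations[best]

# Two ways of pairing detections a1, a2 with detections b1, b2
combos = [[("a1", "b1"), ("a2", "b2")],
          [("a1", "b2"), ("a2", "b1")]]
# Stand-in for the random forest's predicted probabilities
probs = {("a1", "b1"): 0.9, ("a2", "b2"): 0.8,
         ("a1", "b2"): 0.2, ("a2", "b1"): 0.3}
target = pick_target_combination(combos, probs.get)
# target -> [("a1", "b1"), ("a2", "b2")], overall similarity 1.7 vs 0.5
```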
Further, the apparatus further includes a model training module, where the model training module is configured to:
acquiring sample measurement result sets acquired by different sensors, adding classification labels for the sample measurement result sets, and constructing a training sample set;
and training a random forest model by adopting the training sample set.
Further, the device further comprises a model updating module, wherein the model updating module is used for:
and optimizing and updating the random forest model according to the combination of the at least two candidate measurement results and the fusion result.
Further, the different sensors are any two of a laser radar, a millimeter wave radar and an image collector; the measurement is a speed measurement and/or a position measurement.
Further, if the different sensors include an image collector, the measurement result further includes: a projection result;
wherein the projection result includes at least one of: labeling frame information of the measured object in the image collected by the image collector, the number of points of the measured object collected by the laser radar or the millimeter wave radar, and projection data of the points of the measured object collected by the laser radar or the millimeter wave radar into the image of the measured object collected by the image collector.
Fifth embodiment
According to an embodiment of the present application, the present application also provides an electronic device and a readable storage medium.
As shown in fig. 4, a block diagram of an electronic device according to a data fusion processing method according to an embodiment of the present application is shown. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular telephones, smartphones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions are meant to be exemplary only, and are not meant to limit implementations of the applications described and/or claimed herein.
As shown in fig. 4, the electronic device includes: one or more processors 401, memory 402, and interfaces for connecting the components, including a high-speed interface and a low-speed interface. The various components are interconnected using different buses and may be mounted on a common motherboard or in other manners as desired. The processor may process instructions executed within the electronic device, including instructions stored in or on the memory to display graphical information of a graphical user interface (Graphical User Interface, GUI) on an external input/output device, such as a display device coupled to the interface. In other embodiments, multiple processors and/or multiple buses may be used, if desired, along with multiple memories. Also, multiple electronic devices may be connected, each providing a portion of the necessary operations, e.g., as a server array, a set of blade servers, or a multiprocessor system. One processor 401 is illustrated in fig. 4.
Memory 402 is a non-transitory computer readable storage medium provided by the present application. The memory stores instructions executable by at least one processor to cause the at least one processor to perform the data fusion processing method provided by the application. The non-transitory computer readable storage medium of the present application stores computer instructions for causing a computer to perform the method of data fusion processing provided by the present application.
The memory 402 is used as a non-transitory computer readable storage medium, and may be used to store a non-transitory software program, a non-transitory computer executable program, and modules, such as program instructions/modules corresponding to the data fusion processing method in the embodiment of the present application, for example, the measurement result combining module 301 and the fusion result determining module 302 shown in fig. 3. The processor 401 executes various functional applications of the server and data processing, i.e., implements the data fusion processing method in the above-described method embodiment, by running non-transitory software programs, instructions, and modules stored in the memory 402.
Memory 402 may include a storage program area that may store an operating system, at least one application program required for functionality, and a storage data area; the storage data area may store data created according to the use of the electronic device of the data fusion processing method, and the like. In addition, memory 402 may include high-speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid-state storage device. In some embodiments, memory 402 may optionally include memory located remotely from processor 401, which may be connected to the electronic device of the data fusion processing method via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The electronic device of the data fusion processing method may further include: an input device 403 and an output device 404. The processor 401, memory 402, input device 403, and output device 404 may be connected by a bus or otherwise, for example in fig. 4.
The input device 403 may receive input numeric or character information and generate key signal inputs related to user settings and function control of the electronic device of the data fusion processing method; examples include a touch screen, a keypad, a mouse, a track pad, a touch pad, a pointing stick, one or more mouse buttons, a track ball, and a joystick. The output device 404 may include a display device, an auxiliary lighting device such as a light emitting diode (Light Emitting Diode, LED), a haptic feedback device such as a vibration motor, and the like. The display device may include, but is not limited to, a liquid crystal display (Liquid Crystal Display, LCD), a light emitting diode (LED) display, and a plasma display. In some implementations, the display device may be a touch screen.
Various implementations of the systems and techniques described here can be implemented in digital electronic circuitry, integrated circuitry, application specific integrated circuits (Application Specific Integrated Circuit, ASIC), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implemented in one or more computer programs, the one or more computer programs may be executed and/or interpreted on a programmable system including at least one programmable processor, which may be a special purpose or general-purpose programmable processor, that may receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
These computing programs, also referred to as programs, software applications, or code, include machine instructions for a programmable processor, and may be implemented in a high-level procedural and/or object-oriented programming language, and/or in assembly/machine language. As used herein, the terms "machine-readable medium" and "computer-readable medium" refer to any computer program product, apparatus, and/or device for providing machine instructions and/or data to a programmable processor, e.g., magnetic discs, optical disks, memory, programmable logic devices (Programmable Logic Device, PLD), including a machine-readable medium that receives machine instructions as a machine-readable signal. The term "machine-readable signal" refers to any signal used to provide machine instructions and/or data to a programmable processor.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device for displaying information to a user, for example, a Cathode Ray Tube (CRT) or an LCD monitor; and a keyboard and pointing device, such as a mouse or trackball, by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback, e.g., visual feedback, auditory feedback, or tactile feedback; and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a background component, e.g., as a data server, or that includes a middleware component, e.g., an application server, or that includes a front-end component, e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here, or that includes any combination of such background, middleware, or front-end components. The components of the system may be interconnected by any form or medium of digital data communication, e.g., a communication network. Examples of communication networks include: local area network (Local Area Network, LAN), wide area network (Wide Area Network, WAN), the internet and blockchain networks.
The computer system may include a client and a server. The client and server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
According to the technical scheme of the embodiment of the application, a random forest model is adopted, and from candidate measurement result combinations formed by measurement results of different sensors on the measured object, a target measurement result combination of which the elements are associated with the same measured object is determined, and then the elements in the target measurement result combination are used as fusion results. According to the scheme of the embodiment, the measurement results related to the same measured object are determined through the random forest model, normal distribution and manual parameter adjustment are not needed in the whole process, accuracy of the judgment result is improved, and accuracy and accurate recall rate of the finally obtained fusion result are improved.
It should be appreciated that various forms of the flows shown above may be used to reorder, add, or delete steps. For example, the steps described in the present application may be performed in parallel, sequentially, or in a different order, provided that the desired results of the disclosed embodiments are achieved, and are not limited herein.
The above embodiments do not limit the scope of the present application. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present application should be included in the scope of the present application.

Claims (14)

1. A data fusion processing method, the method comprising:
determining at least two candidate measurement result combinations according to the measurement results of different sensors on the measured object; the candidate measurement result combination comprises at least one element; the element comprises at least two measurements;
determining a target measurement result combination of which the elements are associated with the same measured object from the at least two candidate measurement result combinations by adopting a random forest model, and taking the elements in the target measurement result combination as fusion results;
wherein determining, by using a random forest model, a target measurement result combination whose elements are associated with the same measured object from the at least two candidate measurement result combinations comprises:
inputting the elements of the at least two candidate measurement result combinations into the random forest model to obtain the probability that each element is associated with the same measured object;
determining the overall similarity of the at least two candidate measurement result combinations according to the probability that elements in the at least two candidate measurement result combinations are associated with the same measured object;
and taking the candidate measurement result combination with the highest overall similarity as a target measurement result combination.
2. The method of claim 1, wherein the number of elements in a candidate measurement result combination is equal to the number of measurement results of a first sensor; the number of candidate measurement result combinations is the quotient of a first factorial and a second factorial; the first factorial is the factorial of the number of measurement results of a second sensor; the second factorial is the factorial of the difference between the number of measurement results of the second sensor and that of the first sensor; wherein the number of measurement results of the first sensor is less than or equal to the number of measurement results of the second sensor.
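As a worked arithmetic check of claim 2 (the variable names m and n are ours): with n measurement results from the second sensor and m ≤ n from the first, the count n!/(n−m)! is exactly the number of ordered selections of m items from n:

```python
import math
from itertools import permutations

def candidate_count(m: int, n: int) -> int:
    """Quotient of the first factorial (n!) and the second ((n - m)!)."""
    return math.factorial(n) // math.factorial(n - m)

# e.g. 2 measurement results from the first sensor and 4 from the second:
# 4! / 2! = 24 / 2 = 12 candidate combinations, matching direct enumeration.
assert candidate_count(2, 4) == len(list(permutations(range(4), 2))) == 12
```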
3. The method of claim 1, further comprising, before determining a target measurement result combination whose elements are associated with the same measured object from the at least two candidate measurement result combinations by using a random forest model:
acquiring sample measurement result sets collected by different sensors, adding classification labels to the sample measurement result sets, and constructing a training sample set;
and training a random forest model by adopting the training sample set.
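A minimal sketch of the training step in claim 3, assuming scikit-learn and an invented feature encoding (the absolute gap between two sensors' measurements); the patent does not prescribe these details:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(42)

# Synthetic "sample measurement result sets": positive pairs differ only by
# sensor noise, negative pairs come from different measured objects.
pos = np.abs(rng.normal(0.0, 0.3, size=(300, 2)))  # small position/speed gaps
neg = rng.uniform(1.5, 8.0, size=(300, 2))         # large gaps
X = np.vstack([pos, neg])                          # training sample set
y = np.array([1] * 300 + [0] * 300)                # classification labels

model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
```

A pair with a small measurement gap is then classified as belonging to the same measured object, and a pair with a large gap as belonging to different objects.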
4. The method according to claim 3, further comprising, after determining a target measurement result combination whose elements are associated with the same measured object from the at least two candidate measurement result combinations by using a random forest model and taking the elements of the target measurement result combination as the fusion result:
optimizing and updating the random forest model according to the at least two candidate measurement result combinations and the fusion result.
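One way to realize the update in claim 4 — a sketch under the assumption that elements of the chosen target combination become new positive samples and rejected candidates negative samples, which the claim does not spell out — is a plain refit on the grown training set:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def update_model(model, X_old, y_old, candidate_feats, chosen_mask):
    """Refit the forest after appending the newly labeled candidates.

    candidate_feats: one feature row per element of the candidate
    combinations; chosen_mask: True where the element ended up in the
    target combination (the fusion result)."""
    X = np.vstack([X_old, candidate_feats])
    y = np.concatenate([y_old, np.asarray(chosen_mask, dtype=int)])
    model.fit(X, y)  # simple full refit; incremental schemes are also possible
    return model, X, y
```

A full refit is the simplest realization; the claim's "optimizing and updating" could equally be served by periodic retraining on an accumulated buffer.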
5. The method according to any one of claims 1-3, wherein the different sensors are any two of a lidar, a millimeter wave radar and an image collector; and the measurement results are speed measurement results and/or position measurement results.
6. The method of claim 5, wherein, if the different sensors include an image collector, the measurement results further include a projection result;
wherein the projection result comprises at least one of: label frame information of the measured object in an image collected by the image collector; the number of points of the measured object collected by the lidar or the millimeter wave radar; and projection data of the points of the measured object collected by the lidar or the millimeter wave radar in the image of the measured object collected by the image collector.
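For the projection data mentioned in claim 6, a hedged geometric sketch (the pinhole intrinsics `K` and the label-frame box below are made up for illustration) of projecting lidar or millimeter-wave points into the camera image and counting those inside a detection's label frame:

```python
import numpy as np

def points_in_label_frame(points_xyz, K, box):
    """points_xyz: (N, 3) points in camera coordinates (z forward);
    K: 3x3 camera intrinsics; box: (x_min, y_min, x_max, y_max) in pixels.
    Returns the point count inside the box and their pixel coordinates."""
    pts = np.asarray(points_xyz, dtype=float)
    uvw = (K @ pts.T).T               # pinhole projection
    uv = uvw[:, :2] / uvw[:, 2:3]     # perspective divide
    x0, y0, x1, y1 = box
    inside = ((uv[:, 0] >= x0) & (uv[:, 0] <= x1) &
              (uv[:, 1] >= y0) & (uv[:, 1] <= y1))
    return int(inside.sum()), uv[inside]

K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
count, uv = points_in_label_frame([[0.0, 0.0, 10.0],
                                   [0.1, 0.0, 10.0],
                                   [5.0, 5.0, 10.0]],
                                  K, (300, 220, 340, 260))
# two of the three points land inside the label frame
```

The resulting count (and the projected coordinates) can serve as additional features for the association model alongside speed and position gaps.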
7. A data fusion processing apparatus, the apparatus comprising:
the measurement result combination module is used for determining at least two candidate measurement result combinations according to measurement results of different sensors on a measured object; wherein each candidate measurement result combination comprises at least one element, and each element comprises at least two measurement results;
The fusion result determining module is used for determining a target measurement result combination of which the elements are associated with the same measured object from the at least two candidate measurement result combinations by adopting a random forest model, and taking the elements in the target measurement result combination as fusion results;
the fusion result determining module is specifically configured to:
inputting the elements of the at least two candidate measurement result combinations into the random forest model to obtain the probability that each element is associated with the same measured object;
determining the overall similarity of the at least two candidate measurement result combinations according to the probability that elements in the at least two candidate measurement result combinations are associated with the same measured object;
and taking the candidate measurement result combination with the highest overall similarity as a target measurement result combination.
8. The apparatus of claim 7, wherein the number of elements in a candidate measurement result combination is equal to the number of measurement results of a first sensor; the number of candidate measurement result combinations is the quotient of a first factorial and a second factorial; the first factorial is the factorial of the number of measurement results of a second sensor; the second factorial is the factorial of the difference between the number of measurement results of the second sensor and that of the first sensor; wherein the number of measurement results of the first sensor is less than or equal to the number of measurement results of the second sensor.
9. The apparatus of claim 7, further comprising a model training module specifically configured for:
acquiring sample measurement result sets collected by different sensors, adding classification labels to the sample measurement result sets, and constructing a training sample set;
and training a random forest model by adopting the training sample set.
10. The apparatus according to claim 9, further comprising a model update module specifically configured for:
optimizing and updating the random forest model according to the at least two candidate measurement result combinations and the fusion result.
11. The apparatus of any one of claims 7-9, wherein the different sensors are any two of a lidar, a millimeter wave radar, and an image collector; the measurement is a speed measurement and/or a position measurement.
12. The apparatus of claim 11, wherein, if the different sensors include an image collector, the measurement results further include a projection result;
wherein the projection result comprises at least one of: label frame information of the measured object in an image collected by the image collector; the number of points of the measured object collected by the lidar or the millimeter wave radar; and projection data of the points of the measured object collected by the lidar or the millimeter wave radar in the image of the measured object collected by the image collector.
13. An electronic device, the electronic device comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the data fusion processing method of any one of claims 1-6.
14. A non-transitory computer readable storage medium storing computer instructions for causing the computer to perform the data fusion processing method of any one of claims 1-6.
CN202010112841.7A 2020-02-24 2020-02-24 Data fusion processing method, device, equipment and storage medium Active CN111310840B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010112841.7A CN111310840B (en) 2020-02-24 2020-02-24 Data fusion processing method, device, equipment and storage medium


Publications (2)

Publication Number Publication Date
CN111310840A CN111310840A (en) 2020-06-19
CN111310840B true CN111310840B (en) 2023-10-17

Family

ID=71158341

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010112841.7A Active CN111310840B (en) 2020-02-24 2020-02-24 Data fusion processing method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111310840B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111830502B (en) * 2020-06-30 2021-10-12 广州小鹏自动驾驶科技有限公司 Data set establishing method, vehicle and storage medium
CN111985578A (en) * 2020-09-02 2020-11-24 深圳壹账通智能科技有限公司 Multi-source data fusion method and device, computer equipment and storage medium
CN112147601B (en) * 2020-09-03 2023-05-26 南京信息工程大学 Sea surface small target detection method based on random forest
CN112214531B (en) * 2020-10-12 2021-11-05 海南大学 Cross-data, information and knowledge multi-modal feature mining method and component
CN112733907A (en) * 2020-12-31 2021-04-30 上海商汤临港智能科技有限公司 Data fusion method and device, electronic equipment and storage medium
CN113343458B (en) * 2021-05-31 2023-07-18 潍柴动力股份有限公司 Engine sensor selection method and device, electronic equipment and storage medium

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE10324897A1 (en) * 2003-05-30 2004-12-23 Robert Bosch Gmbh Object detection method for a vehicle driver assistance system based on maximum a-posteriori estimation of Bayesian probability functions for independently obtained measurement values
CN102323323A (en) * 2011-07-12 2012-01-18 南京医科大学 Preparation method for 17 beta-estradiol molecular imprinting film electrochemical sensor
CN107358142A (en) * 2017-05-15 2017-11-17 西安电子科技大学 Polarimetric SAR Image semisupervised classification method based on random forest composition
CN108304490A (en) * 2018-01-08 2018-07-20 有米科技股份有限公司 Text based similarity determines method, apparatus and computer equipment
CN108333569A (en) * 2018-01-19 2018-07-27 杭州电子科技大学 A kind of asynchronous multiple sensors fusion multi-object tracking method based on PHD filtering
CN108614601A (en) * 2018-04-08 2018-10-02 西北农林科技大学 A kind of facility luminous environment regulation and control method of fusion random forests algorithm
CN109473142A (en) * 2018-10-10 2019-03-15 深圳韦格纳医学检验实验室 The construction method of sample data sets and its hereditary birthplace prediction technique
CN110231156A (en) * 2019-06-26 2019-09-13 山东大学 Service robot kinematic system method for diagnosing faults and device based on temporal aspect
CN110334767A (en) * 2019-07-08 2019-10-15 重庆大学 A kind of improvement random forest method for air quality classification
CN110641472A (en) * 2018-06-27 2020-01-03 百度(美国)有限责任公司 Safety monitoring system for autonomous vehicle based on neural network
CN110704543A (en) * 2019-08-19 2020-01-17 上海机电工程研究所 Multi-type multi-platform information data self-adaptive fusion system and method
CN110703732A (en) * 2019-10-21 2020-01-17 北京百度网讯科技有限公司 Correlation detection method, device, equipment and computer readable storage medium

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8112368B2 (en) * 2008-03-10 2012-02-07 The Boeing Company Method, apparatus and computer program product for predicting a fault utilizing multi-resolution classifier fusion
US9501693B2 (en) * 2013-10-09 2016-11-22 Honda Motor Co., Ltd. Real-time multiclass driver action recognition using random forests
US11315045B2 (en) * 2016-12-29 2022-04-26 Intel Corporation Entropy-based weighting in random forest models


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Ye Jianfang; Liu Qiang; Li Xueying. Research on optimization of a random-forest-based fatigue driving detection and recognition model. Automobile Applied Technology, 2018, No. 13, full text. *
Li Linchao; Qu Xu; Zhang Jian; Wang Yonggang; Li Hanchu; Ran Bin. Feature-level-fusion-based repair method for heterogeneous expressway traffic flow data. Journal of Southeast University (Natural Science Edition), 2018, No. 5, full text. *


Similar Documents

Publication Publication Date Title
CN111310840B (en) Data fusion processing method, device, equipment and storage medium
CN111401208B (en) Obstacle detection method and device, electronic equipment and storage medium
US20210312209A1 (en) Vehicle information detection method, electronic device and storage medium
CN110488234B (en) External parameter calibration method, device, equipment and medium for vehicle-mounted millimeter wave radar
CN111324115B (en) Obstacle position detection fusion method, obstacle position detection fusion device, electronic equipment and storage medium
CN111563450B (en) Data processing method, device, equipment and storage medium
KR20210052409A (en) Lane line determination method and apparatus, lane line positioning accuracy evaluation method and apparatus, device, and program
CN111368760B (en) Obstacle detection method and device, electronic equipment and storage medium
CN111753765A (en) Detection method, device and equipment of sensing equipment and storage medium
CN112270669B (en) Human body 3D key point detection method, model training method and related devices
CN111324945B (en) Sensor scheme determining method, device, equipment and storage medium
CN111638528B (en) Positioning method, positioning device, electronic equipment and storage medium
CN111401251B (en) Lane line extraction method, lane line extraction device, electronic equipment and computer readable storage medium
CN111079079B (en) Data correction method, device, electronic equipment and computer readable storage medium
CN111881908B (en) Target detection model correction method, detection device, equipment and medium
CN110660219A (en) Parking lot parking prediction method and device
CN111402326B (en) Obstacle detection method, obstacle detection device, unmanned vehicle and storage medium
CN112147632A (en) Method, device, equipment and medium for testing vehicle-mounted laser radar perception algorithm
CN111612852A (en) Method and apparatus for verifying camera parameters
CN111666876A (en) Method and device for detecting obstacle, electronic equipment and road side equipment
CN111539347B (en) Method and device for detecting target
CN110866504B (en) Method, device and equipment for acquiring annotation data
CN111767843A (en) Three-dimensional position prediction method, device, equipment and storage medium
CN111462072B (en) Point cloud picture quality detection method and device and electronic equipment
CN112859829A (en) Vehicle control method and device, electronic equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant