CN113947158A - Data fusion method and device for intelligent vehicle - Google Patents

Data fusion method and device for intelligent vehicle

Info

Publication number
CN113947158A
CN113947158A (application CN202111245591.5A)
Authority
CN
China
Prior art keywords
data
fusion
mth
moment
mth moment
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111245591.5A
Other languages
Chinese (zh)
Inventor
尚进
丛炜
刘森玉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guoqi Intelligent Control Beijing Technology Co Ltd
Original Assignee
Guoqi Intelligent Control Beijing Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guoqi Intelligent Control Beijing Technology Co Ltd filed Critical Guoqi Intelligent Control Beijing Technology Co Ltd
Priority to CN202111245591.5A
Publication of CN113947158A
Legal status: Pending


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/25 Fusion techniques
    • G06F 18/253 Fusion techniques of extracted features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06F 18/2148 Generating training patterns; Bootstrap methods, e.g. bagging or boosting, characterised by the process organisation or structure, e.g. boosting cascade

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention discloses a data fusion method and device for an intelligent vehicle, wherein the method comprises the following steps: acquiring initial data of the Mth moment based on a plurality of sensing devices of the intelligent vehicle, wherein M is an integer greater than 1; fusing the initial data of the Mth moment to acquire fused data of the Mth moment; processing the fused data based on a Kalman filtering model to obtain an estimated value x_M of the motion state at the Mth moment; and processing the estimated value x_M based on an XGBoost model to obtain a solution of the motion state of the target. According to the technical scheme, data acquired by the vehicle-mounted environment sensing sensors and the V2X equipment are fused, the data estimation is optimized based on the Kalman filtering model and the XGBoost model, and the data are fully fused, which improves the precision of data fusion and solves the prior-art problems that the fusion precision of estimated values is not high and the data fusion is not sufficient.

Description

Data fusion method and device for intelligent vehicle
Technical Field
The invention belongs to the technical field of automatic driving, and particularly relates to a data fusion method and device for an intelligent vehicle.
Background
Due to the diversity and complexity of road and weather environments and the motion characteristics of intelligent driving vehicles, no perfect sensing device exists; imperfect devices are the normal state, and no data fusion algorithm is perfect and exhaustive. Moreover, different driving tasks require different kinds of sensing devices, and the most complete and most expensive configuration is not necessary to complete a given driving task. Because the vehicle-mounted sensing devices carried for autonomous driving are diverse, a single vehicle-mounted sensing device has a small sensing range and a limited sensing capability for the environment; if the data of multi-source sensing devices are not fused efficiently and accurately, misjudgment easily occurs, which brings great potential safety hazards to the driver. Therefore, guided by task requirements, the data acquired by the sensing devices carried by the autonomous driving system should be fully utilized, and the sensing data of the vehicle-mounted sensing devices and the exchange data of V2X cooperative communication should complement each other to realize the integration of autonomous driving and internet-connected driving, especially the full fusion of data; only then can the development target of the intelligent internet-connected automobile be achieved.
However, in the prior art, the sensors in an automatic driving system work independently: when a vehicle is used under different environmental conditions, the different sensor data are sensed in isolation. As a result, the sensing results output under different environments are not stable and reliable enough, the self-adaptation of environmental sensing is poor, and the robustness of the system is poor.
Disclosure of Invention
The present invention is directed to solving, at least to some extent, one of the technical problems in the related art. Therefore, the invention aims to provide a data fusion method and device for an intelligent vehicle.
In order to solve the above technical problem, an embodiment of the present invention provides the following technical solutions:
in a first aspect, an embodiment of the present application provides a data fusion method for a smart vehicle, including:
acquiring initial data of the Mth moment based on a plurality of sensing devices of the intelligent vehicle; wherein M is an integer greater than 1;
fusing the initial data at the Mth moment to acquire fused data at the Mth moment;
processing the fusion data based on a Kalman filtering model to obtain an estimated value x_M of the motion state at the Mth moment;
and processing the estimated value x_M based on an XGBoost model to obtain a solution of the motion state of the target.
Specifically, the intelligent vehicle is provided with a plurality of sensing devices, and senses the surrounding environment of the intelligent vehicle and the motion state of the intelligent vehicle based on the plurality of sensing devices, acquires sensing data, and reports the acquired sensing data.
The sensing devices may specifically include sensors and V2X devices, where the sensors are used to detect the current state of the external environment of the intelligent vehicle, and the V2X device communication data are used to determine conditions such as sensing detection delay, which can indicate the reliability of each sensing device.
The acquired sensing data can be divided into sensing data at a plurality of moments according to time, and subsequent processing is performed on the sensing data at each moment.
According to the embodiment of the application, data acquired by the vehicle-mounted environment sensing sensors and the V2X devices are fused, the data estimation is optimized based on the Kalman filtering model and the XGBoost model, and the data are fully fused, which improves the precision of data fusion and solves the prior-art problems that the fusion precision of estimated values is not high and the data fusion is not sufficient.
In one possible implementation manner, the acquiring initial data of the Mth moment based on a plurality of sensing devices of the intelligent vehicle includes:
extracting the characteristics of the initial data to obtain characteristic data;
and detecting the characteristic data, determining the equipment type of the characteristic data, and acquiring sensor data and V2X equipment data.
In a possible implementation manner, the performing feature extraction on the initial data to obtain feature data includes:
acquiring first image information of a current frame of a video based on the initial data; wherein the first image information comprises a first timestamp and first location information;
acquiring the Nth image information of the current frame; wherein the Nth image information comprises an Nth timestamp and Nth position information of the current frame; wherein N is an integer greater than 1;
comparing the Nth timestamp with the first timestamp to obtain a first comparison result;
comparing the Nth position information with the first position information to obtain a second comparison result;
correcting the image information of the current Nth frame based on the first comparison result and the second comparison result to obtain correction data;
and acquiring the characteristic data based on the correction data.
In the embodiment of the application, for the situation that the sensing devices have a certain time delay when collecting data information, especially video data information, which makes the fused data inaccurate, the timestamp of each frame is obtained while the video information is obtained, the position information of each frame is obtained through the positioning module, and the data information obtained by the sensing devices is comprehensively checked. This solves the problem of inaccurate data caused by video delay and improves the precision of the fused data.
In a possible implementation manner, the fusing the initial data of the Mth moment to obtain fused data of the Mth moment includes:
fusing the sensor data at the Mth moment to acquire the sensor fusion data at the Mth moment;
fusing the V2X device data at the Mth moment to obtain V2X device fused data at the Mth moment;
and fusing the sensor fusion data at the Mth moment and the V2X device fusion data at the Mth moment to acquire the fusion data at the Mth moment.
In a possible implementation manner, the processing the fusion data based on the Kalman filtering model to obtain the estimated value x_M of the motion state at the Mth moment includes:
obtaining an initial estimated value w_M of the motion state;
and obtaining the estimated value x_M based on the initial estimated value w_M and the fusion data.
In one possible implementation, w_M is calculated by the following formula:
w_M = F_M * w_(M-1) + B_M * u_M + a_M
where u_M is the motion vector at the Mth moment, F_M is the state transition matrix at the Mth moment, B_M is the control input value at the Mth moment, and a_M is the noise information at the Mth moment.
In one possible implementation, x_M is obtained by the following formula:
x_M = (z - y_M) * H_M * P_M
where z is the actual measurement parameter, (z - y_M) is the data deviation value, H_M is the projection of the motion vector at the Mth moment, and P_M is the updated state transition matrix at the Mth moment.
According to the embodiment of the application, noise optimization is performed on the result obtained by the Kalman filtering model, which improves the precision of data fusion of the Kalman filtering model.
In a possible implementation manner, the processing the estimated value x_M based on the XGBoost model to obtain a solution of the motion state of the target includes:
establishing a Boosted Tree based on the x_M;
acquiring a target function based on the Boosted Tree;
processing the target function to obtain a result function;
and acquiring a solution of the motion state of the target based on the result function.
The method and the device solve the problem that a maximum likelihood estimation algorithm needs an analytic or empirical model of the multiple sensors to provide the prior distribution and to calculate the likelihood equation, while ignoring the variance of the distribution, so that it is influenced by the uncertainty of the multi-sensor data and introduces bias.
In a second aspect, an embodiment of the present application provides a data fusion apparatus for a smart vehicle, including:
the acquisition module is used for acquiring initial data of the Mth moment based on a plurality of sensing devices of the intelligent vehicle; wherein M is an integer greater than 1;
the fusion module is used for fusing the initial data at the Mth moment to acquire fused data at the Mth moment;
a processing module for processing the fusion data based on a Kalman filtering model to obtain an estimated value x of the motion state at the Mth momentM
An optimization module for estimating the value x based on the XGboost modelMAnd processing to obtain a solution of the motion state of the target.
In a third aspect, embodiments of the present application provide a computer storage medium storing a computer program that, when executed by a processor, implements the method described above.
In a fourth aspect, embodiments of the present application provide a computer program comprising instructions which, when executed by a computer, cause the computer to perform the method as described above.
In a fifth aspect, an embodiment of the present application provides an intelligent vehicle, including a processor, a memory and a communication interface, wherein the memory is used for storing program codes, and the processor is used for calling the program codes to execute the method of the first aspect.
In a sixth aspect, an embodiment of the present application provides a chip system, where the chip system includes a processor, configured to support a service device to implement the functions referred to in the first aspect, for example, to generate or process information referred to in the method of the first aspect. In one possible design, the system-on-chip further includes a memory for storing program instructions and data necessary for the data transmission device. The chip system may be constituted by a chip, or may include a chip and other discrete devices.
Drawings
Fig. 1 is a schematic flowchart of a data fusion method for an intelligent vehicle according to an embodiment of the present disclosure;
FIG. 2 is a schematic flow chart of feature fusion provided by an embodiment of the present application;
FIG. 3 is a schematic flow chart of image information correction provided by an embodiment of the present application;
FIG. 4 is a schematic flow chart of Kalman filtering fusion provided by an embodiment of the present application;
fig. 5 is a schematic structural diagram of a data fusion device for an intelligent vehicle according to an embodiment of the present application.
Detailed Description
The embodiments of the present application will be described below with reference to the drawings.
The terms "first" and "second," and the like in the description and claims of this application and in the drawings are used for distinguishing between different objects and not for describing a particular order. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
As used in this specification, the terms "component," "module," "system," and the like are intended to refer to a computer-related entity, either hardware, firmware, a combination of hardware and software, or software in execution. For example, a component may be, but is not limited to being, a process running on a processor, an object, an executable, a thread of execution, a program, and/or a computer. By way of illustration, both an application running on a computing device and the computing device can be a component. One or more components can reside within a process and/or thread of execution and a component can be localized on one computer and/or distributed between 2 or more computers. In addition, these components can execute from various computer readable media having various data structures stored thereon. The components may communicate by way of local and/or remote processes such as in accordance with a signal having one or more data packets (e.g., data from two components interacting with another component in a local system, distributed system, and/or across a network such as the internet with other systems by way of the signal).
First, some terms in the present application are explained so as to be easily understood by those skilled in the art.
(1) XGBoost: eXtreme Gradient Boosting, a tool for large-scale parallel boosted trees; it is currently among the fastest and best open-source boosted-tree toolkits, more than 10 times faster than common toolkits.
(2) V2X: the general name of V2V (Vehicle-to-Vehicle communication), V2I (Vehicle-to-Infrastructure communication), V2P (Vehicle-to-Pedestrian communication) and the like. By carrying advanced vehicle-mounted sensors, controllers, actuators and other devices and integrating modern communication and network technology, the exchange and sharing of intelligent information between the vehicle and X (people, vehicles, roads, the background and the like) is realized; a series of traffic data such as real-time road conditions, roads and pedestrians is obtained, bringing environment signals beyond the visual range, and the vehicle can meanwhile interact with surrounding infrastructure such as traffic lights and road signs. V2X provides functions of complex environment sensing, intelligent decision making, cooperative control and execution; it offers a safer, more energy-saving, more environment-friendly and more comfortable traveling mode, and is an important application of the Internet of Things in the vehicle driving scenario.
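Purely to make the role of the exchange data concrete, a hypothetical V2X message record is sketched below; the field names and the delay computation are assumptions introduced here for illustration, not a standardized V2X message format.

```python
from dataclasses import dataclass

@dataclass
class V2XMessage:
    sender_id: str      # vehicle, roadside unit, or pedestrian device
    sent_at: float      # transmission timestamp (s); together with
    received_at: float  # ...the reception time, measures the delay
    position: tuple     # reported position of the sender
    speed: float        # reported speed (m/s)
    signal_phase: str   # e.g. traffic-light state, if sent by infrastructure

    @property
    def delay(self) -> float:
        """Communication delay, usable to weight this source's reliability."""
        return self.received_at - self.sent_at
```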
Referring to fig. 1, fig. 1 is a schematic flow chart of a data fusion method for an intelligent vehicle according to an embodiment of the present application, including:
step S1: acquiring initial data of the Mth moment based on a plurality of sensing devices of the intelligent vehicle; wherein M is an integer greater than 1;
specifically, referring to fig. 2, the intelligent vehicle is provided with a plurality of sensing devices, and senses the surrounding environment of the intelligent vehicle and the motion state of the intelligent vehicle based on the plurality of sensing devices, acquires sensing data, and reports the acquired sensing data.
The sensing devices (a plurality of them, for example a first sensing device, a second sensing device, a third sensing device, ..., an Nth sensing device) may specifically include a plurality of sensors and V2X devices, where the sensors are used to detect the current state of the external environment of the intelligent vehicle, and the V2X device communication data are used to judge conditions such as sensing detection delay, which can indicate the reliability of each sensing device.
The acquired sensing data can be divided into sensing data at a plurality of moments according to time, and subsequent processing is performed on the sensing data at each moment.
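For illustration, a minimal sketch of this per-moment batching follows; `SensorReading`, `split_by_moment` and the 0.1 s window are names and assumptions introduced here, not taken from the disclosure.

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class SensorReading:
    device_id: str    # which sensing device reported the value
    timestamp: float  # acquisition time in seconds
    payload: dict     # raw measurement (position, speed, ...)

def split_by_moment(readings, period=0.1):
    """Group raw readings into per-moment batches.

    Readings whose timestamps fall into the same period-wide window
    are treated as belonging to one moment M, so that the fusion
    steps below can run once per moment.
    """
    batches = defaultdict(list)
    for r in readings:
        moment = int(r.timestamp / period)  # index of the time window
        batches[moment].append(r)
    return dict(sorted(batches.items()))
```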
Optionally, the acquiring initial data of the Mth moment based on the plurality of sensing devices of the intelligent vehicle includes:
extracting the characteristics of the initial data to obtain characteristic data; and detecting the characteristic data, determining the equipment type of the characteristic data, and acquiring sensor data and V2X equipment data.
Specifically, the feature data are detected, and the device type corresponding to the feature data is determined by detecting, based on the feature data, the data related to the device type.
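A minimal sketch of this routing step, assuming the feature records carry a device-type tag (the `source` field and function name below are hypothetical, introduced only for illustration):

```python
def split_by_device_type(features):
    """Route feature records to sensor data or V2X device data.

    Assumes each feature record is a dict carrying a 'source'
    field that identifies the reporting device type.
    """
    sensor_data, v2x_data = [], []
    for f in features:
        if f.get("source") == "v2x":
            v2x_data.append(f)
        else:                      # on-board environment sensors
            sensor_data.append(f)
    return sensor_data, v2x_data
```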
Referring to fig. 3, optionally, the performing feature extraction on the initial data to obtain feature data includes:
acquiring first image information of a current frame of a video based on the initial data; wherein the first image information comprises a first timestamp and first location information; acquiring the Nth image information of the current frame; wherein the Nth image information comprises an Nth timestamp and Nth position information of the current frame; wherein N is an integer greater than 1; comparing the Nth timestamp with the first timestamp to obtain a first comparison result; comparing the Nth position information with the first position information to obtain a second comparison result; correcting the image information of the current Nth frame based on the first comparison result and the second comparison result to obtain correction data; and acquiring the characteristic data based on the correction data.
In the embodiment of the application, for the situation that the sensing devices have a certain time delay when collecting data information, especially video data information, which makes the fused data inaccurate, the timestamp of each frame is obtained while the video information is obtained, the position information of each frame is obtained through the positioning module, and the data information obtained by the sensing devices is comprehensively checked. This solves the problem of inaccurate data caused by video delay and improves the precision of the fused data.
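As a rough sketch of this check-and-correct step: the Nth timestamp is compared with what the first timestamp predicts for frame N, and the excess is treated as delay and compensated using the positioning module's velocity. The frame period, the field names and the constant-velocity compensation are assumptions made here; the disclosure does not fix a particular correction formula.

```python
import numpy as np

def correct_frame(n, first_ts, nth_ts, nth_pos, velocity, frame_period=1/30):
    """Correct the Nth frame's position for video latency.

    first_ts:  timestamp of the first frame (comparison baseline)
    nth_ts:    timestamp reported with the Nth frame
    nth_pos:   position reported with the Nth frame
    velocity:  vehicle velocity from the positioning module (m/s)
    """
    expected_ts = first_ts + (n - 1) * frame_period  # where frame N should sit
    latency = nth_ts - expected_ts                   # first comparison result
    # Assumed compensation: shift the reported position forward by the
    # distance travelled during the measured latency.
    corrected_pos = np.asarray(nth_pos) + np.asarray(velocity) * latency
    return latency, corrected_pos
```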
Step S2: fusing the initial data of the Mth moment to obtain fused data z_M of the Mth moment.
Specifically, the fusing the initial data of the Mth moment to obtain fused data of the Mth moment includes:
fusing the sensor data of the Mth moment to acquire sensor fusion data of the Mth moment; fusing the V2X device data of the Mth moment to acquire V2X device fusion data of the Mth moment; and fusing the sensor fusion data of the Mth moment with the V2X device fusion data of the Mth moment to acquire the fusion data of the Mth moment.
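The two-level fusion can be sketched as follows. The disclosure leaves the per-level fusion operator open, so a simple inverse-variance weighted average is assumed here purely for illustration, and the record fields are hypothetical.

```python
import numpy as np

def weighted_fuse(estimates, variances):
    """Fuse several estimates of the same quantity by inverse-variance
    weighting (an assumed operator; the patent does not specify one)."""
    w = 1.0 / np.asarray(variances)
    return np.average(np.asarray(estimates), axis=0, weights=w)

def fuse_moment(sensor_data, v2x_data):
    # Level 1: fuse within each device family at moment M.
    sensor_fused = weighted_fuse([d["value"] for d in sensor_data],
                                 [d["var"] for d in sensor_data])
    v2x_fused = weighted_fuse([d["value"] for d in v2x_data],
                              [d["var"] for d in v2x_data])
    # Level 2: fuse the two family-level results into z_M.
    z_M = weighted_fuse(
        [sensor_fused, v2x_fused],
        [np.mean([d["var"] for d in sensor_data]),
         np.mean([d["var"] for d in v2x_data])])
    return z_M
```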
Step S3: processing the fusion data based on a Kalman filtering model to obtain an estimated value x_M of the motion state at the Mth moment.
Specifically, referring to fig. 4, Kalman filtering is an efficient recursive filter model that can estimate the state of a dynamic system from a series of noisy measurements; it updates the estimate of the state variables by using the estimated value at the previous moment and the observed value at the current moment, so as to obtain the most reliable optimal solution of the motion state probability.
Data acquisition can be performed based on the first sensing device, the second sensing device and the third sensing device; the acquired data are calculated to obtain the estimated value of the motion state, and then the optimal solution of the motion state probability is obtained.
Optionally, the processing the fusion data based on the Kalman filtering model to obtain the estimated value x_M of the motion state at the Mth moment includes:
obtaining an initial estimated value w_M of the motion state;
and obtaining the estimated value x_M based on the initial estimated value w_M and the fusion data.
Optionally, w_M is obtained by the following formula:
w_M = F_M * w_(M-1) + B_M * u_M + a_M
where u_M is the motion vector at the Mth moment, F_M is the state transition matrix at the Mth moment, B_M is the control input value at the Mth moment, and a_M is the noise information at the Mth moment.
Specifically, after w_M is acquired based on Kalman filtering model prediction, w_M can be optimized based on the following formula to obtain x_M.
Optionally, x_M is obtained by the following formula:
x_M = (z - y_M) * H_M * P_M
where z is the actual measurement parameter, (z - y_M) is the data deviation value, H_M is the projection of the motion vector at the Mth moment, and P_M is the updated state transition matrix at the Mth moment.
The state transition matrix F_M is corrected and updated in real time through the actually measured parameter z, and the updated state transition matrix P_M is generated by derivation; the data deviation value is calculated as y_M = z - w_M, the projection of the motion vector is H_M, and x_M = (z - y_M) * H_M * P_M is then calculated. This reduces the interference of the noise information a_M, optimizes w_M, and further improves the precision of the Kalman filtering model.
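Reading the two formulas as a predict/update pair, a minimal numpy sketch follows. The array shapes, identity defaults and toy numbers are assumptions made here for illustration; the disclosure gives only the two equations themselves.

```python
import numpy as np

def predict(w_prev, F, B, u, a):
    """Prediction: w_M = F_M * w_(M-1) + B_M * u_M + a_M."""
    return F @ w_prev + B @ u + a

def update(w_pred, z, H, P):
    """Update as written in the disclosure:
    y_M = z - w_M (data deviation), then x_M = (z - y_M) * H_M * P_M.
    Note that (z - y_M) algebraically equals w_M, so this update
    rescales the prediction by the projection H_M and the updated
    matrix P_M."""
    y = z - w_pred                 # data deviation value y_M
    return (z - y) * H * P         # element-wise, per the patent's notation

# Toy 1-D usage (all quantities scalar-like for simplicity):
w_M = predict(w_prev=np.array([10.0]), F=np.eye(1), B=np.eye(1),
              u=np.array([0.5]), a=np.array([0.01]))
x_M = update(w_M, z=np.array([10.6]), H=np.array([1.0]), P=np.array([0.98]))
```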
Step S4: processing the estimated value x_M based on the XGBoost model to obtain a solution of the motion state of the target. Optionally, the processing the estimated value x_M based on the XGBoost model to obtain a solution of the motion state of the target includes:
establishing a Boosted Tree based on the x_M;
acquiring a target function based on the Boosted Tree;
processing the target function to obtain a result function;
and acquiring a solution of the motion state of the target based on the result function.
Specifically, the target function of the XGBoost model is composed of two parts, namely training loss and regularization, and its expression is as follows:

obj = Σ_i l(y_i, ŷ_i) + Σ_k Ω(f_k)

where Σ_i l(y_i, ŷ_i) is the training loss, Σ_k Ω(f_k) is the regularization term, x_i is a sample value, ŷ_i is the predicted value of sample x_i, f_k is the complexity function of the expanded Boosted Tree, and k is the state value (x_M) input to the target function.
As can be seen from the expression, f_k needs to be determined to obtain the result function; in the process of obtaining f_k, that is, of optimizing x_M, the result function further optimizes x_M and improves the precision of data fusion.
Specifically, a second-order Taylor expansion is performed on f_k:

f_k = ω_(q(x))

where f_k is a regression tree, ω_(q(x)) is the score of leaf node q, and q(x) is the leaf node index; any sample x_M eventually falls on a leaf node of the tree and takes the value ω_(q(x)).

The result function (the optimal target function) is obtained based on the processed f_k, that is, x_M is optimized.
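For orientation only, a minimal sketch of refining the Kalman estimates with a boosted-tree regressor via the open-source xgboost package follows. The training pairs (Kalman estimate x_M as feature, reference motion state as label) and all numbers are assumptions of this sketch; the disclosure describes the objective and the trees, not a training-data pipeline.

```python
import numpy as np
import xgboost as xgb

# Assumed training data: x_M estimates from the Kalman step as
# features, and reference (ground-truth) motion states as labels.
X_train = np.array([[10.2], [11.1], [12.3], [13.0], [14.2]])
y_train = np.array([10.0, 11.0, 12.0, 13.0, 14.0])

# Squared-error training loss plus tree-complexity regularization,
# matching the two-part objective described above.
model = xgb.XGBRegressor(
    objective="reg:squarederror",
    n_estimators=50,   # number of boosted trees f_k
    max_depth=3,       # limits the leaf structure q(x)
    reg_lambda=1.0,    # L2 penalty inside the Omega term
)
model.fit(X_train, y_train)

# Refine a new Kalman estimate x_M into the final motion-state solution.
x_M = np.array([[12.8]])
solution = model.predict(x_M)
```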
According to the embodiment of the application, data acquired by the vehicle-mounted environment sensing sensors and the V2X devices are fused, the data estimation is optimized based on the Kalman filtering model and the XGBoost model, and the data are finally fully fused, which improves the precision of data fusion and solves the prior-art problems that the fusion precision of estimated values is not high and the data fusion is not sufficient.
In addition, the embodiment of the application solves the problem that a maximum likelihood estimation algorithm needs an analytic or empirical model of the multiple sensors to provide the prior distribution and to calculate the likelihood equation, while the variance of the distribution is ignored, so that bias is introduced under the influence of the uncertainty of the multi-sensor data.
Referring to fig. 5, an embodiment of the present application provides a data fusion apparatus 500 for an intelligent vehicle, including:
an obtaining module 501, configured to obtain initial data at an mth time based on multiple sensing devices of an intelligent vehicle; wherein M is an integer greater than 1;
a fusion module 502, configured to fuse the initial data at the mth moment to obtain fusion data at the mth moment;
a processing module 503, configured to process the fusion data based on a kalman filter model to obtain an estimated value x of the motion state at the mth timeM
An optimization module 504 configured to perform XGboost model-based XGboost on the estimated value xMAnd processing to obtain the value of the target motion state.
In one possible implementation manner, the acquiring initial data of the Mth moment based on the plurality of sensing devices of the intelligent vehicle includes:
the characteristic extraction module is used for extracting the characteristics of the initial data to obtain characteristic data;
and the detection module is used for detecting the characteristic data, determining the equipment type of the characteristic data, and acquiring sensor data and V2X equipment data.
In a possible implementation manner, the performing feature extraction on the initial data to obtain feature data includes:
the image module is used for acquiring first image information of a current video frame based on the initial data; wherein the first image information comprises a first timestamp and first location information;
the image module is also used for acquiring the Nth image information of the current frame; wherein the Nth image information comprises an Nth timestamp and Nth position information of the current frame; wherein N is an integer greater than 1;
the comparison module is used for comparing the Nth timestamp with the first timestamp to obtain a first comparison result;
the comparison module is further used for comparing the Nth position information with the first position information to obtain a second comparison result;
the correction module is used for correcting the image information of the current Nth frame based on the first comparison result and the second comparison result to obtain correction data;
and the characteristic extraction module is also used for acquiring the characteristic data based on the corrected data.
In a possible implementation manner, the fusing the initial data of the Mth moment to obtain fused data of the Mth moment includes:
the first fusion module is used for fusing the sensor data at the Mth moment to acquire the sensor fusion data at the Mth moment;
the second fusion module is used for fusing the V2X device data at the Mth moment to obtain the V2X device fusion data at the Mth moment;
and the third fusion module is used for fusing the sensor fusion data at the Mth moment and the V2X device fusion data at the Mth moment to acquire the fusion data at the Mth moment.
In a possible implementation manner, the processing the fusion data based on the Kalman filtering model to obtain the estimated value x_M of the motion state at the Mth moment includes:
a first estimation module for obtaining an initial estimated value w_M of the motion state;
and a second estimation module for obtaining the estimated value x_M based on the initial estimated value w_M and the fusion data.
In one possible implementation, w_M is obtained by the following formula:
w_M = F_M * w_(M-1) + B_M * u_M + a_M
where u_M is the motion vector at the Mth moment, F_M is the state transition matrix at the Mth moment, B_M is the control input value at the Mth moment, and a_M is the noise information at the Mth moment.
In one possible implementation, x_M is obtained by the following formula:
x_M = (z - y_M) * H_M * P_M
where z is the actual measurement parameter, (z - y_M) is the data deviation value, H_M is the projection of the motion vector at the Mth moment, and P_M is the updated state transition matrix at the Mth moment.
In one possible implementation, the processing the estimated value x_M based on the XGBoost model to obtain a solution of the motion state of the target includes:
a building module for establishing a Boosted Tree based on the x_M;
the function module is used for acquiring a target function based on the Boosted Tree;
the result module is used for processing the target function to obtain a result function;
and the result module is also used for acquiring a solution of the motion state of the target based on the result function.
Embodiments of the present application provide a computer storage medium storing a computer program which, when executed by a processor, implements the method described above.
Embodiments of the present application provide a computer program comprising instructions which, when executed by a computer, cause the computer to perform a method as described above.
An embodiment of the present application provides an intelligent vehicle, comprising a processor, a memory and a communication interface, wherein the memory is used for storing program codes, and the processor is used for calling the program codes to execute the method of the first aspect.
Embodiments of the present application provide a chip system, which includes a processor, configured to enable a service device to implement the functions referred to in the above first aspect, for example, to generate or process information referred to in the above method of the first aspect. In one possible design, the system-on-chip further includes a memory for storing program instructions and data necessary for the data transmission device. The chip system may be constituted by a chip, or may include a chip and other discrete devices.
In addition, other configurations and functions of the apparatus according to the embodiments of the present application are known to those skilled in the art, and are not described herein for reducing redundancy.
It should be noted that the logic and/or steps represented in the flowcharts or otherwise described herein, such as an ordered listing of executable instructions that can be considered to implement logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium would include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CDROM). Additionally, the computer-readable medium could even be paper or another suitable medium upon which the program is printed, as the program can be electronically captured, via for instance optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
It should be understood that portions of the present application may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the various steps or methods may be implemented in software or firmware stored in memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, any one or combination of the following techniques, which are known in the art, may be used: a discrete logic circuit having a logic gate circuit for implementing a logic function on a data signal, an application specific integrated circuit having an appropriate combinational logic gate circuit, a Programmable Gate Array (PGA), a Field Programmable Gate Array (FPGA), or the like.
In the description herein, reference to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the application. In this specification, the schematic representations of the terms used above do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
In the description of the present application, it is to be understood that the terms "central," "longitudinal," "lateral," "length," "width," "thickness," "upper," "lower," "front," "rear," "left," "right," "vertical," "horizontal," "top," "bottom," "inner," "outer," "clockwise," "counterclockwise," "axial," "radial," "circumferential," and the like are used in the orientations and positional relationships indicated in the drawings for convenience in describing the present application and to simplify the description, and are not intended to indicate or imply that the referenced devices or elements must have a particular orientation, be constructed and operated in a particular orientation, and are therefore not to be considered limiting of the present application.
Furthermore, the terms "first", "second" and "first" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present application, "plurality" means at least two, e.g., two, three, etc., unless specifically limited otherwise.
In this application, unless expressly stated or limited otherwise, the terms "mounted," "connected," "secured," and the like are to be construed broadly and can include, for example, fixed connections, removable connections, or integral parts; can be mechanically or electrically connected; they may be directly connected or indirectly connected through intervening media, or they may be connected internally or in any other suitable relationship, unless expressly stated otherwise. The specific meaning of the above terms in the present application can be understood by those of ordinary skill in the art as appropriate.
In this application, unless expressly stated or limited otherwise, the first feature "on" or "under" the second feature may be directly contacting the first and second features or indirectly contacting the first and second features through intervening media. Also, a first feature "on," "over," and "above" a second feature may be directly or diagonally above the second feature, or may simply indicate that the first feature is at a higher level than the second feature. A first feature being "under," "below," and "beneath" a second feature may be directly under or obliquely under the first feature, or may simply mean that the first feature is at a lesser elevation than the second feature.
Although embodiments of the present application have been shown and described above, it is understood that the above embodiments are exemplary and should not be construed as limiting the present application, and that variations, modifications, substitutions and alterations may be made to the above embodiments by those of ordinary skill in the art within the scope of the present application.

Claims (11)

1. A data fusion method for a smart vehicle, comprising:
acquiring initial data of the Mth moment based on a plurality of sensing devices of the intelligent vehicle; wherein M is an integer greater than 1;
fusing the initial data at the Mth moment to acquire fused data at the Mth moment;
processing the fusion data based on a Kalman filtering model to obtain an estimated value x_M of the motion state at the Mth moment;
and processing the estimated value x_M based on an XGBoost model to obtain a solution of the motion state of the target.
2. The method according to claim 1, wherein the acquiring initial data of the Mth moment based on the plurality of sensing devices of the intelligent vehicle comprises:
extracting the characteristics of the initial data to obtain characteristic data;
and detecting the characteristic data, determining the equipment type of the characteristic data, and acquiring sensor data and V2X equipment data.
3. The method of claim 2, wherein the performing feature extraction on the initial data to obtain feature data comprises:
acquiring first image information of a current frame of a video based on the initial data; wherein the first image information comprises a first timestamp and first location information;
acquiring the Nth image information of the current frame; wherein the Nth image information comprises an Nth timestamp and Nth position information of the current frame; wherein N is an integer greater than 1;
comparing the Nth timestamp with the first timestamp to obtain a first comparison result;
comparing the Nth position information with the first position information to obtain a second comparison result;
correcting the image information of the current Nth frame based on the first comparison result and the second comparison result to obtain correction data;
and acquiring the characteristic data based on the correction data.
4. The method according to claim 2, wherein the fusing the initial data of the Mth moment to obtain fused data of the Mth moment comprises:
fusing the sensor data at the Mth moment to acquire the sensor fusion data at the Mth moment;
fusing the V2X device data at the Mth moment to obtain V2X device fused data at the Mth moment;
and fusing the sensor fusion data at the Mth moment and the V2X device fusion data at the Mth moment to acquire the fusion data at the Mth moment.
5. The method according to claim 1, wherein the processing the fused data based on a Kalman filtering model to obtain the estimated value x_M of the motion state at the Mth moment comprises:
obtaining an initial estimated value w_M of the motion state;
and obtaining the estimated value x_M based on the initial estimated value w_M and the fused data.
6. The method of claim 5, wherein w_M is calculated by the following formula:
w_M = F_M * w_(M-1) + B_M * u_M + a_M
where u_M is the motion vector at the Mth moment, F_M is the state transition matrix at the Mth moment, B_M is the control input value at the Mth moment, and a_M is the noise information at the Mth moment.
7. The method of claim 6, wherein x_M is obtained by the following formula:
x_M = (z - y_M) * H_M * P_M
where z is the actual measurement parameter, (z - y_M) is the data deviation value, H_M is the projection of the motion vector at the Mth moment, and P_M is the updated state transition matrix at the Mth moment.
8. The method of claim 1, wherein the processing the estimated value x_M based on the XGBoost model to obtain a solution of the motion state of the target comprises:
establishing a Boosted Tree based on the x_M;
acquiring a target function based on the Boosted Tree;
processing the target function to obtain a result function;
and acquiring a solution of the motion state of the target based on the result function.
9. A data fusion device for smart vehicles, comprising:
the acquisition module is used for acquiring initial data of the Mth moment based on a plurality of sensing devices of the intelligent vehicle; wherein M is an integer greater than 1;
the fusion module is used for fusing the initial data at the Mth moment to acquire fused data at the Mth moment;
the processing module is used for processing the fusion data based on a Kalman filtering model to obtain an estimated value x_M of the motion state at the Mth moment;
and the optimization module is used for processing the estimated value x_M based on the XGBoost model to obtain a solution of the motion state of the target.
10. A computer storage medium, characterized in that it stores a computer program which, when executed by a processor, implements the method of any one of the preceding claims 1 to 8.
11. A computer program, characterized in that the computer program comprises instructions which, when executed by a computer, cause the computer to carry out the method according to any one of claims 1-8.
CN202111245591.5A 2021-10-26 2021-10-26 Data fusion method and device for intelligent vehicle Pending CN113947158A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111245591.5A CN113947158A (en) 2021-10-26 2021-10-26 Data fusion method and device for intelligent vehicle

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111245591.5A CN113947158A (en) 2021-10-26 2021-10-26 Data fusion method and device for intelligent vehicle

Publications (1)

Publication Number Publication Date
CN113947158A true CN113947158A (en) 2022-01-18

Family

ID=79332266

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111245591.5A Pending CN113947158A (en) 2021-10-26 2021-10-26 Data fusion method and device for intelligent vehicle

Country Status (1)

Country Link
CN (1) CN113947158A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114466043A (en) * 2022-01-25 2022-05-10 岚图汽车科技有限公司 Internet of vehicles system, intelligent driving control method and equipment thereof
CN114466043B (en) * 2022-01-25 2023-10-31 岚图汽车科技有限公司 Internet of vehicles system, intelligent driving control method and equipment thereof


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination