CN117473455B - Fusion method and device of multi-source positioning data and electronic equipment - Google Patents

Fusion method and device of multi-source positioning data and electronic equipment

Info

Publication number
CN117473455B
Authority
CN
China
Prior art keywords
sensor
positioning data
dimension
data
covariance matrix
Prior art date
Legal status
Active
Application number
CN202311816570.3A
Other languages
Chinese (zh)
Other versions
CN117473455A (en)
Inventor
张瑜
李蓝星
Current Assignee
Hozon New Energy Automobile Co Ltd
Original Assignee
Hozon New Energy Automobile Co Ltd
Priority date
Filing date
Publication date
Application filed by Hozon New Energy Automobile Co Ltd filed Critical Hozon New Energy Automobile Co Ltd
Priority to CN202311816570.3A
Publication of CN117473455A
Application granted
Publication of CN117473455B
Active legal status
Anticipated expiration


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • G06F18/251Fusion techniques of input or preprocessed data
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/005Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 with correlation of navigation data from several sources, e.g. map or contour matching
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/20Instruments for performing navigational calculations
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S19/00Satellite radio beacon positioning systems; Determining position, velocity or attitude using signals transmitted by such systems
    • G01S19/38Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system
    • G01S19/39Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system the satellite radio beacon positioning system transmitting time-stamped messages, e.g. GPS [Global Positioning System], GLONASS [Global Orbiting Navigation Satellite System] or GALILEO
    • G01S19/42Determining position
    • G01S19/45Determining position by combining measurements of signals from the satellite radio beacon positioning system with a supplementary measurement
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S19/00Satellite radio beacon positioning systems; Determining position, velocity or attitude using signals transmitted by such systems
    • G01S19/38Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system
    • G01S19/39Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system the satellite radio beacon positioning system transmitting time-stamped messages, e.g. GPS [Global Positioning System], GLONASS [Global Orbiting Navigation Satellite System] or GALILEO
    • G01S19/42Determining position
    • G01S19/45Determining position by combining measurements of signals from the satellite radio beacon positioning system with a supplementary measurement
    • G01S19/47Determining position by combining measurements of signals from the satellite radio beacon positioning system with a supplementary measurement the supplementary measurement being an inertial measurement, e.g. tightly coupled inertial
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/10Pre-processing; Data cleansing
    • G06F18/15Statistical pre-processing, e.g. techniques for normalisation or restoring missing data
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2123/00Data types
    • G06F2123/02Data types in the time domain, e.g. time-series data

Landscapes

  • Engineering & Computer Science (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Automation & Control Theory (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Probability & Statistics with Applications (AREA)
  • Position Fixing By Use Of Radio Waves (AREA)

Abstract

The invention provides a fusion method and device of multi-source positioning data and electronic equipment, wherein the fusion method comprises the following steps: calculating the standard deviation of the positioning data acquired by each sensor in each dimension; calculating the weight factor of each dimension from the standard deviation of each dimension, which comprises defining an objective function y in terms of the weight factors and the standard deviations of the sensors, with the weight factors of all sensors constrained to sum to 1, and obtaining the weight factor of the D-th dimension of the n-th sensor at the current timestamp t_p when the objective function y takes its minimum value; multiplying the weight factor of each dimension of each sensor by the first noise covariance matrix of that sensor to obtain a second noise covariance matrix of that sensor, wherein the first noise covariance matrix is the initially set noise covariance matrix; and filtering and fusing the second noise covariance matrix of each sensor with the positioning data acquired by each sensor. The invention can automatically adjust the weight factor of each item of positioning data according to the standard deviation of the acquired positioning data, thereby improving the precision and stability of the fused positioning output.

Description

Fusion method and device of multi-source positioning data and electronic equipment
Technical Field
The invention mainly relates to the technical field of positioning data measurement and processing, in particular to a multi-source positioning data fusion method, a multi-source positioning data fusion device and electronic equipment.
Background
With the rapid development of technology, automotive intelligent driving technology has been continuously advancing and improving, and many automobile manufacturers now offer vehicles with automated driving functions. Intelligent driving is an important research field of automobile intelligence and mainly covers visual perception, trajectory planning, motion control and other aspects. An intelligent driving system can monitor the environment around the vehicle in real time through sensors, including other vehicles, pedestrians and road conditions, so that potential dangers are warned of and avoided in advance and traffic accidents are reduced. Automotive intelligent driving is a current research hotspot, and intelligent driving algorithms are divided into perception, positioning, and planning-and-control algorithms.
In intelligent driving systems, the vehicle needs to know precisely its own position on the road and the position of its surroundings in order to make correct decisions and control actions. High-precision positioning technology and positioning algorithms are therefore an indispensable component of an intelligent driving system. In the positioning algorithm, because the performance of a single sensor is limited, it is difficult to cope with complex environments using only one sensor. To solve this problem, fusion positioning is generally performed with several different types of sensors, combining their respective advantages in a redundant sensor configuration. The current mainstream scheme uses integrated navigation (IMU+GPS), map matching, and odometry (wheel odometer, laser odometer, visual odometer) for multi-source sensor fusion positioning. In this process the weight of each sensor needs to be determined, i.e., when fusing multi-source positioning data, a reasonable fusion weight must be set for each positioning data source so that the fusion center can produce stable, smooth, high-precision output. However, because the sensors differ in performance, and the raw sensors and the communication module suffer from unstable performance, time delay and similar problems, the stability and continuity of the positioning data acquired by the sensors are not constant. Relying only on fixed weights therefore cannot always meet the dynamic requirements of positioning data fusion, and the accuracy and stability of positioning results obtained with fixed weights are relatively low.
Disclosure of Invention
The invention aims to solve the technical problem of providing a multi-source positioning data fusion method, a multi-source positioning data fusion device and electronic equipment, which can automatically adjust the weight factors of all positioning data along with the standard deviation of the acquired positioning data, and improve the precision and stability of positioning fusion output.
In order to solve the above technical problems, in a first aspect, the present invention provides a method for fusing multi-source positioning data, comprising: calculating the standard deviation of the positioning data acquired by each sensor in each dimension, wherein the positioning data has the following dimensions: triaxial position and triaxial angle; calculating the weight factor of each dimension according to the standard deviation of each dimension, which comprises: defining an objective function y = Σ_{n=1}^{L} (w_{n,D}·σ_{n,D})², subject to Σ_{n=1}^{L} w_{n,D} = 1, wherein L represents the number of sensors, w_{n,D} represents the weight factor of the D-th dimension of the n-th sensor, and σ_{n,D} represents the standard deviation of the D-th dimension of the n-th sensor, and obtaining, according to the formula w_{n,D} = (1/σ_{n,D}²) / Σ_{l=1}^{L} (1/σ_{l,D}²), the weight factor of the D-th dimension of the n-th sensor at the current timestamp t_p, at which the objective function y takes its minimum value; multiplying the weight factor of each dimension of each sensor by a first noise covariance matrix of that sensor to obtain a second noise covariance matrix of that sensor, wherein the first noise covariance matrix is an initially set noise covariance matrix; and filtering and fusing the second noise covariance matrix of each sensor with the positioning data acquired by each sensor.
Optionally, the sensors include: inertial sensor and GPS integrated navigation, map matching, and an odometer.
Optionally, calculating the standard deviation of the positioning data in each dimension includes: calculating, at the current timestamp t_p, the standard deviation of M consecutive frames of positioning data counted backwards from the current frame.
Optionally, calculating the standard deviation of the positioning data in each dimension includes: the number of consecutive positioning data frames is the same as the data output frequency value of the sensor.
Optionally, the method further comprises: before calculating standard deviation of positioning data acquired by each sensor in each dimension, judging whether the positioning data acquired by each sensor has disorder data, wherein the disorder data is positioning data with different time stamps from that of a fusion center.
Optionally, determining whether the positioning data acquired by each sensor has out-of-order data includes: comparing the timestamp of the positioning data acquired by each sensor with the current timestamp t_p, wherein positioning data whose timestamp is less than the current timestamp t_p is out-of-order data, and t_p is the current timestamp of the fusion center.
Optionally, the method further comprises: if the disordered data exist, calculating standard deviation of positioning data acquired by each sensor in each dimension, wherein the positioning data comprise the disordered data; and in the step of filtering and fusing the second noise covariance matrix of each sensor and the positioning data acquired by each sensor, the positioning data comprises modified disordered data.
Optionally, in the step of multiplying the weight factor of each dimension of each sensor by the first noise covariance matrix of each sensor, for the sensor with the disordered data, replacing the first noise covariance matrix with a third noise covariance matrix, where the third noise covariance matrix is a weighted average of the first noise covariance matrix of the disordered data and the first noise covariance matrix of the reference positioning data, and the reference positioning data is positioning data with the smallest standard deviation in the positioning data of each sensor.
Optionally, the modified out-of-order data includes data obtained by frame-filling the out-of-order data.
Optionally, frame-filling the out-of-order data includes: selecting reference positioning data, wherein the reference positioning data is the positioning data with the smallest standard deviation among the positioning data of the sensors; calculating the relative pose Z_r = Z_sc⁻¹ * Z_sp of the reference positioning data from timestamp t_c to the current timestamp t_p, wherein Z_sc is the pose of the reference positioning data at timestamp t_c, Z_sp is the pose of the reference positioning data at the current timestamp t_p, and t_c < t_p; and applying the relative pose Z_r to the out-of-order data, so that the pose of the modified out-of-order data is Z_gp = Z_gc * Z_r, wherein Z_gc is the pose of the out-of-order data at timestamp t_c.
Optionally, selecting the reference positioning data comprises: taking the positioning data acquired by the inertial sensor and GPS integrated navigation as the reference positioning data.
In a second aspect, the present invention provides a fusion apparatus for multi-source positioning data, comprising: a first calculation module, configured to calculate the standard deviation of the positioning data acquired by each sensor in each dimension, wherein the positioning data has the following dimensions: triaxial position and triaxial angle; a second calculation module, configured to calculate the weight factor of each dimension according to the standard deviation of each dimension, which comprises: defining an objective function y = Σ_{n=1}^{L} (w_{n,D}·σ_{n,D})², subject to Σ_{n=1}^{L} w_{n,D} = 1, wherein L represents the number of sensors, w_{n,D} represents the weight factor of the D-th dimension of the n-th sensor, and σ_{n,D} represents the standard deviation of the D-th dimension of the n-th sensor, and obtaining, according to the formula w_{n,D} = (1/σ_{n,D}²) / Σ_{l=1}^{L} (1/σ_{l,D}²), the weight factor of the D-th dimension of the n-th sensor at the current timestamp t_p, at which the objective function y takes its minimum value; a third calculation module, configured to multiply the weight factor of each dimension of each sensor by a first noise covariance matrix of that sensor to obtain a second noise covariance matrix of that sensor, wherein the first noise covariance matrix is an initially set noise covariance matrix; and a fusion module, configured to filter and fuse the second noise covariance matrix of each sensor with the positioning data acquired by each sensor.
In a third aspect, the present invention provides an electronic device, comprising: a processor and a memory storing a program or instructions executable on the processor, which when executed by the processor, implement the steps of the method of fusion of multi-source positioning data as described in the first aspect.
In a fourth aspect, the present invention provides a readable storage medium having stored thereon a program or instructions which, when executed by a processor, implement the steps of the method for fusing multi-source positioning data according to the first aspect.
Compared with the prior art, the invention has the following advantages: firstly, calculating standard deviation of positioning data acquired by each sensor in each dimension, then calculating a weight factor of each dimension according to the standard deviation of each dimension, multiplying the weight factor of each dimension of each sensor by a first noise covariance matrix of each sensor to obtain a second noise covariance matrix of each sensor, and finally, carrying out filtering fusion on the second noise covariance matrix of each sensor and the positioning data acquired by each sensor, thereby automatically adjusting the weight factor of each positioning data along with the standard deviation of the acquired positioning data and improving the precision and stability of positioning fusion output.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiments of the application and together with the description serve to explain the principles of the application. In the accompanying drawings:
FIG. 1 is a flow chart of a method for fusing multi-source positioning data according to an embodiment of the present invention;
FIG. 2 is a flow chart of a method for fusing three-source positioning data according to an embodiment of the present invention;
FIG. 3 is a flow chart of a method for fusing multi-source positioning data according to another embodiment of the present invention;
FIG. 4 is a flow chart of a method for fusion of three-source positioning data according to another embodiment of the present invention;
FIG. 5 is a schematic diagram illustrating out-of-order data processing according to another embodiment of the present invention;
FIG. 6 is a schematic diagram of a multi-source positioning data fusion device according to an embodiment of the present invention;
FIG. 7 is a schematic diagram of a multi-source positioning data fusion device according to another embodiment of the present invention;
fig. 8 is a schematic diagram of an electronic device according to an embodiment of the invention.
Detailed Description
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are used in the description of the embodiments will be briefly described below. It is apparent that the drawings in the following description are only some examples or embodiments of the present application, and it is obvious to those skilled in the art that the present application may be applied to other similar situations according to the drawings without inventive effort. Unless otherwise apparent from the context of the language or otherwise specified, like reference numerals in the figures refer to like structures or operations.
As used in this application and in the claims, the terms "a," "an," "the," and/or "the" are not specific to the singular, but may include the plural, unless the context clearly dictates otherwise. In general, the terms "comprises" and "comprising" merely indicate that the steps and elements are explicitly identified, and they do not constitute an exclusive list, as other steps or elements may be included in a method or apparatus.
The relative arrangement of the components and steps, numerical expressions and numerical values set forth in these embodiments do not limit the scope of the present application unless it is specifically stated otherwise. Meanwhile, it should be understood that the sizes of the respective parts shown in the drawings are not drawn in actual scale for convenience of description. Techniques, methods, and apparatus known to one of ordinary skill in the relevant art may not be discussed in detail, but should be considered part of the specification where appropriate. In all examples shown and discussed herein, any specific values should be construed as merely illustrative, and not a limitation. Thus, other examples of the exemplary embodiments may have different values. It should be noted that: like reference numerals and letters denote like items in the following figures, and thus once an item is defined in one figure, no further discussion thereof is necessary in subsequent figures.
In addition, the terms "first", "second", etc. are used to define the components, and are merely for convenience of distinguishing the corresponding components, and unless otherwise stated, the terms have no special meaning, and thus should not be construed as limiting the scope of the present application. Furthermore, although terms used in the present application are selected from publicly known and commonly used terms, some terms mentioned in the specification of the present application may be selected by the applicant at his or her discretion, the detailed meanings of which are described in relevant parts of the description herein. Furthermore, it is required that the present application be understood, not simply by the actual terms used but by the meaning of each term lying within.
Flowcharts are used in this application to describe the operations performed by systems according to embodiments of the present application. It should be understood that the preceding or following operations are not necessarily performed in order precisely. Rather, the various steps may be processed in reverse order or simultaneously. At the same time, other operations are added to or removed from these processes.
Embodiment one: fig. 1 is a flow chart of a method for fusing multi-source positioning data according to an embodiment of the present invention, referring to fig. 1, a method 100 includes: s110, calculating standard deviation of positioning data acquired by each sensor in each dimension, wherein the positioning data has the following dimensions: triaxial position and triaxial angle; s120, calculating a weight factor of each dimension according to the standard deviation of each dimension; s130, multiplying a weight factor of each dimension of each sensor by a first noise covariance matrix of each sensor to obtain a second noise covariance matrix of each sensor, wherein the first noise covariance matrix is an initially set noise covariance matrix; and S140, carrying out filtering fusion on the second noise covariance matrix of each sensor and the positioning data acquired by each sensor.
And step S120 includes: defining an objective function y = Σ_{n=1}^{L} (w_{n,D}·σ_{n,D})², subject to Σ_{n=1}^{L} w_{n,D} = 1, wherein L represents the number of sensors, w_{n,D} represents the weight factor of the D-th dimension of the n-th sensor, and σ_{n,D} represents the standard deviation of the D-th dimension of the n-th sensor; and obtaining, according to the formula w_{n,D} = (1/σ_{n,D}²) / Σ_{l=1}^{L} (1/σ_{l,D}²), the weight factor of the D-th dimension of the n-th sensor at the current timestamp t_p, at which the objective function y takes its minimum value.
Because the performance of a single sensor is limited or focused on one particular aspect, it is difficult to cope with complex environments using a single sensor alone. High-precision fusion positioning is therefore generally built from several sensors: each sensor first produces its own positioning result, and the positioning results are then transmitted to a fusion center for fusion, finally yielding a high-precision positioning result. Different sensors differ in accuracy, stability and reliability, and in order to obtain a more accurate positioning result, each sensor (or item of positioning data) needs to be weighted such that more stable, reliable and accurate positioning data has a greater influence on the final result. Unlike the traditional approach, in this embodiment the weight factor of each dimension is calculated from the standard deviation of that dimension, so that the weight factors of the positioning data are adjusted automatically with the standard deviation of the acquired positioning data, improving the precision and stability of the fused positioning output.
In one example, the sensors include inertial sensor and GPS integrated navigation (INS for short), map matching (MM for short), and odometer.
The inertial sensor and the GPS are integrated to navigate, so that more accurate and stable positioning service is provided by fusing the data of the inertial sensor and the GPS, and the inertial sensor and the GPS can be mutually complemented, so that respective error and noise influence are reduced. Map matching is a technology for matching positioning data with an actual road network, and is mainly used for correcting positioning errors and improving positioning accuracy. The basic principle of map matching is to compare and analyze the positioning data with the road network in the digital map, find the most matched road, and adjust the positioning data to the road to obtain more accurate position information. The odometer can help a driver or an automatic driving system to know the driving state and position of the vehicle by recording information such as the distance and speed of the vehicle.
In an example, calculating the standard deviation of the positioning data in each dimension may be calculating, at the current timestamp t_p, the standard deviation of M consecutive frames of positioning data counted backwards from the current frame.
More preferably, in the step of calculating the standard deviation of the positioning data in each dimension, the number of consecutive positioning data frames is the same as the data output frequency value of the sensor.
Since each item of positioning data has the dimensions of triaxial position and triaxial angle, each item of positioning data has 6 standard deviations. Taking inertial sensor and GPS integrated navigation, map matching and the odometer as examples, referring to fig. 2, let the current timestamp be t_p (assuming the timestamp sequence is t_1, t_2, ..., t_{p-1}, t_p), with p > M, where M denotes the number of frames taken from the p frames of data. The inter-frame data queues of integrated navigation, map matching and the odometer are calculated separately, storing the difference between each pair of adjacent frames of each sensor's input data from the current frame back to the M-th historical frame, giving M-1 difference values in total. Of course, M may be set to a different value depending on the output frequency of each sensor. Since the input frequency of each sensor lies in the range of 100 Hz to 10 Hz, M can be set to 100, which guarantees both the accuracy and the stability of the standard deviation calculation. Taking the inter-frame data queue of integrated navigation (INS) as an example, the calculation is as follows:
First, the arrays of triaxial position and triaxial angle differences between adjacent nodes of the inter-frame queue of M-1 data are calculated, namely, for each pair of adjacent frames, Δins_{t_k} = ins_{t_k} − ins_{t_{k−1}}, k = 2, ..., M, computed separately for the X-axis position, Y-axis position, Z-axis position, X-axis angle, Y-axis angle and Z-axis angle, where ins_{t_k} denotes the output of integrated navigation at timestamp t_k. For example, Δins_{t_2} is the difference of the X-axis position, Y-axis position, Z-axis position, X-axis angle, Y-axis angle and Z-axis angle obtained by subtracting the output of integrated navigation at the 1st moment from its output at the 2nd moment; likewise, Δins_{t_3} is the output at the 3rd moment minus the output at the 2nd moment, and the remaining definitions follow in the same way and are not described in detail here.
The standard deviation of each sensor's data at the current timestamp t_p is then recalculated from these difference arrays; for integrated navigation this gives the six standard deviations σ_{ins,posX}, σ_{ins,posY}, σ_{ins,posZ}, σ_{ins,angX}, σ_{ins,angY} and σ_{ins,angZ}, where σ_{ins,posX} denotes the standard deviation, calculated from the X-axis position differences, of the X-axis position of integrated navigation at the current timestamp t_p; likewise, σ_{ins,posY} denotes the corresponding standard deviation of the Y-axis position, and the remaining definitions are not described in detail here.
Based on the same calculation, the standard deviations of map matching and of the odometer can be obtained, denoted σ_{mm,D} and σ_{odom,D} for each dimension D respectively. Their definitions are the same as those of the standard deviations of integrated navigation and are not described in detail here. A sketch of this calculation is given below.
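By way of illustration only, the following sketch (in Python; the class and method names are illustrative and not part of the patent) keeps the last M frames of one sensor's six-dimensional output and returns the standard deviation of the M−1 inter-frame differences for each dimension:

    from collections import deque
    import numpy as np

    class DimensionStdQueue:
        """Keeps the last M frames of one sensor's 6-D output
        (posX, posY, posZ, angX, angY, angZ) and computes the per-dimension
        standard deviation of the M-1 inter-frame differences."""

        def __init__(self, window=100):  # M = 100, matching the example above
            self.frames = deque(maxlen=window)

        def push(self, pose_6d):
            self.frames.append(np.asarray(pose_6d, dtype=float))

        def std_per_dimension(self):
            if len(self.frames) < 2:
                return np.full(6, np.inf)      # not enough history yet: treat as unreliable
            stack = np.stack(self.frames)      # shape (<=M, 6)
            diffs = np.diff(stack, axis=0)     # front/back frame differences, shape (M-1, 6)
            return diffs.std(axis=0)           # sigma_{n,D} at the current timestamp t_p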
Based on the calculated standard deviations, it can be determined which sensor's output positioning data is more stable, and that sensor is given a correspondingly larger weight in the positioning fusion. This embodiment obtains the weight factors (weights for short) of the different sensors by solving the following problem: define an objective function y = Σ_{n=1}^{L} (w_{n,D}·σ_{n,D})², subject to Σ_{n=1}^{L} w_{n,D} = 1, where L represents the number of sensors, w_{n,D} represents the weight factor of the D-th dimension of the n-th sensor, and σ_{n,D} represents the standard deviation of the D-th dimension of the n-th sensor; then, according to the formula w_{n,D} = (1/σ_{n,D}²) / Σ_{l=1}^{L} (1/σ_{l,D}²), obtain the weight factor of the D-th dimension of the n-th sensor at the current timestamp t_p, at which the objective function y takes its minimum value.
Illustratively, taking the X-axis position dimension (posX) as an example, the objective function at the current timestamp t_p is:
y = Σ_{n=1}^{L} (w_{n,posX} · σ_{n,posX})², subject to Σ_{n=1}^{L} w_{n,posX} = 1,
where n indexes the positioning sensors, for example n=1 for integrated navigation, n=2 for map matching and n=3 for the odometer (other orderings are of course possible), w_{n,posX} denotes the weight factor of positioning sensor n in the X-axis dimension (the weight factors of all sensors sum to 1), and σ_{n,posX} denotes the standard deviation of positioning sensor n in the X-axis dimension at the current timestamp t_p. Then, according to the extremum theory of multivariate functions, the objective function y takes its minimum value when the weight factors are
w_{n,posX} = (1/σ_{n,posX}²) / Σ_{l=1}^{L} (1/σ_{l,posX}²),
which gives the weight factor of the posX dimension at the current timestamp t_p. The weight factors of the other dimensions can be calculated in the same manner and are not described in detail here.
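Assuming the inverse-variance form of the objective function and of its minimizer written above, the per-dimension weight factors for all sensors can be sketched as follows (the function name is illustrative; a small epsilon guards against a zero standard deviation):

    import numpy as np

    def weight_factors(sigma, eps=1e-12):
        """sigma: array of shape (L, 6) holding sigma_{n,D} for L sensors and 6 dimensions.
        Returns w of shape (L, 6): w_{n,D} = (1/sigma_{n,D}^2) / sum_l (1/sigma_{l,D}^2),
        so each column (dimension) sums to 1 and more stable sensors get larger weights."""
        inv_var = 1.0 / (np.square(sigma) + eps)
        return inv_var / inv_var.sum(axis=0, keepdims=True)

    # Example: three sensors (e.g. n=1 INS, n=2 map matching, n=3 odometer)
    sigma = np.array([[0.05, 0.05, 0.08, 0.01, 0.01, 0.02],
                      [0.20, 0.25, 0.30, 0.05, 0.05, 0.08],
                      [0.40, 0.45, 0.50, 0.10, 0.10, 0.15]])
    w = weight_factors(sigma)   # columns sum to 1; the most stable sensor (row 0) dominates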
Finally, at the current timestamp t_p, the measurement noise (i.e. the noise covariance matrix) of each sensor is corrected: the weight factor calculated for each sensor is multiplied by the corresponding element of the noise covariance matrix initially set for that sensor, and the corrected noise covariance matrix is then filtered and fused with the positioning data of each sensor to obtain a high-precision fused positioning output.
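The filter type is not fixed here, so the sketch below is only one possible realization of this correction-and-fusion step: the per-dimension weight factors are multiplied onto the corresponding elements of the initially set noise covariance matrix, and an ordinary Kalman measurement update stands in for the filtering fusion (function names are illustrative):

    import numpy as np

    def corrected_noise_cov(R_init, w_dim):
        """Second noise covariance matrix for one sensor: each dimension's weight factor
        is multiplied onto the corresponding (diagonal) element of the initially set
        6x6 noise covariance matrix R_init."""
        R2 = np.array(R_init, dtype=float, copy=True)
        idx = np.arange(6)
        R2[idx, idx] = w_dim * R2[idx, idx]
        return R2

    def kalman_measurement_update(x, P, z, R, H=np.eye(6)):
        """Ordinary Kalman measurement update, used here only as a stand-in for the fusion filter."""
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x_new = x + K @ (z - H @ x)
        P_new = (np.eye(len(x)) - K @ H) @ P
        return x_new, P_new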
According to the multi-source positioning data fusion method provided by the embodiment, firstly, standard deviation of positioning data acquired by each sensor in each dimension is calculated, then, a weight factor of each dimension is calculated according to the standard deviation of each dimension, the weight factor of each dimension of each sensor is multiplied by a first noise covariance matrix of each sensor to obtain a second noise covariance matrix of each sensor, finally, the second noise covariance matrix of each sensor is subjected to filtering fusion with the positioning data acquired by each sensor, and further, the weight factor of each positioning data can be automatically adjusted along with the standard deviation of the acquired positioning data, and the accuracy and stability of positioning fusion output are improved.
Embodiment two: fig. 3 is a flow chart of a method for fusing multi-source positioning data according to another embodiment of the present invention, and referring to fig. 3, a method 300 includes: s310, judging whether positioning data acquired by each sensor has out-of-order data, wherein the out-of-order data is positioning data with different time stamps from the time stamp of the fusion center; s320, calculating standard deviation of positioning data acquired by each sensor in each dimension, wherein the positioning data has the following dimensions: triaxial position and triaxial angle; s330, calculating a weight factor of each dimension according to the standard deviation of each dimension; s340, multiplying the weight factor of each dimension of each sensor by a first noise covariance matrix of each sensor to obtain a second noise covariance matrix of each sensor, wherein the first noise covariance matrix is an initially set noise covariance matrix; s350, carrying out filtering fusion on the second noise covariance matrix of each sensor and the positioning data acquired by each sensor.
And step S330 includes: defining an objective function y = Σ_{n=1}^{L} (w_{n,D}·σ_{n,D})², subject to Σ_{n=1}^{L} w_{n,D} = 1, wherein L represents the number of sensors, w_{n,D} represents the weight factor of the D-th dimension of the n-th sensor, and σ_{n,D} represents the standard deviation of the D-th dimension of the n-th sensor; and obtaining, according to the formula w_{n,D} = (1/σ_{n,D}²) / Σ_{l=1}^{L} (1/σ_{l,D}²), the weight factor of the D-th dimension of the n-th sensor at the current timestamp t_p, at which the objective function y takes its minimum value.
Because the sensors differ in performance, and the raw sensors and the communication module suffer from unstable performance, time delay and similar problems, the timestamps of the positioning results that the sensors transmit to the fusion center in multi-source fusion positioning may be inconsistent and often lag behind the current time of the Fusion Center (FC); such delayed positioning data is out-of-order data. The common way of handling out-of-order data is simply to discard it without any processing, so that valid sensor data cannot be fully utilized by the fusion center and the accuracy of the positioning result output by the fusion center is reduced.
The present method can make effective use of out-of-order data and improve the accuracy of the final positioning output. In this embodiment, it is first determined whether the positioning data contains out-of-order data. If there is no out-of-order data, the method may proceed as in the first embodiment; when out-of-order data exists, it is corrected before being processed, so that, on the one hand, the known data is fully utilized instead of valid positioning data being wasted, and, on the other hand, the negative effect of the out-of-order data on the positioning result is avoided.
In an example, determining whether the positioning data acquired by each sensor contains out-of-order data may consist of comparing the timestamp of the positioning data acquired by each sensor with the current timestamp t_p; data whose timestamp is less than the current timestamp t_p is out-of-order data, where t_p is the current timestamp of the fusion center.
Taking inertial sensor and GPS integrated navigation, map matching and the odometer as an example, referring to FIG. 4, after the positioning data of the three sensors are obtained, it is determined whether out-of-order data exists. Here, assuming that no out-of-order data exists in the positioning data of integrated navigation, the out-of-order check is performed on the positioning data other than integrated navigation: the latest timestamp of the current fusion center is recorded as t_p, the timestamp of the data input by each sensor is compared with the timestamp t_p, and data whose timestamp is less than t_p is marked as out-of-order data, as sketched below. If out-of-order data exists, it needs to be corrected and then used in the positioning data fusion. Therefore, in this embodiment, if out-of-order data exists, the positioning data in the step of calculating the standard deviation of the positioning data acquired by each sensor in each dimension includes the out-of-order data; and in the step of filtering and fusing the second noise covariance matrix of each sensor with the positioning data acquired by each sensor, the positioning data includes the modified out-of-order data.
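A minimal sketch of this out-of-order check (the Frame structure and field names are illustrative):

    from dataclasses import dataclass
    from typing import Any

    @dataclass
    class Frame:
        sensor: str        # e.g. "ins", "mm", "odom"
        timestamp: float   # timestamp of the positioning data
        pose: Any          # the 6-D positioning output of this frame

    def split_out_of_order(frames, t_p):
        """t_p is the current timestamp of the fusion center; frames with an earlier
        timestamp are marked as out-of-order data."""
        in_order = [f for f in frames if f.timestamp >= t_p]
        out_of_order = [f for f in frames if f.timestamp < t_p]
        return in_order, out_of_order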
Referring to FIG. 5, one way of processing the out-of-order data includes: calculating the measurement noise (noise covariance matrix) for the out-of-order data; frame-filling the out-of-order data; and substituting the frame-filled data and the calculated out-of-order measurement noise into the filtering algorithm for filtering fusion.
In an example, in the step of multiplying the weight factor of each dimension of each sensor by the first noise covariance matrix of each sensor, for the sensor having out-of-order data a third noise covariance matrix is used to replace the first noise covariance matrix, wherein the third noise covariance matrix is a weighted average of the first noise covariance matrix of the out-of-order data and the first noise covariance matrix of the reference positioning data, and the reference positioning data is the positioning data with the smallest standard deviation among the positioning data of the sensors.
Taking the positioning data generated by map matching as the out-of-order data as an example, the noise covariance matrix Γ_{6×6,mm} of map matching and the noise covariance matrix Γ_{6×6,ins} of the sensor generating the reference positioning data (e.g. integrated navigation) are first mean-weighted to obtain Γ_{6×6,mm1} = (Γ_{6×6,mm} + Γ_{6×6,ins}) / 2; the inter-frame data standard deviation queue of the FC (fusion center) is then calculated to obtain the weight factors for Γ_{6×6,mm1}, and the calculated weight factors are directly multiplied onto the noise covariance matrix Γ_{6×6,mm1} to correct it.
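A minimal sketch of this mean weighting, assuming map matching (mm) is the out-of-order source and integrated navigation (ins) supplies the reference positioning data:

    import numpy as np

    def mean_weighted_cov(gamma_mm, gamma_ins):
        """Third noise covariance matrix: Gamma_{6x6,mm1} = (Gamma_{6x6,mm} + Gamma_{6x6,ins}) / 2."""
        return 0.5 * (np.asarray(gamma_mm) + np.asarray(gamma_ins))

The resulting matrix then takes the place of the first noise covariance matrix of the out-of-order sensor when the per-dimension weight factors are multiplied on, as for the other sensors.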
In one example, the modified out-of-order data includes data obtained by frame-filling the out-of-order data. Further, frame-filling the out-of-order data may include: selecting reference positioning data, wherein the reference positioning data is the positioning data with the smallest standard deviation among the positioning data of the sensors; calculating the relative pose Z_r = Z_sc⁻¹ * Z_sp of the reference positioning data from timestamp t_c to the current timestamp t_p, wherein Z_sc is the pose of the reference positioning data at timestamp t_c, Z_sp is the pose of the reference positioning data at the current timestamp t_p, and t_c < t_p; and applying the relative pose Z_r to the out-of-order data, so that the pose of the modified out-of-order data is Z_gp = Z_gc * Z_r, wherein Z_gc is the pose of the out-of-order data at timestamp t_c.
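A sketch of this frame-filling step, representing poses as 4×4 homogeneous transformation matrices (the concrete pose representation is an assumption; only the relations Z_r = Z_sc⁻¹ * Z_sp and Z_gp = Z_gc * Z_r come from the text):

    import numpy as np

    def fill_out_of_order_frame(Z_gc, Z_sc, Z_sp):
        """Z_gc: pose of the out-of-order data at timestamp t_c.
        Z_sc, Z_sp: poses of the reference positioning data at t_c and at the current t_p.
        All poses are assumed to be 4x4 homogeneous transformation matrices.
        Returns Z_gp = Z_gc * Z_r with Z_r = Z_sc^-1 * Z_sp."""
        Z_r = np.linalg.inv(Z_sc) @ Z_sp   # relative pose of the reference data from t_c to t_p
        return Z_gc @ Z_r                  # pose of the modified (frame-filled) data at t_p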
After the out-of-order data is processed, the calculated weight factors corresponding to the sensors are multiplied by the elements corresponding to the noise covariance matrix of the sensors to be corrected, and then the calculated noise covariance matrix is subjected to filtering fusion with the positioning data of the sensors to obtain a high-precision positioning fusion output result.
The details of other operations performed by the steps in this embodiment may refer to the same steps as those in the previous embodiment, and will not be further expanded herein.
The fusion method of the multi-source positioning data provided by the embodiment not only can automatically adjust the weight factors of all positioning data along with the standard deviation of the acquired positioning data, improve the precision and stability of positioning fusion output, but also can fully utilize disordered data and further improve the precision of positioning results.
Embodiment three: fig. 6 is a schematic structural diagram of a multi-source positioning data fusion device according to an embodiment of the present invention. The device 600 mainly includes: a first calculation module 601, configured to calculate the standard deviation of the positioning data acquired by each sensor in each dimension, wherein the positioning data has the following dimensions: triaxial position and triaxial angle; a second calculation module 602, configured to calculate the weight factor of each dimension according to the standard deviation of each dimension, which includes: defining an objective function y = Σ_{n=1}^{L} (w_{n,D}·σ_{n,D})², subject to Σ_{n=1}^{L} w_{n,D} = 1, wherein L represents the number of sensors, w_{n,D} represents the weight factor of the D-th dimension of the n-th sensor, and σ_{n,D} represents the standard deviation of the D-th dimension of the n-th sensor, and obtaining, according to the formula w_{n,D} = (1/σ_{n,D}²) / Σ_{l=1}^{L} (1/σ_{l,D}²), the weight factor of the D-th dimension of the n-th sensor at the current timestamp t_p, at which the objective function y takes its minimum value; a third calculation module 603, configured to multiply the weight factor of each dimension of each sensor by a first noise covariance matrix of that sensor to obtain a second noise covariance matrix of that sensor, wherein the first noise covariance matrix is an initially set noise covariance matrix; and a fusion module 604, configured to filter and fuse the second noise covariance matrix of each sensor with the positioning data acquired by each sensor.
In one example, the sensors include inertial sensor and GPS integrated navigation, map matching, and odometry.
In an example, calculating the standard deviation of the positioning data in each dimension may be calculating, at the current timestamp t_p, the standard deviation of M consecutive frames of positioning data counted backwards from the current frame.
In one example, when calculating the standard deviation of the positioning data in each dimension, the number of consecutive positioning data frames may be the same as the data output frequency value of the sensor.
Reference may be made to the foregoing embodiments for details of other operations performed by the modules in this embodiment, which are not further described herein.
According to the multi-source positioning data fusion device provided by the embodiment, firstly, the standard deviation of positioning data acquired by each sensor in each dimension is calculated, then the weight factor of each dimension is calculated according to the standard deviation of each dimension, the weight factor of each dimension of each sensor is multiplied by the first noise covariance matrix of each sensor to obtain the second noise covariance matrix of each sensor, finally, the second noise covariance matrix of each sensor is subjected to filtering fusion with the positioning data acquired by each sensor, and then the weight factor of each positioning data can be automatically adjusted along with the standard deviation of the acquired positioning data, so that the accuracy and stability of positioning fusion output are improved.
Embodiment four: fig. 7 is a schematic structural diagram of a multi-source positioning data fusion device according to another embodiment of the present invention. Referring to fig. 7, the device 700 mainly includes: a judging module 701, configured to judge whether the positioning data acquired by each sensor contains out-of-order data, wherein the out-of-order data is positioning data whose timestamp differs from that of the fusion center; a first calculation module 702, configured to calculate the standard deviation of the positioning data acquired by each sensor in each dimension, wherein the positioning data has the following dimensions: triaxial position and triaxial angle; a second calculation module 703, configured to calculate the weight factor of each dimension according to the standard deviation of each dimension, which includes: defining an objective function y = Σ_{n=1}^{L} (w_{n,D}·σ_{n,D})², subject to Σ_{n=1}^{L} w_{n,D} = 1, wherein L represents the number of sensors, w_{n,D} represents the weight factor of the D-th dimension of the n-th sensor, and σ_{n,D} represents the standard deviation of the D-th dimension of the n-th sensor, and obtaining, according to the formula w_{n,D} = (1/σ_{n,D}²) / Σ_{l=1}^{L} (1/σ_{l,D}²), the weight factor of the D-th dimension of the n-th sensor at the current timestamp t_p, at which the objective function y takes its minimum value; a third calculation module 704, configured to multiply the weight factor of each dimension of each sensor by a first noise covariance matrix of that sensor to obtain a second noise covariance matrix of that sensor, wherein the first noise covariance matrix is an initially set noise covariance matrix; and a fusion module 705, configured to filter and fuse the second noise covariance matrix of each sensor with the positioning data acquired by each sensor.
In an example, determining whether out-of-order data exists in the positioning data acquired by each sensor may be comparing the timestamp of the positioning data acquired by each sensor with the current timestamp t_p, wherein data whose timestamp is less than the current timestamp t_p is out-of-order data, and t_p is the current timestamp of the fusion center.
In an example, in calculating the standard deviation of the positioning data acquired by each sensor in each dimension, the positioning data includes out-of-order data; and in the process of filtering and fusing the second noise covariance matrix of each sensor and the positioning data acquired by each sensor, the positioning data comprise modified disordered data.
In an example, when multiplying the weight factor of each dimension of each sensor by the first noise covariance matrix of each sensor, for the sensor with out-of-order data the first noise covariance matrix is replaced by a third noise covariance matrix, wherein the third noise covariance matrix is a weighted average of the first noise covariance matrix of the out-of-order data and the first noise covariance matrix of the reference positioning data, and the reference positioning data is the positioning data with the smallest standard deviation among the positioning data of the sensors.
In one example, the modified out-of-order data includes data that is frame-complemented with out-of-order data.
In an example, frame-filling the out-of-order data includes: selecting reference positioning data, wherein the reference positioning data is the positioning data with the smallest standard deviation among the positioning data of the sensors; calculating the relative pose Z_r = Z_sc⁻¹ * Z_sp of the reference positioning data from timestamp t_c to the current timestamp t_p, wherein Z_sc is the pose of the reference positioning data at timestamp t_c, Z_sp is the pose of the reference positioning data at the current timestamp t_p, and t_c < t_p; and applying the relative pose Z_r to the out-of-order data, so that the pose of the modified out-of-order data is Z_gp = Z_gc * Z_r, wherein Z_gc is the pose of the out-of-order data at timestamp t_c.
In one example, the reference positioning data may be selected as the positioning data acquired by the inertial sensor and GPS integrated navigation.
Reference may be made to the foregoing embodiments for details of other operations performed by the modules in this embodiment, which are not further described herein.
The multi-source positioning data fusion device provided by the embodiment not only can automatically adjust the weight factors of all positioning data along with the standard deviation of the acquired positioning data, improve the precision and stability of positioning fusion output, but also can fully utilize disordered data and further improve the precision of positioning results.
The fusion device of the multi-source positioning data in the embodiment of the application can be a device, and also can be a component, an integrated circuit or a chip in a terminal. A multi-source positioning data fusion device in the embodiments of the present application may be a device with an operating system. The operating system may be an android operating system, an iOS operating system, or other possible operating systems, which is not specifically limited in the embodiments of the present application.
The application also provides an electronic device, comprising: a memory for storing programs or instructions executable by the processor; and a processor, configured to execute the program or the instruction to implement each process of the embodiment of the fusion method of multi-source positioning data, and achieve the same technical effects, so that repetition is avoided, and no description is repeated here.
Fig. 8 is a schematic diagram of an electronic device according to an embodiment of the invention. The electronic device 800 may include an internal communication bus 801, a Processor (Processor) 802, a Read Only Memory (ROM) 803, a Random Access Memory (RAM) 804, and a communication port 805. Internal communication bus 801 may enable data communication among the components of electronic device 800. The processor 802 may make the determination and issue the prompt. In some implementations, the processor 802 may be comprised of one or more processors. Communication port 805 may enable electronic device 800 to communicate data with the outside. In some implementations, the electronic device 800 may send and receive information and data from a network through the communication port 805. The electronic device 800 may also include program storage elements in various forms as well as data storage elements, read Only Memory (ROM) 803 and Random Access Memory (RAM) 804 capable of storing various data files for computer processing and/or communication, as well as possible programs or instructions for execution by the processor 802. The results processed by the processor 802 are communicated to the user device via the communication port 805 for display on a user interface.
The embodiment of the application further provides a readable storage medium, on which a program or an instruction is stored, where the program or the instruction realizes each process of the above embodiment of the fusion method of multi-source positioning data when executed by a processor, and the same technical effects can be achieved, so that repetition is avoided, and no redundant description is provided herein.
The processor is a processor in the electronic device in the above embodiment. The readable storage medium includes a computer readable storage medium such as a Read-Only Memory (ROM), a random access Memory (Random Access Memory, RAM), a magnetic disk, or an optical disk.
The above disclosure is intended to be illustrative only and does not limit the present application. Although not explicitly described herein, various modifications, improvements, and adaptations of the present application may occur to those skilled in the art. Such modifications, improvements, and adaptations are suggested by this application and therefore fall within the spirit and scope of the exemplary embodiments of this application.
Likewise, it should be noted that, in order to simplify the disclosure herein and thereby aid the understanding of one or more inventive embodiments, various features are sometimes grouped together in a single embodiment, figure, or description thereof. This method of disclosure, however, is not to be interpreted as implying that the claimed subject matter requires more features than are recited in the claims. Indeed, the claimed subject matter may lie in less than all features of a single embodiment disclosed above.
In some embodiments, numbers describing quantities of components and attributes are used; it should be understood that such numbers used in the description of the embodiments are qualified in some examples by the modifier "about," "approximately," or "substantially." Unless otherwise indicated, "about," "approximately," or "substantially" indicates that the number allows a variation of 20%. Accordingly, in some embodiments, the numerical parameters set forth in the specification and claims are approximations that may vary depending on the desired properties sought by the individual embodiment. In some embodiments, the numerical parameters should take into account the specified significant digits and use a general rounding method. Although the numerical ranges and parameters set forth herein are approximations in some embodiments, in specific embodiments such numerical values are set as precisely as practicable.
While the present application has been described with reference to the present specific embodiments, those of ordinary skill in the art will recognize that the above embodiments are for illustrative purposes only, and that various equivalent changes or substitutions can be made without departing from the spirit of the present application, and therefore, all changes and modifications to the embodiments described above are intended to be within the scope of the claims of the present application.

Claims (14)

1. A method for fusing multisource positioning data, comprising:
calculating a standard deviation of positioning data acquired by each sensor in each dimension, wherein the positioning data has the following dimensions: triaxial position and triaxial angle;
calculating the weight factor of each dimension according to the standard deviation of each dimension, which comprises: defining an objective function y = Σ_{n=1}^{L} (w_{n,D}·σ_{n,D})², subject to Σ_{n=1}^{L} w_{n,D} = 1, wherein L represents the number of sensors, w_{n,D} represents the weight factor of the D-th dimension of the n-th sensor, and σ_{n,D} represents the standard deviation of the D-th dimension of the n-th sensor; and obtaining, according to the formula w_{n,D} = (1/σ_{n,D}²) / Σ_{l=1}^{L} (1/σ_{l,D}²), the weight factor of the D-th dimension of the n-th sensor at the current timestamp t_p, at which the objective function y takes its minimum value;
Multiplying a weight factor of each dimension of each sensor by a first noise covariance matrix of each sensor to obtain a second noise covariance matrix of each sensor, wherein the first noise covariance matrix is an initially set noise covariance matrix;
and carrying out filtering fusion on the second noise covariance matrix of each sensor and the positioning data acquired by each sensor.
2. The method of fusion of multi-source positioning data of claim 1, wherein the sensors comprise: inertial sensor and GPS integrated navigation, map matching, and an odometer.
3. The method of fusion of multi-source positioning data of claim 1, wherein calculating the standard deviation of the positioning data in each dimension comprises: calculating, at the current timestamp t_p, the standard deviation of M consecutive frames of positioning data counted backwards from the current frame.
4. A method of fusion of multi-source positioning data as recited in claim 3 in which calculating the standard deviation of the positioning data in each dimension comprises: the number of consecutive positioning data frames is the same as the data output frequency value of the sensor.
5. The method of fusion of multi-source positioning data of claim 1, further comprising: before calculating standard deviation of positioning data acquired by each sensor in each dimension, judging whether the positioning data acquired by each sensor has disorder data, wherein the disorder data is positioning data with different time stamps from that of a fusion center.
6. The method of claim 5, wherein judging whether the positioning data acquired by each sensor contains out-of-order data comprises: comparing the timestamp of the positioning data acquired by each sensor with the current timestamp $t_p$; positioning data whose timestamp is less than the current timestamp $t_p$ is out-of-order data, where $t_p$ is the current timestamp of the fusion center.
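Claims 5 and 6 gate the pipeline on a timestamp comparison against the fusion center's current timestamp t_p. A minimal sketch of that check is given below; the Frame container and its field names are assumptions introduced here for illustration, not data structures defined by the patent.

    from dataclasses import dataclass
    from typing import List, Tuple

    import numpy as np

    @dataclass
    class Frame:
        sensor_id: str
        timestamp: float     # seconds, in the fusion center's time base
        pose: np.ndarray     # e.g. a 4x4 homogeneous pose matrix

    def split_out_of_order(frames: List[Frame], t_p: float) -> Tuple[List[Frame], List[Frame]]:
        """Partition frames by comparing their timestamps with the fusion
        center's current timestamp t_p: a frame whose timestamp is less than
        t_p is treated as out-of-order data."""
        in_order = [f for f in frames if f.timestamp >= t_p]
        out_of_order = [f for f in frames if f.timestamp < t_p]
        return in_order, out_of_order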
7. The method of fusion of multi-source positioning data of claim 5, further comprising: if the out-of-order data exists, then:
in the step of calculating the standard deviation of the positioning data acquired by each sensor in each dimension, the positioning data comprises the out-of-order data;
and in the step of filtering and fusing the second noise covariance matrix of each sensor with the positioning data acquired by each sensor, the positioning data comprises the modified out-of-order data.
8. The method of claim 7, wherein, in the step of multiplying the weight factor of each dimension of each sensor by the first noise covariance matrix of each sensor,
for a sensor with the out-of-order data, the first noise covariance matrix is replaced with a third noise covariance matrix, wherein the third noise covariance matrix is the average of the first noise covariance matrix of the out-of-order data and the first noise covariance matrix of reference positioning data, and the reference positioning data is the positioning data with the minimum standard deviation among the positioning data of the sensors.
9. The method of claim 7, wherein the modified out-of-order data comprises data obtained by frame-completing the out-of-order data.
10. The method of claim 9, wherein frame-completing the out-of-order data comprises:
selecting reference positioning data, wherein the reference positioning data is positioning data with the minimum standard deviation in the positioning data of each sensor;
calculating a relative pose $Z_r = Z_{sc}^{-1} \cdot Z_{sp}$ of the reference positioning data from a timestamp $t_c$ to the current timestamp $t_p$, wherein $Z_{sc}$ is the pose of the reference positioning data at the timestamp $t_c$, $Z_{sp}$ is the pose of the reference positioning data at the current timestamp $t_p$, and $t_c < t_p$;
applying the relative pose $Z_r$ to the out-of-order data, such that the pose of the modified out-of-order data is $Z_{gp} = Z_{gc} \cdot Z_r$, wherein $Z_{gc}$ is the pose of the out-of-order data at the timestamp $t_c$.
11. The method of claim 10, wherein selecting the reference positioning data comprises: taking the positioning data acquired by the inertial sensor and GPS integrated navigation as the reference positioning data.
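Claims 8 through 11 describe how an out-of-order frame is carried forward: its pose is propagated with the relative motion of the reference (lowest-standard-deviation) source between t_c and t_p, and the affected sensor's first noise covariance matrix is replaced by a combined one. Treating the poses as 4x4 homogeneous transforms and reading the combination in claim 8 as a plain mean are assumptions of this sketch, not statements of the patent's exact formulation.

    import numpy as np

    def complete_out_of_order_pose(Z_gc, Z_sc, Z_sp):
        """Frame completion in the spirit of claims 10-11.

        Z_gc: pose of the out-of-order sensor at timestamp t_c      (4x4)
        Z_sc: pose of the reference positioning data at t_c         (4x4)
        Z_sp: pose of the reference positioning data at t_p         (4x4)
        Returns Z_gp, the pose of the modified out-of-order data at t_p.
        """
        Z_r = np.linalg.inv(Z_sc) @ Z_sp   # relative pose of the reference, t_c -> t_p
        return Z_gc @ Z_r                  # Z_gp = Z_gc * Z_r

    def third_noise_covariance(R_out_of_order, R_reference):
        """Claim 8: for a sensor with out-of-order data, combine its own first
        noise covariance matrix with that of the reference source (here a
        plain mean, one plausible reading of the claim)."""
        return 0.5 * (np.asarray(R_out_of_order, dtype=float) + np.asarray(R_reference, dtype=float))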
12. A fusion device for multi-source positioning data, comprising:
The first calculation module is used for calculating standard deviation of positioning data acquired by each sensor in each dimension, wherein the positioning data has the following dimensions: triaxial position and triaxial angle;
the second calculation module is configured to calculate a weight factor of each dimension according to the standard deviation in each dimension, comprising: defining an objective function $y=\sum_{n=1}^{L}\left(a_{n,D}\,\sigma_{n,D}\right)^{2}$, wherein $\sum_{n=1}^{L}a_{n,D}=1$, $L$ represents the number of sensors, $a_{n,D}$ represents the weight factor of the D-th dimension of the n-th sensor, and $\sigma_{n,D}$ represents the standard deviation of the D-th dimension of the n-th sensor; and, according to the formula $a_{n,D}=\frac{1/\sigma_{n,D}^{2}}{\sum_{i=1}^{L}1/\sigma_{i,D}^{2}}$ obtained when the objective function $y$ takes its minimum value, obtaining the weight factor $a_{n,D}$ of the D-th dimension of the n-th sensor at the current timestamp $t_p$;
A third calculation module, configured to multiply a weight factor of each dimension of each sensor by a first noise covariance matrix of each sensor to obtain a second noise covariance matrix of each sensor, where the first noise covariance matrix is an initially set noise covariance matrix;
and the fusion module is used for carrying out filtering fusion on the second noise covariance matrix of each sensor and the positioning data acquired by each sensor.
13. An electronic device, comprising: a processor and a memory storing a program or instructions executable on the processor, which when executed by the processor, implement the steps of the method of fusion of multi-source positioning data as claimed in any one of claims 1 to 11.
14. A readable storage medium, characterized in that the readable storage medium has stored thereon a program or instructions which, when executed by a processor, implement the steps of the method of fusion of multi-source positioning data according to any of claims 1-11.
CN202311816570.3A 2023-12-27 2023-12-27 Fusion method and device of multi-source positioning data and electronic equipment Active CN117473455B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311816570.3A CN117473455B (en) 2023-12-27 2023-12-27 Fusion method and device of multi-source positioning data and electronic equipment

Publications (2)

Publication Number Publication Date
CN117473455A CN117473455A (en) 2024-01-30
CN117473455B (en) 2024-03-29

Family

ID=89626083

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311816570.3A Active CN117473455B (en) 2023-12-27 2023-12-27 Fusion method and device of multi-source positioning data and electronic equipment

Country Status (1)

Country Link
CN (1) CN117473455B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114199259A (en) * 2022-02-21 2022-03-18 南京航空航天大学 Multi-source fusion navigation positioning method based on motion state and environment perception
CN114565010A (en) * 2022-01-14 2022-05-31 山东师范大学 Adaptive Kalman noise estimation method and system based on data fusion
CN115096309A (en) * 2022-06-13 2022-09-23 重庆九洲星熠导航设备有限公司 Fusion positioning method and device, electronic equipment and storage medium
JP7150229B1 (en) * 2022-01-21 2022-10-11 寧波工程学院 Target positioning method for wireless sensor networks based on RSS-AoA measurements
CN116125370A (en) * 2022-11-28 2023-05-16 中国电子科技集团公司第二十八研究所 Multi-platform direction finding rapid fusion positioning method
WO2023116797A2 (en) * 2021-12-22 2023-06-29 比亚迪股份有限公司 In-vehicle multi-sensor fusion positioning method, computer device and storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
关维国; 邹林杰; 郝德华; 焦萌. WiFi and Bluetooth TLS fusion positioning algorithm based on a multi-attribute cost function. 传感器与微系统, No. 11, 2018, full text. *
邓中亮; 尹露; 杨磊; 余彦培; 席岳. GPS/base-station positioning information fusion algorithm based on federated Kalman filtering. 北京邮电大学学报, No. 6, 2013, full text. *

Similar Documents

Publication Publication Date Title
JP6161942B2 (en) Curve shape modeling device, vehicle information processing system, curve shape modeling method, and curve shape modeling program
CN109491369B (en) Method, device, equipment and medium for evaluating performance of vehicle actual control unit
CN114295126B (en) Fusion positioning method based on inertial measurement unit
EP4345421A2 (en) Method for calibrating sensor parameters based on autonomous driving, apparatus, storage medium, and vehicle
CN113052966A (en) Automatic driving crowdsourcing high-precision map updating method, system and medium
JP2022513511A (en) How to identify the range of integrity
CN111982158A (en) Inertial measurement unit calibration method and device
KR20230137439A (en) A computer-implemented method for assessing the accuracy of swarm trajectory locations.
CN115390086A (en) Fusion positioning method and device for automatic driving, electronic equipment and storage medium
CN115617051A (en) Vehicle control method, device, equipment and computer readable medium
CN109710594B (en) Map data validity judging method and device and readable storage medium
CN117473455B (en) Fusion method and device of multi-source positioning data and electronic equipment
JP2019082328A (en) Position estimation device
CN116543271A (en) Method, device, electronic equipment and medium for determining target detection evaluation index
CN115372020A (en) Automatic driving vehicle test method, device, electronic equipment and medium
CN113884089B (en) Camera lever arm compensation method and system based on curve matching
CN115112125A (en) Positioning method and device for automatic driving vehicle, electronic equipment and storage medium
CN113012429B (en) Vehicle road multi-sensor data fusion method and system
CN112595330B (en) Vehicle positioning method and device, electronic equipment and computer readable medium
CN115014395A (en) Real-time calibration method and device for vehicle course angle for automatic driving
CN115031755A (en) Automatic driving vehicle positioning method and device, electronic equipment and storage medium
US20180038696A1 (en) A system for use in a vehicle
CN117490705B (en) Vehicle navigation positioning method, system, device and computer readable medium
CN113327456A (en) Lane structure detection method and device
CN113763483B (en) Method and device for calibrating pitch angle of automobile data recorder

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant