CN113960648A - Positioning method, positioning device, electronic equipment and computer readable storage medium - Google Patents

Positioning method, positioning device, electronic equipment and computer readable storage medium

Info

Publication number
CN113960648A
CN113960648A
Authority
CN
China
Prior art keywords
error
information
state variable
imu
gnss
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111138474.9A
Other languages
Chinese (zh)
Inventor
聂琼
智向阳
韩天思
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Sankuai Online Technology Co Ltd
Original Assignee
Beijing Sankuai Online Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Sankuai Online Technology Co Ltd filed Critical Beijing Sankuai Online Technology Co Ltd
Priority to CN202111138474.9A priority Critical patent/CN113960648A/en
Publication of CN113960648A publication Critical patent/CN113960648A/en
Pending legal-status Critical Current


Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S19/00Satellite radio beacon positioning systems; Determining position, velocity or attitude using signals transmitted by such systems
    • G01S19/38Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system
    • G01S19/39Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system the satellite radio beacon positioning system transmitting time-stamped messages, e.g. GPS [Global Positioning System], GLONASS [Global Orbiting Navigation Satellite System] or GALILEO
    • G01S19/42Determining position
    • G01S19/45Determining position by combining measurements of signals from the satellite radio beacon positioning system with a supplementary measurement
    • G01S19/47Determining position by combining measurements of signals from the satellite radio beacon positioning system with a supplementary measurement the supplementary measurement being an inertial measurement, e.g. tightly coupled inertial

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Position Fixing By Use Of Radio Waves (AREA)
  • Navigation (AREA)

Abstract

The application discloses a positioning method, a positioning device, electronic equipment and a computer readable storage medium, and belongs to the technical field of positioning. The method comprises the following steps: acquiring a first state variable of an object to be positioned, wherein the first state variable comprises reference information of the object at least one moment, and the reference information comprises IMU information, visual information and GNSS information; determining an IMU error, a visual error and a GNSS error based on the reference information comprised by the first state variable, and determining a target error based on the IMU error, the visual error and the GNSS error; and updating the reference information included in the first state variable through a process of minimizing the target error to obtain a second state variable, wherein the second state variable is used for positioning the object. The positioning accuracy is high, and the method and the device are suitable for various scenarios that require positioning, for example, positioning an unmanned vehicle in an autonomous driving scenario, or positioning a vehicle in a scenario in which a user drives the vehicle manually.

Description

Positioning method, positioning device, electronic equipment and computer readable storage medium
Technical Field
The present application relates to the field of positioning technologies, and in particular, to a positioning method, an apparatus, an electronic device, and a computer-readable storage medium.
Background
With the development of positioning technology, more and more positioning methods are applied to the life of people. For example, IMU (Inertial Measurement Unit) positioning, visual positioning, and GNSS (Global Navigation Satellite System) positioning are selectable positioning methods.
However, the positioning methods in the above examples have certain disadvantages. For example, IMU positioning calculates the positioning result at the next moment from the positioning result at the current moment, so the error gradually accumulates and diverges, and the accuracy is low when IMU positioning is applied to long-duration, long-distance positioning scenarios. In addition, visual positioning has high requirements on scene brightness and offers limited positioning accuracy. GNSS positioning is only suitable for open outdoor environments, not for indoor environments or occluded outdoor environments. Therefore, it is desirable to provide a positioning method with high applicability and high positioning accuracy.
Disclosure of Invention
The embodiment of the application provides a positioning method, a positioning device, electronic equipment and a computer readable storage medium, so as to solve the problems of weak applicability and low positioning accuracy of the related technology. The technical scheme is as follows:
in one aspect, a positioning method is provided, and the method includes:
acquiring a first state variable of an object to be positioned, wherein the first state variable comprises reference information of the object at least one moment, and the reference information comprises IMU information, visual information and GNSS information;
determining an IMU error, a visual error and a GNSS error based on the reference information comprised by the first state variable, and determining a target error based on the IMU error, the visual error and the GNSS error;
and updating the reference information included in the first state variable through a process of minimizing the target error to obtain a second state variable, wherein the second state variable is used for positioning the object.
In an exemplary embodiment, the obtaining a first state variable of an object to be located includes: acquiring a third state variable of the object, wherein the third state variable comprises reference information of the object at least one first moment; in response to detecting that a condition is met, inserting reference information of the object at a second moment into the third state variable to obtain an updated third state variable, wherein the second moment is later than any one of the at least one first moment; and obtaining the first state variable based on the updated third state variable.
In an exemplary embodiment, the obtaining the first state variable based on the updated third state variable includes: determining at least one target time from the at least one first time and the second time in response to a sum of a number of times of the at least one first time and the second time being greater than a number threshold; and deleting the reference information of the object at the at least one target moment from the updated third state variable to obtain the first state variable.
In an exemplary embodiment, the method further comprises: determining a marginalization error based on the first state variable and reference information of the object at the at least one target time instant; the determining a target error based on the IMU error, the vision error, and the GNSS error comprises: determining a sum of the marginalization error, the IMU error, the vision error, and the GNSS error as the target error.
In an exemplary embodiment, said determining at least one target time from said at least one first time and said second time comprises: determining an earliest one of the at least one first time and the second time as the target time.
In an exemplary embodiment, the method further comprises: determining that the condition is detected to be satisfied in response to the occurrence of at least one of the following events: obtaining IMU measurement information, obtaining visual measurement information, obtaining GNSS measurement information, and the elapse of a reference duration.
In an exemplary embodiment, the determining IMU error, vision error and GNSS error based on the reference information comprised by the first state variables comprises: obtaining IMU measurement information and IMU estimation information, determining the IMU error based on the IMU measurement information, the IMU estimation information and the IMU information; obtaining vision measurement information, obtaining vision estimation information based on the vision information, and determining the vision error based on the vision measurement information and the vision estimation information; a difference between GNSS measurement information and GNSS estimation information is obtained, and the GNSS error is determined based on the difference and the GNSS information.
In one aspect, a positioning apparatus is provided, the apparatus comprising:
the positioning system comprises an acquisition module, a positioning module and a positioning module, wherein the acquisition module is used for acquiring a first state variable of an object to be positioned, the first state variable comprises reference information of the object at least one moment, and the reference information comprises Inertial Measurement Unit (IMU) information, visual information and Global Navigation Satellite System (GNSS) information;
a determination module, configured to determine an IMU error, a visual error and a GNSS error based on the reference information comprised by the first state variable, and determine a target error based on the IMU error, the visual error and the GNSS error;
and the updating module is used for updating the reference information included in the first state variable through the process of minimizing the target error to obtain a second state variable, and the second state variable is used for positioning the object.
In an exemplary embodiment, the obtaining module is configured to obtain a third state variable of the object, where the third state variable includes reference information of the object at least one first time; in response to detecting that a condition is met, inserting reference information of the object at a second moment into the third state variable to obtain an updated third state variable, wherein the second moment is later than any one of the at least one first moment; and obtaining the first state variable based on the updated third state variable.
In an exemplary embodiment, the obtaining module is configured to determine at least one target time from the at least one first time and the second time in response to a sum of a number of times of the at least one first time and the second time being greater than a number threshold; and deleting the reference information of the object at the at least one target moment from the updated third state variable to obtain the first state variable.
In an exemplary embodiment, the determining module is further configured to determine a marginalization error based on the first state variable and reference information of the object at the at least one target time instant;
the determining module is configured to determine a sum of the marginalization error, the IMU error, the visual error, and the GNSS error as the target error.
In an exemplary embodiment, the obtaining module is configured to determine an earliest one of the at least one first time and the second time as the target time.
In an exemplary embodiment, the obtaining module is further configured to determine that the condition is detected to be satisfied in response to the occurrence of at least one of the following events: obtaining IMU measurement information, obtaining visual measurement information, obtaining GNSS measurement information, and the elapse of a reference duration.
In an exemplary embodiment, the determining module is configured to obtain IMU measurement information and IMU estimation information, determine the IMU error based on the IMU measurement information, the IMU estimation information, and the IMU information; obtaining vision measurement information, obtaining vision estimation information based on the vision information, and determining the vision error based on the vision measurement information and the vision estimation information; a difference between GNSS measurement information and GNSS estimation information is obtained, and the GNSS error is determined based on the difference and the GNSS information.
In one aspect, an electronic device is provided that includes a memory and a processor; the memory stores at least one instruction, and the at least one instruction is loaded and executed by the processor to enable the electronic device to implement the positioning method provided by any one of the exemplary embodiments of the present application.
In one aspect, a computer-readable storage medium is provided, in which at least one instruction is stored, and the instruction is loaded and executed by a processor to enable a computer to implement a positioning method provided in any one of the exemplary embodiments of the present application.
In another aspect, there is provided a computer program or computer program product comprising: computer instructions which, when executed by a computer, cause the computer to implement the positioning method provided by any one of the exemplary embodiments of the present application.
The beneficial effects brought by the technical scheme provided by the embodiment of the application at least comprise:
target errors are calculated based on the IMU information, the visual information and the GNSS information, and a second state variable for positioning is obtained through minimization of the target errors, so that three different positioning modes including IMU positioning, visual positioning and GNSS positioning are closely coupled, and positioning accuracy is high. The method is not only suitable for complex shielded environments, but also low in cost and high in applicability.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed to be used in the description of the embodiments are briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
FIG. 1 is a schematic diagram of an implementation environment provided by an embodiment of the present application;
fig. 2 is a flowchart of a positioning method provided in an embodiment of the present application;
FIG. 3 is a schematic flow chart illustrating insertion of reference information provided by an embodiment of the present application;
FIG. 4 is a schematic diagram of information acquisition provided by an embodiment of the present application;
FIG. 5 is a schematic structural diagram of a positioning device provided in an embodiment of the present application;
fig. 6 is a schematic structural diagram of an electronic device provided in an embodiment of the present application.
Detailed Description
To make the objects, technical solutions and advantages of the present application more clear, embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
The embodiment of the application provides a positioning method, which can be applied to the implementation environment shown in fig. 1. In fig. 1, at least one electronic device 11 and a server 12 are included, and the electronic device 11 may be communicatively connected to the server 12 to obtain information required to be used in the positioning process from the server 12.
The electronic device 11 may be a PC (Personal Computer), a mobile phone, a smart phone, a PDA (Personal Digital Assistant), a wearable device, a pocket PC (pocket PC), a tablet PC, a smart car, a smart tv, and other electronic products.
The server 12 may be a server, a server cluster composed of a plurality of servers, or a cloud computing service center.
It should be understood by those skilled in the art that the above-mentioned electronic device 11 and server 12 are only examples, and other existing or future electronic devices or servers may be suitable for the present application and are included within the scope of the present application and are incorporated herein by reference.
Based on the implementation environment shown in fig. 1, referring to fig. 2, an embodiment of the present application provides a positioning method, which can be applied to the electronic device shown in fig. 1. As shown in fig. 2, the method includes the following steps.
A first state variable of an object to be positioned is obtained 201, the first state variable includes reference information of the object at least one time, and the reference information includes IMU information, visual information and GNSS information.
The object to be located includes, but is not limited to, a person and an object, and the object to be located is not limited in this embodiment. The first state variable includes reference information of the object at least one time, and the number of times is not limited in this embodiment, and may be determined according to actual needs.
Illustratively, the IMU information of an object at a time includes at least one of the position, velocity, rotation angle, acceleration bias and gyroscope bias of the object in the world coordinate system at that moment. The visual information of the object at a time includes the information of each visual feature point in the image captured at that moment, where the image is captured by a camera associated with the object, for example, a camera attached to the object. Illustratively, the information of one visual feature point includes the actual position of the visual feature point in three-dimensional space, or the position of the visual feature point in the camera coordinate system. The GNSS information of an object at a time includes the position of the object in the terrestrial coordinate system and the ambiguity vector of the GNSS at that moment. Illustratively, the ambiguity vector is an inter-station single-difference integer ambiguity vector of a GNSS, where the GNSS includes, but is not limited to, at least one of the following: GPS (Global Positioning System), GLONASS (Global Navigation Satellite System), BDS (BeiDou Navigation Satellite System), and Galileo. The GNSS may also be referred to as the S system for short.
It should be noted that, in some embodiments, the reference information of the object at any one of the at least one time includes the IMU information, the visual information, and the GNSS information. In other embodiments, the reference information of the object at a time instant includes at least one of IMU information, visual information, and GNSS information. This is because the acquisition frequencies of the IMU information, the visual information, and the GNSS information are often different, and therefore, at one time, only at least one of the IMU information, the visual information, and the GNSS information may be able to be acquired at that time. In addition, the IMU information of the object at different times may be the same or different. For example, the IMU information of the object at some moments in time includes position, velocity, and rotation angle of the object in the IMU coordinate system, and the IMU information of the object at other moments in time includes position, velocity, rotation angle, acceleration bias, and gyroscope bias of the object in the IMU coordinate system.
Based on the above description, the first state variable is expressed as the following formula (1):
$$\chi = \left[\,X_0,\; X_1,\; \ldots,\; X_n,\; x_c^b,\; \lambda_0,\; \lambda_1,\; \ldots,\; \lambda_m,\; P^T,\; \delta N_S^T\,\right] \tag{1}$$

First, the term $[X_0, X_1, \ldots, X_n]$ in formula (1) is described. $X_n$ is the IMU information of the object at time $b_k$ (n is a positive integer):

$$X_n = \left[\,p_{b_k}^w,\; v_{b_k}^w,\; q_{b_k}^w,\; b_a,\; b_g\,\right]$$

The granularity of the time $b_k$ is determined based on the acquisition frequency of the IMU information. For example, if the IMU information is acquired at a frequency of 100 Hz, the granularity of the time $b_k$ is 0.01 second. In $X_n$, $p_{b_k}^w$ is the position of the object at time $b_k$ in the world coordinate system (denoted w), $v_{b_k}^w$ is the velocity of the object at time $b_k$ in the world coordinate system, $q_{b_k}^w$ is the rotation angle of the object at time $b_k$ in the world coordinate system, $b_a$ is the acceleration bias at time $b_k$, and $b_g$ is the gyroscope bias at time $b_k$.

Next, the terms $x_c^b$ and $[\lambda_0, \lambda_1, \ldots, \lambda_m]$ in formula (1) are described. $x_c^b = [\,p_c^b,\; q_c^b\,]$ is the extrinsic information of the camera associated with the object, where $p_c^b$ is the position of the camera coordinate system (denoted c) relative to the IMU coordinate system (denoted b), and $q_c^b$ is the rotation angle of the camera coordinate system relative to the IMU coordinate system. $\lambda_m$ is the visual information of the object at time $c_j$ (m is a positive integer), and the granularity of the time $c_j$ is determined based on the acquisition frequency of the visual information. For example, if the acquisition frequency of the visual information is 10 Hz, the granularity of the time $c_j$ is 0.1 second. As can be seen from the above description, the visual information of the object at a time includes the positions, in the camera coordinate system, of the visual feature points in the image captured at that moment. Therefore, $\lambda_m$ contains an entry $\lambda_l$ for each visual feature point, where l is used to index the visual feature points in the image captured at time $c_j$.

Furthermore, the terms $P^T$ and $\delta N_S^T$ in formula (1) are described. $P^T$ represents the position of the object in the terrestrial coordinate system at a time whose granularity is determined based on the acquisition frequency of the GNSS; the superscript T of $P^T$ denotes transpose. $\delta N_S^T$ represents the ambiguity vector at that time, for example, the inter-station single-difference integer ambiguity vector of the GNSS in the above description; the superscript T of $\delta N_S^T$ denotes transpose.
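As an illustration only (not part of the patent text), the structure of the first state variable described by formula (1) can be sketched as a small data structure. All class and field names below are hypothetical and simply mirror the symbols explained above.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class ImuState:            # one X_n in formula (1), at an IMU-rate time b_k
    p_w: np.ndarray        # position of the object in the world frame
    v_w: np.ndarray        # velocity of the object in the world frame
    q_w: np.ndarray        # rotation as a quaternion, scipy (x, y, z, w) order
    b_a: np.ndarray        # acceleration bias
    b_g: np.ndarray        # gyroscope bias

@dataclass
class StateVariable:       # the first state variable chi of formula (1)
    imu_states: list       # [X_0, ..., X_n]
    cam_extrinsic: tuple   # x_c^b = (p_c^b, q_c^b), camera-to-IMU extrinsics
    features: list         # [lambda_0, ..., lambda_m], feature info per image time c_j
    P: np.ndarray          # position in the terrestrial (GNSS) coordinate system
    dN_S: np.ndarray       # inter-station single-difference integer ambiguity vector

# An initially empty sliding window into which reference information is inserted over time.
window = StateVariable(imu_states=[],
                       cam_extrinsic=(np.zeros(3), np.array([0., 0., 0., 1.])),
                       features=[], P=np.zeros(3), dN_S=np.zeros(0))
```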
In the above, the reference information included in the first state variable is explained. In an exemplary embodiment, obtaining the first state variable of the object to be located includes the following steps.
2011, a third state variable of the subject is obtained, the third state variable comprising reference information of the subject at the at least one first time instant.
The reference information of the object at the first time is referred to the above description, and is not described herein again. Illustratively, the manner of acquiring the third state variable of the object includes the following two manners.
The acquisition method is as follows: the third state variable is a state variable obtained through an initialization process. In the initialization process, IMU measurement information, visual measurement information and GNSS measurement information are obtained, and therefore a third state variable is obtained through solving. The method for obtaining the third state variable by solution is, for example, SVD (Singular Value Decomposition), and the embodiment does not limit the solution method.
Illustratively, the IMU measurement information includes: a measurement of position pre-integration, a measurement of velocity pre-integration, a measurement of rotation angle pre-integration, a measurement of acceleration bias, and a measurement of gyroscope bias. The visual measurement information includes a measurement of the pixel position of a visual feature point in the image. The GNSS measurement information includes at least one of a measurement of a carrier phase (for example, the L1 carrier phase or the L2 carrier phase) and a measurement of a pseudo range; the carrier used is not limited in this embodiment.
And the second acquisition mode is as follows: the third state variable is an optimized state variable. For example, after the state variable is obtained through the initialization process, and the state variable is optimized one or more times in the manner provided by the embodiment of the present application, the state variable that has been optimized one or more times may be used as the third state variable. In this embodiment, the optimization is continued for the third state variable that has been optimized one or more times. It can be seen that the reference information can be updated for multiple times along with the change of time, so that the positioning accuracy is ensured.
2012, in response to detecting that the condition is satisfied, inserting reference information of the object at a second time into the third state variable to obtain an updated third state variable, where the second time is later than any one of the at least one first time.
The reference information of the object at the second time is referred to the above description, and is not described here again. The updated third state variable includes not only the reference information of the object at the at least one first time, but also the reference information of the object at the second time. It can be appreciated that, since the second time is later than any one of the at least one first time, the reference information of the object at the second time is newer reference information than the reference information of the object at the first time. In other words, in the process of locating the object, the present embodiment will continuously insert new reference information into the third state variable, so as to continuously optimize the new reference information.
In an exemplary embodiment, the method further comprises: referring to fig. 3, determining that the condition is detected to be satisfied in response to the occurrence of at least one of the following events: obtaining IMU measurement information, obtaining visual measurement information, obtaining GNSS measurement information, and the elapse of a reference duration; reference information of the object at the second time is then inserted into the third state variable. Illustratively, the second time is the time when the at least one event occurs. The IMU measurement information and the visual measurement information are described above and are not repeated here. Illustratively, since the visual measurement information is a measurement of the pixel position of a visual feature point in an image, obtaining the visual measurement information includes capturing the image with a camera. The reference duration refers to a fixed time length between the current time and the time when reference information of the object was last inserted into the third state variable; in other words, in this embodiment, reference information of the object at a second time is inserted into the third state variable once every reference duration.
In some embodiments, considering that the frequency of obtaining IMU measurement information is high, if reference information of the object at the second time is inserted into the third state variable every time IMU measurement information is obtained, more processing resources are required. For example, referring to fig. 4, the embodiment acquires IMU measurement information at 6 times, acquires visual measurement information only at 5 times, and acquires GNSS information only at 1 time. Therefore, the present embodiment only inserts the reference information of the object at the second time into the third state variable when at least one event of obtaining the visual measurement information, obtaining the GNSS measurement information, and passing the reference time period occurs.
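For illustration only, the insertion condition described above can be reduced to a single check. The function and parameter names below are invented; the point is that IMU-only arrivals do not trigger an insertion, while a new visual measurement, a new GNSS measurement, or the elapse of the reference duration does.

```python
def should_insert(new_visual: bool, new_gnss: bool, now: float,
                  last_insert_time: float, ref_duration: float) -> bool:
    """Hypothetical sketch of the condition for inserting reference information
    at a new (second) time into the third state variable."""
    return new_visual or new_gnss or (now - last_insert_time) >= ref_duration

# Example: only 0.2 s elapsed and no visual/GNSS data arrived, so nothing is inserted.
assert should_insert(False, False, now=10.2, last_insert_time=10.0, ref_duration=0.5) is False
```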
2013, obtaining the first state variable based on the updated third state variable.
In some embodiments, deriving the first state variable based on the updated third state variable comprises: and taking the updated third state variable as the first state variable. Or, in an exemplary embodiment, obtaining the first state variable based on the updated third state variable includes: at least one target time instant is determined from the at least one first time instant and the second time instant in response to a sum of a number of time instants of the at least one first time instant and the second time instant being greater than a number threshold. And deleting the reference information of the object at least one target moment from the updated third state variable to obtain the first state variable.
It can be understood that the more reference information of the object at various moments the first state variable contains, the greater the computational load of the subsequent calculation. Therefore, referring to fig. 3, in a case where the sum of the numbers of the at least one first time and the second time is greater than the number threshold, the present embodiment deletes the reference information of the object at the at least one target time from the updated third state variable, so as to reduce the amount of calculation in the subsequent process and avoid affecting the real-time performance of the positioning. In this embodiment, the number threshold is not limited, and the number threshold may be set according to at least one of actual needs or the computing capability of the electronic device.
In this embodiment, the reference information of the object at the at least one target time is deleted as the reference information of the object at the second time is inserted. The process of inserting and deleting the reference information corresponds to a sliding window sliding between different time instants: the sliding window corresponds to the first state variable, and the number of time instants included in the first state variable is the sliding window size.
In some embodiments, determining at least one target time from among the at least one first time and the second time comprises: the earliest time is determined as the target time in the at least one first time and the second time. That is, every time reference information of an object at the second time is inserted, the reference information of an object at the earliest target time is deleted accordingly, so that the size of the sliding window remains unchanged during the insertion and deletion of the reference information. Of course, this embodiment is merely an example, and the target time and the number of target times are not limited in this embodiment.
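A minimal sketch of this sliding-window behaviour, assuming the window is represented as a plain list of time instants (the names are illustrative, not the patent's):

```python
def insert_and_prune(window_times: list, new_time, window_size: int) -> list:
    """Append the newest (second) time; if the window then exceeds the size
    threshold, drop the earliest time(s). The dropped times are the target
    times whose reference information is folded into the marginalization error."""
    window_times.append(new_time)
    removed = []
    while len(window_times) > window_size:
        removed.append(window_times.pop(0))   # earliest time becomes a target time
    return removed

# Usage: a window limited to 10 time instants keeps its size constant.
dropped = insert_and_prune(list(range(10)), 10, window_size=10)   # dropped == [0]
```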
Illustratively, the embodiment reserves the reference information of the deleted object at least one target moment through the marginalization error. In the subsequent calculation process, the marginalization error is also calculated. In an exemplary embodiment, the method further comprises: the marginalization error is determined based on the first state variable and reference information of the object at the at least one target time instant.
In some embodiments, the marginalization error is determined by a Schur complement calculation based on the first state variable and the reference information of the object at the at least one target time. This way of determining the marginalization error is merely an example and is not intended to limit the present embodiment.
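For illustration, a generic Schur-complement step over a linearized problem looks like the sketch below. This is a textbook formulation rather than the patent's exact computation, and the small damping term is an added numerical-stability assumption.

```python
import numpy as np

def marginalize(H: np.ndarray, b: np.ndarray, m: int):
    """Eliminate the first m variables (the states at the target times being
    removed) from the information matrix H and vector b, returning the prior
    (H_prior, b_prior) on the remaining states. The marginalization error is
    then evaluated against this prior."""
    Hmm_, Hmr = H[:m, :m], H[:m, m:]
    Hrm, Hrr = H[m:, :m], H[m:, m:]
    bm, br = b[:m], b[m:]
    Hmm_inv = np.linalg.inv(Hmm_ + 1e-9 * np.eye(m))   # damping for numerical stability
    H_prior = Hrr - Hrm @ Hmm_inv @ Hmr
    b_prior = br - Hrm @ Hmm_inv @ bm
    return H_prior, b_prior
```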
An IMU error, a vision error and a GNSS error are determined based on the reference information comprised by the first state variables, and a target error is determined based on the IMU error, the vision error and the GNSS error 202.
In some embodiments, determining the target error based on the IMU error, the vision error, and the GNSS error comprises: the sum of the IMU error, the vision error and the GNSS error is determined as a target error. Alternatively, for the case where marginalized errors are determined above, determining a target error based on the IMU error, the visual error, and the GNSS error comprises: determining a sum of the marginalization error, the IMU error, the vision error, and the GNSS error as a target error. Of course, the above manner of determining the target error is merely an example, and is not used to limit the present embodiment. For example, the present embodiment may also determine the target error by a weighted summation based on the summation, and the weights of different errors may be the same or different.
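A trivial sketch of this combination step, covering both the plain sum and the weighted-sum variant mentioned above (the function name and default weights are illustrative):

```python
def target_error(e_marg: float, e_imu: float, e_vis: float, e_gnss: float,
                 weights=(1.0, 1.0, 1.0, 1.0)) -> float:
    """With unit weights this is the plain sum of the error terms; unequal
    weights give the weighted-sum variant."""
    w0, w1, w2, w3 = weights
    return w0 * e_marg + w1 * e_imu + w2 * e_vis + w3 * e_gnss
```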
Next, a manner of determining the IMU error, the visual error, and the GNSS error will be described. In an exemplary embodiment, the determination of IMU error, vision error and GNSS error based on the reference information comprised by the first state variables comprises the following steps.
2021, obtaining IMU measurement information and IMU estimation information, and determining IMU errors based on the IMU measurement information, IMU estimation information, and IMU information.
As can be seen from the above 2011, the IMU measurement information includes: a measurement of position pre-integration, a measurement of velocity pre-integration, a measurement of rotation angle pre-integration, a measurement of acceleration bias, and a measurement of gyroscope bias. Accordingly, the IMU estimation information includes: an estimate of position pre-integration, an estimate of velocity pre-integration, and an estimate of rotation angle pre-integration. It should be noted that the pre-integration is calculated between two adjacent time instants. For example, the measurement of position pre-integration refers to: a value obtained by pre-integrating the measured values of the positions at two adjacent moments. For another example, the estimate of the position pre-integration is: a value obtained by pre-integrating the estimated values of the positions at two adjacent times. For other pre-integrations, detailed descriptions thereof are omitted. In this embodiment, the IMU sub-errors at two adjacent times in the first state variable are determined based on the IMU measurement information, the IMU estimation information, and the IMU information in the first state variable, where any two adjacent times correspond to one IMU sub-error, and then the sum of the IMU sub-errors is determined as the IMU error.
Based on the above description, the IMU error is expressed as equation (2) below:
$$\sum_{k \in B} \left\| Z_1\!\left(\hat{z}_{b_{k+1}}^{b_k},\; \chi\right) \right\|^2 \tag{2}$$

wherein $b_k$ is used to distinguish different moments, and $b_k$ is determined based on the acquisition frequency of the IMU. B is the set of IMU measurement information and IMU estimation information, $Z_1$ is used to indicate the IMU sub-error corresponding to two adjacent moments $b_k$ and $b_{k+1}$, and $\|\cdot\|$ is used to represent the norm. $Z_1$ is expressed as the following formula (3):

$$Z_1 = \begin{bmatrix} \delta\alpha_{b_{k+1}}^{b_k} \\[2pt] \delta\beta_{b_{k+1}}^{b_k} \\[2pt] \delta\theta_{b_{k+1}}^{b_k} \\[2pt] \delta b_a \\[2pt] \delta b_g \end{bmatrix} = \begin{bmatrix} R_w^{b_k}\!\left(p_{b_{k+1}}^w - p_{b_k}^w - v_{b_k}^w\,\delta t_k + \tfrac{1}{2} g^w\,\delta t_k^2\right) - \hat{\alpha}_{b_{k+1}}^{b_k} \\[4pt] R_w^{b_k}\!\left(v_{b_{k+1}}^w - v_{b_k}^w + g^w\,\delta t_k\right) - \hat{\beta}_{b_{k+1}}^{b_k} \\[4pt] 2\left[\left(\hat{\gamma}_{b_{k+1}}^{b_k}\right)^{-1} \otimes \left(q_{b_k}^w\right)^{-1} \otimes q_{b_{k+1}}^w\right]_{xyz} \\[4pt] b_{a,b_{k+1}} - b_{a,b_k} \\[2pt] b_{g,b_{k+1}} - b_{g,b_k} \end{bmatrix} \tag{3}$$

In formula (3), $\delta\alpha_{b_{k+1}}^{b_k}$ is the difference between the measurement $\hat{\alpha}_{b_{k+1}}^{b_k}$ of the position pre-integration and the estimate of the position pre-integration, that is, the error value of the position pre-integration. In the estimate of the position pre-integration, $R_w^{b_k}$ denotes the rotation matrix of the world coordinate system relative to the IMU coordinate system at time $b_k$, $p_{b_{k+1}}^w$ is the position of the object at time $b_{k+1}$ in the world coordinate system, $p_{b_k}^w$ is the position of the object at time $b_k$ in the world coordinate system, $v_{b_k}^w$ is the velocity of the object at time $b_k$ in the world coordinate system, $\delta t_k$ is the difference between time $b_k$ and time $b_{k+1}$, and $g^w$ is the gravitational acceleration in the world coordinate system.

$\delta\beta_{b_{k+1}}^{b_k}$ is the difference between the measurement $\hat{\beta}_{b_{k+1}}^{b_k}$ of the velocity pre-integration and the estimate of the velocity pre-integration, that is, the error value of the velocity pre-integration. $v_{b_{k+1}}^w$ is the velocity of the object at time $b_{k+1}$ in the world coordinate system.

$\delta\theta_{b_{k+1}}^{b_k}$ is the error value of the rotation-angle pre-integration, calculated from the measurement $\hat{\gamma}_{b_{k+1}}^{b_k}$ of the rotation-angle pre-integration and the estimate of the rotation-angle pre-integration. In the estimate of the rotation-angle pre-integration, $q_{b_k}^w$ is the rotation angle of the object at time $b_k$ in the world coordinate system, $q_{b_{k+1}}^w$ is the rotation angle of the object at time $b_{k+1}$ in the world coordinate system, the subscripts x, y and z represent the three components of the rotation angle, and $\otimes$ is the tensor product, which is an operation sign.

$\delta b_a$ is the error value of the acceleration bias, calculated from the acceleration bias measurement $b_{a,b_k}$ at time $b_k$ and the acceleration bias measurement $b_{a,b_{k+1}}$ at time $b_{k+1}$. $\delta b_g$ is the error value of the gyroscope bias, calculated from the gyroscope bias measurement $b_{g,b_k}$ at time $b_k$ and the gyroscope bias measurement $b_{g,b_{k+1}}$ at time $b_{k+1}$.

In the above formula (3), the variables to be solved include $p_{b_k}^w$, $v_{b_k}^w$, $q_{b_k}^w$, $p_{b_{k+1}}^w$, $v_{b_{k+1}}^w$ and $q_{b_{k+1}}^w$; these variables to be solved are the IMU information in the reference information included in the first state variable.
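To make formula (3) concrete, the sketch below evaluates a simplified IMU sub-error in Python. It assumes scipy's (x, y, z, w) quaternion convention, ignores bias correction of the pre-integration terms, and uses invented argument names, so it should be read as an approximation of the residual rather than the patent's exact implementation.

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def imu_residual(state_k, state_k1, alpha_hat, beta_hat, gamma_hat, dt,
                 g_w=np.array([0.0, 0.0, 9.81])):
    """Simplified IMU sub-error of equation (3): differences between the
    estimated and measured pre-integration of position (alpha), velocity (beta)
    and rotation (gamma), plus the bias differences. state_k / state_k1 hold
    p_w, v_w, q_w, b_a, b_g for times b_k and b_{k+1}."""
    R_w_bk = R.from_quat(state_k.q_w).inv()          # world -> IMU frame at b_k
    d_alpha = R_w_bk.apply(state_k1.p_w - state_k.p_w
                           - state_k.v_w * dt + 0.5 * g_w * dt ** 2) - alpha_hat
    d_beta = R_w_bk.apply(state_k1.v_w - state_k.v_w + g_w * dt) - beta_hat
    # rotation error: relative rotation predicted by the states vs. the measured gamma_hat
    q_rel = R.from_quat(state_k.q_w).inv() * R.from_quat(state_k1.q_w)
    d_theta = (R.from_quat(gamma_hat).inv() * q_rel).as_rotvec()
    d_ba = state_k1.b_a - state_k.b_a
    d_bg = state_k1.b_g - state_k.b_g
    return np.concatenate([d_alpha, d_beta, d_theta, d_ba, d_bg])
```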
2022, obtaining vision measurement information, obtaining vision estimation information based on the vision information, and determining vision errors based on the vision measurement information and the vision estimation information.
As can be seen from the explanation in 2011, the visual measurement information includes a measurement of the pixel position of a visual feature point in the image, and the visual estimation information of the object at a time includes an estimate of the pixel position of the visual feature point in the image, where the visual estimation information is determined based on the visual information. For each image captured at a time included in the first state variable, this embodiment calculates, for every visual feature point in the image, the difference between the estimated and measured pixel positions to obtain the error value corresponding to that feature point, and determines the sum of these error values over all feature points in all images as the visual error.
Based on the above description, the visual error is expressed as the following equation (4):
$$\sum_{(l,j) \in C} \rho\!\left(\left\| Z_2\!\left(\hat{z}_l^{c_j},\; \chi\right) \right\|^2\right) \tag{4}$$

wherein C is the set of visual measurement information and visual estimation information, j is used to distinguish the images captured at the various moments in the first state variable, and l is used to distinguish the feature points in the images. $\rho(\cdot)$ is the loss function, and $Z_2$ is the error value corresponding to feature point l. $Z_2$ is expressed as the following formula (5):

$$Z_2 = \hat{u}_l^{c_j} - u_l^{c_j} \tag{5}$$

In formula (5), $\hat{u}_l^{c_j}$ is the measurement of the pixel position of the l-th feature point in the image j captured at time $c_j$, and $u_l^{c_j}$ is the estimate of the pixel position of the l-th feature point in the image j captured at time $c_j$. The estimate $u_l^{c_j}$ is expressed as the following formula (6):

$$u_l^{c_j} = \pi_c\!\left( R_b^c \left( R_w^{b_j} \left( R_{b_{j+1}}^w \left( R_c^b\, P_l^{c_{j+1}} + p_c^b \right) + p_{b_{j+1}}^w - p_{b_j}^w \right) - p_c^b \right) \right) \tag{6}$$

In formula (6), $\pi_c$ denotes the camera projection, which maps a point in the camera coordinate system to a pixel position; $R_b^c$ denotes the rotation matrix of the IMU coordinate system relative to the camera coordinate system; $R_w^{b_j}$ denotes the rotation matrix of the world coordinate system relative to the IMU coordinate system at the moment image j is captured; $R_{b_{j+1}}^w$ denotes the rotation matrix of the IMU coordinate system relative to the world coordinate system at the moment image j+1 is captured; $R_c^b$ denotes the rotation matrix of the camera coordinate system relative to the IMU coordinate system; $P_l^{c_{j+1}}$ is the actual position in three-dimensional space, at the moment image j+1 is captured, of the visual feature point in image j+1; $p_c^b$ is the position of the camera coordinate system relative to the IMU coordinate system; $p_{b_{j+1}}^w$ is the position of the object in the world coordinate system measured by the IMU at the moment image j+1 is captured; and $p_{b_j}^w$ is the position of the object in the world coordinate system measured by the IMU at the moment image j is captured.

In the above formula (6), the variable to be solved includes the feature point position $P_l^{c_{j+1}}$; this variable to be solved is the visual information in the reference information included in the first state variable.
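A simplified sketch of formulas (5) and (6), assuming a pinhole camera with intrinsic matrix K standing in for the projection $\pi_c$ and scipy's (x, y, z, w) quaternion convention; the argument names are illustrative, not the patent's notation.

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def reprojection_residual(uv_meas, P_l_cj1, q_wb_j, p_wb_j, q_wb_j1, p_wb_j1,
                          q_bc, p_bc, K):
    """Transform the feature's 3-D position expressed in camera frame c_{j+1}
    into camera frame c_j via the IMU poses of the two images and the camera
    extrinsics, project it with a pinhole model, and subtract the measured
    pixel position (formula (5))."""
    R_bc = R.from_quat(q_bc)        # camera -> IMU rotation
    R_wb_j = R.from_quat(q_wb_j)    # IMU -> world rotation at the time of image j
    R_wb_j1 = R.from_quat(q_wb_j1)  # IMU -> world rotation at the time of image j+1
    P_b_j1 = R_bc.apply(P_l_cj1) + p_bc                 # camera c_{j+1} -> body b_{j+1}
    P_w = R_wb_j1.apply(P_b_j1) + p_wb_j1               # body b_{j+1}  -> world
    P_b_j = R_wb_j.inv().apply(P_w - p_wb_j)            # world         -> body b_j
    P_c_j = R_bc.inv().apply(P_b_j - p_bc)              # body b_j      -> camera c_j
    uv_est = (K @ (P_c_j / P_c_j[2]))[:2]               # pinhole projection pi_c
    return uv_meas - uv_est
```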
2023, obtaining a difference between the GNSS measurement information and the GNSS estimation information, and determining a GNSS error based on the difference and the GNSS information.
As can be seen from the description in 2011 above, the GNSS measurement information includes, but is not limited to, at least one of a measurement of the carrier phase and a measurement of the pseudo range; accordingly, the GNSS estimation information includes, but is not limited to, at least one of an estimate of the carrier phase and an estimate of the pseudo range. In this embodiment, the difference $l(X)_{S,\phi}$ between the measured value of the carrier phase and the estimated value of the carrier phase is calculated, and the difference $l(X)_{S,P}$ between the measured value of the pseudo range and the estimated value of the pseudo range is calculated. The differences between the GNSS measurement information and the GNSS estimation information are $l(X)_{S,\phi}$ and $l(X)_{S,P}$. The GNSS information is then combined with these differences to determine the GNSS error.
Based on the above description, the GNSS error is calculated according to the following equation (7):
$$\sum_{s \in G} \left\| Z_3\!\left(\hat{z}^{s},\; \chi\right) \right\|^2 \tag{7}$$

In equation (7), G represents the GNSS, i.e., the S system, indicating that the GNSS error is calculated over the satellites included in the S system. $Z_3$ is expressed as the following formula (8):

$$Z_3 = \begin{bmatrix} l(X)_{S,\phi} \\[2pt] l(X)_{S,P} \end{bmatrix} - H \begin{bmatrix} P \\[2pt] \delta N_S \end{bmatrix}, \qquad H = \begin{bmatrix} D_S E_S & \lambda_S D_S \\[2pt] D_S E_S & 0 \end{bmatrix} \tag{8}$$

In formula (8), H is the design matrix, $D_S$ is the transformation matrix that transforms the measurements of the S system from inter-station single differences to double differences, $E_S$ is the direction cosine matrix of the S-system observation equation, and $\lambda_S$ is the wavelength of the S-system frequency point. $P^T$ is the position in the GNSS information, and $\delta N_S^T$ is the ambiguity vector in the GNSS information. $l(X)_{S,\phi}$ and $l(X)_{S,P}$ are as described above. The variables to be solved are $P^T$ and $\delta N_S^T$; these variables to be solved are the GNSS information in the reference information included in the first state variable.

For example, when the S system includes GPS and BDS (i.e., a short-baseline GPS/BDS dual system), the GNSS error is calculated according to the following equation (9):

$$Z_3 = \begin{bmatrix} l(X)_{G,\phi} \\[2pt] l(X)_{G,P} \\[2pt] l(X)_{B,\phi} \\[2pt] l(X)_{B,P} \end{bmatrix} - \begin{bmatrix} D_G E_G & \lambda_G D_G & 0 \\[2pt] D_G E_G & 0 & 0 \\[2pt] D_B E_B & 0 & \lambda_B D_B \\[2pt] D_B E_B & 0 & 0 \end{bmatrix} \begin{bmatrix} P \\[2pt] \delta N_G \\[2pt] \delta N_B \end{bmatrix} \tag{9}$$

In formula (9), $D_G$ is the transformation matrix from inter-station single differences to double differences for GPS, $D_B$ is the transformation matrix from inter-station single differences to double differences for BDS, $E_G$ is the direction cosine matrix of the GPS observation equation, $E_B$ is the direction cosine matrix of the BDS observation equation, $\lambda_G$ is the wavelength of the GPS frequency point, $\lambda_B$ is the wavelength of the BDS frequency point, $P^T$ is the position to be solved, $\delta N_G^T$ is the GPS ambiguity vector to be solved, $\delta N_B^T$ is the BDS ambiguity vector to be solved, $l(X)_{G,\phi}$ is the difference between the observed and estimated values of the GPS carrier phase, $l(X)_{G,P}$ is the difference between the observed and estimated values of the GPS pseudo range, $l(X)_{B,\phi}$ is the difference between the observed and estimated values of the BDS carrier phase, and $l(X)_{B,P}$ is the difference between the observed and estimated values of the BDS pseudo range.
It should be noted that, according to the description in 201, the S system includes at least one of GPS, GLONASS, BDS, and Galileo. The above example corresponds to the case where the S system includes GPS and BDS; the other cases are not described here one by one.
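As an illustration, a generic double-difference residual of the kind sketched by formula (8) can be written as follows; the design-matrix layout is a common RTK-style assumption, not necessarily the patent's exact matrix H.

```python
import numpy as np

def gnss_residual(l_phi, l_P, D, E, lam, pos, dN):
    """Compare the carrier-phase and pseudo-range differences l_phi, l_P with
    the values predicted from the position `pos` and the single-difference
    ambiguity vector `dN`, using the single-to-double-difference mapping D,
    the direction-cosine matrix E and the carrier wavelength lam."""
    r_phi = l_phi - (D @ E @ pos + lam * (D @ dN))   # carrier-phase part
    r_P = l_P - (D @ E @ pos)                        # pseudo-range part
    return np.concatenate([r_phi, r_P])
```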
Based on the description of IMU errors in 2021, the description of visual errors in 2022, and the description of GNSS errors in 2023, the target error is expressed as equation (10) below:
$$\sum_{k \in B} \left\| Z_1\!\left(\hat{z}_{b_{k+1}}^{b_k},\; \chi\right) \right\|^2 + \sum_{(l,j) \in C} \rho\!\left(\left\| Z_2\!\left(\hat{z}_l^{c_j},\; \chi\right) \right\|^2\right) + \sum_{s \in G} \left\| Z_3\!\left(\hat{z}^{s},\; \chi\right) \right\|^2 \tag{10}$$

Illustratively, for the case described in 201 where the marginalization error is calculated, the marginalization error is denoted $Z_4$, and the target error is then expressed as the following equation (11):

$$Z_4 + \sum_{k \in B} \left\| Z_1\!\left(\hat{z}_{b_{k+1}}^{b_k},\; \chi\right) \right\|^2 + \sum_{(l,j) \in C} \rho\!\left(\left\| Z_2\!\left(\hat{z}_l^{c_j},\; \chi\right) \right\|^2\right) + \sum_{s \in G} \left\| Z_3\!\left(\hat{z}^{s},\; \chi\right) \right\|^2 \tag{11}$$
in addition, it can be understood that the execution sequence of 2021-2023 is not limited in this embodiment, and 2021-2023 can be executed simultaneously or sequentially according to actual requirements.
And 203, updating the reference information included in the first state variable through a process of minimizing the target error to obtain a second state variable, wherein the second state variable is used for positioning the object.
In this embodiment, iteration may be performed based on the first state variable so that the target error is minimized; the first state variable is thereby updated to the second state variable. This process is an optimization of the reference information included in the first state variable. The reference information comprised by the second state variable is closer to the true information than the reference information comprised by the first state variable.
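A minimal sketch of this minimization step, assuming the first state variable has been flattened into a vector and all sub-errors are stacked into one residual function; scipy's least_squares stands in here for whatever solver is actually used.

```python
import numpy as np
from scipy.optimize import least_squares

def optimize_window(x0: np.ndarray, residual_fn):
    """Minimize the stacked target error (marginalization, IMU, visual and GNSS
    sub-errors) over the flattened first state variable x0; the minimizer is
    the second state variable used for positioning."""
    result = least_squares(residual_fn, x0)   # trust-region nonlinear least squares
    return result.x

# Toy usage with a quadratic residual, just to show the call pattern:
x_opt = optimize_window(np.array([1.0, 2.0]), lambda x: x - np.array([0.5, -0.3]))
```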
As illustrated in 202, the optimized IMU information includes $p_{b_k}^w$, $v_{b_k}^w$ and $q_{b_k}^w$ at each moment, the optimized visual information includes the feature point positions $P_l^{c_{j+1}}$, and the optimized GNSS information includes $P^T$ and $\delta N_S^T$. Based on one or more of these pieces of information, the present embodiment enables the positioning of the object.
The method provided by the embodiment of the application is suitable for various scenes needing positioning. For example, it is suitable for positioning an unmanned vehicle in a scenario in which the unmanned vehicle is automatically driven, and also suitable for positioning a vehicle in a scenario in which a user manually drives the vehicle. Taking a scene of automatic driving of an unmanned vehicle as an example, the unmanned vehicle is provided with an IMU, a camera and an electronic device. IMU measurement information can be obtained through the IMU, and the IMU transmits the IMU measurement information to the electronic equipment. The vision measurement information can be obtained by a camera, which transmits the vision measurement information to the electronic device. In addition, the electronic device can calculate IMU estimation information and visual estimation information, and can receive GNSS measurement information and GNSS estimation information. Based on the above information, the electronic device may execute the method in the above description to obtain optimized IMU information, visual information, and GNSS information, thereby implementing positioning of the unmanned vehicle. It will be appreciated that in the scenario where the unmanned vehicle is automatically driven, the method described above may be performed one or more times to locate the unmanned vehicle one or more times.
To sum up, the target error is calculated based on the IMU information, the visual information, and the GNSS information, and the second state variable for positioning is obtained by minimizing the target error, so that the positioning method is equivalent to tightly coupling three different positioning modes, i.e., IMU positioning, visual positioning, and GNSS positioning, and the positioning accuracy is high. The method is not only suitable for complex shielded environments, but also low in cost and high in applicability.
An embodiment of the present application provides a positioning apparatus, referring to fig. 5, the apparatus includes:
an obtaining module 501, configured to obtain a first state variable of an object to be located, where the first state variable includes reference information of the object at least one time, and the reference information includes inertial measurement unit IMU information, visual information, and global navigation satellite system GNSS information;
a determining module 502 for determining an IMU error, a visual error and a GNSS error based on the reference information comprised by the first state variables, and determining a target error based on the IMU error, the visual error and the GNSS error;
an updating module 503, configured to update the reference information included in the first state variable through a process of minimizing the target error, so as to obtain a second state variable, where the second state variable is used to locate the object.
In an exemplary embodiment, the obtaining module 501 is configured to obtain a third state variable of the object, where the third state variable includes reference information of the object at least one first time; in response to detecting that the condition is met, inserting reference information of the object at a second moment into the third state variable to obtain an updated third state variable, wherein the second moment is later than any one of the at least one first moment; and obtaining the first state variable based on the updated third state variable.
In an exemplary embodiment, the obtaining module 501 is configured to determine at least one target time from at least one first time and a second time in response to a sum of a number of times of the at least one first time and the second time being greater than a number threshold; and deleting the reference information of the object at least one target moment from the updated third state variable to obtain the first state variable.
In an exemplary embodiment, the determining module 502 is further configured to determine a marginalization error based on the first state variable and reference information of the object at the at least one target time instant;
a determining module 502 for determining a sum of the marginalization error, the IMU error, the vision error and the GNSS error as a target error.
In an exemplary embodiment, the obtaining module 501 is configured to determine an earliest time as the target time among the at least one first time and the second time.
In an exemplary embodiment, the obtaining module 501 is further configured to determine that the condition is detected to be satisfied in response to at least one event occurrence of obtaining IMU measurement information, obtaining visual measurement information, obtaining GNSS measurement information, and a reference duration.
In an exemplary embodiment, the determining module 502 is configured to obtain IMU measurement information and IMU estimation information, determine IMU errors based on the IMU measurement information, IMU estimation information, and IMU information; obtaining vision measurement information, obtaining vision estimation information based on the vision information, and determining a vision error based on the vision measurement information and the vision estimation information; a difference between the GNSS measurement information and the GNSS estimate information is obtained, and a GNSS error is determined based on the difference and the GNSS information.
To sum up, the target error is calculated based on the IMU information, the visual information, and the GNSS information, and the second state variable for positioning is obtained by minimizing the target error, so that the positioning method is equivalent to tightly coupling three different positioning modes, i.e., IMU positioning, visual positioning, and GNSS positioning, and the positioning accuracy is high. The method is not only suitable for complex shielded environments, but also low in cost and high in applicability.
It should be noted that, when the apparatus provided in the foregoing embodiment implements the functions thereof, only the division of the functional modules is illustrated, and in practical applications, the functions may be distributed by different functional modules according to needs, that is, the internal structure of the apparatus may be divided into different functional modules to implement all or part of the functions described above. In addition, the apparatus and method embodiments provided by the above embodiments belong to the same concept, and specific implementation processes thereof are described in the method embodiments for details, which are not described herein again.
Referring to fig. 6, a schematic structural diagram of a terminal 600 provided in an embodiment of the present application is shown. The terminal 600 may be a portable mobile terminal such as: a smart phone, a tablet computer, an MP3 player (Moving Picture Experts Group Audio Layer III), a notebook computer or a desktop computer. The terminal 600 may also be referred to by other names such as user equipment, portable terminal, laptop terminal, desktop terminal, etc.
In general, the terminal 600 includes: a processor 601 and a memory 602.
The processor 601 may include one or more processing cores, such as a 4-core processor, a 6-core processor, and so on. The processor 601 may be implemented in at least one hardware form selected from the group consisting of a DSP (Digital Signal Processing), a Field-Programmable Gate Array (FPGA), and a Programmable Logic Array (PLA). The processor 601 may also include a main processor and a coprocessor, where the main processor is a processor for Processing data in an awake state, and is also called a Central Processing Unit (CPU); a coprocessor is a low power processor for processing data in a standby state. In some embodiments, the processor 601 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content required to be displayed on the display 605. In some embodiments, processor 601 may also include an AI (Artificial Intelligence) processor for processing computational operations related to machine learning.
The memory 602 may include one or more computer-readable storage media, which may be non-transitory. The memory 602 may also include high-speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in the memory 602 is used to store at least one instruction for execution by the processor 601 to implement the positioning method provided by the method embodiments of the present application.
In some embodiments, the terminal 600 may further optionally include: a peripheral interface 603 and at least one peripheral. The processor 601, memory 602, and peripheral interface 603 may be connected by buses or signal lines. Various peripheral devices may be connected to the peripheral interface 603 via a bus, signal line, or circuit board. Specifically, the peripheral device includes: at least one of the group consisting of a radio frequency circuit 604, a display 605, a camera 606, an audio circuit 607, a positioning component 608, and a power supply 609.
The peripheral interface 603 may be used to connect at least one peripheral related to I/O (Input/Output) to the processor 601 and the memory 602. In some embodiments, the processor 601, memory 602, and peripheral interface 603 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 601, the memory 602, and the peripheral interface 603 may be implemented on a separate chip or circuit board, which is not limited in this embodiment.
The radio frequency circuit 604 is used for receiving and transmitting RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuit 604 communicates with communication networks and other communication devices via electromagnetic signals. The radio frequency circuit 604 converts an electrical signal into an electromagnetic signal for transmission, or converts a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 604 includes an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth. The radio frequency circuit 604 may communicate with other terminals via at least one wireless communication protocol. The wireless communication protocols include, but are not limited to: metropolitan area networks, various generations of mobile communication networks (2G, 3G, 4G, and 5G), wireless local area networks, and/or Wi-Fi (Wireless Fidelity) networks. In some embodiments, the radio frequency circuit 604 may further include NFC (Near Field Communication) related circuits, which is not limited in this application.
The display 605 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display 605 is a touch display, the display 605 also has the ability to capture touch signals on or over its surface. The touch signal may be input to the processor 601 as a control signal for processing. In this case, the display 605 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, there may be one display 605, disposed on the front panel of the terminal 600; in other embodiments, there may be at least two displays 605, respectively disposed on different surfaces of the terminal 600 or in a folded design; in still other embodiments, the display 605 may be a flexible display disposed on a curved surface or a folded surface of the terminal 600. The display 605 may even be arranged in a non-rectangular irregular pattern, i.e., an irregularly shaped screen. The display 605 may be made of materials such as an LCD (Liquid Crystal Display) or an OLED (Organic Light-Emitting Diode).
The camera assembly 606 is used to capture images or video. Optionally, the camera assembly 606 includes a front camera and a rear camera. Generally, the front camera is disposed on the front panel of the terminal, and the rear camera is disposed on the rear surface of the terminal. In some embodiments, there are at least two rear cameras, each being any one of a main camera, a depth-of-field camera, a wide-angle camera, and a telephoto camera, so that the main camera and the depth-of-field camera can be fused to realize a background blurring function, and the main camera and the wide-angle camera can be fused to realize panoramic shooting, VR (Virtual Reality) shooting, or other fusion shooting functions. In some embodiments, the camera assembly 606 may also include a flash. The flash may be a single-color-temperature flash or a dual-color-temperature flash. A dual-color-temperature flash is a combination of a warm-light flash and a cold-light flash, and can be used for light compensation at different color temperatures.
The audio circuit 607 may include a microphone and a speaker. The microphone is used for collecting sound waves of a user and the environment, converting the sound waves into electrical signals, and inputting the electrical signals to the processor 601 for processing or to the radio frequency circuit 604 to realize voice communication. For stereo sound collection or noise reduction, a plurality of microphones may be disposed at different portions of the terminal 600. The microphone may also be an array microphone or an omnidirectional pickup microphone. The speaker is used to convert electrical signals from the processor 601 or the radio frequency circuit 604 into sound waves. The speaker may be a traditional diaphragm speaker or a piezoelectric ceramic speaker. When the speaker is a piezoelectric ceramic speaker, it can be used to convert an electrical signal into sound waves audible to humans, or to convert an electrical signal into sound waves inaudible to humans for distance measurement. In some embodiments, the audio circuit 607 may also include a headphone jack.
The positioning component 608 is used for positioning the current geographic location of the terminal 600 to implement navigation or LBS (Location Based Service). The positioning component 608 may be a positioning component based on the United States GPS (Global Positioning System), the Chinese BeiDou system, the Russian GLONASS system, or the European Union's Galileo system.
The power supply 609 is used to supply power to the various components in the terminal 600. The power supply 609 may be an alternating current power supply, a direct current power supply, a disposable battery, or a rechargeable battery. When the power supply 609 includes a rechargeable battery, the rechargeable battery may support wired or wireless charging. The rechargeable battery may also be used to support fast-charging technology.
In some embodiments, the terminal 600 also includes one or more sensors 610. The one or more sensors 610 include, but are not limited to: acceleration sensor 611, gyro sensor 612, pressure sensor 613, fingerprint sensor 614, optical sensor 615, and proximity sensor 616.
The acceleration sensor 611 may detect the magnitude of acceleration on the three coordinate axes of a coordinate system established with the terminal 600. For example, the acceleration sensor 611 may be used to detect the components of gravitational acceleration on the three coordinate axes. The processor 601 may control the display 605 to display the user interface in a landscape view or a portrait view according to the gravitational acceleration signal collected by the acceleration sensor 611. The acceleration sensor 611 may also be used to collect motion data for games or for the user.
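As a hedged illustration of the landscape/portrait decision described above, the sketch below picks the view according to which device axis carries most of the gravitational acceleration; the axis convention and the tie-breaking rule are assumptions, not details from the application.

    # Illustrative sketch only: choose the display orientation from the gravity
    # components reported by the acceleration sensor (axis convention assumed).
    def choose_orientation(gx, gy, gz):
        # gx, gy, gz: gravitational acceleration on the device's three axes (m/s^2)
        return "portrait" if abs(gy) >= abs(gx) else "landscape"

    print(choose_orientation(0.5, 9.7, 0.3))   # gravity along the long axis -> portrait
    print(choose_orientation(9.6, 0.8, 0.4))   # gravity along the short axis -> landscape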
The gyro sensor 612 may detect a body direction and a rotation angle of the terminal 600, and the gyro sensor 612 and the acceleration sensor 611 may cooperate to acquire a 3D motion of the user on the terminal 600. The processor 601 may implement the following functions according to the data collected by the gyro sensor 612: motion sensing (such as changing the UI according to a user's tilting operation), image stabilization at the time of photographing, game control, and inertial navigation.
The pressure sensor 613 may be disposed on the side bezel of the terminal 600 and/or underneath the display 605. When the pressure sensor 613 is disposed on the side bezel of the terminal 600, a user's holding signal on the terminal 600 can be detected, and the processor 601 performs left-hand/right-hand recognition or a shortcut operation according to the holding signal collected by the pressure sensor 613. When the pressure sensor 613 is disposed at the lower layer of the display 605, the processor 601 controls operable controls on the UI according to the pressure operation of the user on the display 605. The operable controls include at least one of the group consisting of a button control, a scroll bar control, an icon control, and a menu control.
The fingerprint sensor 614 is used for collecting a fingerprint of a user, and the processor 601 identifies the identity of the user according to the fingerprint collected by the fingerprint sensor 614, or the fingerprint sensor 614 identifies the identity of the user according to the collected fingerprint. Upon identifying that the user's identity is a trusted identity, the processor 601 authorizes the user to perform relevant sensitive operations including unlocking the screen, viewing encrypted information, downloading software, paying, and changing settings, etc. The fingerprint sensor 614 may be disposed on the front, back, or side of the terminal 600. When a physical button or vendor Logo is provided on the terminal 600, the fingerprint sensor 614 may be integrated with the physical button or vendor Logo.
The optical sensor 615 is used to collect the ambient light intensity. In one embodiment, the processor 601 may control the display brightness of the display 605 based on the ambient light intensity collected by the optical sensor 615. Specifically, when the ambient light intensity is high, the display brightness of the display 605 is increased; when the ambient light intensity is low, the display brightness of the display 605 is decreased. In another embodiment, the processor 601 may also dynamically adjust the shooting parameters of the camera assembly 606 according to the ambient light intensity collected by the optical sensor 615.
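A rough sketch of such an ambient-light-to-brightness mapping is given below; the full-scale lux value and the brightness range are assumed values used only for illustration.

    # Assumed linear mapping from ambient light intensity (lux) to a display
    # brightness level, clamped to the panel's supported range.
    def brightness_from_ambient(lux, min_level=10, max_level=255, full_scale_lux=1000.0):
        ratio = min(max(lux, 0.0), full_scale_lux) / full_scale_lux
        return int(round(min_level + (max_level - min_level) * ratio))

    print(brightness_from_ambient(50))     # dim room -> low brightness
    print(brightness_from_ambient(2000))   # bright sunlight -> maximum brightness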
The proximity sensor 616, also known as a distance sensor, is typically disposed on the front panel of the terminal 600. The proximity sensor 616 is used to collect the distance between the user and the front surface of the terminal 600. In one embodiment, when the proximity sensor 616 detects that the distance between the user and the front face of the terminal 600 gradually decreases, the processor 601 controls the display 605 to switch from the screen-on state to the screen-off state; when the proximity sensor 616 detects that the distance between the user and the front face of the terminal 600 gradually increases, the processor 601 controls the display 605 to switch from the screen-off state to the screen-on state.
Those skilled in the art will appreciate that the configuration shown in fig. 6 is not intended to be limiting of terminal 600 and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components may be used.
The embodiment of the application provides electronic equipment, which comprises a memory and a processor; the memory stores at least one instruction, and the at least one instruction is loaded and executed by the processor to implement the positioning method provided by any one of the exemplary embodiments of the present application.
The embodiment of the present application provides a computer-readable storage medium, in which at least one instruction is stored, and the instruction is loaded and executed by a processor to implement the positioning method provided in any one of the exemplary embodiments of the present application.
An embodiment of the present application provides a computer program or a computer program product, where the computer program or the computer program product includes: computer instructions which, when executed by a computer, cause the computer to implement the positioning method provided by any one of the exemplary embodiments of the present application.
All the above optional technical solutions may be combined arbitrarily to form optional embodiments of the present application, and are not described herein again.
It will be understood by those skilled in the art that all or part of the steps of the above embodiments may be implemented by hardware, or by a program instructing relevant hardware. The program may be stored in a computer-readable storage medium, and the storage medium may be a read-only memory, a magnetic disk, an optical disk, or the like.
The present invention is not limited to the above embodiments, and any modifications, equivalent replacements, improvements, etc. within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (11)

1. A method of positioning, the method comprising:
acquiring a first state variable of an object to be positioned, wherein the first state variable comprises reference information of the object at at least one moment, and the reference information comprises Inertial Measurement Unit (IMU) information, visual information and Global Navigation Satellite System (GNSS) information;
determining an IMU error, a visual error and a GNSS error based on reference information comprised by the first state variable, and determining a target error based on the IMU error, the visual error and the GNSS error;
and updating the reference information included in the first state variable through a process of minimizing the target error to obtain a second state variable, wherein the second state variable is used for positioning the object.
2. The method of claim 1, wherein obtaining the first state variable of the object to be located comprises:
acquiring a third state variable of the object, wherein the third state variable comprises reference information of the object at at least one first moment;
in response to detecting that a condition is met, inserting reference information of the object at a second moment into the third state variable to obtain an updated third state variable, wherein the second moment is later than any one of the at least one first moment;
and obtaining the first state variable based on the updated third state variable.
3. The method of claim 2, wherein deriving the first state variable based on the updated third state variable comprises:
determining at least one target moment from the at least one first moment and the second moment in response to a total number of the at least one first moment and the second moment being greater than a number threshold;
and deleting the reference information of the object at the at least one target moment from the updated third state variable to obtain the first state variable.
4. The method of claim 3, further comprising:
determining a marginalization error based on the first state variable and reference information of the object at the at least one target moment;
the determining a target error based on the IMU error, the visual error, and the GNSS error comprises:
determining a sum of the marginalization error, the IMU error, the visual error, and the GNSS error as the target error.
5. The method according to claim 3 or 4, wherein said determining at least one target moment from said at least one first moment and said second moment comprises:
determining an earliest moment of the at least one first moment and the second moment as the target moment.
6. The method of claim 2, further comprising:
determining that the condition is met in response to occurrence of at least one of the following events: obtaining IMU measurement information, obtaining visual measurement information, obtaining GNSS measurement information, and passage of a reference duration.
7. The method according to any one of claims 1-6, wherein said determining an IMU error, a visual error and a GNSS error based on reference information comprised by said first state variable comprises:
obtaining IMU measurement information and IMU estimation information, determining the IMU error based on the IMU measurement information, the IMU estimation information and the IMU information;
obtaining vision measurement information, obtaining vision estimation information based on the vision information, and determining the vision error based on the vision measurement information and the vision estimation information;
obtaining a difference between GNSS measurement information and GNSS estimation information, and determining the GNSS error based on the difference and the GNSS information.
8. A positioning device, the device comprising:
an acquisition module for acquiring a first state variable of an object to be positioned, wherein the first state variable comprises reference information of the object at at least one moment, and the reference information comprises Inertial Measurement Unit (IMU) information, visual information and Global Navigation Satellite System (GNSS) information;
a determination module for determining an IMU error, a visual error and a GNSS error based on reference information comprised by the first state variable, and for determining a target error based on the IMU error, the visual error and the GNSS error;
and an updating module for updating the reference information included in the first state variable through the process of minimizing the target error to obtain a second state variable, wherein the second state variable is used for positioning the object.
9. An electronic device, comprising a memory and a processor; the memory stores at least one instruction, which is loaded and executed by the processor to cause the electronic device to implement the positioning method of any one of claims 1-7.
10. A computer-readable storage medium having stored therein at least one instruction, which is loaded and executed by a processor, to cause a computer to implement the positioning method according to any one of claims 1 to 7.
11. A computer program product, characterized in that the computer program product comprises a computer program or instructions which are executed by a processor to cause a computer to implement the positioning method according to any of claims 1-7.
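
As a hedged sketch of the window management recited in claims 2 to 5, the snippet below appends reference information at a new (second) moment and, once the number of moments exceeds a number threshold, removes the earliest moments as target moments; the data layout and the threshold value are assumptions made only for illustration.

    # Sketch under assumptions: maintain the state variable as a window of
    # (moment, reference information) pairs; marginalize the earliest moments
    # once the window exceeds a number threshold (cf. claims 2-5).
    from collections import deque

    NUMBER_THRESHOLD = 10   # assumed value

    def update_state_variable(window, moment, reference_info):
        window.append((moment, reference_info))     # insert reference info at the second moment
        marginalized = []
        while len(window) > NUMBER_THRESHOLD:
            marginalized.append(window.popleft())   # earliest moment becomes a target moment
        return marginalized                         # would contribute to the marginalization error

    window = deque()
    for t in range(12):
        removed = update_state_variable(window, t, {"imu": None, "visual": None, "gnss": None})
        if removed:
            print("target moments marginalized:", [m for m, _ in removed])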
CN202111138474.9A 2021-09-27 2021-09-27 Positioning method, positioning device, electronic equipment and computer readable storage medium Pending CN113960648A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111138474.9A CN113960648A (en) 2021-09-27 2021-09-27 Positioning method, positioning device, electronic equipment and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111138474.9A CN113960648A (en) 2021-09-27 2021-09-27 Positioning method, positioning device, electronic equipment and computer readable storage medium

Publications (1)

Publication Number Publication Date
CN113960648A true CN113960648A (en) 2022-01-21

Family

ID=79462360

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111138474.9A Pending CN113960648A (en) 2021-09-27 2021-09-27 Positioning method, positioning device, electronic equipment and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN113960648A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116381760A (en) * 2023-06-05 2023-07-04 之江实验室 GNSS RTK/INS tight coupling positioning method, device and medium
CN116381760B (en) * 2023-06-05 2023-08-15 之江实验室 GNSS RTK/INS tight coupling positioning method, device and medium

Similar Documents

Publication Publication Date Title
US11158083B2 (en) Position and attitude determining method and apparatus, smart device, and storage medium
CN108682036B (en) Pose determination method, pose determination device and storage medium
CN110986930B (en) Equipment positioning method and device, electronic equipment and storage medium
CN111464749B (en) Method, device, equipment and storage medium for image synthesis
CN110926473A (en) Method and device for identifying floor, electronic equipment and storage medium
CN110134744B (en) Method, device and system for updating geomagnetic information
CN111127509B (en) Target tracking method, apparatus and computer readable storage medium
CN109166150B (en) Pose acquisition method and device storage medium
CN109886208B (en) Object detection method and device, computer equipment and storage medium
CN111768454A (en) Pose determination method, device, equipment and storage medium
CN111897429A (en) Image display method, image display device, computer equipment and storage medium
CN111624630A (en) GNSS-based satellite selection method and device, terminal and storage medium
CN113627413A (en) Data labeling method, image comparison method and device
CN113384880A (en) Virtual scene display method and device, computer equipment and storage medium
CN109281648B (en) Method and apparatus for determining a reasonable well pattern density of an oil reservoir
CN111753606A (en) Intelligent model upgrading method and device
CN111179628B (en) Positioning method and device for automatic driving vehicle, electronic equipment and storage medium
CN113960648A (en) Positioning method, positioning device, electronic equipment and computer readable storage medium
CN110263695B (en) Face position acquisition method and device, electronic equipment and storage medium
CN111860064B (en) Video-based target detection method, device, equipment and storage medium
CN111754564B (en) Video display method, device, equipment and storage medium
CN109116424B (en) Low wave number noise separation method and device for seismic wave data and storage medium
CN108564196B (en) Method and device for forecasting flood
CN113824902B (en) Method, device, system, equipment and medium for determining time delay of infrared camera system
CN112243083B (en) Snapshot method and device and computer storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination