CN115752436A - Fusion positioning method, device and system, and flying car


Info

Publication number
CN115752436A
CN115752436A (application CN202211450927.6A)
Authority
CN
China
Prior art keywords
data
imu
positioning
visual
pose
Prior art date
Legal status
Pending
Application number
CN202211450927.6A
Other languages
Chinese (zh)
Inventor
彭登
陶永康
傅志刚
董博
南志捷
胡荣海
赵德力
Current Assignee
Guangdong Huitian Aerospace Technology Co Ltd
Original Assignee
Guangdong Huitian Aerospace Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Guangdong Huitian Aerospace Technology Co Ltd filed Critical Guangdong Huitian Aerospace Technology Co Ltd
Priority to CN202211450927.6A
Publication of CN115752436A

Landscapes

  • Navigation (AREA)

Abstract

The present application relates to a fusion positioning method, device, and system, and to a flying car. The method comprises the following steps: acquiring local pose data output by a visual inertial navigation odometer, wherein the local pose data are obtained by tightly-coupled fusion of visual data and IMU data, and the IMU data are obtained from first IMU data of the visual inertial navigation odometer and second IMU data of a navigation device; acquiring global pose data output by the navigation device; and performing loosely-coupled fusion processing on the local pose data and the global pose data, and outputting the positioning data obtained after the fusion processing as a positioning result. The scheme provided by the present application can provide comparatively accurate and stable positioning in outdoor flight scenes.

Description

Fusion positioning method, device and system, and flying car
Technical Field
The present application relates to the technical field of flying cars, and in particular to a fusion positioning method, device, and system, and a flying car.
Background
At present, SLAM (Simultaneous Localization and Mapping) is regarded as a core technology for realizing autonomous operation of mobile robots, and has been widely applied in fields such as unmanned aerial vehicles, unmanned vehicles, and virtual reality. A visual sensor is prone to image blur during high-speed motion and has inherent defects in estimating fast motion and rotation; an Inertial Measurement Unit (IMU) contains an accelerometer and a gyroscope, which are very accurate in detecting short-term high-speed motion but accumulate errors over long-term operation. VIO (Visual-Inertial Odometry) combines the complementary characteristics of the IMU and the visual sensor, and estimates the motion state of the current carrier, including velocity, attitude, and position, mainly from the input data of the IMU and of a visual sensor such as a camera.
At present, most applications of visual odometry concentrate on indoor scenes, because indoor scenes are relatively stable in terms of illumination, body motion, and feature textures (point features and line features). In the related art, visual odometry is rarely applied to aerial scenes such as outdoor flight. An aerial scene is sensitive to all six degrees of freedom and is a true 3D scene: the motion state at high altitude is more complex, the viewing angle can change drastically, and accurate, stable positioning is difficult to achieve.
Therefore, for outdoor flight scenes, a more applicable and more accurate positioning method needs to be provided for aircraft such as flying cars.
Disclosure of Invention
In order to solve, or at least partially solve, the above problems in the related art, the present application provides a fusion positioning method, a fusion positioning device, a fusion positioning system, and a flying car, which can provide accurate and stable positioning in outdoor flight scenes.
A first aspect of the present application provides a fusion positioning method, including:
acquiring local pose data output by a visual inertial navigation odometer, wherein the local pose data are obtained by tightly-coupled fusion of visual data and IMU data, and the IMU data are obtained from first IMU data of the visual inertial navigation odometer and second IMU data of a navigation device;
acquiring global pose data output by the navigation device;
and performing loosely-coupled fusion processing on the local pose data and the global pose data, and outputting the positioning data obtained after the fusion processing as a positioning result.
In an embodiment, the visual inertial navigation odometer includes a camera and a first IMU device, the visual data from the camera, the first IMU data from the first IMU device;
the navigation device includes a second IMU device, the second IMU data from the second IMU device.
In one embodiment, obtaining the IMU data from the first IMU data of the visual inertial navigation odometer and the second IMU data of the navigation device comprises:
if the second IMU data from the second IMU device of the navigation device is received normally, selecting the second IMU data as the IMU data;
if the reception of the second IMU data from the second IMU device of the navigation device is abnormal, selecting the first IMU data as the IMU data.
In an embodiment, performing loosely-coupled fusion processing on the local pose data and the global pose data, and outputting the positioning data obtained after the fusion processing as a positioning result, includes:
when the navigation device has a signal abnormality, determining the positioning data according to the local pose data output by the visual inertial navigation odometer, and outputting the positioning data as the positioning result;
and when the visual inertial navigation odometer becomes visually under-constrained, determining the positioning data according to the global pose data output by the navigation device, and outputting the positioning data as the positioning result.
In one embodiment, when the navigation device has a signal abnormality, determining the positioning data according to the local pose data output by the visual inertial navigation odometer and outputting the positioning data as the positioning result includes:
when the covariance matrix of the navigation device is abnormal, acquiring the transformation relation value between the local pose data of the visual inertial navigation odometer, after conversion into the global coordinate system, and the global pose data;
and multiplying the local pose data output by the visual inertial navigation odometer by the transformation relation value to obtain the positioning data, and outputting the positioning data as the positioning result.
In an embodiment, when the visual inertial navigation odometer becomes visually under-constrained, determining the positioning data according to the global pose data output by the navigation device and outputting the positioning data as the positioning result includes:
when the covariance matrix of the visual inertial navigation odometer is abnormal, acquiring the transformation relation value between the local pose data of the visual inertial navigation odometer, after conversion into the global coordinate system, and the global pose data, and determining from it the difference value between the transformation relation value and the true position transformation relation value;
and multiplying the global pose data output by the navigation device by the inverse of the difference value to obtain the positioning data, and outputting the positioning data as the positioning result.
In one embodiment, the acquiring global pose data output by the navigation device includes:
and acquiring global position and orientation data which is output by the navigation equipment and obtained by adopting a carrier phase differential positioning RTK mode.
In one embodiment, the visual inertial navigation odometer is initialized according to the visual data, the IMU data, and the global pose data output by the navigation device; wherein,
PnP pose solving is performed on the visual data, and the IMU data are integrated and a pose is solved from them;
the pose solution from the visual data and the pose solution from the IMU data are filtered together to obtain updated pose transformation data;
and the pose transformation data obtained by filtering, combined with the global pose data output by the navigation device, is used as feedback to correct the visual data of the last frame of the visual inertial navigation odometer.
A second aspect of the present application provides a fusion positioning device, comprising:
a first input module, configured to acquire local pose data output by a visual inertial navigation odometer, wherein the local pose data are obtained by tightly-coupled fusion of visual data and IMU data, and the IMU data are obtained from first IMU data of the visual inertial navigation odometer and second IMU data of a navigation device;
a second input module, configured to acquire global pose data output by the navigation device;
and a fusion positioning module, configured to perform loosely-coupled fusion processing on the local pose data and the global pose data and to output the positioning data obtained after the fusion processing as a positioning result.
In one embodiment, the fusion localization module comprises:
a first processing submodule, configured to determine positioning data according to the local pose data output by the visual inertial navigation odometer when the navigation device has a signal abnormality, and to output the positioning data as a positioning result;
and a second processing submodule, configured to determine positioning data according to the global pose data output by the navigation device when the visual inertial navigation odometer becomes visually under-constrained, and to output the positioning data as a positioning result.
A third aspect of the present application provides a fusion positioning system, comprising:
a visual inertial navigation odometer, configured to output local pose data;
a navigation device, configured to output global pose data and second IMU data;
and a computing platform, configured to acquire the local pose data output by the visual inertial navigation odometer, wherein the local pose data are obtained by tightly-coupled fusion of visual data and IMU data, and the IMU data are obtained from first IMU data of the visual inertial navigation odometer and the second IMU data of the navigation device; to acquire the global pose data output by the navigation device; and to perform loosely-coupled fusion processing on the local pose data and the global pose data and output the positioning data obtained after the fusion processing as a positioning result.
A fourth aspect of the present application provides a flying car comprising the fusion positioning device described above.
A fifth aspect of the present application provides a flying car comprising:
a processor; and
a memory having executable code stored thereon which, when executed by the processor, causes the processor to perform the method as described above.
A sixth aspect of the present application provides a computer-readable storage medium having stored thereon executable code, which, when executed by a processor of an electronic device, causes the processor to perform the method as described above.
The technical solution provided by the present application may have the following beneficial effects:
according to the technical scheme, a fusion positioning mode of combining the visual inertial navigation odometer with the navigation equipment is adopted, wherein in the process of obtaining local pose data by carrying out close coupling fusion on visual data and IMU data, second IMU data of the navigation equipment is also considered, so that the positioning accuracy of the visual inertial navigation odometer is improved; and then forming a loose combination relationship between the visual inertial navigation odometer and the navigation equipment, namely performing loose coupling fusion processing on the local pose data and the global pose data, and finally outputting positioning data obtained after fusion processing as a positioning result. Through the processing, the navigation equipment participates in the positioning process in the whole process, including the process of participating in tight coupling fusion of the visual data and IMU data of the front end and the process of participating in loose combination of the visual inertial navigation odometer and the navigation equipment of the rear end, so that the problems of unstable output of the visual inertial navigation odometer and inaccurate positioning in an outdoor flight scene are solved, more accurate and stable positioning can be provided in the outdoor flight scene, and more accurate and stable visual positioning and navigation support is provided for stable flight of aircrafts such as flying automobiles.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
Drawings
The foregoing and other objects, features and advantages of the application will be apparent from the following more particular descriptions of exemplary embodiments of the application as illustrated in the accompanying drawings wherein like reference numbers generally represent like parts throughout the exemplary embodiments of the application.
Fig. 1 is a schematic flowchart of a fusion positioning method according to an embodiment of the present application;
FIG. 2 is a schematic flow chart diagram illustrating a fusion positioning method according to another embodiment of the present application;
FIG. 3 is a schematic diagram illustrating the hardware components of a fusion positioning system according to an embodiment of the present disclosure;
FIG. 4 is a schematic diagram illustrating an application of the fusion positioning method according to an embodiment of the present application;
fig. 5 is a schematic diagram illustrating an initialization flow in a fusion positioning method according to an embodiment of the present application;
fig. 6 is a schematic diagram illustrating comparison between forced handover and fusion in a fusion positioning method according to an embodiment of the present application;
fig. 7 is a schematic diagram of a device state in a fusion positioning method according to an embodiment of the present application;
FIG. 8 is a schematic structural diagram of a fusion positioning device according to an embodiment of the application;
fig. 9 is a schematic structural diagram of an electronic device shown in an embodiment of the present application.
Detailed Description
Embodiments of the present application will be described in more detail below with reference to the accompanying drawings. While embodiments of the present application are illustrated in the accompanying drawings, it should be understood that the present application may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in this application and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It should be understood that although the terms "first," "second," "third," etc. may be used herein to describe various information, these information should not be limited to these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope of the present application. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of that feature. In the description of the present application, "a plurality" means two or more unless specifically limited otherwise.
In the related art, visual odometry is rarely applied to aerial scenes such as outdoor flight. For outdoor flight scenes, a more applicable and more accurate positioning method needs to be provided for aircraft such as flying cars. The present application provides a fusion positioning method that can provide accurate and stable positioning for an aircraft in outdoor flight scenes.
The technical solutions of the embodiments of the present application are described in detail below with reference to the accompanying drawings.
Fig. 1 is a schematic flowchart of a fusion positioning method according to an embodiment of the present application.
Referring to fig. 1, the method includes:
s101, local pose data output by the vision inertial navigation odometer are obtained, wherein the local pose data are obtained by performing close coupling fusion on the vision data and IMU data, and the IMU data are obtained by processing according to first IMU data of the vision inertial navigation odometer and second IMU data of the navigation equipment.
The visual inertial navigation odometer comprises a camera and a first IMU device, wherein visual data come from the camera, and the first IMU data come from the first IMU device; the navigation device includes a second IMU device from which the second IMU data is derived. The camera may be a binocular camera.
Obtaining the IMU data from the first IMU data of the visual inertial navigation odometer and the second IMU data of the navigation device comprises: if the second IMU data from the second IMU device of the navigation device is received normally, selecting the second IMU data as the IMU data; if the reception of the second IMU data from the second IMU device of the navigation device is abnormal, selecting the first IMU data as the IMU data.
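As a concrete illustration of this selection rule, the following minimal sketch prefers the navigation device's high-precision second IMU and falls back to the camera's built-in first IMU; the function and parameter names are illustrative assumptions, not taken from the patent:

```python
def select_imu_data(first_imu_data, second_imu_data, second_imu_reception_ok):
    """Choose the IMU stream fed into the tightly-coupled fusion.

    first_imu_data: samples from the camera's built-in IMU (fallback).
    second_imu_data: samples from the navigation device's IMU (preferred).
    second_imu_reception_ok: whether reception of the second IMU is normal.
    """
    if second_imu_reception_ok and second_imu_data is not None:
        return second_imu_data  # high-precision navigation-device IMU
    return first_imu_data       # redundancy fallback: camera IMU
```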
When the visual inertial navigation odometer is initialized, the initialization process is performed according to the visual data, the IMU data, and the global pose data output by the navigation device: PnP pose solving is performed on the visual data, and the IMU data are integrated and a pose is solved from them; the pose solution from the visual data and the pose solution from the IMU data are filtered together to obtain updated pose transformation data; and the pose transformation data obtained by filtering, combined with the global pose data output by the navigation device, is used as feedback to correct the visual data of the last frame of the visual inertial navigation odometer.
S102, global pose data output by the navigation device is acquired.
Specifically, global pose data output by the navigation device and obtained in an RTK (Real-Time Kinematic) mode is acquired.
S103, loosely-coupled fusion processing is performed on the local pose data and the global pose data, and the positioning data obtained after the fusion processing is output as a positioning result.
When the navigation device has a signal abnormality, the positioning data is determined according to the local pose data output by the visual inertial navigation odometer and output as the positioning result. For example, when the covariance matrix of the navigation device is abnormal, the transformation relation value between the local pose data of the visual inertial navigation odometer, after conversion into the global coordinate system, and the global pose data is acquired; the local pose data output by the visual inertial navigation odometer is then multiplied by the transformation relation value to obtain the positioning data, which is output as the positioning result.
When the visual inertial navigation odometer becomes visually under-constrained, the positioning data is determined according to the global pose data output by the navigation device and output as the positioning result. For example, when the covariance matrix of the visual inertial navigation odometer is abnormal, the transformation relation value between the local pose data of the visual inertial navigation odometer, after conversion into the global coordinate system, and the global pose data is acquired, and the difference value between the transformation relation value and the true position transformation relation value is determined from it; the global pose data output by the navigation device is then multiplied by the inverse of the difference value to obtain the positioning data, which is output as the positioning result.
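Taken together, steps S101 to S103 can be pictured with the following minimal sketch. Everything in it is an illustrative assumption: poses are 4x4 homogeneous matrices, the covariance matrices are reduced to scalar health indicators, and a covariance-weighted blend stands in for the nonlinear optimization used in the embodiments described later:

```python
import numpy as np

def loosely_coupled_fusion(local_pose, local_cov, global_pose, global_cov,
                           T_align, cov_threshold=1.0):
    """One fusion step: local_pose from the VIO (S101), global_pose from the
    navigation device (S102), fused output (S103). T_align converts the VIO
    local frame into the global coordinate system."""
    if global_cov > cov_threshold:        # navigation signal abnormal
        return T_align @ local_pose       # rely on the visual inertial odometer
    if local_cov > cov_threshold:         # visual under-constraint
        return global_pose                # rely on the satellite global pose
    # Both sources healthy: covariance-weighted blend of the positions.
    fused = global_pose.copy()
    w = local_cov / (local_cov + global_cov)
    fused[:3, 3] = (w * global_pose[:3, 3]
                    + (1 - w) * (T_align @ local_pose)[:3, 3])
    return fused
```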
As can be seen from the above embodiment, the technical solution of the present application adopts a fusion positioning mode in which the visual inertial navigation odometer is combined with the navigation device. In the process of obtaining local pose data by tightly-coupled fusion of visual data and IMU data, the second IMU data of the navigation device is also taken into account to improve the positioning accuracy of the visual inertial navigation odometer. A loose combination is then formed between the visual inertial navigation odometer and the navigation device, that is, loosely-coupled fusion processing is performed on the local pose data and the global pose data, and the positioning data obtained after the fusion processing is finally output as the positioning result. Through this processing, the navigation device participates in the whole positioning process, including the tightly-coupled fusion of visual data and IMU data at the front end and the loose combination of the visual inertial navigation odometer and the navigation device at the back end, which alleviates the problems of unstable visual inertial navigation odometer output and inaccurate positioning in outdoor flight scenes, so that more accurate and stable positioning can be provided in such scenes, giving aircraft such as flying cars more accurate and stable visual positioning and navigation support for steady flight.
Fig. 2 is a schematic flowchart of a fusion positioning method according to another embodiment of the present application. This embodiment introduces the fusion positioning method of the present application through the interaction among the visual inertial navigation odometer, the navigation device, and the computing platform that performs the fusion.
The embodiment of the present application adopts a two-level fusion of three sensors: a camera, an IMU, and a satellite navigation device. This alleviates the problem of unstable visual odometer output in large outdoor scenes such as high altitude and open ground, and provides visual positioning and navigation support for the stable flight of a high-altitude aircraft. In the embodiment of the present application, the satellite navigation device participates in the whole positioning process: it not only takes part in the loose combination at the back end, but also in the initialization of the visual inertial navigation odometer, where it strengthens the stability of the visual feature constraints. In addition, the output of the satellite navigation device in this embodiment includes attitude information as well as longitude, latitude, and height information.
Referring to fig. 3 and fig. 4, the fusion positioning system according to the embodiment of the present application includes a visual inertial navigation odometer, a navigation device, and a computing platform. The visual inertial navigation odometer is configured to output local pose data; the navigation device is configured to output global pose data and second IMU data; and the computing platform is configured to acquire the local pose data output by the visual inertial navigation odometer, wherein the local pose data are obtained by tightly-coupled fusion of visual data and IMU data, and the IMU data are obtained from the first IMU data of the visual inertial navigation odometer and the second IMU data of the navigation device; to acquire the global pose data output by the navigation device; and to perform loosely-coupled fusion processing on the local pose data and the global pose data and output the positioning data obtained after the fusion processing as the positioning result.
The visual inertial navigation odometer includes a camera and an IMU device (which may be referred to as a first IMU device), and the navigation device includes a GNSS (Global Navigation Satellite System) board and an IMU device (which may be referred to as a second IMU device).
The camera may be a binocular camera or a monocular camera. Taking a binocular camera as an example, it may include two camera sensors plus one built-in IMU. The NAV (Navigation) device may have a built-in high-precision IMU and a satellite GNSS board, and may be an integrated navigation device or a separate external IMU plus an RTK device. The multi-IMU redundancy design adapts the system to dynamic aerial scenes and raises its safety level.
The left and right images of the binocular camera, together with the IMU data, constitute a tightly-coupled VINS (visual-inertial system). Tight coupling typically uses the raw data of both sensors to jointly estimate a single set of variables, so the sensor noises also affect each other. The VINS may iteratively optimize the output pose using a nonlinear optimization method, although it is not limited to this. This process is a local odometer; by aligning it with the global pose of the body at start-up, the transformation between the local coordinate system and the global coordinate system, i.e., between the local pose and the global pose, can be obtained.
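The following toy sketch, an illustrative assumption rather than the patent's actual solver, shows what "jointly estimating one set of variables from the raw data of both sensors" means: visual reprojection residuals and an IMU preintegration residual enter a single least-squares problem (the state is reduced to a 3D translation for brevity):

```python
import numpy as np
from scipy.optimize import least_squares

landmarks = np.array([[2.0, 0.5, 5.0], [-1.0, 1.0, 4.0], [0.5, -0.8, 6.0]])
observed = landmarks[:, :2] / landmarks[:, 2:3]   # normalized image observations
imu_delta = np.array([0.1, 0.0, 0.2])             # preintegrated IMU translation

def joint_residuals(t):
    p = landmarks - t                              # landmarks seen from pose t
    visual_res = (p[:, :2] / p[:, 2:3] - observed).ravel()  # reprojection errors
    imu_res = t - imu_delta                        # IMU preintegration error
    return np.concatenate([visual_res, imu_res])   # one joint cost: tight coupling

t_est = least_squares(joint_residuals, x0=np.zeros(3)).x
print(t_est)  # a compromise between the visual and inertial constraints
```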
The navigation device triggers the binocular camera through a PPS (Pulse Per Second) synchronization trigger line, and at the same time sends its GPRMC (recommended minimum positioning information) time to a camera port over a serial link, so that time synchronization among the image data, the IMU data, and the navigation device is completed inside the camera. Besides the PPS pulse method, the time synchronization between the camera and the navigation device may also use a PTP (Precision Time Protocol) network-port synchronization method, or the like.
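For illustration, here is a minimal sketch of extracting the UTC time from a standard NMEA GPRMC sentence; the helper below is an assumption, not part of the patent, and only shows the kind of parsing the camera side would do to align its clock with the PPS-synchronized time:

```python
def parse_gprmc_time(sentence):
    """'$GPRMC,123519.00,A,...' -> seconds since midnight (UTC), or None."""
    fields = sentence.split(',')
    if not fields[0].endswith('GPRMC') or len(fields) < 3 or fields[2] != 'A':
        return None                       # malformed sentence or no valid fix
    hhmmss = fields[1]                    # UTC time field: hhmmss.sss
    return (int(hhmmss[0:2]) * 3600
            + int(hhmmss[2:4]) * 60
            + float(hhmmss[4:]))

print(parse_gprmc_time(
    "$GPRMC,123519.00,A,4807.038,N,01131.000,E,022.4,084.4,230394,003.1,W*6A"))
```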
The computing platform receives the image data and IMU data from the visual inertial navigation odometer, and the global pose data and high-precision IMU data from the navigation device. In this embodiment, the visual data is taken to be image data, as an example and without limitation. The global or local pose estimation result output by the computing platform can be sent to a receiving-end device. The whole fused visual odometry algorithm runs on a computing platform such as an embedded ARM (Advanced RISC Machine) platform; the binocular camera transmits image data and IMU data to the computing platform through a data port (USB (Universal Serial Bus), network port, etc., depending on the camera), and the global pose data and high-precision IMU data of the navigation device (such as measured acceleration and angular velocity) are transmitted to the computing platform through a CAN (Controller Area Network) port. After the fusion is completed on the computing platform, the fused positioning result is sent out through the CAN port to any device terminal that wants to use it.
In order to ensure normal operation under complex scene conditions, the embodiment of the present application sets up a loose combination between the output of the local visual inertial navigation odometer (camera-IMU visual odometer) and the global pose of the navigation device (the position, given as longitude, latitude, and height, is converted into an NED north-east-down coordinate system). The loose combination may be based on nonlinear optimization or on filtering. The purposes of setting up the loose combination include: 1) when the satellite signals of the navigation device are good, improving the positioning accuracy of the whole visual inertial navigation odometer as much as possible; 2) when the visual inertial navigation odometer is visually under-constrained (a scene where high-speed motion causes visual detection and tracking to fail, or where texture is lacking), achieving trajectory convergence by relying on the global position constraints of the navigation device's satellites; 3) when the satellite signal of the navigation device is not good enough, achieving trajectory convergence through the visual and IMU constraints of the visual inertial navigation odometer, and preserving a stable positioning output.
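The conversion of the satellite position from longitude, latitude, and height into the NED frame mentioned above is standard geodesy; a minimal sketch under WGS-84 follows (the choice of ellipsoid is an illustrative assumption, since the patent does not specify one):

```python
import numpy as np

A = 6378137.0            # WGS-84 semi-major axis (m)
E2 = 6.69437999014e-3    # WGS-84 first eccentricity squared

def geodetic_to_ecef(lat_deg, lon_deg, h):
    lat, lon = np.radians(lat_deg), np.radians(lon_deg)
    n = A / np.sqrt(1.0 - E2 * np.sin(lat) ** 2)   # prime vertical radius
    return np.array([(n + h) * np.cos(lat) * np.cos(lon),
                     (n + h) * np.cos(lat) * np.sin(lon),
                     (n * (1.0 - E2) + h) * np.sin(lat)])

def geodetic_to_ned(lat_deg, lon_deg, h, lat0_deg, lon0_deg, h0):
    """Position of (lat, lon, h) in the NED frame anchored at the start point."""
    d = geodetic_to_ecef(lat_deg, lon_deg, h) - geodetic_to_ecef(lat0_deg, lon0_deg, h0)
    la, lo = np.radians(lat0_deg), np.radians(lon0_deg)
    r = np.array([[-np.sin(la) * np.cos(lo), -np.sin(la) * np.sin(lo),  np.cos(la)],
                  [-np.sin(lo),               np.cos(lo),               0.0       ],
                  [-np.cos(la) * np.cos(lo), -np.cos(la) * np.sin(lo), -np.sin(la)]])
    return r @ d   # [north, east, down] in metres
```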
Referring to fig. 2, the method includes:
s201, the visual inertial navigation odometer acquires local image data and local first IMU data, and receives second IMU data and global pose data of the navigation equipment.
The visual inertial navigation odometer can acquire the left and right images captured by the binocular camera and the first IMU data measured by the built-in first IMU. The left and right images of the binocular camera plus the IMU data form a tightly-coupled visual odometer system.
The visual inertial navigation odometer also receives the second IMU data measured by the second IMU built into the navigation device, and the global pose data output by the navigation device.
S202, the visual inertial navigation odometer is initialized according to the image data, the IMU data, and the global pose data output by the navigation device.
The initialization flow can be seen in fig. 5. Its main purpose is to give the whole camera-IMU system, i.e., the visual inertial navigation odometer, relatively stable initial values. The initial values may include, for example, the coordinate accuracy of the 3D points used for PnP (Perspective-n-Point) computation, the accuracy of the IMU zero bias, and so on. PnP is a method of solving 3D-to-2D point-pair motion.
To raise the safety level of the system, at least two IMU sensors are provided, namely the IMU on the binocular camera and the IMU on the navigation device. During use, one of the two IMU data streams is therefore selected. Considering that the second IMU of the navigation device has high precision, if the second IMU data of the navigation device is received without abnormality, the second IMU data is used as input; if the second IMU data of the navigation device is abnormal, the system switches to the first IMU data of the camera's built-in first IMU, i.e., the first IMU data of the camera is selected as input instead.
When the visual inertial navigation odometer is initialized, the initialization process is performed according to the visual data, the IMU data, and the global pose data output by the navigation device: PnP pose solving is performed on the visual data, and the IMU data (including acceleration, angular velocity, and so on) are integrated and a pose is solved from them; the pose solution from the visual data and the pose solution from the IMU data are filtered together to obtain updated pose transformation data; and the pose transformation data obtained by filtering, combined with the global pose data output by the navigation device as feedback, is used to correct the visual data of the last frame of the visual inertial navigation odometer. Alternatively, the filtered pose transformation data alone may be used as feedback to correct the visual data of the last frame. The last frame is the most recent image frame acquired by the visual inertial navigation odometer.
During the selection between the two IMU data streams, the embodiment of the present application can align and synchronize the two streams in real time. For example, the pose solved from the IMU and the pose solved by PnP are filtered and updated to produce a pose transformation, and the updated pose transformation, combined with the pose given by the satellite, is used as feedback to re-correct the 3D landmark points of the previous frame; this feedback adjustment improves the accuracy of PnP at the next moment. The filtering may be, for example, Kalman filtering, but is not limited to it. Kalman filtering is an algorithm that optimally estimates the system state from a linear system state equation and from the system's observed input and output data. Because the observed data include the effects of noise and interference, the optimal estimation can also be viewed as a filtering process. Kalman filtering uses the dynamic information of the target to try to remove the influence of noise and obtain a good estimate of the target's position. This estimate may be an estimate of the current position (filtering), of a future position (prediction), or of a past position (interpolation or smoothing).
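As a minimal sketch of this filter step, under the illustrative assumption of a linear Kalman update with identity state and measurement models, the IMU-integrated pose acts as the prediction and the PnP solution as the measurement:

```python
import numpy as np

def kalman_pose_update(x_imu, P_imu, z_pnp, R_pnp):
    """x_imu: pose predicted by integrating IMU data (prior); P_imu: its
    covariance; z_pnp: pose solved by PnP (measurement); R_pnp: its noise.
    Returns the updated pose transformation and covariance."""
    S = P_imu + R_pnp                        # innovation covariance (H = I)
    K = P_imu @ np.linalg.inv(S)             # Kalman gain
    x = x_imu + K @ (z_pnp - x_imu)          # fused pose update
    P = (np.eye(len(x_imu)) - K) @ P_imu     # covariance update
    return x, P
```

The updated pose x is what the description above feeds back, together with the satellite pose, to re-correct the previous frame's 3D landmark points.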
In the initialization stage of the embodiment of the present application, IMU, vision, and satellite positioning all participate. In particular, the relative transformation between two frames can be calculated from RTK positioning, to assist in constructing the 3D landmark points and to improve their accuracy.
Because the initialization process is always iterating toward convergence, the output during this process is extremely unstable. The global positioning of the satellite navigation device can therefore simply be used directly as the output of the whole visual inertial navigation odometer system during the initialization stage, until initialization is complete. Since the initialization time is usually short, after initialization finishes the deviation between the first output pose of the visual inertial navigation odometer and the pose of the satellite navigation device at that moment is not too large, and the body coordinate system of the visual inertial navigation odometer can be converted into the global coordinate system according to the starting-point pose.
S203, the visual inertial navigation odometer outputs the local pose data to the computing platform.
After initialization is completed, the visual inertial navigation odometer processes the image data and the IMU data to obtain local pose data. Processing methods of the related art may be used to obtain the local pose data from the image data and the IMU data, which the present application does not limit. The IMU data at this time is the first IMU data or the second IMU data, selected according to the reception condition of the second IMU data of the navigation device: if the second IMU data from the second IMU device of the navigation device is received normally, the second IMU data is selected as the IMU data; if the reception of the second IMU data from the second IMU device of the navigation device is abnormal, the first IMU data is selected as the IMU data.
The visual inertial navigation odometer outputs the processed local pose data to the computing platform.
S204, the navigation device outputs the second IMU data and the global pose data to the visual inertial navigation odometer, and outputs the global pose data to the computing platform.
In the embodiment of the present application, the navigation device participates in the whole positioning process: it not only takes part in the loose combination at the back end, but also in the initialization of the visual inertial navigation odometer, strengthening the stability of the visual feature constraints. The navigation device therefore outputs the second IMU data and the global pose data to the visual inertial navigation odometer for its reference.
The navigation device processes the second IMU data and other acquired navigation information to obtain global pose data, and outputs the global pose data to the computing platform. Processing methods of the related art may be used for this, which the present application does not limit. The navigation device of the present application obtains the global pose data in a carrier-phase differential positioning (RTK) mode and then sends it to the computing platform.
RTK is a differential method that processes the carrier-phase observations of two measuring stations in real time: the carrier phases acquired by a reference station are sent to the user receiver, where the difference is computed and the coordinates are solved. It is a relatively new and now common satellite positioning measurement method. Earlier static, rapid-static, and kinematic measurements all required post-processing to reach centimeter-level accuracy, whereas RTK achieves centimeter-level positioning accuracy in real time in the field using carrier-phase dynamic real-time differencing. Its appearance brought new measurement principles and methods to project stake-out, topographic mapping, and all kinds of control surveys, and greatly improved operating efficiency.
It should be noted that steps S204 and S201 have no sequential relationship.
S205, the computing platform receives the local pose data output by the visual inertial navigation odometer and the global pose data output by the navigation device.
In the embodiment of the present application, the computing platform is arranged to fuse the pose data of the visual inertial navigation odometer and the navigation device; accordingly, it receives the local pose data output by the visual inertial navigation odometer and the global pose data output by the navigation device.
S206, the computing platform performs loosely-coupled fusion processing on the local pose data and the global pose data, and outputs the positioning data obtained after the fusion processing as the positioning result.
Owing to the characteristics of each sensor, noise, and other factors, the pose and trajectory output by the IMU + camera combination in the visual inertial navigation odometer cannot coincide exactly with the satellite positioning of the navigation device, even when both are aligned in a unified coordinate system. The satellite positioning of the navigation device may use the RTK approach. Because RTK positioning accuracy is high, the RTK positioning data of the navigation device can be used while its received signal is strong. However, when the satellite signal of the navigation device becomes weak or is lost (e.g., blocked by tall buildings outdoors), the whole system needs to switch to the output of the visual inertial navigation odometer. Since the two outputs differ, an instantaneous switch causes an instantaneous jump in the output. The deviation between them includes: 1) coordinate-system conversion errors; 2) differing noise models of the sensors, which cause unavoidable deviations in the relative pose estimates.
Referring to fig. 6, if the navigation device simply switches directly to the visual inertial navigation odometer state when the satellite signal is lost, a step appears in the state, which is unacceptable for body control. The present application instead sets up a loose combination between the satellite global pose of the navigation device and the visual inertial navigation odometer: when the satellite signal is good, the system pulls the solution of the visual inertial navigation odometer toward the positioning value of the satellite global pose; when the satellite signal is lost or weakened, the system still keeps running along a smooth trajectory, which stays relatively close to the true value for a short time.
The loose combination of the navigation device at the rear end and the visual inertial navigation odometer in the embodiment of the application can be realized in a nonlinear optimization mode, but is not limited to the nonlinear optimization mode.
Denote by T_ the transformation relation value between the pose data obtained by converting the local pose data of the visual inertial navigation odometer into the global coordinate system and the global pose data of the navigation device, and denote by T_gps the difference value between T_ and the true position transformation relation value. That is, converting the local pose data of the visual inertial navigation odometer into the global coordinate system yields converted pose data, and there exists a transformation relation value T_ between the converted pose data and the global pose data of the navigation device. The true position transformation relation value may be obtained according to the related art, which the present application does not limit.
In the embodiment of the present application, when the visual inertial navigation odometer becomes visually under-constrained, the positioning data is determined according to the global pose data output by the navigation device and output as the positioning result. For example, when the covariance matrix of the visual inertial navigation odometer is abnormal, the transformation relation value between the local pose data of the visual inertial navigation odometer, after conversion into the global coordinate system, and the global pose data is acquired, and the difference value between it and the true position transformation relation value is determined; the global pose data output by the navigation device is then multiplied by the inverse of the difference value to obtain the positioning data, which is output as the positioning result. That is, when the covariance matrix of the visual inertial navigation odometer is abnormal (a covariance that is too large indicates that the visual inertial navigation odometer has entered an under-constrained state), the pose data of the visual inertial navigation odometer is discarded, the optimization calculation is stopped, and the last stable transformations T_ and T_gps are kept. The system then outputs pose = (inverse of T_gps) × satellite navigation output pose, until the visual constraints are rebuilt. Covariance expresses the linear correlation between two random variables; it is an overall parameter measuring how strongly two variables vary together, i.e., how strongly they influence each other, and the larger its absolute value, the stronger that influence. When the covariance exceeds a set threshold, it is considered too large; the threshold can be chosen from experience and is not limited by the present application.
In the embodiment of the present application, when the navigation device has a signal abnormality, the positioning data is determined according to the local pose data output by the visual inertial navigation odometer and output as the positioning result. For example, when the covariance matrix of the navigation device is abnormal, the transformation relation value between the local pose data of the visual inertial navigation odometer, after conversion into the global coordinate system, and the global pose data is acquired; the local pose data output by the visual inertial navigation odometer is then multiplied by the transformation relation value to obtain the positioning data, which is output as the positioning result. That is, if the covariance of the satellite navigation is too large, the system outputs pose = T_ × visual inertial navigation output pose, until the satellite signal becomes good again.
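The two fallback formulas can be sketched as follows, again under the illustrative assumption that poses and transforms are 4x4 homogeneous matrices and that each covariance matrix is summarized by a scalar health indicator:

```python
import numpy as np

def degraded_output_pose(vio_pose_global, nav_pose, vio_cov, nav_cov,
                         T_align, T_gps, cov_threshold=1.0):
    """vio_pose_global: VIO pose converted into the global frame; nav_pose:
    satellite navigation pose. T_align is the last stable T_, and T_gps the
    last stable difference from the true transform."""
    if vio_cov > cov_threshold:                  # visual under-constraint
        return np.linalg.inv(T_gps) @ nav_pose   # pose = inv(T_gps) x nav pose
    if nav_cov > cov_threshold:                  # satellite covariance too large
        return T_align @ vio_pose_global         # pose = T_ x VIO pose
    return nav_pose                              # both healthy: no fallback needed
```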
Referring to fig. 7, when all sensors are in good condition, the fusion process of the loose combination provides a constraint that corrects the estimate of the visual inertial navigation odometer with the satellite global positioning observations of the navigation device. When the satellite signal of the navigation device is lost, the whole trajectory constraint from the point of loss onward is built by the camera and IMU of the visual inertial navigation odometer. When image blur causes tracking loss in the tightly-coupled camera-IMU system of the visual inertial navigation odometer (a purely visual odometer would diverge in this case), the only constraint on the whole motion state is built from the satellite positioning observations of the navigation device. Inside the visual inertial navigation odometer, the camera + IMU tight coupling can be based on nonlinear optimization or on filtering.
Therefore, with the scheme of the embodiment of the present application, trajectory convergence can be achieved through the global position constraints of the navigation device's satellites when the visual inertial navigation odometer is visually under-constrained; and when the satellite signal of the navigation device is not good enough, trajectory convergence is achieved through the visual and IMU constraints of the visual inertial navigation odometer, preserving a stable positioning output. In this two-level loose combination of visual inertial navigation and satellite navigation positioning, data failure or under-constraint of either side does not cause the whole system to diverge: even if visual feature tracking is lost or texture is momentarily absent, the output can remain stable by relying on the satellite constraints.
Corresponding to the above method embodiments, the present application also provides a fusion positioning device, a flying car, an electronic device, and corresponding embodiments.
Fig. 8 is a schematic structural diagram of a fusion positioning device according to an embodiment of the present application.
Referring to fig. 8, a fusion positioning apparatus 80 provided in the embodiment of the present application includes: a first input module 81, a second input module 82, and a fusion positioning module 83.
The first input module 81 is configured to acquire local pose data output by the visual inertial navigation odometer, where the local pose data are obtained by tightly-coupled fusion of visual data and IMU data, and the IMU data are obtained from the first IMU data of the visual inertial navigation odometer and the second IMU data of the navigation device. The visual inertial navigation odometer includes a camera and a first IMU device; the visual data come from the camera, and the first IMU data come from the first IMU device. The navigation device includes a second IMU device, from which the second IMU data come. The camera may be a binocular camera. Obtaining the IMU data from the first IMU data of the visual inertial navigation odometer and the second IMU data of the navigation device includes: if the second IMU data from the second IMU device of the navigation device is received normally, selecting the second IMU data as the IMU data; if the reception of the second IMU data from the second IMU device of the navigation device is abnormal, selecting the first IMU data as the IMU data.
The second input module 82 is configured to acquire the global pose data output by the navigation device; it obtains global pose data that the navigation device outputs after obtaining it in an RTK mode.
The fusion positioning module 83 is configured to perform loosely-coupled fusion processing on the local pose data and the global pose data, and to output the positioning data obtained after the fusion processing as the positioning result.
Wherein, the fusion positioning module 83 includes: a first processing submodule 831 and a second processing submodule 832.
The first processing submodule 831 is configured to determine the positioning data according to the local pose data output by the visual inertial navigation odometer when the navigation device has a signal abnormality, and to output the positioning data as the positioning result. For example, when the covariance matrix of the navigation device is abnormal, the transformation relation value between the local pose data of the visual inertial navigation odometer, after conversion into the global coordinate system, and the global pose data is acquired; the local pose data output by the visual inertial navigation odometer is then multiplied by the transformation relation value to obtain the positioning data, which is output as the positioning result.
The second processing submodule 832 is configured to determine the positioning data according to the global pose data output by the navigation device when the visual inertial navigation odometer becomes visually under-constrained, and to output the positioning data as the positioning result. For example, when the covariance matrix of the visual inertial navigation odometer is abnormal, the transformation relation value between the local pose data of the visual inertial navigation odometer, after conversion into the global coordinate system, and the global pose data is acquired, and the difference value between it and the true position transformation relation value is determined; the global pose data output by the navigation device is then multiplied by the inverse of the difference value to obtain the positioning data, which is output as the positioning result.
According to the technical solution of the present application, a fusion positioning mode combining the visual inertial navigation odometer with the navigation device is adopted, in which the second IMU data of the navigation device is also taken into account when the visual data and IMU data are fused in a tightly-coupled manner to obtain the local pose data, improving the positioning accuracy of the visual inertial navigation odometer. A loose combination is then formed between the visual inertial navigation odometer and the navigation device, that is, loosely-coupled fusion processing is performed on the local pose data and the global pose data, and the positioning data obtained after the fusion processing is finally output as the positioning result. Through this processing, the navigation device participates in the whole positioning process, including the tightly-coupled fusion of visual data and IMU data at the front end and the loose combination of the visual inertial navigation odometer and the navigation device at the back end, which alleviates the problems of unstable visual inertial navigation odometer output and inaccurate positioning in outdoor flight scenes, so that more accurate and stable positioning can be provided in such scenes, giving aircraft such as flying cars more accurate and stable visual positioning and navigation support for steady flight.
The present application further provides a flying car including the fusion positioning device 80 shown in fig. 8.
With regard to the apparatus in the above embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated herein.
Fig. 9 is a schematic structural diagram of an electronic device shown in an embodiment of the present application. The electronic device may be, for example and without limitation, an aircraft.
Referring to fig. 9, the electronic device 1000 includes a memory 1010 and a processor 1020.
The Processor 1020 may be a Central Processing Unit (CPU), other general purpose Processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other Programmable logic device, discrete Gate or transistor logic, discrete hardware components, etc. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
The memory 1010 may include various types of storage units, such as system memory, read-only memory (ROM), and a persistent storage device. The ROM may store static data or instructions needed by the processor 1020 or other modules of the computer. The persistent storage device may be a readable and writable, non-volatile storage device that does not lose stored instructions and data even after the computer is powered off. In some embodiments, the persistent storage device is a mass storage device (e.g., a magnetic or optical disk, or flash memory); in other embodiments, it may be a removable storage device (e.g., a floppy disk or an optical drive). The system memory may be a readable-writable, volatile memory device, such as dynamic random access memory, and may store the instructions and data that some or all of the processors require at runtime. Furthermore, the memory 1010 may comprise any combination of computer-readable storage media, including various types of semiconductor memory chips (e.g., DRAM, SRAM, SDRAM, flash, programmable read-only memory) and magnetic and/or optical disks. In some embodiments, the memory 1010 may include a readable and/or writable removable storage device, such as a compact disc (CD), a read-only digital versatile disc (e.g., DVD-ROM, dual-layer DVD-ROM), a read-only Blu-ray disc, an ultra-density optical disc, a flash memory card (e.g., SD card, mini SD card, micro SD card), a magnetic floppy disk, or the like. Computer-readable storage media do not contain carrier waves or transitory electronic signals transmitted by wireless or wired means.
The memory 1010 stores executable code which, when executed by the processor 1020, causes the processor 1020 to perform some or all of the methods described above.
Furthermore, the method according to the present application may also be implemented as a computer program or computer program product comprising computer program code instructions for performing some or all of the steps of the above-described method of the present application.
Alternatively, the present application may also be embodied as a computer-readable storage medium (or non-transitory machine-readable storage medium or machine-readable storage medium) having executable code (or a computer program or computer instruction code) stored thereon, which, when executed by a processor of an electronic device (or server, etc.), causes the processor to perform part or all of the various steps of the above-described method according to the present application.
The foregoing description of the embodiments of the present application has been presented for purposes of illustration and description and is not intended to be exhaustive or limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein was chosen in order to best explain the principles of the embodiments, the practical application, or technical improvements over technologies in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (12)

1. A fusion positioning method, comprising:
acquiring local pose data output by a visual inertial navigation odometer, wherein the local pose data are obtained by tightly coupled fusion of visual data and IMU data, and the IMU data are obtained according to first IMU data of the visual inertial navigation odometer and second IMU data of a navigation device;
acquiring global pose data output by the navigation device;
and performing loosely coupled fusion processing on the local pose data and the global pose data, and outputting positioning data obtained after the fusion processing as a positioning result.
2. The method of claim 1, wherein:
the visual inertial navigation odometer comprises a camera and a first IMU device, the visual data coming from the camera and the first IMU data coming from the first IMU device;
and the navigation device comprises a second IMU device, the second IMU data coming from the second IMU device.
3. The method of claim 2, wherein obtaining the IMU data according to the first IMU data of the visual inertial navigation odometer and the second IMU data of the navigation device comprises:
selecting the second IMU data as the IMU data if reception of the second IMU data by the second IMU device of the navigation device is normal;
and selecting the first IMU data as the IMU data if reception of the second IMU data by the second IMU device of the navigation device is abnormal.
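By way of illustration only (not part of the claims), the selection rule of claim 3 reduces to a guarded fallback. The following minimal Python sketch assumes a simple boolean health flag for the second IMU device, which is an assumption, not something the claim specifies.

```python
def select_imu_data(first_imu_data, second_imu_data, second_imu_reception_ok):
    """Claim 3 as a guarded fallback: prefer the navigation device's
    second IMU data when its reception is normal; otherwise use the
    visual inertial navigation odometer's first IMU data."""
    if second_imu_reception_ok and second_imu_data is not None:
        return second_imu_data
    return first_imu_data
```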
4. The method according to claim 1, wherein performing the loosely coupled fusion processing on the local pose data and the global pose data and outputting the positioning data obtained after the fusion processing as the positioning result comprises:
when the navigation device has a signal abnormality, determining the positioning data according to the local pose data output by the visual inertial navigation odometer, and outputting the positioning data as the positioning result;
and when the visual inertial navigation odometer is visually under-constrained, determining the positioning data according to the global pose data output by the navigation device, and outputting the positioning data as the positioning result.
5. The method according to claim 4, wherein, when the navigation device has a signal abnormality, determining the positioning data according to the local pose data output by the visual inertial navigation odometer and outputting the positioning data as the positioning result comprises:
when the covariance matrix of the navigation device is abnormal, acquiring a transformation relation value between the local pose data of the visual inertial navigation odometer, after conversion into a global coordinate system, and the global pose data;
and multiplying the local pose data output by the visual inertial navigation odometer by the transformation relation value to obtain the positioning data, and outputting the positioning data as the positioning result.
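By way of illustration only, claim 5 can be read as caching the local-to-global transformation relation while the navigation device is healthy and replaying it once the covariance matrix becomes abnormal. A minimal Python sketch with 4x4 homogeneous pose matrices (the matrix convention is an assumption):

```python
import numpy as np

def positioning_when_nav_abnormal(local_pose, cached_T_global_from_local):
    """Claim 5: multiply the VIO local pose by the stored transformation
    relation value (local frame -> global frame) to obtain positioning data.
    Both arguments are assumed to be 4x4 homogeneous transforms."""
    return cached_T_global_from_local @ local_pose
```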
6. The method according to claim 4, wherein, when the visual inertial navigation odometer is visually under-constrained, determining the positioning data according to the global pose data output by the navigation device and outputting the positioning data as the positioning result comprises:
when the covariance matrix of the visual inertial navigation odometer is abnormal, acquiring a transformation relation value between the local pose data of the visual inertial navigation odometer, after conversion into a global coordinate system, and the global pose data, and determining, from this transformation relation value, its difference from the true position transformation relation value;
and multiplying the global pose data output by the navigation device by the inverse of the difference value to obtain the positioning data, and outputting the positioning data as the positioning result.
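By way of illustration only, claim 6's branch is symmetric to claim 5's: the difference between the estimated transformation relation value and the true one is inverted and applied to the global pose. Treating that "difference" as a relative transform is an assumption of this sketch:

```python
import numpy as np

def positioning_when_vision_underconstrained(global_pose, T_estimated, T_true):
    """Claim 6: compute the difference between the estimated and true
    transformation relation values, then multiply the navigation device's
    global pose by the inverse of that difference."""
    difference = T_estimated @ np.linalg.inv(T_true)  # assumed definition
    return np.linalg.inv(difference) @ global_pose
```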
7. The method of claim 1, wherein acquiring the global pose data output by the navigation device comprises:
acquiring global pose data output by the navigation device and obtained in a carrier-phase differential positioning (RTK) mode.
8. The method according to any one of claims 1 to 7, wherein:
when the visual inertial navigation odometer is initialized, an initialization process is performed according to the visual data, the IMU data, and the global pose data output by the navigation device, wherein:
PnP pose solving is performed on the visual data, and pose solving is performed on the IMU data after integration;
the pose solution of the visual data and the pose solution of the IMU data are filtered together to obtain updated pose transformation data;
and the pose transformation data obtained by filtering are combined with the global pose data output by the navigation device as feedback to correct the visual data of the last frame of the visual inertial navigation odometer.
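By way of illustration only, the initialization flow of claim 8 can be sketched as: PnP on the visual data (OpenCV's solvePnP is one real option), a pose from IMU integration, and a filter-style blend. The toy integration and position-only complementary filter below are stand-ins for whatever integration and filtering scheme the patent intends:

```python
import numpy as np
import cv2

def visual_pose_pnp(points_3d, points_2d, K):
    """PnP pose solving on the visual data."""
    ok, rvec, tvec = cv2.solvePnP(points_3d, points_2d, K, None)
    R, _ = cv2.Rodrigues(rvec)
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = tvec.ravel()
    return T

def imu_pose_by_integration(imu_samples, T_prev):
    """Pose solving after integrating the IMU data. Toy double integration
    of acceleration only; a real system would preintegrate gyro and accel
    measurements with gravity and bias terms."""
    T = T_prev.copy()
    v = np.zeros(3)
    for accel, dt in imu_samples:
        v = v + np.asarray(accel) * dt
        T[:3, 3] = T[:3, 3] + v * dt
    return T

def filter_poses_together(T_visual, T_imu, gain=0.5):
    """Filter the two pose solutions together (position-only complementary
    filter as a stand-in); the result is the updated pose transformation."""
    T = T_visual.copy()
    T[:3, 3] = gain * T_visual[:3, 3] + (1.0 - gain) * T_imu[:3, 3]
    return T
```

The filtered pose would then be combined with the navigation device's global pose and fed back to correct the last visual frame, as the claim states; that feedback path is omitted here.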
9. A fusion positioning device, comprising:
the system comprises a first input module, a second input module and a third input module, wherein the first input module is used for acquiring local pose data output by a visual inertial navigation odometer, the local pose data are obtained by performing close-coupling fusion on visual data and IMU data, and the IMU data are obtained by processing the first IMU data of the visual inertial navigation odometer and the second IMU data of navigation equipment;
the second input module is used for acquiring global pose data output by the navigation equipment;
and the fusion positioning module is used for performing loose coupling fusion processing on the local pose data and the global pose data and outputting positioning data obtained after fusion processing as a positioning result.
10. The apparatus of claim 9, wherein the fusion positioning module comprises:
a first processing submodule, configured to determine positioning data according to the local pose data output by the visual inertial navigation odometer when the navigation device has a signal abnormality, and to output the positioning data as a positioning result;
and a second processing submodule, configured to determine positioning data according to the global pose data output by the navigation device when the visual inertial navigation odometer is visually under-constrained, and to output the positioning data as a positioning result.
11. A fusion positioning system, comprising:
a visual inertial navigation odometer, configured to output local pose data;
a navigation device, configured to output global pose data and second IMU data;
and a computing platform, configured to acquire the local pose data output by the visual inertial navigation odometer, wherein the local pose data are obtained by tightly coupled fusion of visual data and IMU data, and the IMU data are obtained according to first IMU data of the visual inertial navigation odometer and the second IMU data of the navigation device; acquire the global pose data output by the navigation device; and perform loosely coupled fusion processing on the local pose data and the global pose data, outputting positioning data obtained after the fusion processing as a positioning result.
12. A flying car comprising the fusion positioning device according to claim 9 or 10.
CN202211450927.6A 2022-11-18 2022-11-18 Fusion positioning method, device and system and hovercar Pending CN115752436A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211450927.6A CN115752436A (en) 2022-11-18 2022-11-18 Fusion positioning method, device and system and hovercar

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211450927.6A CN115752436A (en) 2022-11-18 2022-11-18 Fusion positioning method, device and system and hovercar

Publications (1)

Publication Number Publication Date
CN115752436A true CN115752436A (en) 2023-03-07

Family

ID=85333256

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211450927.6A Pending CN115752436A (en) 2022-11-18 2022-11-18 Fusion positioning method, device and system and hovercar

Country Status (1)

Country Link
CN (1) CN115752436A (en)

Similar Documents

Publication Publication Date Title
CN109991636B (en) Map construction method and system based on GPS, IMU and binocular vision
KR102463176B1 (en) Device and method to estimate position
Schreiber et al. Vehicle localization with tightly coupled GNSS and visual odometry
US8315794B1 (en) Method and system for GPS-denied navigation of unmanned aerial vehicles
CN106767752B (en) Combined navigation method based on polarization information
EP2133662B1 (en) Methods and system of navigation using terrain features
US7840352B2 (en) Method and system for autonomous vehicle navigation
CN107478220B (en) Unmanned aerial vehicle indoor navigation method and device, unmanned aerial vehicle and storage medium
WO2018128669A1 (en) Systems and methods for using a sliding window of global positioning epochs in visual-inertial odometry
US20090263009A1 (en) Method and system for real-time visual odometry
CN112230242A (en) Pose estimation system and method
US8467612B2 (en) System and methods for navigation using corresponding line features
CN113405545A (en) Positioning method, positioning device, electronic equipment and computer storage medium
CN115930959A (en) Vision initialization method and device and hovercar
CN115523920B (en) Seamless positioning method based on visual inertial GNSS tight coupling
CN115135963A (en) Method for generating 3D reference point in scene map
CN115388884A (en) Joint initialization method for intelligent body pose estimator
Ćwian et al. GNSS-augmented lidar slam for accurate vehicle localization in large scale urban environments
CN116625359A (en) Visual inertial positioning method and device for self-adaptive fusion of single-frequency RTK
JP2021143861A (en) Information processor, information processing method, and information processing system
CN115752436A (en) Fusion positioning method, device and system and hovercar
Li et al. Accuracy-and Simplicity-Oriented Self-Calibration Approach for In-Vehicle GNSS/INS/Vision System With Observability Analysis
Emter et al. Stochastic cloning and smoothing for fusion of multiple relative and absolute measurements for localization and mapping
CN116625362A (en) Indoor positioning method and device, mobile terminal and storage medium
CN105874352B (en) The method and apparatus of the dislocation between equipment and ship are determined using radius of turn

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination