CN114485648B - Navigation positioning method based on bionic compound eye inertial system - Google Patents

Navigation positioning method based on bionic compound eye inertial system

Info

Publication number
CN114485648B
CN114485648B (application CN202210119005.0A)
Authority
CN
China
Prior art keywords
compound eye
inertial
measurement unit
image
bionic
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210119005.0A
Other languages
Chinese (zh)
Other versions
CN114485648A (en)
Inventor
张哲
李磊磊
杨伊菲
梁琳
王艺翔
王霞
韩勇强
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Institute of Technology BIT
Original Assignee
Beijing Institute of Technology BIT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Institute of Technology BIT
Priority to CN202210119005.0A
Publication of CN114485648A
Application granted
Publication of CN114485648B
Legal status: Active

Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01C: MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00: Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/10: Navigation by using measurements of speed or acceleration
    • G01C21/12: Navigation by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
    • G01C21/16: Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
    • G01C21/165: Inertial navigation combined with non-inertial navigation instruments
    • G01C21/1656: Inertial navigation combined with non-inertial navigation instruments with passive imaging devices, e.g. cameras
    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01C: MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00: Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/20: Instruments for performing navigational calculations

Abstract

The invention discloses a navigation positioning method based on a bionic compound eye inertial system. The bionic compound eye inertial system achieves high-accuracy, high-precision navigation positioning of a target object by a dynamic-baseline and triangulation ranging method: a coupled compound-eye/inertial-measurement-unit method collects and processes visual and inertial information to obtain high-accuracy relative pose changes of the target object, and the dynamic-baseline and triangulation ranging method then achieves high-precision pose positioning.

Description

Navigation positioning method based on bionic compound eye inertial system
Technical Field
The invention relates to the technical field of navigation, in particular to a navigation positioning method based on a bionic compound eye inertial system.
Background
In recent years, with the rapid development of unmanned aerial vehicles, unmanned ground vehicles and robots, navigation has become one of the bottleneck technologies restricting the wide application of unmanned platforms. Vision/inertial integrated navigation, in particular the combination of visual and inertial sensors, has gradually become a research and application hot spot in autonomous navigation and robotics, with wide application in the national economy and national defense. One advantage is that the visual and inertial sensors complement each other, so the combination achieves higher-precision positioning and map construction than either system alone.
In the field of national defense, since vision/inertial integrated navigation does not depend on external man-made facilities, the new joint precision airdrop system developed by the United States Army adopts vision/inertial integrated navigation to solve its positioning problem, and the Chang'e-3 lunar rover likewise achieved high-precision attitude positioning with vision/inertial integrated navigation. Conventional visual-inertial integrated navigation is currently classified into monocular inertial systems and binocular inertial systems.
The binocular inertial system performs positioning through the parallax obtained from the precisely known spatial relationship between the two cameras, but it suffers from a large computational load, difficult correspondence matching and poor real-time performance, making it unsuitable for fast, efficient real-time positioning; the binocular rig is also bulky, and high-quality cameras are expensive. The monocular inertial system reduces cost compared with the binocular inertial system, but a monocular camera cannot work normally in regions with little texture, its field of view is relatively small so it cannot capture wide-range image information, and when an object moves rapidly the captured image blurs, causing mismatches in dynamic scenes. Bionic compound-eye technology is comparatively mature: it provides the system with a large field of view, and its fast dynamic response can mitigate the mismatches caused by rapid motion in monocular visual-inertial measurement and improve positioning precision. Aiming at the problems of positioning precision and accuracy in monocular visual-inertial measurement systems, the invention designs a bionic compound-eye inertial positioning system and provides a dynamic-baseline and triangulation ranging method for high-precision target positioning, which can effectively alleviate these problems.
Disclosure of Invention
In order to solve the limitations and defects existing in the prior art, the invention provides a navigation positioning method based on a bionic compound eye inertial system, which comprises the following steps:
the bionic compound eye inertial system comprises a sensor acquisition system and a system main board, wherein the sensor acquisition system comprises a compound eye and an inertial measurement unit, the system main board comprises a vision processing module, an inertial measurement unit data processing module and a vision inertial fusion processing module, the sensor acquisition system is used for acquiring pose information, and the system main board is used for processing the acquired pose information to obtain preset pose information;
the compound eye and the inertia measurement unit are calibrated in a combined mode, a calibration model of a bionic compound eye inertia system is established, and optimal estimation is carried out on time offset and a coordinate transformation matrix of the bionic compound eye inertia system;
restoring the scale of an object in the image by using a compound eye object image scale restoring technology;
and constructing an overall residual model in a tightly coupled vision/inertial-measurement-unit manner according to an inverse-depth optimization technique, and obtaining, from the vision-based reprojection error, the residual term based on the inertial measurement unit and the prior-information factor, the inverse depth and the optimized inertial-measurement-unit velocity at which the system error of the bionic compound eye inertial system is minimal.
Optionally, the step of jointly calibrating the compound eye and the inertial measurement unit to establish a calibration model of the bionic compound eye inertial system and optimally estimating the time offset and the coordinate transformation matrix of the bionic compound eye inertial system includes:
setting the coordinates of the origin of the physical coordinate system of the image in the pixel coordinate system of the image as (u) 0 ,v 0 ) The expression of the relationship between the image physical coordinate system and the image pixel coordinate system is as follows:
wherein d x And d y U and v are the number of columns and rows of pixels in the digital image;
converting expression (1) into homogeneous-coordinate and matrix form:

$$\begin{bmatrix}u\\ v\\ 1\end{bmatrix}=\begin{bmatrix}1/d_x & 0 & u_0\\ 0 & 1/d_y & v_0\\ 0 & 0 & 1\end{bmatrix}\begin{bmatrix}x\\ y\\ 1\end{bmatrix} \tag{2}$$
the conversion relation from the world coordinate system to the compound-eye camera coordinate system is obtained:

$$\begin{bmatrix}X_c\\ Y_c\\ Z_c\\ 1\end{bmatrix}=\begin{bmatrix}R & t\\ 0^{T} & 1\end{bmatrix}\begin{bmatrix}X_w\\ Y_w\\ Z_w\\ 1\end{bmatrix} \tag{3}$$

where the optical center $O_c$ of the compound eye is the origin of the compound-eye camera coordinate system; the $Z_c$ axis coincides with the compound-eye optical axis and is perpendicular to the imaging plane of the camera; the $X_c$ axis is parallel to the $x$ axis and the $Y_c$ axis to the $y$ axis of the image physical coordinate system; and $OO_c$ is the focal length $f$, i.e. the distance from the lens focus of the compound eye to the lens center, which equals the distance between the origin of the compound-eye camera coordinate system and the origin of the image physical coordinate system;
the transformation process is written in matrix-multiplication form:

$$Z_c\begin{bmatrix}u\\ v\\ 1\end{bmatrix}=\begin{bmatrix}f/d_x & 0 & u_0\\ 0 & f/d_y & v_0\\ 0 & 0 & 1\end{bmatrix}\begin{bmatrix}R & t\end{bmatrix}\begin{bmatrix}X\\ Y\\ Z\\ 1\end{bmatrix} \tag{4}$$

where the point $P=(X,Y,Z)$ of the world coordinate system is converted by rigid-body transformation into a point of the compound-eye camera coordinate system, that point is converted by perspective projection into a point $(x,y)$ on the imaging plane of the camera, and the point on the imaging plane is converted by scaling and translation into a point $p=(u,v)$ of the image pixel coordinate system;
obtaining the internal parameter matrix $K$ of the compound eye:

$$K=\begin{bmatrix}f/d_x & 0 & u_0\\ 0 & f/d_y & v_0\\ 0 & 0 & 1\end{bmatrix} \tag{5}$$

setting $P_c$ as the coordinates of $P$ in the compound-eye camera coordinate system and $P_w$ as the coordinates of $P$ in the world coordinate system, the rotation matrix $R$ and translation vector $t$ convert $P_w$ into $P_c$:

$$P_c=RP_w+t \tag{6}$$

setting $R$ as a 3×3 rotation matrix and $t$ as a 3×1 translation vector, expression (6) is written in homogeneous form as:

$$\begin{bmatrix}P_c\\ 1\end{bmatrix}=\begin{bmatrix}R & t\\ 0^{T} & 1\end{bmatrix}\begin{bmatrix}P_w\\ 1\end{bmatrix} \tag{7}$$

bringing the rotation matrix $R$ and the translation vector $t$ into expression (7), the external parameters of the compound eye are obtained:

$$T=\begin{bmatrix}R & t\\ 0^{T} & 1\end{bmatrix} \tag{8}$$
calibrating the compound eye by using a Zhang Zhengyou calibration method to obtain an inner parameter, an outer parameter and a distortion coefficient of the compound eye, and calibrating the inertial measurement unit by using an A11an variance method to obtain Gaussian noise and random walk of the inertial measurement unit;
and fixedly connecting the calibrated compound eye with the inertial measurement unit, and carrying out joint calibration on the bionic compound eye inertial system.
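As an illustration only (not part of the claimed method), a minimal sketch of the Zhang-style intrinsic calibration step using OpenCV might look as follows; the checkerboard geometry, image directory and all numeric values are assumptions:

```python
# Hedged sketch of Zhang-style intrinsic calibration for one compound-eye
# aperture using OpenCV. Board size and image paths are illustrative.
import cv2
import glob
import numpy as np

pattern = (9, 6)                      # inner corners of an assumed checkerboard
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)  # board frame

obj_pts, img_pts = [], []
for path in glob.glob("calib/*.png"):  # hypothetical image directory
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    ok, corners = cv2.findChessboardCorners(gray, pattern)
    if ok:
        obj_pts.append(objp)
        img_pts.append(corners)

# K is the intrinsic matrix of expression (5); rvecs/tvecs give per-view
# extrinsics (R, t) as in expression (8); dist holds distortion coefficients.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_pts, img_pts, gray.shape[::-1], None, None)
print("reprojection RMS:", rms, "\nK =\n", K)
```

The Allan-variance calibration of the IMU and the subsequent compound-eye/IMU joint calibration would be performed with dedicated tools; this sketch covers only the camera side.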
Optionally, the step of restoring the scale of the object in the image using compound eye object image scale restoration technique includes:
jointly initializing the visual information and the inertial information to obtain the scale factor;
according to the frequency of respectively acquiring data by the compound eye and the inertia measurement unit, carrying out pre-integration processing on the inertia measurement unit to obtain the variation of position, rotation and speed;
for the data acquired by the compound eye, solving the poses of all frames and the 3D positions of all landmark points in the sliding window according to the preset operations of the structure-from-motion procedure;
obtaining relative rotation constraint between preset two key frame moment body coordinate systems by using pre-integration of the inertial measurement unit;
aligning the result of the structure-from-motion procedure with the pre-integrated values of the inertial measurement unit to correct the gyroscope bias;
solving the corresponding speed and gravity vector direction of each frame, and recovering the scale factors of the compound eyes;
and restoring the scale of objects in the compound-eye image while obtaining the velocity of the inertial measurement unit in metric units.
Optionally, the step of constructing an overall residual model in a tightly coupled vision/inertial-measurement-unit manner according to the inverse-depth optimization technique, and obtaining, from the vision-based reprojection error, the residual term based on the inertial measurement unit and the prior-information factor, the inverse depth and the optimized inertial-measurement-unit velocity at which the system error of the bionic compound eye inertial system is minimal, comprises the following steps:
and obtaining the pre-integration residual $r_B$ of the inertial measurement unit from the IMU pre-integration (expression (9)), where $\delta\alpha$, $\delta\theta$, $\delta\beta$, $\delta b_a$ and $\delta b_g$ represent the error states of the bionic compound eye inertial system, an error state being the difference between the estimated value and the true value of the bionic compound eye inertial system;
and calculating the minimum of the objective function of the pre-integration residual of the inertial measurement unit:

$$\min_{\Delta X}\left\|r_B+J_B\,\Delta X\right\|^{2} \tag{10}$$

where $J_B$ is the Jacobian matrix of the error term $r_B$ with respect to all optimization variables $X$;
expanding expression (10) and setting the derivative with respect to the increment $\Delta X$ to zero, the expression for the increment $\Delta X$ is obtained:

$$J_B^{T}J_B\,\Delta X=-J_B^{T}r_B \tag{11}$$
the optimization variables are the positions, orientations and velocities of the two frames together with the Gaussian white-noise and random-walk bias terms of the inertial measurement unit:

$$X=\left[p_{b_k},q_{b_k},v_{b_k},b_{a_k},b_{g_k},\;p_{b_{k+1}},q_{b_{k+1}},v_{b_{k+1}},b_{a_{k+1}},b_{g_{k+1}}\right] \tag{12}$$
calculating the partial derivative of the residual with respect to each optimization variable to obtain the Jacobian matrix (expression (13));
obtaining the state quantity at which the error is minimal through a nonlinear optimization algorithm according to expressions (11), (12) and (13);
the visual measurement residual is the reprojection error of a feature point; when the $l$-th feature point is first observed in the $i$-th image, the residual of the observation of that feature in the $j$-th image is defined by expression (14), which compares the first observation of the $l$-th feature in the $i$-th image with the observation of the same feature in the $j$-th image;
the objective for the visual measurement residual is obtained:

$$\min_{\Delta X}\left\|r_C+J_C\,\Delta X\right\|^{2} \tag{15}$$

where $J_C$ is the Jacobian matrix of the error term $r_C$ with respect to all optimization variables $X$;
expanding expression (15) and setting the derivative with respect to the increment $\Delta X$ to zero, the expression for the increment $\Delta X$ is obtained:

$$J_C^{T}J_C\,\Delta X=-J_C^{T}r_C \tag{16}$$
where the optimization variables are given by expression (17);
and calculating the Jacobian matrix with respect to each optimization variable from the visual residual formula (expression (18));
the noise covariance of the visual constraint corresponds to the reprojection error observed when calibrating the camera intrinsic parameters; on the normalized camera plane the covariance matrix is the pixel deviation $\sigma_{\text{pix}}$ divided by the focal length $f$, and the information matrix equals the inverse of the covariance matrix:

$$\Lambda_C=\left(\frac{f}{\sigma_{\text{pix}}}\right)^{2}I_{2\times 2} \tag{19}$$

obtaining the optimization variables that minimize the visual residual according to expressions (17), (18) and (19);
the objective function optimized by the bionic compound eye inertial system is:

$$\min_{X}\left\{\left\|r_p-H_pX\right\|^{2}+\sum_{k}\left\|r_B\right\|_{P_B}^{2}+\sum_{(l,j)}\left\|r_C\right\|_{P_C}^{2}\right\} \tag{20}$$

where the three residual terms are, in order, the prior information, the inertial-measurement-unit measurement residual and the visual observation residual, and the state quantity $X=[x_0,x_1,\ldots,x_n,\lambda_0,\lambda_1,\ldots,\lambda_m]$ comprises the states of all $n+1$ cameras and the inverse depths of the $m+1$ 3D points;
the global increment equation of the overall objective function of expression (20) is:

$$\left(\Lambda_p+J_B^{T}P_B^{-1}J_B+J_C^{T}P_C^{-1}J_C\right)\Delta X=b_p+J_B^{T}P_B^{-1}r_B+J_C^{T}P_C^{-1}r_C \tag{21}$$

where $P_B$ is the noise covariance of the inertial measurement unit and $P_B^{-1}$ is the information matrix of the inertial measurement unit;
expression (21) is simplified as:

$$\left(\Lambda_p+\Lambda_B+\Lambda_C\right)\Delta X=b_p+b_B+b_C \tag{22}$$

where $\Lambda_p$, $\Lambda_B$ and $\Lambda_C$ are Hessian matrices;
calculating the partial derivatives of the whole objective function with respect to all optimization variables to obtain the Jacobian matrix of the bionic compound eye inertial system and hence the state increment $\Delta X$, where the state quantity includes the optimized depth and the inertial-measurement-unit velocity information;
and iteratively updating all state quantities until the objective function reaches its minimum, at which point the depth and the inertial-measurement-unit velocity information are the optimized values.
Optionally, the method further comprises:
$O_1$ and $O_2$ are the optical centers of the compound eye, and $I_1$ and $I_2$ are the feature points of the same object position in two frames of images; by triangulation, the straight lines $O_1I_1$ and $O_2I_2$ intersect at a point $P$ in the scene;
feature points $I_1$ and $I_2$ are normalized, with $I_1$ having coordinates $X_1$ and $I_2$ having coordinates $X_2$, satisfying:

$$S_2X_2=S_1RX_1+t \tag{23}$$

where $S_1$ is the depth of feature point $I_1$, $S_2$ is the depth of feature point $I_2$, $R$ is the rotation matrix between the two frames, and $t$ is the translation vector;
solving the expression (23) to obtain the depth of the characteristic points in the two frames of images;
and determining the space coordinates of the target point according to the scale information determined during the initialization of the bionic compound eye inertial system.
The invention has the following beneficial effects:
the invention provides a navigation positioning method based on a bionic compound eye inertial system, which utilizes the bionic compound eye inertial system to realize high-accuracy and high-precision navigation positioning of a target object according to a dynamic baseline and triangle ranging method, utilizes a compound eye and inertia measuring unit coupling method to acquire and process visual information and inertia information, obtains high-accuracy relative pose change of the target object, and realizes high-precision pose positioning through the dynamic baseline and triangle ranging method.
Drawings
Fig. 1 is a schematic diagram of an overall framework of a bionic compound eye inertial system according to a first embodiment of the present invention.
Fig. 2 is a schematic diagram of an algorithm design framework based on a bionic compound eye inertial system according to an embodiment of the present invention.
Fig. 3 is a schematic diagram of a coordinate axis according to a first embodiment of the present invention.
Fig. 4 is a schematic diagram of three coordinate axis positions of compound eyes according to a first embodiment of the present invention.
Fig. 5 is a schematic diagram of a pinhole imaging model according to a first embodiment of the present invention.
Fig. 6 is a schematic diagram of joint calibration according to a first embodiment of the present invention.
Fig. 7 is a schematic diagram of data frequency and pre-integration of a vision and inertial measurement unit according to a first embodiment of the present invention.
Fig. 8 is a flow chart of a scale reduction technique according to a first embodiment of the present invention.
Fig. 9 is a flowchart of an inverse depth optimization technique according to a first embodiment of the present invention.
Fig. 10 is a measurement chart of dynamic baseline and triangulation according to a first embodiment of the present invention.
Detailed Description
In order to enable those skilled in the art to better understand the technical scheme of the invention, the navigation positioning method based on the bionic compound eye inertial system provided by the invention is described in detail below with reference to the accompanying drawings.
Example 1
The traditional visual-inertial navigation system uses a camera as the image acquisition sensor. Because its field of view is small, the acquisition range of surrounding-environment images is small, and locally similar environments can cause the system to misjudge, perform an erroneous closed-loop operation, and degrade navigation positioning accuracy. Secondly, if an object moving at high speed appears in the field of view, the camera's response is too slow and the captured image blurs, affecting the positioning accuracy of the visual-inertial system. Improving positioning accuracy and precision is therefore a ubiquitous and urgent problem for visual-inertial navigation systems.
Bionic compound-eye imaging simulates the motion-sensing mechanism of the insect optic nerve; it offers fast response, high resolution and a large field of view, captures a wider range of environment images than an ordinary camera, is more sensitive to the attitude of moving objects, and is small and light. Therefore, to achieve high-accuracy and high-precision positioning, this embodiment provides a navigation positioning method based on a bionic compound eye inertial system, which achieves high-accuracy, high-precision navigation positioning of a target object by a dynamic-baseline + triangulation ranging method: a tightly coupled compound eye + inertial measurement unit (Inertial Measurement Unit, IMU) approach collects and processes visual and inertial information to obtain high-accuracy relative pose changes of the target object, and the dynamic-baseline + triangulation ranging method achieves high-precision pose positioning.
Fig. 1 is a schematic diagram of the overall framework of the bionic compound eye inertial system according to the first embodiment of the present invention. As shown in fig. 1, the hardware of the bionic compound eye inertial system consists of a sensor acquisition system and a system main board. The sensor acquisition system consists of a compound eye and an IMU module and is powered by the 12 V and 5 V supplies of a power source. The system main board comprises three parts: a vision processing module, an IMU data processing module and a visual-inertial fusion processing module; the collected data are processed by the system main board to obtain optimized pose information.
Fig. 2 is a schematic diagram of an algorithm design framework based on a bionic compound eye inertial system according to an embodiment of the present invention. As shown in fig. 2, the algorithm design of the bionic compound eye inertial system is divided into three parts, wherein the first part is the compound eye and IMU combined calibration, a compound eye and inertial system calibration model is established, and the time offset and coordinate transformation matrix of the compound eye and the inertial system are optimally estimated; the second part is a compound eye object image scale reduction technology, so that the scale reduction of objects in the image is realized; the third part is an inverse depth optimization technology, and an integral residual error model is constructed by utilizing a visual and IMU tight coupling mode, so that the inverse depth capable of minimizing the system error is obtained.
This embodiment provides the joint calibration of the compound-eye inertial system. To recover the compound-eye object-image scale information, the multi-aperture bionic compound-eye camera must be localized. By matching the feature points of the two images taken before and after the compound-eye camera moves, the correspondence of the two-dimensional image points can be recovered; determining the camera motion between two frames requires calibrating the compound-eye model. Compound-eye calibration mainly involves three coordinate systems: the image coordinate system, the camera coordinate system and the world coordinate system.
Fig. 3 is a schematic diagram of the coordinate axes according to the first embodiment of the present invention. As shown in fig. 3, the image pixel coordinate system is in units of pixels with its origin at the upper left, and the position of each pixel is expressed in pixel units, where u and v denote the column and row of the pixel in the digital image. Since pixel positions carry no physical unit, an image physical coordinate system (x, y) in physical units (millimeters) must also be established; its origin is the intersection of the optical axis and the image plane, and its axes are parallel to those of the image pixel coordinate system.
If the origin of the image physical coordinate system has coordinates $(u_0,v_0)$ in the image pixel coordinate system and each pixel has physical size $d_x\times d_y$, the relationship between the two coordinate systems is:

$$u=\frac{x}{d_x}+u_0,\qquad v=\frac{y}{d_y}+v_0 \tag{1}$$

or, in homogeneous coordinates and matrix form:

$$\begin{bmatrix}u\\ v\\ 1\end{bmatrix}=\begin{bmatrix}1/d_x & 0 & u_0\\ 0 & 1/d_y & v_0\\ 0 & 0 & 1\end{bmatrix}\begin{bmatrix}x\\ y\\ 1\end{bmatrix} \tag{2}$$
the compound eye coordinate system takes the optical center of the compound eye as the origin O c ,Z c The axis coincides with the optical axis and is perpendicular to the imaging plane, X c And Y c The axis is parallel to the two coordinate axes of the image coordinate system. OO (OO) c Is the focal length of the compound eye, i.e. the distance between the origin of the camera coordinate system and the origin of the physical coordinate system of the image.
The world coordinate system is a three-dimensional coordinate system defined by a user for describing the positions of objects and cameras in a three-dimensional space. The three can be represented by fig. 4, and fig. 4 is a schematic diagram of three coordinate axis positions of compound eyes according to the first embodiment of the present invention. As shown in FIG. 4, O c The point is the optical center of compound eye, X c Axes and Y c The axis is parallel to the X axis and the Y axis of the image, Z c The axis is the compound eye optical axis, which is perpendicular to the image plane. The intersection point of the optical axis and the image plane is the origin of the image coordinate system. From point O c And X is c 、Y c 、Z c The rectangular coordinate system of axes is called camera coordinate system, OO c Is the focal length of the compound eye camera.
In the compound-eye model provided in this embodiment, the lens of the compound eye is a group of lenses: when light rays parallel to the main optical axis pass through the lenses they converge at a point called the focal point, and the distance from the focal point to the lens center is the focal length f. The lens of a digital camera is equivalent to a convex lens with the photosensitive element located near its focus; when the focal length is approximated by the distance from the center of the convex lens to the photosensitive element, the model becomes a pinhole imaging model. The pinhole imaging model is shown in fig. 5.
Through the coordinate-system transformations, the conversion from the world coordinate system to the compound-eye camera coordinate system is obtained:

$$\begin{bmatrix}X_c\\ Y_c\\ Z_c\\ 1\end{bmatrix}=\begin{bmatrix}R & t\\ 0^{T} & 1\end{bmatrix}\begin{bmatrix}X_w\\ Y_w\\ Z_w\\ 1\end{bmatrix} \tag{3}$$
and designing a multi-aperture compound eye according to the compound eye model, and calibrating the compound eye by using a Zhang Zhengyou calibration method to obtain the internal and external parameters and the distortion coefficient of the compound eye.
Let $P=(X,Y,Z)$ be a point in the scene. In the pinhole camera model it is transformed into an image point $p=(u,v)$ on the two-dimensional image as follows: the world coordinate system is transformed into the camera coordinate system by a rigid-body transformation, which uses the relative pose of the camera, i.e. the camera extrinsics; the camera coordinate system is transformed by perspective projection to an image point $(x,y)$ on the imaging plane of the camera; and the image point is transformed from the imaging coordinate system by scaling and translation to a point $p=(u,v)$ in the pixel coordinate system.
The compound eye transforms three-dimensional points in the scene into two-dimensional points in the image, i.e. the composition of the individual coordinate transformations, which can be arranged in matrix-multiplication form:

$$Z_c\begin{bmatrix}u\\ v\\ 1\end{bmatrix}=\begin{bmatrix}f/d_x & 0 & u_0\\ 0 & f/d_y & v_0\\ 0 & 0 & 1\end{bmatrix}\begin{bmatrix}R & t\end{bmatrix}\begin{bmatrix}X\\ Y\\ Z\\ 1\end{bmatrix} \tag{4}$$

The matrix $K$ is called the camera intrinsic matrix:

$$K=\begin{bmatrix}f/d_x & 0 & u_0\\ 0 & f/d_y & v_0\\ 0 & 0 & 1\end{bmatrix} \tag{5}$$
introducing a stable and invariable coordinate system: the world coordinate system is absolutely unchanged, and the visual odometer in synchronous positioning and mapping (Simultaneous Localization and Mapping, SLAM) solves the motion trail of the camera under the world coordinate system.
Let $P_c$ be the coordinates of $P$ in the compound-eye coordinate system and $P_w$ its coordinates in the world coordinate system; a rotation matrix $R$ and a translation vector $t$ convert $P_w$ into $P_c$:

$$P_c=RP_w+t \tag{6}$$

In expression (6), $R$ is a 3×3 rotation matrix and $t$ is a 3×1 translation vector; in homogeneous form:

$$\begin{bmatrix}P_c\\ 1\end{bmatrix}=\begin{bmatrix}R & t\\ 0^{T} & 1\end{bmatrix}\begin{bmatrix}P_w\\ 1\end{bmatrix} \tag{7}$$

Bringing the rotation matrix $R$ and the translation vector $t$ in, the compound-eye external parameters of expression (8) are derived:

$$T=\begin{bmatrix}R & t\\ 0^{T} & 1\end{bmatrix} \tag{8}$$
fig. 6 is a schematic diagram of joint calibration according to a first embodiment of the present invention. As shown in fig. 6, after obtaining the internal reference and the external reference of the compound eye, calibrating the IMU by using an alan variance method to obtain gaussian noise and random walk of the IM, and fixedly connecting the calibrated compound eye with the IMU, and performing system joint calibration of compound eye inertia after the fixation is successful. Calibration mainly determines the spatial position transformation between the compound eye camera and the IMU and the time offset of the two sensors.
An object in the real world is imaged on the image sensor through the bionic compound eye to complete three-dimensional spatial ranging; first, the scale of the object in the image must be restored. Through joint vision/inertial initialization, the scale factor can be obtained, the scale restoration of objects in the image completed, and preliminarily optimized parameters such as the IMU velocity obtained.
Fig. 7 is a schematic diagram of the data frequencies and pre-integration of the vision and inertial measurement units according to the first embodiment of the present invention. As shown in fig. 7, before scale restoration, because the compound eye and the IMU each acquire data at their own frequency and the IMU acquisition frequency is far higher than that of the compound eye, several IMU measurements exist between two adjacent keyframes. IMU pre-integration integrates these IMU samples to obtain the changes in position, rotation and velocity used for optimization.
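A much-simplified pre-integration sketch in this spirit, accumulating the IMU samples between two keyframes into rotation, velocity and position deltas; bias handling and noise propagation are omitted, and the sample arrays are assumptions:

```python
import numpy as np

def skew(w):
    """Skew-symmetric matrix of a 3-vector, for the rotation increment."""
    return np.array([[0, -w[2], w[1]],
                     [w[2], 0, -w[0]],
                     [-w[1], w[0], 0]])

def preintegrate(gyro, accel, dt):
    """Accumulate IMU samples between two keyframes into the
    rotation/velocity/position deltas used by the optimizer."""
    dR = np.eye(3)                 # rotation delta
    dv = np.zeros(3)               # velocity delta
    dp = np.zeros(3)               # position delta
    for w, a in zip(gyro, accel):
        dp += dv * dt + 0.5 * (dR @ a) * dt**2   # position uses current dv, dR
        dv += (dR @ a) * dt
        dR = dR @ (np.eye(3) + skew(w * dt))     # first-order rotation update
    return dR, dv, dp
```

In a full implementation the deltas are also corrected to first order when the bias estimates change, so the pre-integrated terms need not be recomputed at every re-linearization.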
Fig. 8 is a flow chart of the scale restoration technique according to the first embodiment of the present invention. As shown in fig. 8, the compound-eye data captured by the system are first processed by structure from motion (Structure from Motion, SFM), using the 8-point method, triangulation, PnP and similar operations, to solve the poses of all frames and the 3D positions of all landmark points in the sliding window; at the same time, IMU pre-integration yields the relative rotation constraint between the body coordinate systems at the two keyframe instants. Because a gyroscope bias is still present in the IMU result at this point, the SFM result must be aligned with the IMU pre-integrated values to correct the bias; the velocity of each frame and the direction of the gravity vector are then solved, and the scale factor of the bionic compound eye is recovered. The scale of objects in the compound-eye image is thereby restored, and the IMU velocity in metric units is obtained.
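The scale recovery can be reduced, in a highly simplified form, to a one-parameter least-squares problem between the up-to-scale SFM translation deltas and the metric IMU pre-integrated position deltas; the sketch below assumes gravity and biases have already been handled and uses illustrative data:

```python
import numpy as np

def recover_scale(sfm_dp, imu_dp):
    """sfm_dp: (N,3) frame-to-frame translations from SFM (up to scale);
    imu_dp: (N,3) corresponding IMU pre-integrated position deltas in metres.
    Returns the scale factor s minimizing ||s*sfm_dp - imu_dp||^2."""
    a = sfm_dp.reshape(-1)
    b = imu_dp.reshape(-1)
    return float(a @ b / (a @ a))   # closed-form 1-D least squares

# illustrative data: true scale 2.5 plus small noise
rng = np.random.default_rng(0)
sfm = rng.normal(size=(10, 3))
imu = 2.5 * sfm + 0.01 * rng.normal(size=(10, 3))
print("recovered scale:", recover_scale(sfm, imu))
```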
Fig. 9 is a flowchart of an inverse depth optimization technique according to a first embodiment of the present invention. As shown in fig. 9, in the actual running process, in order to further reduce the system error and improve the positioning precision, the system is subjected to inverse depth optimization, an integral residual error model is constructed by utilizing a visual and IMU tight coupling mode, and the re-projection error based on visual construction, the residual error item based on IMU construction and the factor based on priori information are jointly optimized to obtain the inverse depth and the IMU optimization speed for minimizing the system error.
From the IMU pre-integration, the IMU pre-integration residual $r_B$ is obtained (expression (9)), where $\delta\alpha$, $\delta\theta$, $\delta\beta$, $\delta b_a$ and $\delta b_g$ represent the error states of the system, each obtained by subtracting the true value from the system estimate. To compute the minimum of the objective function, an increment of the optimization variables is sought after which the objective value is minimized; for the IMU residual this can be written as:

$$\min_{\Delta X}\left\|r_B+J_B\,\Delta X\right\|^{2} \tag{10}$$
where $J_B$ is the Jacobian of the error term $r_B$ with respect to all state vectors (i.e., the optimization variables) $X$. Expanding the above formula and setting the derivative with respect to $\Delta X$ to zero, the calculation formula for the increment $\Delta X$ is obtained:

$$J_B^{T}J_B\,\Delta X=-J_B^{T}r_B \tag{11}$$
the optimization variables are the position, the direction, the speed, the IMU Gaussian white noise and the IMU random walk of the two frames respectively, and the expression is as follows:
The partial derivative of the residual with respect to each optimization variable is calculated to obtain the Jacobian (expression (13)).
Combining expressions (11), (12) and (13), the state quantities minimizing the error are obtained through a nonlinear optimization algorithm.
The visual measurement residual is the reprojection error of a feature point. When the $l$-th feature point is first observed in the $i$-th image, the residual of the observation of that feature in the $j$-th image can be defined (expression (14)); it compares the first observation of the $l$-th feature in the $i$-th image with the observation of the same feature in the $j$-th image. Since the visual residual has two degrees of freedom, it is projected onto the tangent plane.
For the visual residual, the objective can be written as:

$$\min_{\Delta X}\left\|r_C+J_C\,\Delta X\right\|^{2} \tag{15}$$

where $J_C$ is the Jacobian of the error term $r_C$ with respect to all state vectors (i.e., the optimization variables) $X$. Expanding the above formula and setting the derivative with respect to $\Delta X$ to zero, the calculation formula for the increment $\Delta X$ is obtained:

$$J_C^{T}J_C\,\Delta X=-J_C^{T}r_C \tag{16}$$
the optimization variables are as follows:
according to the visual residual formula, jacobian relative to each optimization variable is calculated as follows:
the noise covariance of visual constraint and the reprojection error when calibrating the internal parameters of the camera are related to deviation of a few pixels, 1.5 pixels are taken, the covariance matrix corresponding to the normalized camera plane is divided by the focal length f, and then the information matrix is equal to the inverse of the covariance matrix, and the expression is as follows:
the optimized variable of minimizing the visual residual of the expression can be found through the expression (17), the expression (18) and the expression (19).
This embodiment obtains the objective function of the system optimization:

$$\min_{X}\left\{\left\|r_p-H_pX\right\|^{2}+\sum_{k}\left\|r_B\right\|_{P_B}^{2}+\sum_{(l,j)}\left\|r_C\right\|_{P_C}^{2}\right\} \tag{20}$$

where the three residual terms are, in order, the prior information, the IMU measurement residual and the visual observation residual, and the state quantity $X=[x_0,x_1,\ldots,x_n,\lambda_0,\lambda_1,\ldots,\lambda_m]$ comprises the states of all $n+1$ cameras (position, orientation, velocity, and accelerometer and gyroscope biases) and the inverse depths of the $m+1$ 3D points.
The global increment equation of the overall objective function of expression (20) can be written as:

$$\left(\Lambda_p+J_B^{T}P_B^{-1}J_B+J_C^{T}P_C^{-1}J_C\right)\Delta X=b_p+J_B^{T}P_B^{-1}r_B+J_C^{T}P_C^{-1}r_C \tag{21}$$

where $P_B$ is the covariance of the IMU pre-integration noise term. The larger the IMU noise covariance $P_B$, the smaller its information matrix $P_B^{-1}$; this means the IMU observations are less trustworthy, and the visual observations are then believed more.
Expression (21) can be further simplified to:

$$\left(\Lambda_p+\Lambda_B+\Lambda_C\right)\Delta X=b_p+b_B+b_C \tag{22}$$

where $\Lambda_p$, $\Lambda_B$ and $\Lambda_C$ are Hessian matrices. The partial derivatives of the overall objective function with respect to all optimization variables are calculated to obtain the Jacobian of the system and hence the state increment $\Delta X$; the state quantity includes the optimized depth and IMU velocity information, and all state quantities are updated iteratively until the objective function reaches its minimum, at which point the depth and IMU velocity information are the optimized values.
This embodiment uses the dynamic-baseline + triangulation ranging method to achieve target positioning. Triangulation was first used for astronomical and geographic measurement; in this system, the compound eye observes the same landmark point from different positions to obtain the relative pose change between the observation positions, from which the distance of the landmark point is deduced.
Fig. 10 is a measurement diagram of the dynamic baseline and triangulation according to the first embodiment of the present invention. As shown in fig. 10, the first observation position is $O_1$, whose image is taken as the reference frame; when the compound eye moves to $O_2$, there is a transformation matrix $T$ between the two positions. $O_1$ and $O_2$ are also the optical centers of the compound eye, and $I_1$ and $I_2$ are the feature points of the same object position in the two frames of images; by triangulation, the straight lines $O_1I_1$ and $O_2I_2$ intersect at a point $P$ in the scene. In actual measurement the two lines do not intersect exactly because of noise, so a least-squares solution is used.
The two feature points are normalized, with coordinates $X_1$ and $X_2$, and the rotation matrix $R$ and translation vector $t$ between the two frames are known. They then satisfy:

$$S_2X_2=S_1RX_1+t \tag{23}$$

where $S_1$ and $S_2$ are the depths of the two feature points. Solving this equation yields the depths of the point in the two frames, and the spatial coordinates of the target point are determined from the scale information fixed during initialization of the compound-eye inertial system; the compound-eye inertial system can thus fix the pose of the target point.
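The depths of expression (23) can be recovered by stacking the equation into a small least-squares problem, consistent with the noted least-squares solution; R, t and the normalized coordinates below are assumed for illustration:

```python
import numpy as np

def triangulate_depths(X1, X2, R, t):
    """Solve S2*X2 = S1*R*X1 + t for (S1, S2) in the least-squares sense,
    since the two rays generally do not intersect exactly under noise."""
    A = np.column_stack((R @ X1, -X2))       # unknowns [S1, S2]
    s, *_ = np.linalg.lstsq(A, -t, rcond=None)
    return s[0], s[1]

# illustrative geometry: a point P seen from two poses
R = np.eye(3)
t = np.array([-0.2, 0.0, 0.0])               # frame-1 -> frame-2 transform
P = np.array([0.5, 0.3, 4.0])                # point in frame 1
X1 = P / P[2]                                # normalized coords in frame 1
P2 = R @ P + t
X2 = P2 / P2[2]                              # normalized coords in frame 2
print(triangulate_depths(X1, X2, R, t))      # approximately (4.0, 4.0)
```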
In actual measurement, the observation points $O_1$ and $O_2$ can be chosen as required, so the line connecting $O_1$ and $O_2$ forms a dynamic baseline. Referring to fig. 10, when the compound-eye inertial system moves to $O_3$, the position, and hence the motion, can still be solved by this method. Every frame can serve as a starting frame for real-time positioning, and once the corresponding distance has been measured the earlier data are discarded, ensuring real-time performance and measurement stability.
This embodiment combines the large field of view and fast response of the compound eye with an inertial device to provide a low-power bionic compound-eye inertial navigation positioning system; the compound eye consumes only 8 W. More accurate relative position data are obtained through the dynamic-baseline and triangulation ranging method, effectively improving the positioning accuracy and navigation efficiency of the visual-inertial system compared with an ordinary camera + IMU system or a single vision system. The information gathered at the back end is optimized by the tightly coupled compound eye + inertia algorithm, which is more effective than common filtering methods and better suited to real-time large-scale SLAM scenes. By fusing the IMU orientation information with the camera coordinate system, the position of the image coordinates in the inertial navigation coordinate system can be obtained intuitively, simplifying data processing.
Besides the compound-eye + inertial scheme of this patent, the visual-inertial navigation positioning scheme provided by this embodiment could also adopt positioning modes such as a camera, GPS or lidar. In the inverse-depth optimization part, the visual and IMU information could instead be loosely coupled and processed separately without joint optimization. The optimization strategies are likewise varied and not limited to nonlinear optimization; the technical scheme of this embodiment realizes nonlinear closed-loop optimization of the data and has notable advantages for real-time navigation and positioning.
The main significance of the method introduced in this embodiment is that compound eyes replace the cameras of the visual-inertial navigation system, and the dynamic baseline and triangulation improve the positioning mode of the visual-inertial navigation system, so that the visual-inertial fusion algorithm computes with higher precision and yields more accurate navigation positioning. In this embodiment, the image information acquired by the compound eye is fused with the inertial information, a compound-eye and inertial-system calibration model is established, and the time offset and coordinate transformation matrix of the compound eye and the inertial system are optimally estimated. In the compound-eye and inertial model, a rotation matrix and a translation vector represent the coordinate transformation, from which the compound-eye external parameters are obtained, realizing the joint calibration of compound-eye inertia. The method for calibrating the compound-eye external parameters is as follows:

Let $P_c$ be the coordinates of $P$ in the compound-eye camera coordinate system and $P_w$ its coordinates in the world coordinate system; a rotation matrix $R$ and a translation vector $t$ convert $P_w$ into $P_c$:

$$P_c=RP_w+t \tag{6}$$

where $R$ is a 3×3 rotation matrix and $t$ is a 3×1 translation vector; in homogeneous form:

$$\begin{bmatrix}P_c\\ 1\end{bmatrix}=\begin{bmatrix}R & t\\ 0^{T} & 1\end{bmatrix}\begin{bmatrix}P_w\\ 1\end{bmatrix} \tag{7}$$

Bringing the rotation matrix $R$ and the translation vector $t$ in, the compound-eye external parameters are deduced:

$$T=\begin{bmatrix}R & t\\ 0^{T} & 1\end{bmatrix} \tag{8}$$
the embodiment provides a high-precision real-time navigation positioning method by a dynamic baseline and triangle ranging method, which is used for tracking a detected targetThe tracking and positioning improve the real-time navigation positioning function of the visual inertial system in the large market SLAM. Referring to FIG. 10, the first observation position is O 1 The compound eye moves to O by taking the image as a reference frame 2 When there is a transformation matrix T, while O 1 、O 2 Also the optical center of compound eyes, I 1 、I 2 For the characteristic points of the object at the same position in the two frames of images, the straight line O is measured by the triangle 1 I 1 、O 2 I 2 Which intersect at point P in the scene. In actual measurement, the two can not be completely intersected due to the influence of noise, and the least square method is used for solving.
The two feature points are normalized, with coordinates $X_1$ and $X_2$, and the rotation matrix $R$ and translation vector $t$ between the two frames are known. They then satisfy:

$$S_2X_2=S_1RX_1+t \tag{23}$$

where $S_1$ and $S_2$ are the depths of the two feature points; solving this yields the depths of the point in the two frames, and the spatial coordinates of the target point are determined from the scale information fixed during initialization of the compound-eye inertial system, so the compound-eye inertial system can fix the pose of the target point. By adding the inverse-depth optimization technique and incorporating the prior information into the optimization function, the problem of poor scale precision of the compound-eye inertial system in positioning is solved.
It is to be understood that the above embodiments are merely illustrative of the application of the principles of the present invention, but not in limitation thereof. Various modifications and improvements may be made by those skilled in the art without departing from the spirit and substance of the invention, and are also considered to be within the scope of the invention.

Claims (2)

1. A navigation positioning method based on a bionic compound eye inertial system is characterized by comprising the following steps:
the bionic compound eye inertial system comprises a sensor acquisition system and a system main board, wherein the sensor acquisition system comprises a compound eye and an inertial measurement unit, the system main board comprises a vision processing module, an inertial measurement unit data processing module and a vision inertial fusion processing module, the sensor acquisition system is used for acquiring pose information, and the system main board is used for processing the acquired pose information to obtain preset pose information;
the compound eye and the inertia measurement unit are calibrated in a combined mode, a calibration model of a bionic compound eye inertia system is established, and optimal estimation is carried out on time offset and a coordinate transformation matrix of the bionic compound eye inertia system;
restoring the scale of an object in the image by using a compound eye object image scale restoring technology;
and constructing an overall residual model in a tightly coupled vision/inertial-measurement-unit manner according to an inverse-depth optimization technique, and obtaining, from the vision-based reprojection error, the residual term based on the inertial measurement unit and the prior-information factor, the inverse depth and the optimized inertial-measurement-unit velocity at which the system error of the bionic compound eye inertial system is minimal.
2. The navigation positioning method based on the bionic compound eye inertial system according to claim 1, wherein the step of restoring the scale of the object in the image by using the compound eye object image scale restoring technology comprises:
jointly initializing the visual information and the inertial information to obtain the scale factor;
according to the frequency of respectively acquiring data by the compound eye and the inertia measurement unit, carrying out pre-integration processing on the inertia measurement unit to obtain the variation of position, rotation and speed;
for the data acquired by the compound eye, solving the poses of all frames and the 3D positions of all landmark points in the sliding window according to the preset operations of the structure-from-motion procedure;
obtaining relative rotation constraint between preset two key frame moment body coordinate systems by using pre-integration of the inertial measurement unit;
aligning the result of the structure-from-motion procedure with the pre-integrated values of the inertial measurement unit to correct the gyroscope bias;
solving the corresponding speed and gravity vector direction of each frame, and recovering the scale factors of the compound eyes;
and restoring the scale of objects in the compound-eye image while obtaining the velocity of the inertial measurement unit in metric units.
CN202210119005.0A 2022-02-08 2022-02-08 Navigation positioning method based on bionic compound eye inertial system Active CN114485648B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210119005.0A CN114485648B (en) 2022-02-08 2022-02-08 Navigation positioning method based on bionic compound eye inertial system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210119005.0A CN114485648B (en) 2022-02-08 2022-02-08 Navigation positioning method based on bionic compound eye inertial system

Publications (2)

Publication Number Publication Date
CN114485648A CN114485648A (en) 2022-05-13
CN114485648B true CN114485648B (en) 2024-02-02

Family

ID=81479124

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210119005.0A Active CN114485648B (en) 2022-02-08 2022-02-08 Navigation positioning method based on bionic compound eye inertial system

Country Status (1)

Country Link
CN (1) CN114485648B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116681732B (en) * 2023-08-03 2023-10-20 南昌工程学院 Target motion recognition method and system based on compound eye morphological vision

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109079799A (en) * 2018-10-23 2018-12-25 哈尔滨工业大学(深圳) It is a kind of based on bionical robot perception control system and control method
CN110375738A (en) * 2019-06-21 2019-10-25 西安电子科技大学 A kind of monocular merging Inertial Measurement Unit is synchronous to be positioned and builds figure pose calculation method
CN110887473A (en) * 2019-12-09 2020-03-17 北京航空航天大学 Bionic polarization autonomous combined navigation method based on polarization degree weighting
WO2020087846A1 (en) * 2018-10-31 2020-05-07 东南大学 Navigation method based on iteratively extended kalman filter fusion inertia and monocular vision
CN111307139A (en) * 2019-12-09 2020-06-19 北京航空航天大学 Course and attitude determination method based on polarization/astronomical information fusion
CN112697138A (en) * 2020-12-07 2021-04-23 北方工业大学 Factor graph optimization-based bionic polarization synchronous positioning and composition method

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019191288A1 (en) * 2018-03-27 2019-10-03 Artisense Corporation Direct sparse visual-inertial odometry using dynamic marginalization
CN109085845B (en) * 2018-07-31 2020-08-11 北京航空航天大学 Autonomous air refueling and docking bionic visual navigation control system and method

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109079799A (en) * 2018-10-23 2018-12-25 哈尔滨工业大学(深圳) It is a kind of based on bionical robot perception control system and control method
WO2020087846A1 (en) * 2018-10-31 2020-05-07 东南大学 Navigation method based on iteratively extended kalman filter fusion inertia and monocular vision
CN110375738A (en) * 2019-06-21 2019-10-25 西安电子科技大学 A kind of monocular merging Inertial Measurement Unit is synchronous to be positioned and builds figure pose calculation method
CN110887473A (en) * 2019-12-09 2020-03-17 北京航空航天大学 Bionic polarization autonomous combined navigation method based on polarization degree weighting
CN111307139A (en) * 2019-12-09 2020-06-19 北京航空航天大学 Course and attitude determination method based on polarization/astronomical information fusion
CN112697138A (en) * 2020-12-07 2021-04-23 北方工业大学 Factor graph optimization-based bionic polarization synchronous positioning and composition method

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
A survey of bionic navigation technology; 胡小平, 毛军, 范晨, 张礼廉, 何晓峰, 韩国良, 范颖; Navigation Positioning and Timing (No. 4); full text *
Integrated orientation method based on micro-inertial/polarization vision; 范晨, 何晓峰, 范颖, 胡小平, 张礼廉, 于化鹏; Journal of Chinese Inertial Technology (No. 2); full text *

Also Published As

Publication number Publication date
CN114485648A (en) 2022-05-13

Similar Documents

Publication Publication Date Title
CN109993113B (en) Pose estimation method based on RGB-D and IMU information fusion
Heng et al. Project autovision: Localization and 3d scene perception for an autonomous vehicle with a multi-camera system
CN110070615B (en) Multi-camera cooperation-based panoramic vision SLAM method
CN111156998B (en) Mobile robot positioning method based on RGB-D camera and IMU information fusion
CN110726406A (en) Improved nonlinear optimization monocular inertial navigation SLAM method
EP2656309A1 (en) Method for determining a parameter set designed for determining the pose of a camera and/or for determining a three-dimensional structure of the at least one real object
WO2024045632A1 (en) Binocular vision and imu-based underwater scene three-dimensional reconstruction method, and device
CN112669354A (en) Multi-camera motion state estimation method based on vehicle incomplete constraint
CN111932674A (en) Optimization method of line laser vision inertial system
CN112781582A (en) Multi-sensor fusion high-precision pose estimation algorithm under satellite weak observation condition
Ramezani et al. Omnidirectional visual-inertial odometry using multi-state constraint Kalman filter
KR20220113781A (en) How to measure the topography of your environment
CN114485648B (en) Navigation positioning method based on bionic compound eye inertial system
CN115371673A (en) Binocular camera target positioning method based on Bundle Adjustment in unknown environment
CN113450334B (en) Overwater target detection method, electronic equipment and storage medium
CN113345032B (en) Initialization map building method and system based on wide-angle camera large distortion map
CN116182855B (en) Combined navigation method of compound eye-simulated polarized vision unmanned aerial vehicle under weak light and strong environment
CN112444245A (en) Insect-imitated vision integrated navigation method based on polarized light, optical flow vector and binocular vision sensor
Aminzadeh et al. Implementation and performance evaluation of optical flow navigation system under specific conditions for a flying robot
Donnelly Semi-dense stereo reconstruction from aerial imagery for improved obstacle detection
CN116597106A (en) Air-to-ground large scene sparse reconstruction and target positioning method
Autera Study and development of calibration algorithms of multi-camera systems for precision agriculture robotics
CN116400333A (en) External parameter calibration method for mutual constraint of laser radar, camera and inertial sensor
Peng et al. Visual SLAM for Mobile Robot
Sæthern et al. Automatic Calibration of Multiple Fisheye Cameras: Trajectory based Extrinsic Calibration of a Multicamera Rig

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant