CN109579871A - Inertial navigation component installation error detection method and device based on computer vision - Google Patents


Info

Publication number
CN109579871A
CN109579871A (application number CN201811354743.3A)
Authority
CN
China
Prior art keywords
camera
image
coordinate system
dimensional
coordinates
Prior art date
Legal status
Granted
Application number
CN201811354743.3A
Other languages
Chinese (zh)
Other versions
CN109579871B (en)
Inventor
贺涛
李冲辉
何权荣
何嵘
陈江苏
Current Assignee
China Helicopter Research and Development Institute
Original Assignee
China Helicopter Research and Development Institute
Priority date
Filing date
Publication date
Application filed by China Helicopter Research and Development Institute
Priority to CN201811354743.3A
Publication of CN109579871A
Application granted
Publication of CN109579871B
Status: Active


Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C25/00 Manufacturing, calibrating, cleaning, or repairing instruments or devices referred to in the other groups of this subclass
    • G01C25/005 Manufacturing, calibrating, cleaning, or repairing instruments or devices referred to in the other groups of this subclass — initial alignment, calibration or starting-up of inertial devices

Landscapes

  • Engineering & Computer Science (AREA)
  • Manufacturing & Machinery (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The application belongs to the technical field of airborne inertial navigation component installation error measurement, and in particular relates to a computer-vision-based inertial navigation component installation error detection method comprising the following steps. A three-dimensional reconstruction step: performing three-dimensional reconstruction, in the same coordinate system, of the mounting surface of the inertial navigation component, the course feature on the mounting surface of the inertial navigation component, and the carrier aircraft reference. An error calculation step: calculating the inertial navigation component installation error based on the three-dimensionally reconstructed mounting surface, course feature and carrier aircraft reference. The application further relates to a computer-vision-based inertial navigation component installation error detection device comprising: a three-dimensional reconstruction module for performing three-dimensional reconstruction, in the same coordinate system, of the mounting surface of the inertial navigation component, the course feature on the mounting surface of the inertial navigation component, and the carrier aircraft reference; and an error calculation module for calculating the inertial navigation component installation error based on the three-dimensionally reconstructed mounting surface, course feature and carrier aircraft reference.

Description

Inertial navigation component installation error detection method and device based on computer vision
Technical Field
The application belongs to the technical field of airborne inertial navigation component installation error measurement, and particularly relates to an inertial navigation component installation error detection method and device based on computer vision.
Background
The inertial navigation component is a core component of an inertial integrated navigation system and is used to measure and resolve, in real time, navigation parameters of the carrier aircraft such as attitude, acceleration, velocity and position. To guarantee the precision of the inertial navigation component, the installation error of its mounting surface relative to the carrier aircraft reference must be controlled within an allowable error range; the installation error comprises a course error, a pitch error and a roll error.
In the prior art, a quadrant instrument is used to measure the pitch error and the roll error. Because a quadrant instrument can only measure an error relative to the horizontal plane, the carrier aircraft must first be adjusted to a level state before the pitch and roll errors can be measured, which makes the process cumbersome. The course error is measured with a laser tracker; this requires measuring, with the aid of a precision-machined auxiliary tool, the coordinates of two longitudinally separated hole positions in the airframe coordinate system, which is time-consuming and labor-intensive.
Accordingly, a technical solution is desired to overcome or at least alleviate at least one of the above-mentioned drawbacks of the prior art.
Disclosure of Invention
The present application is directed to a method and an apparatus for detecting installation errors of an inertial navigation component based on computer vision, so as to overcome or alleviate at least one of the above-mentioned disadvantages.
The technical scheme of the application is as follows:
in one aspect, a computer vision-based inertial navigation component installation error detection method is provided, and includes the following steps:
a three-dimensional reconstruction step: performing three-dimensional reconstruction, in the same coordinate system, of the mounting surface of the inertial navigation component, the course feature on the mounting surface of the inertial navigation component, and the carrier aircraft reference;
an error calculation step: calculating the installation error of the inertial navigation component based on the three-dimensionally reconstructed mounting surface, course feature and carrier aircraft reference.
According to at least one embodiment of the present application, the three-dimensional reconstruction step includes:
a camera calibration step: calibrating the first camera and the second camera to obtain internal and external parameters of the first camera and the second camera;
an image acquisition step: acquiring a first image based on the first camera and a second image based on the second camera, wherein the first image and the second image each comprise an image of the mounting surface and an image of the carrier aircraft reference;
A characteristic extraction step: extracting a characteristic point of an installation surface image in the first image, wherein the characteristic point is called a first installation surface characteristic point, and extracting a characteristic point representing the heading characteristic of the installation surface on the installation surface image in the first image, and the characteristic point is called a first installation surface heading characteristic point; extracting characteristic points of the carrier reference image in the first image, wherein the characteristic points are called as first carrier reference characteristic points; extracting a characteristic point of the installation surface image in the second image, wherein the characteristic point is called a second installation surface characteristic point, and extracting a characteristic point which represents the heading characteristic of the installation surface on the installation surface image in the second image, and the characteristic point is called a second installation surface heading characteristic point; extracting characteristic points of the carrier reference image in the second image, wherein the characteristic points are called second carrier reference characteristic points;
and (3) matching the characteristic points: matching the first mounting surface characteristic point and the second mounting surface characteristic point to obtain a mounting surface matching characteristic point; matching the first mounting surface course characteristic point with the second mounting surface course characteristic point to obtain a course matching characteristic point; matching the first carrier reference characteristic point and the second carrier reference characteristic point to obtain a carrier reference matching characteristic point;
a coordinate conversion step: obtaining the three-dimensional coordinates of the mounting surface matching feature points in a selected camera coordinate system, the three-dimensional coordinates of the course matching feature points in the selected camera coordinate system, and the three-dimensional coordinates of the carrier aircraft reference matching feature points in the selected camera coordinate system, according to the pixel coordinates of the mounting surface matching feature points in the first and second camera pixel coordinate systems, the pixel coordinates of the course matching feature points in the first and second camera pixel coordinate systems, the pixel coordinates of the carrier aircraft reference matching feature points in the first and second camera pixel coordinate systems, and the internal and external parameters;
three-dimensional coordinate reconstruction: performing three-dimensional reconstruction on the mounting surface according to the three-dimensional coordinates of the mounting surface matching feature points in the selected camera coordinate system; three-dimensional reconstruction is carried out on the course characteristic according to the three-dimensional coordinates of the course matching characteristic points under the selected camera coordinate system; performing three-dimensional reconstruction on the carrier reference according to the three-dimensional coordinates of the carrier reference matching feature points in the selected camera coordinate system;
wherein the selected camera is the first camera or the second camera.
According to at least one embodiment of the present application, the camera calibration step includes calibrating the first camera and the second camera by using a chessboard target to obtain the internal parameters f_p1, u_p1, v_p1, dx_1, dy_1 of the first camera, the internal parameters f_p2, u_p2, v_p2, dx_2, dy_2 of the second camera, and the external parameters R_c1c2, t_c1c2; wherein
f_p1 is the focal length of the first camera; u_p1, v_p1 are the pixel coordinates of the first camera image center point; dx_1, dy_1 are the physical dimensions of a first camera pixel on the x-axis and y-axis of the image plane, respectively;
f_p2 is the focal length of the second camera; u_p2, v_p2 are the pixel coordinates of the second camera image center point; dx_2, dy_2 are the physical dimensions of a second camera pixel on the x-axis and y-axis of the image plane, respectively;
R_c1c2, t_c1c2 represent the relative positional relationship of the first camera coordinate system and the second camera coordinate system, R_c1c2 being a rotation matrix and t_c1c2 a translation vector.
According to at least one embodiment of the present application, the coordinate transforming step includes:
a pixel and image coordinate conversion step: obtaining the image coordinates of the mounting surface matching feature points in the first camera image coordinate system according to their pixel coordinates in the first camera pixel coordinate system and u_p1, v_p1, dx_1, dy_1; obtaining the image coordinates of the course matching feature points in the first camera image coordinate system according to their pixel coordinates in the first camera pixel coordinate system and u_p1, v_p1, dx_1, dy_1; obtaining the image coordinates of the carrier aircraft reference matching feature points in the first camera image coordinate system according to their pixel coordinates in the first camera pixel coordinate system and u_p1, v_p1, dx_1, dy_1; obtaining the image coordinates of the mounting surface matching feature points in the second camera image coordinate system according to their pixel coordinates in the second camera pixel coordinate system and u_p2, v_p2, dx_2, dy_2; obtaining the image coordinates of the course matching feature points in the second camera image coordinate system according to their pixel coordinates in the second camera pixel coordinate system and u_p2, v_p2, dx_2, dy_2; and obtaining the image coordinates of the carrier aircraft reference matching feature points in the second camera image coordinate system according to their pixel coordinates in the second camera pixel coordinate system and u_p2, v_p2, dx_2, dy_2;
an image and camera coordinate conversion step: obtaining the three-dimensional coordinates of the mounting surface matching feature points in the selected camera coordinate system according to their image coordinates in the first camera image coordinate system, their image coordinates in the second camera image coordinate system, and f_p1, f_p2, R_c1c2, t_c1c2; obtaining the three-dimensional coordinates of the course matching feature points in the selected camera coordinate system according to their image coordinates in the first and second camera image coordinate systems and f_p1, f_p2, R_c1c2, t_c1c2; and obtaining the three-dimensional coordinates of the carrier aircraft reference matching feature points in the selected camera coordinate system according to their image coordinates in the first and second camera image coordinate systems and f_p1, f_p2, R_c1c2, t_c1c2.
According to at least one embodiment of the present application, the three-dimensional coordinate reconstructing step includes:
three-dimensional reconstruction of an installation surface: according to the three-dimensional coordinates of the mounting surface matching feature points in the selected camera coordinate system, performing three-dimensional reconstruction on the mounting surface of the inertial navigation component by adopting a multi-element fitting method;
three-dimensional reconstruction of course characteristics: according to the three-dimensional coordinates of the course matching feature points in the selected camera coordinate system, three-dimensional reconstruction is carried out on the course features by adopting a multi-element fitting method;
a carrier aircraft reference three-dimensional reconstruction step: performing three-dimensional reconstruction of the carrier aircraft reference by a multi-element fitting method according to the three-dimensional coordinates of the carrier aircraft reference matching feature points in the selected camera coordinate system.
According to at least one embodiment of the present application, the error calculating step includes:
a pitch error calculation step: calculating the pitch error between the inertial navigation mounting surface and the carrier aircraft reference based on the three-dimensionally reconstructed mounting surface and the three-dimensionally reconstructed carrier aircraft reference;
a roll error calculation step: calculating the roll error between the inertial navigation mounting surface and the carrier aircraft reference based on the three-dimensionally reconstructed mounting surface and the three-dimensionally reconstructed carrier aircraft reference;
a course error calculation step: calculating the course error between the inertial navigation mounting surface and the carrier aircraft reference based on the three-dimensionally reconstructed course feature and the three-dimensionally reconstructed carrier aircraft reference.
According to at least one embodiment of the application, the pitch error is the included angle between an XZ intersection line and the X axis of the three-dimensionally reconstructed carrier aircraft reference; the XZ intersection line is the intersection of the three-dimensionally reconstructed inertial navigation mounting surface with the XZ plane of the three-dimensionally reconstructed carrier aircraft reference.
According to at least one embodiment of the present application, the roll error is the included angle between a YZ intersection line and the Y axis of the three-dimensionally reconstructed carrier aircraft reference; the YZ intersection line is the intersection of the three-dimensionally reconstructed inertial navigation mounting surface with the YZ plane of the three-dimensionally reconstructed carrier aircraft reference.
According to at least one embodiment of the present application, the course error is the included angle between the three-dimensionally reconstructed course feature and the X axis of the three-dimensionally reconstructed carrier aircraft reference.
Another aspect provides an inertial navigation component installation error detection device based on computer vision, including:
the three-dimensional reconstruction module is used for performing three-dimensional reconstruction on the installation surface of the inertial navigation component, the course characteristics on the installation surface of the inertial navigation component and the aircraft reference in the same coordinate system;
an error calculation module for calculating the installation error of the inertial navigation component based on the three-dimensionally reconstructed mounting surface, course feature and carrier aircraft reference.
The application has at least the following beneficial technical effects: the method for detecting the installation error of the inertial navigation component based on computer vision is simple, fast and high in accuracy. In addition, the device for detecting the installation error of the inertial navigation part based on the computer vision is also provided.
Drawings
FIG. 1 is a schematic diagram illustrating a method for detecting installation errors of inertial navigation components based on computer vision according to an embodiment of the present application;
FIG. 2 is a flowchart of a computer vision-based inertial navigation component installation error detection method according to an embodiment of the present application.
Wherein:
101: first camera coordinate system; 103: second camera coordinate system; 104: carrier aircraft reference; 105: world coordinate system; 106: mounting surface of the inertial navigation component; 107: course feature on the mounting surface of the inertial navigation component; 108: first camera image coordinate system; 109: second camera image coordinate system; 110: pixel coordinate system of the first camera image; 111: pixel coordinate system of the second camera image;
102: relative positional relationship between the first camera coordinate system and the second camera coordinate system, i.e. the rotation matrix R_c1c2 and translation vector t_c1c2;
112: relative relationship between the world coordinate system and the first camera coordinate system, i.e. the rotation matrix R and translation vector t.
Detailed Description
The present application will be described in further detail with reference to the following drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the relevant application and are not limiting of the application. It should be noted that, for convenience of description, only the portions related to the present application are shown in the drawings.
It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict. The present application will be described in detail below with reference to the embodiments with reference to the attached drawings.
It should be noted that, in the description of the present application, the directions or positional relationships indicated by terms such as "center", "upper", "lower", "left", "right", "vertical", "horizontal", "inner" and "outer" are based on the orientations or positional relationships shown in the drawings, are merely for convenience of description, and do not indicate or imply that the device or element referred to must have a specific orientation or be constructed and operated in a specific orientation; they should therefore not be construed as limiting the present application. Furthermore, the terms "first", "second" and "third" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance.
Furthermore, it should be noted that, in the description of the present application, unless otherwise explicitly specified or limited, the terms "mounted," "connected," and "connected" are to be construed broadly, e.g., as being fixedly connected, detachably connected, or integrally connected; can be mechanically or electrically connected; they may be connected directly or indirectly through intervening media, or they may be interconnected between two elements. The specific meaning of the above terms in the present application can be understood by those skilled in the art as the case may be.
The present application is described in further detail below with reference to fig. 1-2.
The general idea of the method is as follows. Two cameras, a first camera and a second camera, are used to acquire images in which the mounting surface of the inertial navigation component and the carrier aircraft reference appear in the same image. The images acquired by the two cameras are processed to extract feature points; the pixel coordinates of the extracted feature points in the two cameras' pixel coordinate systems are converted into image coordinates in the two cameras' image coordinate systems; from these image coordinates and the related parameters, three-dimensional coordinates in one camera's coordinate system are obtained. With these three-dimensional coordinates, the mounting surface of the inertial navigation component, the course feature on the mounting surface and the carrier aircraft reference are three-dimensionally reconstructed, and the installation error of the inertial navigation component is finally calculated from the reconstructed mounting surface, course feature and carrier aircraft reference. The method is based on computer vision technology. The imaging model of a camera is described below taking the first camera as an example, as shown in fig. 1.
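The projection equation of fig. 1 appears in the source only as a drawing; in the standard pinhole form implied by the parameters defined below (a reconstruction, not the original formula), it reads:

z_{c1} \begin{pmatrix} u_{P1} \\ v_{P1} \\ 1 \end{pmatrix}
= \begin{pmatrix} f_{p1}/dx_1 & 0 & u_{p1} \\ 0 & f_{p1}/dy_1 & v_{p1} \\ 0 & 0 & 1 \end{pmatrix}
\begin{pmatrix} R & t \end{pmatrix}
\begin{pmatrix} x_w \\ y_w \\ z_w \\ 1 \end{pmatrix}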
the above model represents the process of perspective projection of a point P in three-dimensional space onto an image plane, wherein,
(x_w, y_w, z_w, 1)^T are the homogeneous coordinates of the three-dimensional space point P in the world coordinate system O_w-X_wY_wZ_w;
(x_c1, y_c1, z_c1, 1)^T are the homogeneous coordinates of the three-dimensional space point P in the first camera coordinate system O_c1-X_c1Y_c1Z_c1;
(u_P1, v_P1, 1)^T are the homogeneous pixel coordinates of the image point P1 in the first camera pixel coordinate system O_1-U_p1V_p1;
(x_P1, y_P1, 1)^T are the homogeneous image coordinates of the image point P1 in the first camera image coordinate system O_p1-X_p1Y_p1;
the image point P1, the intersection of O_c1P with the image plane, is the imaging point of the three-dimensional space point P in the first camera image;
R, t denote the relative relationship between the world coordinate system O_w-X_wY_wZ_w and the first camera coordinate system O_c1-X_c1Y_c1Z_c1, where R is a rotation matrix and t is a translation vector;
f_p1, u_p1, v_p1, dx_1, dy_1 are the internal parameters of the first camera and may be determined by camera calibration, wherein
dx_1, dy_1 are the physical dimensions of each pixel of the first camera on the x-axis and y-axis of the image plane, respectively;
u_p1, v_p1 are the pixel coordinates of the first camera image center point O_p1;
f_p1 is the focal length of the first camera;
furthermore, O_c1 is the optical center of the first camera, O_c1Z_c1 is the optical axis of the first camera, O_p1 is the intersection of the first camera's optical axis with the image plane, and o_c1x_c1 ∥ o_p1x_p1, o_c1y_c1 ∥ o_p1y_p1.
The imaging model of the second camera is the same as the imaging model of the first camera.
t_c1c2 = (t_x, t_y, t_z)^T
(x_c2, y_c2, z_c2)^T = R_c1c2 (x_c1, y_c1, z_c1)^T + t_c1c2
The above is the computational relationship between the three-dimensional coordinates (x_c1, y_c1, z_c1) of the three-dimensional space point P in the first camera coordinate system O_c1-X_c1Y_c1Z_c1 and its three-dimensional coordinates (x_c2, y_c2, z_c2) in the second camera coordinate system O_c2-X_c2Y_c2Z_c2, wherein
(x_P2, y_P2) are the image coordinates of the image point P2 in the second camera image coordinate system;
the image point P2 is the imaging point of the three-dimensional space point P in the second camera image;
R_c1c2, t_c1c2 represent the relative position transformation relationship between the first camera and the second camera, which can be determined by calibrating the first camera and the second camera, where R_c1c2 is a rotation matrix and t_c1c2 is a translation vector.
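By way of illustration only (the patent does not prescribe a particular algorithm), converting matched image coordinates into camera-frame three-dimensional coordinates under these two relations amounts to standard two-view triangulation. A minimal sketch in Python, assuming image coordinates already expressed in the metric image plane and the calibration quantities defined above (the function name and argument layout are illustrative, not from the patent):

```python
import numpy as np

def triangulate(xp1, yp1, xp2, yp2, f1, f2, R, t):
    """Recover the 3D coordinates of a matched point in the first camera
    coordinate system from its image coordinates in both views, by
    intersecting (in least squares) the two viewing rays related through
    X_c2 = R @ X_c1 + t."""
    d1 = np.array([xp1 / f1, yp1 / f1, 1.0])   # ray direction in camera 1
    d2 = np.array([xp2 / f2, yp2 / f2, 1.0])   # ray direction in camera 2
    # z1 * (R d1) + t = z2 * d2  ->  [R d1, -d2] [z1, z2]^T = -t
    A = np.column_stack((R @ d1, -d2))
    depths, *_ = np.linalg.lstsq(A, -t, rcond=None)
    return depths[0] * d1                      # (x_c1, y_c1, z_c1)
```

The returned point is in the first camera coordinate system; expressing it in the second camera coordinate system only requires applying X_c2 = R X_c1 + t to the result.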
The method for detecting the installation error of the inertial navigation component based on the computer vision comprises the following steps:
a three-dimensional reconstruction step: performing three-dimensional reconstruction, in the same coordinate system, of the mounting surface of the inertial navigation component, the course feature on the mounting surface of the inertial navigation component, and the carrier aircraft reference;
an error calculation step: calculating the installation error of the inertial navigation component based on the three-dimensionally reconstructed mounting surface, course feature and carrier aircraft reference.
In some alternative embodiments, the three-dimensional reconstruction step comprises:
a camera calibration step: calibrating the first camera and the second camera to obtain the internal and external parameters of the first camera and the second camera; before this step, the arrangement of the first camera and the second camera must be completed so that the two cameras can simultaneously capture the image of the mounting surface and the image of the carrier aircraft reference, and after calibration the relative positions of the first camera, the second camera, the inertial navigation mounting surface and the carrier aircraft reference must remain unchanged while the image acquisition step is carried out;
an image acquisition step: acquiring a first image based on a first camera and acquiring a second image based on a second camera; the first image comprises an image of a mounting surface and an image of an on-board reference; the second image comprises an image of a mounting surface and an image of an on-board reference;
a characteristic extraction step: extracting a characteristic point of an installation surface image in the first image, wherein the characteristic point is called a first installation surface characteristic point, and extracting a characteristic point representing the heading characteristic of the installation surface on the installation surface image in the first image, and the characteristic point is called a first installation surface heading characteristic point; extracting characteristic points of the carrier reference image in the first image, wherein the characteristic points are called as first carrier reference characteristic points; extracting a characteristic point of the installation surface image in the second image, wherein the characteristic point is called a second installation surface characteristic point, and extracting a characteristic point which represents the heading characteristic of the installation surface on the installation surface image in the second image, and the characteristic point is called a second installation surface heading characteristic point; extracting characteristic points of the carrier reference image in the second image, wherein the characteristic points are called second carrier reference characteristic points;
and (3) matching the characteristic points: matching the first mounting surface characteristic point and the second mounting surface characteristic point to obtain a mounting surface matching characteristic point; matching the first mounting surface course characteristic point with the second mounting surface course characteristic point to obtain a course matching characteristic point; matching the first carrier reference characteristic point and the second carrier reference characteristic point to obtain a carrier reference matching characteristic point;
a coordinate conversion step: obtaining the three-dimensional coordinates of the mounting surface matching feature points in a selected camera coordinate system, the three-dimensional coordinates of the course matching feature points in the selected camera coordinate system, and the three-dimensional coordinates of the carrier aircraft reference matching feature points in the selected camera coordinate system, according to the pixel coordinates of the mounting surface matching feature points in the first and second camera pixel coordinate systems, the pixel coordinates of the course matching feature points in the first and second camera pixel coordinate systems, the pixel coordinates of the carrier aircraft reference matching feature points in the first and second camera pixel coordinate systems, and the internal and external parameters;
three-dimensional coordinate reconstruction: performing three-dimensional reconstruction on the mounting surface according to the three-dimensional coordinates of the mounting surface matching feature points in the selected camera coordinate system; three-dimensional reconstruction is carried out on the course characteristic according to the three-dimensional coordinates of the course matching characteristic points under the selected camera coordinate system; performing three-dimensional reconstruction on the carrier reference according to the three-dimensional coordinates of the carrier reference matching feature points in the selected camera coordinate system;
wherein the selected camera is any one of the first camera and the second camera;
For the feature point matching step, as those skilled in the art will readily understand, feature points among the first mounting surface feature points that have no match among the second mounting surface feature points are removed (or, equivalently, unmatched feature points among the second mounting surface feature points are removed) to obtain the mounting surface matching feature points; the course matching feature points and the carrier aircraft reference matching feature points are obtained by a similar process.
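The patent leaves the choice of detector and matcher open; purely as a hypothetical illustration, ORB features with cross-checked brute-force matching in OpenCV would realize the extraction and matching steps (applied separately to the mounting surface region, the course feature region and the carrier aircraft reference region of each image):

```python
import cv2

def match_features(img1, img2):
    """Detect feature points in both images and keep only mutual
    (cross-checked) matches, discarding unmatched points as the
    matching step requires."""
    orb = cv2.ORB_create(nfeatures=2000)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)
    # pixel coordinates of the matched feature points in each image
    pts1 = [kp1[m.queryIdx].pt for m in matches]
    pts2 = [kp2[m.trainIdx].pt for m in matches]
    return pts1, pts2
```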
in some optional embodiments, the camera calibration step includes calibrating the first camera and the second camera by using the chessboard target to obtain the internal parameter f of the first camerap1,up1,vp1,dx1,dy1Internal parameters f of the second camerap2,up2,vp2,dx2,dy2And an external parameter Rc1c2、tc1c2(ii) a Wherein,
the internal and external parameters include: an internal parameter of the first camera, an internal parameter of the second camera, and an external parameter;
fp1is the focal length of the first camera; u. ofp1、vp1Is the pixel coordinate of the first camera image center point; dx1,dy1Physical dimensions of the first camera pixel in an x-axis and a y-axis of the image plane, respectively;
fp2is the focal length of the second camera; u. ofp2、vp2Is the pixel coordinate of the center point of the second camera image; dx2,dy2The physical dimensions of the second camera pixel in the x-axis and y-axis of the image plane, respectively;
Rc1c2、tc1c2representing the relative positional relationship of the first camera coordinate system and the second camera coordinate system, Rc1c2Is a rotation matrix, tc1c2Is a translation vector.
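As a sketch of how the chessboard calibration could be carried out with OpenCV (the board geometry and the image_pairs iterable are assumptions for illustration; the patent itself only specifies that a chessboard target is used):

```python
import cv2
import numpy as np

# Chessboard with 9x6 inner corners and 20 mm squares (assumed values).
cols, rows, square = 9, 6, 20.0
obj = np.zeros((cols * rows, 3), np.float32)
obj[:, :2] = np.mgrid[0:cols, 0:rows].T.reshape(-1, 2) * square

obj_pts, pts1, pts2 = [], [], []
for img1, img2 in image_pairs:      # simultaneous grayscale views of the target
    ok1, c1 = cv2.findChessboardCorners(img1, (cols, rows))
    ok2, c2 = cv2.findChessboardCorners(img2, (cols, rows))
    if ok1 and ok2:
        obj_pts.append(obj); pts1.append(c1); pts2.append(c2)

size = img1.shape[::-1]             # (width, height)

# Intrinsics of each camera (focal length, image center, pixel scale).
_, K1, d1, _, _ = cv2.calibrateCamera(obj_pts, pts1, size, None, None)
_, K2, d2, _, _ = cv2.calibrateCamera(obj_pts, pts2, size, None, None)

# Extrinsics between the cameras: rotation R_c1c2 and translation t_c1c2.
_, _, _, _, _, R, T, E, F = cv2.stereoCalibrate(
    obj_pts, pts1, pts2, K1, d1, K2, d2, size,
    flags=cv2.CALIB_FIX_INTRINSIC)
```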
In some optional embodiments, the coordinate transforming step comprises:
a pixel and image coordinate conversion step: obtaining the image coordinates of the mounting surface matching feature points in the first camera image coordinate system according to their pixel coordinates in the first camera pixel coordinate system and u_p1, v_p1, dx_1, dy_1; obtaining the image coordinates of the course matching feature points in the first camera image coordinate system according to their pixel coordinates in the first camera pixel coordinate system and u_p1, v_p1, dx_1, dy_1; obtaining the image coordinates of the carrier aircraft reference matching feature points in the first camera image coordinate system according to their pixel coordinates in the first camera pixel coordinate system and u_p1, v_p1, dx_1, dy_1; obtaining the image coordinates of the mounting surface matching feature points in the second camera image coordinate system according to their pixel coordinates in the second camera pixel coordinate system and u_p2, v_p2, dx_2, dy_2; obtaining the image coordinates of the course matching feature points in the second camera image coordinate system according to their pixel coordinates in the second camera pixel coordinate system and u_p2, v_p2, dx_2, dy_2; and obtaining the image coordinates of the carrier aircraft reference matching feature points in the second camera image coordinate system according to their pixel coordinates in the second camera pixel coordinate system and u_p2, v_p2, dx_2, dy_2;
an image and camera coordinate conversion step: obtaining the three-dimensional coordinates of the mounting surface matching feature points in the selected camera coordinate system according to their image coordinates in the first camera image coordinate system, their image coordinates in the second camera image coordinate system, and f_p1, f_p2, R_c1c2, t_c1c2; obtaining the three-dimensional coordinates of the course matching feature points in the selected camera coordinate system according to their image coordinates in the first and second camera image coordinate systems and f_p1, f_p2, R_c1c2, t_c1c2; and obtaining the three-dimensional coordinates of the carrier aircraft reference matching feature points in the selected camera coordinate system according to their image coordinates in the first and second camera image coordinate systems and f_p1, f_p2, R_c1c2, t_c1c2.
In the pixel and image coordinate conversion step, the image coordinates of the mounting surface matching feature points, the course matching feature points and the carrier aircraft reference matching feature points in the first camera and second camera image coordinate systems are obtained from their pixel coordinates in the first camera and second camera pixel coordinate systems, respectively. For the specific process, reference may be made to the transformation model, in the first camera imaging model, between the homogeneous pixel coordinates (u_P1, v_P1, 1)^T of the imaging point P1 of a three-dimensional space point P in the first camera pixel coordinate system O_1-U_p1V_p1 and its homogeneous image coordinates (x_P1, y_P1, 1)^T in the first camera image coordinate system O_p1-X_p1Y_p1:
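The transformation model itself appears in the source only as a drawing; in the standard form implied by the parameter definitions (a reconstruction, not the original formula), it reads:

\begin{pmatrix} u_{P1} \\ v_{P1} \\ 1 \end{pmatrix}
= \begin{pmatrix} 1/dx_1 & 0 & u_{p1} \\ 0 & 1/dy_1 & v_{p1} \\ 0 & 0 & 1 \end{pmatrix}
\begin{pmatrix} x_{P1} \\ y_{P1} \\ 1 \end{pmatrix}

i.e. x_P1 = (u_P1 − u_p1)·dx_1 and y_P1 = (v_P1 − v_p1)·dy_1.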
In the image and camera coordinate conversion step, the three-dimensional coordinates of the mounting surface matching feature points, the course matching feature points and the carrier aircraft reference matching feature points in the selected camera coordinate system are obtained from their image coordinates in the first camera and second camera image coordinate systems. For the specific process, those skilled in the art may refer to the computational relationship between the three-dimensional coordinates (x_c1, y_c1, z_c1) of a three-dimensional space point P in the first camera coordinate system O_c1-X_c1Y_c1Z_c1 and its three-dimensional coordinates (x_c2, y_c2, z_c2) in the second camera coordinate system O_c2-X_c2Y_c2Z_c2:
t_c1c2 = (t_x, t_y, t_z)^T
(x_c2, y_c2, z_c2)^T = R_c1c2 (x_c1, y_c1, z_c1)^T + t_c1c2
in some optional embodiments, the three-dimensional coordinate reconstructing step includes:
three-dimensional reconstruction of an installation surface: according to the three-dimensional coordinates of the mounting surface matching feature points in the selected camera coordinate system, performing three-dimensional reconstruction on the mounting surface of the inertial navigation component by adopting a multi-element fitting method;
three-dimensional reconstruction of course characteristics: according to the three-dimensional coordinates of the course matching feature points in the selected camera coordinate system, three-dimensional reconstruction is carried out on the course features by adopting a multi-element fitting method;
a carrier aircraft reference three-dimensional reconstruction step: performing three-dimensional reconstruction of the carrier aircraft reference by a multi-element fitting method according to the three-dimensional coordinates of the carrier aircraft reference matching feature points in the selected camera coordinate system.
The method adopts a multi-element fitting method to carry out three-dimensional reconstruction on the installation surface, the course characteristics and the aircraft reference, and can improve the accuracy of the three-dimensional reconstruction.
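The patent does not spell out the multi-element fitting; a minimal sketch, assuming the mounting surface and the carrier aircraft reference are fitted as least-squares planes and the course feature as a least-squares line (function names are illustrative):

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through an (N, 3) array of reconstructed
    feature points: returns the centroid and the unit normal, taken
    as the singular vector of least variance."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    return centroid, vt[-1]

def fit_line(points):
    """Least-squares line through 3D points (e.g. the course feature):
    returns the centroid and the direction of greatest variance."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    return centroid, vt[0]
```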
In some alternative embodiments, the error calculating step comprises:
a pitch error calculation step: calculating a pitching error between the inertial navigation installation surface and the airborne benchmark based on the three-dimensional reconstructed installation surface and the three-dimensional reconstructed airborne benchmark;
and roll error calculation: calculating a roll error between the inertial navigation installation surface and an airborne reference based on the three-dimensional reconstructed installation surface and the three-dimensional reconstructed airborne reference;
course error calculation: and calculating the course error between the inertial navigation installation surface and the airborne reference based on the three-dimensional reconstructed course characteristic and the three-dimensional reconstructed airborne reference.
In some optional embodiments, the pitch error is the included angle between an XZ intersection line and the X axis of the three-dimensionally reconstructed carrier aircraft reference;
the XZ intersection line being the intersection of the three-dimensionally reconstructed inertial navigation mounting surface with the XZ plane of the three-dimensionally reconstructed carrier aircraft reference.
In some optional embodiments, the roll error is the included angle between a YZ intersection line and the Y axis of the three-dimensionally reconstructed carrier aircraft reference;
the YZ intersection line being the intersection of the three-dimensionally reconstructed inertial navigation mounting surface with the YZ plane of the three-dimensionally reconstructed carrier aircraft reference.
In some optional embodiments, the course error is the included angle between the three-dimensionally reconstructed course feature and the X axis of the three-dimensionally reconstructed carrier aircraft reference.
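Putting these three definitions together, a hypothetical sketch (assuming the fitted mounting-plane normal, the fitted course-feature direction and orthonormal X and Y axes of the reconstructed carrier aircraft reference are all expressed in the selected camera frame):

```python
import numpy as np

def angle_deg(u, v):
    """Unsigned angle between two 3D directions, in degrees."""
    c = abs(np.dot(u, v)) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(c, 0.0, 1.0)))

def installation_errors(n_mount, course_dir, x_axis, y_axis):
    """Pitch, roll and course errors per the definitions above. The
    reference XZ plane has normal y_axis and the YZ plane has normal
    x_axis, so each intersection line is a cross product of normals."""
    pitch = angle_deg(np.cross(n_mount, y_axis), x_axis)   # XZ intersection vs X axis
    roll = angle_deg(np.cross(n_mount, x_axis), y_axis)    # YZ intersection vs Y axis
    course = angle_deg(course_dir, x_axis)                 # course feature vs X axis
    return pitch, roll, course
```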
Another aspect provides an inertial navigation component installation error detection device based on computer vision, including:
the three-dimensional reconstruction module is used for performing three-dimensional reconstruction on the mounting surface, the course characteristics on the mounting surface and the aircraft reference under the same coordinate system;
an error calculation module for calculating the installation error of the inertial navigation component based on the three-dimensionally reconstructed mounting surface, course feature and carrier aircraft reference.
So far, the technical solutions of the present application have been described in connection with the preferred embodiments shown in the drawings, but it is easily understood by those skilled in the art that the scope of the present application is obviously not limited to these specific embodiments. Equivalent changes or substitutions of related technical features can be made by those skilled in the art without departing from the principle of the present application, and the technical scheme after the changes or substitutions will fall into the protection scope of the present application.

Claims (10)

1. A computer vision-based inertial navigation component installation error detection method is characterized by comprising the following steps:
a three-dimensional reconstruction step: performing three-dimensional reconstruction, in the same coordinate system, of the mounting surface of the inertial navigation component, the course feature on the mounting surface of the inertial navigation component, and the carrier aircraft reference;
an error calculation step: calculating the installation error of the inertial navigation component based on the three-dimensionally reconstructed mounting surface, course feature and carrier aircraft reference.
2. The detection method according to claim 1, wherein the three-dimensional reconstruction step comprises:
a camera calibration step: calibrating a first camera and a second camera to obtain internal and external parameters of the first camera and the second camera;
an image acquisition step: acquiring a first image based on the first camera and acquiring a second image based on the second camera; wherein the first image and the second image each comprise an image of a mounting surface and an image of an on-board reference;
a characteristic extraction step: extracting a characteristic point of the installation surface image in the first image, wherein the characteristic point is called a first installation surface characteristic point, and extracting a characteristic point representing the heading characteristic of the installation surface on the installation surface image in the first image, wherein the characteristic point is called a first installation surface heading characteristic point; extracting characteristic points of the carrier reference image in the first image, wherein the characteristic points are called as first carrier reference characteristic points; extracting a characteristic point of the installation surface image in the second image, wherein the characteristic point is called a second installation surface characteristic point, and extracting a characteristic point representing the heading characteristic of the installation surface on the installation surface image in the second image, wherein the characteristic point is called a second installation surface heading characteristic point; extracting characteristic points of the carrier reference image in the second image, wherein the characteristic points are called second carrier reference characteristic points;
and (3) matching the characteristic points: matching the first mounting surface characteristic point and the second mounting surface characteristic point to obtain a mounting surface matching characteristic point; matching the first installation surface course characteristic point with the second installation surface course characteristic point to obtain a course matching characteristic point; matching the first carrier reference characteristic point and the second carrier reference characteristic point to obtain a carrier reference matching characteristic point;
a coordinate conversion step: obtaining the three-dimensional coordinates of the mounting surface matching feature points in a selected camera coordinate system, the three-dimensional coordinates of the course matching feature points in the selected camera coordinate system, and the three-dimensional coordinates of the carrier aircraft reference matching feature points in the selected camera coordinate system, according to the pixel coordinates of the mounting surface matching feature points in the first and second camera pixel coordinate systems, the pixel coordinates of the course matching feature points in the first and second camera pixel coordinate systems, the pixel coordinates of the carrier aircraft reference matching feature points in the first and second camera pixel coordinate systems, and the internal and external parameters;
three-dimensional coordinate reconstruction: performing three-dimensional reconstruction on the mounting surface according to the three-dimensional coordinates of the mounting surface matching feature points in the selected camera coordinate system; performing three-dimensional reconstruction on the course characteristic according to the three-dimensional coordinates of the course matching characteristic point in the selected camera coordinate system; performing three-dimensional reconstruction on the airborne reference according to the three-dimensional coordinates of the airborne reference matching feature points in the selected camera coordinate system;
wherein the selected camera is the first camera or the second camera.
3. The detection method according to claim 2,
the camera calibration step comprises calibrating the first camera and the second camera by using a chessboard target to obtain the internal parameters f_p1, u_p1, v_p1, dx_1, dy_1 of the first camera, the internal parameters f_p2, u_p2, v_p2, dx_2, dy_2 of the second camera, and the external parameters R_c1c2, t_c1c2; wherein
f_p1 is the focal length of the first camera; u_p1, v_p1 are the pixel coordinates of the first camera image center point; dx_1, dy_1 are the physical dimensions of a first camera pixel on the x-axis and y-axis of the image plane, respectively;
f_p2 is the focal length of the second camera; u_p2, v_p2 are the pixel coordinates of the second camera image center point; dx_2, dy_2 are the physical dimensions of a second camera pixel on the x-axis and y-axis of the image plane, respectively;
R_c1c2, t_c1c2 represent the relative positional relationship of the first camera coordinate system and the second camera coordinate system, R_c1c2 being a rotation matrix and t_c1c2 a translation vector.
4. The detection method according to claim 3, wherein the coordinate conversion step includes:
pixel and image coordinate conversion:
obtaining the image coordinates of the mounting surface matching feature points in the first camera image coordinate system according to their pixel coordinates in the first camera pixel coordinate system and u_p1, v_p1, dx_1, dy_1;
obtaining the image coordinates of the course matching feature points in the first camera image coordinate system according to their pixel coordinates in the first camera pixel coordinate system and u_p1, v_p1, dx_1, dy_1;
obtaining the image coordinates of the carrier aircraft reference matching feature points in the first camera image coordinate system according to their pixel coordinates in the first camera pixel coordinate system and u_p1, v_p1, dx_1, dy_1;
obtaining the image coordinates of the mounting surface matching feature points in the second camera image coordinate system according to their pixel coordinates in the second camera pixel coordinate system and u_p2, v_p2, dx_2, dy_2;
obtaining the image coordinates of the course matching feature points in the second camera image coordinate system according to their pixel coordinates in the second camera pixel coordinate system and u_p2, v_p2, dx_2, dy_2;
obtaining the image coordinates of the carrier aircraft reference matching feature points in the second camera image coordinate system according to their pixel coordinates in the second camera pixel coordinate system and u_p2, v_p2, dx_2, dy_2;
image and camera coordinate conversion:
obtaining the three-dimensional coordinates of the mounting surface matching feature points in the selected camera coordinate system according to their image coordinates in the first camera image coordinate system, their image coordinates in the second camera image coordinate system, and f_p1, f_p2, R_c1c2, t_c1c2;
obtaining the three-dimensional coordinates of the course matching feature points in the selected camera coordinate system according to their image coordinates in the first camera image coordinate system, their image coordinates in the second camera image coordinate system, and f_p1, f_p2, R_c1c2, t_c1c2;
obtaining the three-dimensional coordinates of the carrier aircraft reference matching feature points in the selected camera coordinate system according to their image coordinates in the first camera image coordinate system, their image coordinates in the second camera image coordinate system, and f_p1, f_p2, R_c1c2, t_c1c2.
5. The detection method according to claim 4, wherein the three-dimensional coordinate reconstruction step comprises:
three-dimensional reconstruction of an installation surface: according to the three-dimensional coordinates of the mounting surface matching feature points in the selected camera coordinate system, performing three-dimensional reconstruction on the mounting surface by adopting a multi-element fitting method;
three-dimensional reconstruction of course characteristics: according to the three-dimensional coordinates of the course matching feature points in the selected camera coordinate system, three-dimensional reconstruction is carried out on the course features by adopting a multi-element fitting method;
a carrier aircraft reference three-dimensional reconstruction step: performing three-dimensional reconstruction of the carrier aircraft reference by a multi-element fitting method according to the three-dimensional coordinates of the carrier aircraft reference matching feature points in the selected camera coordinate system.
6. The detection method according to claim 1, wherein the error calculation step includes:
a pitch error calculation step: calculating a pitching error between the inertial navigation installation surface and the airborne reference based on the three-dimensional reconstruction installation surface and the three-dimensional reconstruction airborne reference;
and roll error calculation: calculating a roll error between the inertial navigation installation surface and an airborne reference based on the three-dimensional reconstructed installation surface and the three-dimensional reconstructed airborne reference;
course error calculation: and calculating the course error between the inertial navigation installation surface and the airborne reference based on the three-dimensional reconstructed course characteristic and the three-dimensional reconstructed airborne reference.
7. The detection method according to claim 6,
the pitching error is an included angle between an XZ intersection line and a three-dimensionally reconstructed carrier reference X axis;
and the XZ intersection line is an intersection line of the three-dimensionally reconstructed inertial navigation mounting surface and an XZ plane of the three-dimensionally reconstructed carrier reference.
8. The detection method according to claim 6,
the roll error is the included angle between a YZ intersection line and the Y axis of the three-dimensionally reconstructed carrier aircraft reference;
the YZ intersection line being the intersection of the three-dimensionally reconstructed inertial navigation mounting surface with the YZ plane of the three-dimensionally reconstructed carrier aircraft reference.
9. The detection method according to claim 7,
and the course error is an included angle between the three-dimensionally reconstructed course characteristic and an X axis of the three-dimensionally reconstructed carrier reference.
10. An inertial navigation component installation error detection device based on computer vision, comprising:
the three-dimensional reconstruction module is used for performing three-dimensional reconstruction on the installation surface of the inertial navigation component, the course characteristics on the installation surface of the inertial navigation component and the aircraft reference in the same coordinate system;
an error calculation module for calculating the installation error of the inertial navigation component based on the three-dimensionally reconstructed mounting surface, course feature and carrier aircraft reference.
CN201811354743.3A 2018-11-14 2018-11-14 Inertial navigation part installation error detection method and device based on computer vision Active CN109579871B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811354743.3A CN109579871B (en) 2018-11-14 2018-11-14 Inertial navigation part installation error detection method and device based on computer vision


Publications (2)

Publication Number Publication Date
CN109579871A 2019-04-05
CN109579871B 2021-03-30

Family

ID=65922568

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811354743.3A Active CN109579871B (en) 2018-11-14 2018-11-14 Inertial navigation part installation error detection method and device based on computer vision

Country Status (1)

Country Link
CN (1) CN109579871B (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102680004A (en) * 2012-05-30 2012-09-19 北京航空航天大学 Scale factor error calibration and compensation method of flexible gyroscope position and orientation system (POS)
CN103090869A (en) * 2013-01-07 2013-05-08 重庆华渝电气仪表总厂 Digital compensation method for adjusting installation error of strapdown equipment
CN103247053A (en) * 2013-05-16 2013-08-14 大连理工大学 Accurate part positioning method based on binocular microscopy stereo vision
CN105043259A (en) * 2015-08-25 2015-11-11 大连理工大学 Numerical control machine tool rotating shaft error detection method based on binocular vision
WO2017083420A1 (en) * 2015-11-10 2017-05-18 Thales Visionix, Inc. Robust vision-inertial pedestrian tracking with heading auto-alignment
CN106705965A (en) * 2017-01-12 2017-05-24 苏州中德睿博智能科技有限公司 Scene three-dimensional data registration method and navigation system error correction method
CN108288292A (en) * 2017-12-26 2018-07-17 中国科学院深圳先进技术研究院 A kind of three-dimensional rebuilding method, device and equipment
CN108428255A (en) * 2018-02-10 2018-08-21 台州智必安科技有限责任公司 A kind of real-time three-dimensional method for reconstructing based on unmanned plane


Also Published As

Publication number Publication date
CN109579871B (en) 2021-03-30


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant