CN111105467B - Image calibration method and device and electronic equipment - Google Patents

Image calibration method and device and electronic equipment

Info

Publication number
CN111105467B
CN111105467B (application CN201911294340.9A)
Authority
CN
China
Prior art keywords
key frame
image
coordinate system
preset
calibration
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911294340.9A
Other languages
Chinese (zh)
Other versions
CN111105467A (en)
Inventor
钟耳顺
黄科佳
颜鹏鹏
陈国雄
王晨亮
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Supermap Software Co ltd
Original Assignee
Supermap Software Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Supermap Software Co ltd filed Critical Supermap Software Co ltd
Priority to CN201911294340.9A priority Critical patent/CN111105467B/en
Publication of CN111105467A publication Critical patent/CN111105467A/en
Application granted granted Critical
Publication of CN111105467B publication Critical patent/CN111105467B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/80Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T7/85Stereo camera calibration
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Abstract

The invention provides an image calibration method, an image calibration device, and an electronic device. Whether the current image is a key frame is determined; if so, the three-dimensional coordinates of the feature points in the key frame and the position information of the photographing device are calculated while the photographing device moves, and the position information is adjusted according to the three-dimensional coordinates of the feature points. After the position information of the photographing device is adjusted, a calibration position requiring image mapping is determined from the key frame, a preset image is mapped to the calibration position, and the key frame with the preset image mapped to the calibration position is converted into a map coordinate system. Because the final image is expressed in the map coordinate system, the pose of the image relative to the earth's horizontal plane (the world coordinate system) is aligned with gravity.

Description

Image calibration method and device and electronic equipment
Technical Field
The present invention relates to the field of augmented reality, and in particular, to an image calibration method, an image calibration device, and an electronic device.
Background
The monocular visual SLAM algorithm has inherent shortcomings that the framework cannot overcome. For example, visual SLAM generally takes the first frame as the world coordinate system, so the estimated pose is relative to the first frame image rather than to the earth's horizontal plane (the true world coordinate system); the latter is the pose actually needed in navigation. In other words, a pose estimated purely by visual methods cannot be aligned with the direction of gravity.
Disclosure of Invention
In view of the above, the present invention provides an image calibration method, an image calibration device, and an electronic device, so as to solve the problem that a visually estimated pose cannot be aligned with the direction of gravity.
In order to solve the technical problems, the invention adopts the following technical scheme:
an image calibration method, comprising:
determining whether the current image is a key frame;
if the current image is a key frame, calculating the three-dimensional coordinates of the feature points in the key frame and the position information of the photographing device during the movement of the photographing device, and adjusting the position information according to the three-dimensional coordinates of the feature points;
after the position information of the photographing equipment is adjusted, determining a calibration position needing image mapping from the key frame, and mapping a preset image to the calibration position;
and carrying out coordinate system conversion on the key frame mapping the preset image to the calibration position, and converting the key frame into a map coordinate system.
Optionally, before determining whether the current image is a key frame, further comprising:
acquiring two key frames meeting preset requirements; the preset requirements comprise that the number of the matched characteristic points is larger than a first preset threshold value and the parallax is larger than a second preset threshold value;
calculating a first relative position relation of the two key frames;
acquiring a first actual moving distance of the two key frames and taking the first actual moving distance as an absolute scale;
and determining a space coordinate system according to the first relative position relation and the first actual moving distance, wherein the origin of the space coordinate system is the initial position.
Optionally, determining whether the current image is a key frame includes:
acquiring a current image;
performing feature point matching on the current image and the last key frame, and performing parallax calculation to obtain a parallax calculation result;
and if the number of the matched feature points is larger than a third preset threshold value and the parallax calculation result meets a preset condition, taking the image as a key frame.
Optionally, after determining whether the current image is a key frame, further comprising:
calculating a second relative positional relationship between the current key frame and the previous key frame;
acquiring a second actual moving distance of the current key frame and the last key frame;
and correcting the position of the photographing equipment in the space coordinate system according to the second relative position relation and the second actual moving distance.
Optionally, mapping a preset image to the calibration position includes:
creating a local coordinate system corresponding to the preset image at the calibration position;
and converting the local coordinate system into the space coordinate system, and converting the space coordinate system into a camera coordinate system of the photographing equipment, so that the preset image is mapped to the calibration position.
An image calibration device, comprising:
a frame determining module for determining whether the current image is a key frame;
the data processing module is used for, if the current image is a key frame, calculating the three-dimensional coordinates of the feature points in the key frame and the position information of the photographing device during the movement of the photographing device, and adjusting the position information according to the three-dimensional coordinates of the feature points;
the image mapping module is used for determining a calibration position needing image mapping from the key frame after the position information of the photographing equipment is adjusted, and mapping a preset image to the calibration position;
and the coordinate conversion module is used for carrying out coordinate system conversion on the key frame mapping the preset image to the calibration position and converting the key frame into a map coordinate system.
Optionally, the method further comprises:
the frame acquisition module is used for acquiring two key frames meeting preset requirements; the preset requirements comprise that the number of the matched characteristic points is larger than a first preset threshold value and the parallax is larger than a second preset threshold value;
the first relation calculating module is used for calculating a first relative position relation of the two key frames;
the first distance acquisition module is used for acquiring first actual moving distances of the two key frames and taking the first actual moving distances as absolute scales;
and the position determining module is used for determining a space coordinate system according to the first relative position relation and the first actual moving distance, and the origin of the space coordinate system is the initial position.
Optionally, the frame determining module is configured to, when determining whether the current image is a key frame, specifically:
and obtaining a current image, carrying out feature point matching on the current image and the last key frame, carrying out parallax calculation to obtain a parallax calculation result, and taking the image as the key frame if the number of the matched feature points is larger than a third preset threshold value and the parallax calculation result meets a preset condition.
Optionally, the method further comprises:
a second relation calculating module, configured to calculate a second relative positional relation between the current key frame and the previous key frame;
a second distance obtaining module, configured to obtain a second actual moving distance of the current key frame and the previous key frame;
and the correction module is used for correcting the position of the photographing equipment in the space coordinate system according to the second relative position relation and the second actual moving distance.
An electronic device, comprising: a memory and a processor;
wherein the memory is used for storing programs;
the processor invokes the program and is configured to:
determining whether the current image is a key frame;
if the current image is a key frame, calculating the three-dimensional coordinates of the feature points in the key frame and the position information of the photographing device during the movement of the photographing device, and adjusting the position information according to the three-dimensional coordinates of the feature points;
after the position information of the photographing equipment is adjusted, determining a calibration position needing image mapping from the key frame, and mapping a preset image to the calibration position;
and carrying out coordinate system conversion on the key frame mapping the preset image to the calibration position, and converting the key frame into a map coordinate system.
Compared with the prior art, the invention has the following beneficial effects:
the invention provides an image calibration method, an image calibration device and electronic equipment, which are used for determining whether a current image is a key frame or not, if so, calculating three-dimensional coordinates of feature points in the key frame and position information of the photographing equipment in the moving process of the photographing equipment, adjusting the position information according to the three-dimensional coordinates of the feature points, determining a calibration position needing image mapping from the key frame after the position information of the photographing equipment is adjusted, mapping a preset image to the calibration position, converting a coordinate system of the key frame mapping the preset image to the calibration position, and converting the coordinate system into a map coordinate system. Because the final image of the invention is converted into the map coordinate system, the pose of the image relative to the earth horizontal plane (world coordinate system) is aligned with gravity.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings that are required to be used in the embodiments or the description of the prior art will be briefly described below, and it is obvious that the drawings in the following description are only embodiments of the present invention, and that other drawings can be obtained according to the provided drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flow chart of a method for calibrating an image according to an embodiment of the present invention;
FIG. 2 is a flow chart of another method for calibrating an image according to an embodiment of the present invention;
fig. 3 is a schematic structural diagram of an image calibration device according to an embodiment of the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
An embodiment of the present invention provides an image calibration method, referring to fig. 1, may include:
s11, determining whether the current image is a key frame or not; if yes, go to step S12.
Before step S11 is performed, it is necessary to determine the initial attitude and the initial position of the photographing apparatus.
In practical applications, the photographing device may be a camera, a mobile device equipped with a camera, or the like. The photographing device captures a plurality of images while moving. Before shooting, the device may be calibrated to obtain its intrinsic parameters, extrinsic parameters, and distortion parameters; the calibration method may be a traditional camera calibration method, an active-vision calibration method, or a camera self-calibration method. In a traditional method, the camera photographs a chessboard calibration board, and the sample calibration program shipped with OpenCV is run on the captured images to compute the intrinsic parameters, extrinsic parameters, and distortion parameters.
In practical applications, when the photographing device is a monocular camera, images captured by it carry no scale information: the algorithm cannot observe the true size of the world, and there is no direction reference either. To determine scale, measurements from an IMU (inertial measurement unit) are required. In this embodiment, the IMU is introduced, the system is kept stable during the initial stage, and the average IMU measurement over the first second is taken as the gravity vector. The initial attitude, comprising the pitch, roll, and yaw angles of the photographing device, is then obtained directly from the IMU measurements.
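A minimal sketch of the gravity-vector step, assuming a stationary 100 Hz accelerometer stream for the first second (the sample rate and noise level are made-up values):

```python
import numpy as np

def estimate_gravity(accel_samples):
    """Average the accelerometer readings over the stationary
    start-up window; the mean is taken as the gravity vector
    (the device is assumed to be held still for the first second)."""
    return np.mean(accel_samples, axis=0)

# Hypothetical 100 Hz stream: gravity along z plus sensor noise.
rng = np.random.default_rng(0)
samples = np.array([0.0, 0.0, 9.81]) + rng.normal(0, 0.02, size=(100, 3))
gravity = estimate_gravity(samples)
print(gravity)
```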
The initial position of the photographing apparatus refers to a real position of the photographing apparatus when the photographing apparatus is started, and in another implementation manner of the present invention, referring to fig. 2, determining the initial position of the photographing apparatus may include:
s21, acquiring two key frames meeting preset requirements.
The preset requirements include that the number of the matched feature points is larger than a first preset threshold value and the parallax is larger than a second preset threshold value.
The two key frames must satisfy feature point matching, which means that the number of feature points matched is greater than a first preset threshold, and parallax matching, which means that the parallax is greater than a second preset threshold.
In this embodiment, the KLT optical flow algorithm may be used to track feature points, while keeping between 100 and 300 feature points per frame of image.
Deviations of the photographing device's IMU, together with movement during the initial stage, may lead to erroneous results, so a loosely coupled alignment of the IMU pre-integration data with the output of the SFM algorithm is required as a robust initialization process.
The initialization process comprises the following steps:
1) It must be ensured that the IMU has moved sufficiently. The variation of the linear acceleration acquired by the IMU is examined: the standard deviation of the acceleration within the sliding window is computed, and when it is greater than 0.25, the IMU is considered sufficiently excited and initialization can proceed;
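A sketch of this excitation check. The patent does not specify whether the deviation is computed per axis or on the magnitude of the readings; this sketch assumes the magnitude:

```python
import numpy as np

def imu_sufficiently_excited(accel_window, threshold=0.25):
    """Excitation check: the standard deviation of the linear
    acceleration inside the sliding window must exceed the
    threshold (0.25 in the text) before initialization proceeds."""
    magnitudes = np.linalg.norm(accel_window, axis=1)
    return float(np.std(magnitudes)) > threshold

rng = np.random.default_rng(2)
# A device held still: gravity plus a little sensor noise.
static = np.array([0.0, 0.0, 9.81]) + rng.normal(0, 0.01, (200, 3))
# The same device shaken along the z axis.
moving = static.copy()
moving[:, 2] += 2.0 * np.sin(np.linspace(0, 20, 200))
print(imu_sufficiently_excited(static), imu_sufficiently_excited(moving))
```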
2) Pure visual initialization: the SfM problem (Structure from Motion, i.e. determining the spatial and geometric relationships of objects from the motion of the camera) is solved for the image frames and camera poses in the sliding window. First, feature matches are obtained from the feature-point manager; the latest key frame and some key frame in the sliding window are required to have enough feature matches (the number of matched feature points is greater than the first preset threshold) and enough parallax (the parallax is greater than the second preset threshold). These two key frames are taken as the key frames required by this embodiment.
S22, calculating a first relative position relation of the two key frames.
The rotation matrix and translation matrix between the two key frames are recovered by the five-point method; together they represent the first relative positional relationship between the two key frames. For example, the first relative positional relationship might be a forward movement of 30 cm.
S23, acquiring first actual moving distances of the two key frames, and taking the first actual moving distances as absolute dimensions.
Three-dimensional feature points are then triangulated. PnP (solving the camera pose from 3D-2D point pairs) is solved with the three-dimensional feature points and the two-dimensional feature points in the sliding window, and the result is optimized with Ceres (a nonlinear optimization library).
c) The IMU is aligned with vision: the scale ratio required to align the IMU measurements with the SfM results is computed with an existing algorithm, giving the absolute scale, i.e. the first actual moving distance.
d) Initialization succeeds.
S24, determining a space coordinate system according to the first relative position relation and the first actual moving distance, wherein the origin of the space coordinate system is the initial position.
The coordinate system of the IMU at the initial moment is adjusted according to the first relative positional relationship and the first actual moving distance to obtain the space coordinate system.
In practical applications, step S11 may specifically include:
1) A current image is acquired.
The current image is the image obtained by photographing by the photographing device.
2) Performing feature point matching on the current image and the last key frame, and performing parallax calculation to obtain a parallax calculation result; and if the number of the matched feature points is larger than a third preset threshold value and the parallax calculation result meets a preset condition, taking the image as a key frame.
In practical use, at this step of the initialization process, a first estimate of the camera position is made based on the visual information.
First, it is checked whether more than 30 identical feature points can be found between the current image and the last key frame of the sliding window; in addition, the average rotation-compensated parallax must be greater than 20 pixels. If both conditions hold, the current image is a key frame, and the relative rotation and translation between the two frames can be determined from the feature correspondences using the so-called "five-point method". Otherwise, the current frame remains in the sliding window and waits for a new frame. Since no metric reference is available at this point, the scale factor is first set to a temporary value; in the next step, the features of the two frames are triangulated to determine their spatial depth.
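A minimal sketch of this key-frame test. The rotation compensation of the parallax is omitted here, and the thresholds 30 matches and 20 pixels are the ones stated above:

```python
import numpy as np

MIN_MATCHES = 30       # minimum shared feature points with last keyframe
MIN_PARALLAX_PX = 20   # minimum average parallax in pixels

def is_keyframe(matched_prev, matched_curr):
    """Keyframe test: enough feature points matched against the last
    key frame, and enough average parallax between the matched
    locations (rotation compensation omitted in this sketch)."""
    if len(matched_prev) <= MIN_MATCHES:
        return False
    disp = np.linalg.norm(matched_curr - matched_prev, axis=1)
    return float(np.mean(disp)) > MIN_PARALLAX_PX

rng = np.random.default_rng(5)
prev = rng.uniform(0, 640, (50, 2))
print(is_keyframe(prev, prev + [25.0, 0.0]))  # large parallax -> True
print(is_keyframe(prev, prev + [2.0, 0.0]))   # small parallax -> False
```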
In another implementation manner of the present invention, after step S11, the method may further include:
1) Calculating a second relative positional relationship between the current key frame and the previous key frame;
2) Acquiring a second actual moving distance of the current key frame and the last key frame;
3) And correcting the position of the photographing equipment in the space coordinate system according to the second relative position relation and the second actual moving distance.
Steps 1) and 2) in this embodiment are similar to the process of determining the first relative positional relationship and the first actual moving distance in the above embodiment, and are not described again here. As for step 3): the space coordinate system was constructed when the photographing device was at its initial position, and the position of the photographing device in that coordinate system changes continuously as the device moves, so the position of the photographing device in the space coordinate system should be corrected.
S12, if the current image is a key frame, calculating the three-dimensional coordinates of the feature points in the key frame and the position information of the photographing device during the movement of the photographing device, and adjusting the position information according to the three-dimensional coordinates of the feature points;
in this embodiment, the foregoing KLT optical flow algorithm may be still used to track the feature points to obtain feature points of the key frame, and in the moving process of the photographing device, if the a position moves to the B position, the foregoing triangulation method and the foregoing PNP method may be still used to calculate the three-dimensional coordinates of the feature points in the key frame and the positional information of the photographing device.
Adjusting the position information according to the three-dimensional coordinates of the feature points means adjusting, in real time, the position of the photographing device in its space coordinate system. The specific adjustment process is as follows:
and establishing a measurement model of the IMU, calculating a measurement error of the IMU, and calculating a camera measurement error, mainly a reprojection error of the characteristic points.
The gradient descent is found by Bundle Adjustment to optimize the position of the photographing apparatus in the spatial coordinate system in which the photographing apparatus is located.
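The optimization in this step can be illustrated with a toy reprojection-error minimization: a Gauss-Newton refinement of a single camera position stands in for the full Bundle Adjustment over all poses and points, and every numeric value below is synthetic:

```python
import numpy as np

K = np.array([[700.0, 0, 320], [0, 700.0, 240], [0, 0, 1.0]])
pts3d = np.array([[0.0, 0.0, 5.0], [1.0, -0.5, 6.0],
                  [-1.0, 0.5, 7.0], [0.5, 1.0, 5.5]])
t_true = np.array([0.2, -0.1, 0.3])   # true camera position (world frame)

def project(X, t):
    # Pinhole projection; rotation fixed to identity in this sketch.
    cam = X - t
    uv = (K @ cam.T).T
    return uv[:, :2] / uv[:, 2:3]

observed = project(pts3d, t_true)     # the "measured" feature locations

def residuals(t):
    # Reprojection error: predicted minus observed pixel positions.
    return (project(pts3d, t) - observed).ravel()

# Gauss-Newton refinement of the camera position.
t = np.zeros(3)
for _ in range(10):
    r = residuals(t)
    J = np.zeros((r.size, 3))
    for i in range(3):                # numerical Jacobian
        d = np.zeros(3); d[i] = 1e-6
        J[:, i] = (residuals(t + d) - r) / 1e-6
    t = t - np.linalg.lstsq(J, r, rcond=None)[0]
print(t)
```

The real system would add IMU residual terms and optimize all sliding-window poses and landmarks jointly (e.g. with Ceres).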
S13, after the position information of the photographing equipment is adjusted, determining a calibration position needing image mapping from the key frame, and mapping a preset image to the calibration position.
Specifically, the calibration position is found according to the device attitude, either by ray casting or by manual marker recognition, and the spatial coordinates of the calibration position, the current spatial coordinates of the device, and the device attitude are recorded.
In practical applications, the process of mapping the preset image to the calibration position may include:
1) Creating a local coordinate system corresponding to the preset image at the calibration position;
2) And converting the local coordinate system into the space coordinate system, and converting the space coordinate system into a camera coordinate system of the photographing equipment, so that the preset image is mapped to the calibration position.
The preset image may be a two-dimensional or a three-dimensional image. With the calibration position determined and the current world coordinate system known, a two-dimensional map can be mapped onto the calibration point according to the current attitude of the device and the screen pixels. After successful calibration, this is equivalent to creating a local coordinate system for the preset image at the calibration position in the OpenGL ES scene: a two-dimensional map is placed on a two-dimensional local plane in space. Operations on the map are then performed in the map's local coordinate system; the change of the space coordinate system is computed from the device attitude and transformed into the pixel coordinate system of the device screen, achieving the effect of drawing and operating in three-dimensional space from different directions and angles.
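The chain of transforms described above — the local (preset image) coordinate system into the space (world) coordinate system, then into the camera coordinate system, then onto screen pixels — can be sketched with homogeneous matrices. The poses and intrinsics below are hypothetical:

```python
import numpy as np

def make_T(R, t):
    """Build a 4x4 homogeneous rigid transform from R and t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Hypothetical poses: the map's local frame sits at the calibration
# position in the world, and the camera sits elsewhere in the world,
# looking along +z with identity rotation.
T_world_from_local = make_T(np.eye(3), np.array([2.0, 0.0, 5.0]))
T_world_from_cam = make_T(np.eye(3), np.array([2.0, 0.0, 0.0]))
T_cam_from_world = np.linalg.inv(T_world_from_cam)

K = np.array([[700.0, 0, 320], [0, 700.0, 240], [0, 0, 1.0]])

def local_to_pixel(p_local):
    """Chain: local (preset image) -> world (space) -> camera -> pixel."""
    p = np.append(p_local, 1.0)
    p_world = T_world_from_local @ p
    p_cam = T_cam_from_world @ p_world
    uv = K @ p_cam[:3]
    return uv[:2] / uv[2]

print(local_to_pixel(np.array([0.0, 0.0, 0.0])))  # -> principal point
```

The local origin lies straight ahead of this camera, so it lands on the principal point (320, 240); any edit made in the map's local frame reaches the screen through the same chain.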
S14, converting the key frame which maps the preset image to the calibration position into a map coordinate system.
During observation, the real-time movement of the device expresses its change in the real three-dimensional space. After initialization, the device corresponds to a position coordinate in the world coordinate system, its movement is a change of coordinates within that system, and the calibration position has a fixed coordinate in it. By mapping the device coordinates and the AR-map coordinates of the calibration position into the OpenGL ES scene space, real-time observation of the calibrated AR map by the device is achieved.
In this embodiment, whether the current image is a key frame is determined; if so, the three-dimensional coordinates of the feature points in the key frame and the position information of the photographing device are calculated while the photographing device moves, and the position information is adjusted according to the three-dimensional coordinates of the feature points. After the position information of the photographing device is adjusted, a calibration position requiring image mapping is determined from the key frame, a preset image is mapped to the calibration position, and the key frame with the preset image mapped to the calibration position is converted into a map coordinate system. Because the final image is expressed in the map coordinate system, the pose of the image relative to the earth's horizontal plane (the world coordinate system) is aligned with gravity.
Optionally, in another embodiment of the present invention, an image calibration device is provided, referring to fig. 3, which may include:
a frame determining module 11, configured to determine whether the current image is a key frame;
the data processing module 12 is configured to, if the current image is a key frame, calculate the three-dimensional coordinates of the feature points in the key frame and the position information of the photographing device during the movement of the photographing device, and adjust the position information according to the three-dimensional coordinates of the feature points;
the image mapping module 13 is configured to determine a calibration position to be mapped from the key frame after the position information of the photographing device is adjusted, and map a preset image to the calibration position;
the coordinate conversion module 14 is configured to convert a coordinate system of a key frame mapped to the calibration position by the preset image, and convert the key frame into a map coordinate system.
Further, the method further comprises the following steps:
the frame acquisition module is used for acquiring two key frames meeting preset requirements; the preset requirements comprise that the number of the matched characteristic points is larger than a first preset threshold value and the parallax is larger than a second preset threshold value;
the first relation calculating module is used for calculating a first relative position relation of the two key frames;
the first distance acquisition module is used for acquiring first actual moving distances of the two key frames and taking the first actual moving distances as absolute scales;
and the position determining module is used for determining a space coordinate system according to the first relative position relation and the first actual moving distance, and the origin of the space coordinate system is the initial position.
Further, the frame determining module is configured to, when determining whether the current image is a key frame, specifically:
and obtaining a current image, carrying out feature point matching on the current image and the last key frame, carrying out parallax calculation to obtain a parallax calculation result, and taking the image as the key frame if the number of the matched feature points is larger than a third preset threshold value and the parallax calculation result meets a preset condition.
Further, the method further comprises the following steps:
a second relation calculating module, configured to calculate a second relative positional relation between the current key frame and the previous key frame;
a second distance obtaining module, configured to obtain a second actual moving distance of the current key frame and the previous key frame;
and the correction module is used for correcting the position of the photographing equipment in the space coordinate system according to the second relative position relation and the second actual moving distance.
Further, the image mapping module 13 is configured to, when mapping the preset image to the calibration position, specifically:
and creating a local coordinate system corresponding to the preset image at the calibration position, converting the local coordinate system into the space coordinate system, and converting the space coordinate system into a camera coordinate system of the photographing equipment, so that the preset image is mapped to the calibration position.
In this embodiment, whether the current image is a key frame is determined; if so, the three-dimensional coordinates of the feature points in the key frame and the position information of the photographing device are calculated while the photographing device moves, and the position information is adjusted according to the three-dimensional coordinates of the feature points. After the position information of the photographing device is adjusted, a calibration position requiring image mapping is determined from the key frame, a preset image is mapped to the calibration position, and the key frame with the preset image mapped to the calibration position is converted into a map coordinate system. Because the final image is expressed in the map coordinate system, the pose of the image relative to the earth's horizontal plane (the world coordinate system) is aligned with gravity.
It should be noted that, for the working process of each module in this embodiment, reference may be made to the corresponding description in the above embodiment; details are not repeated here.
Optionally, on the basis of the embodiments of the image calibration method and apparatus, another embodiment of the present invention provides an electronic device, including: a memory and a processor;
wherein the memory is used for storing programs;
the processor invokes the program and is configured to:
determining whether the current image is a key frame;
if the current image is a key frame, calculating three-dimensional coordinates of feature points in the key frame and position information of the photographing equipment in the moving process of the photographing equipment, and adjusting the position information according to the three-dimensional coordinates of the feature points;
after the position information of the photographing equipment is adjusted, determining a calibration position needing image mapping from the key frame, and mapping a preset image to the calibration position;
and carrying out coordinate system conversion on the key frame mapping the preset image to the calibration position, and converting the key frame into a map coordinate system.
Further, before determining whether the current image is a key frame, further comprising:
acquiring two key frames meeting preset requirements; the preset requirements comprise that the number of matched feature points is larger than a first preset threshold and the parallax is larger than a second preset threshold;
calculating a first relative position relation of the two key frames;
acquiring a first actual moving distance of the two key frames and taking the first actual moving distance as an absolute scale;
and determining a space coordinate system according to the first relative position relation and the first actual moving distance, wherein the origin of the space coordinate system is the initial position.
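The initialization above can be illustrated with a short sketch: from images alone, the first relative position relation between the two key frames is recoverable only up to scale, and the first actual moving distance fixes the absolute scale of the space coordinate system. A minimal Python illustration, assuming the up-to-scale (unit-direction) translation has already been estimated; the function name `apply_absolute_scale` and the numeric values are hypothetical, not from the patent:

```python
import math

def apply_absolute_scale(unit_translation, actual_distance):
    """Rescale the up-to-scale translation between the two key frames by the
    measured moving distance, fixing the absolute scale of the space
    coordinate system (illustrative sketch, not the patented implementation)."""
    norm = math.sqrt(sum(t * t for t in unit_translation))
    if norm == 0:
        raise ValueError("degenerate translation: the key frames must have parallax")
    scale = actual_distance / norm
    return [scale * t for t in unit_translation]

# The origin of the space coordinate system is the initial position; the second
# key frame then sits at the scaled translation from that origin.
origin = [0.0, 0.0, 0.0]
second_keyframe_position = apply_absolute_scale([0.6, 0.0, 0.8], actual_distance=2.0)
```

With a unit-length direction and a 2 m measured displacement, the second key frame is placed 2 m from the origin along that direction.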
Further, determining whether the current image is a key frame includes:
acquiring a current image;
performing feature point matching between the current image and the previous key frame, and performing parallax calculation to obtain a parallax calculation result;
and if the number of matched feature points is larger than a third preset threshold and the parallax calculation result meets a preset condition, taking the current image as a key frame.
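A hedged sketch of this key-frame test in Python: the current image is accepted as a key frame only when it has enough feature matches with the previous key frame and the matches show enough parallax. The concrete thresholds and the use of mean pixel displacement as the "preset condition" are illustrative assumptions, not values from the patent:

```python
def is_key_frame(num_matches, parallaxes, match_threshold=50, parallax_threshold=10.0):
    """Return True if the current image qualifies as a key frame.

    num_matches        -- number of feature points matched against the previous key frame
    parallaxes         -- per-match pixel displacements between the two images
    match_threshold    -- stand-in for the 'third preset threshold' on matched points
    parallax_threshold -- illustrative stand-in for the preset parallax condition
    """
    if num_matches <= match_threshold or not parallaxes:
        return False
    mean_parallax = sum(parallaxes) / len(parallaxes)
    return mean_parallax > parallax_threshold

# Enough matches and enough parallax -> key frame.
accepted = is_key_frame(120, [12.0, 15.0, 9.0])
```

Requiring both conditions keeps key frames that are well matched yet far enough apart to triangulate feature points reliably.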
Further, after determining whether the current image is a key frame, further comprising:
calculating a second relative positional relationship between the current key frame and the previous key frame;
acquiring a second actual moving distance of the current key frame and the last key frame;
and correcting the position of the photographing equipment in the space coordinate system according to the second relative position relation and the second actual moving distance.
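The correction step can be sketched as follows: the second relative position relation gives the estimated direction of motion between consecutive key frames, and the second actual moving distance rescales it before the device position is updated, counteracting scale drift. A simple Python sketch; the function and variable names are assumptions for illustration:

```python
import math

def correct_position(prev_position, relative_translation, actual_distance):
    """Correct the photographing equipment's position in the space coordinate
    system: rescale the estimated relative translation to the measured moving
    distance, then advance from the previous key frame's position
    (hedged sketch of the correction described above)."""
    norm = math.sqrt(sum(t * t for t in relative_translation))
    if norm == 0:
        raise ValueError("no estimated motion between the two key frames")
    scale = actual_distance / norm
    return [p + scale * t for p, t in zip(prev_position, relative_translation)]

# Estimated direction (0, 3, 4) rescaled so the step length is the measured 10 units.
corrected = correct_position([0.0, 0.0, 0.0], [0.0, 3.0, 4.0], actual_distance=10.0)
```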
Further, mapping a preset image to the calibration location includes:
creating a local coordinate system corresponding to the preset image at the calibration position;
and converting the local coordinate system into the space coordinate system, and converting the space coordinate system into a camera coordinate system of the photographing equipment, so that the preset image is mapped to the calibration position.
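The chain of conversions above (local coordinate system of the preset image → space coordinate system → camera coordinate system) can be written as a product of homogeneous 4×4 transforms. A self-contained Python sketch with hypothetical numeric poses; pure translations are used for readability, whereas real transforms would also carry rotations:

```python
def mat_vec(T, p):
    """Apply a 4x4 homogeneous transform T to a 3D point p."""
    x, y, z = p
    v = (x, y, z, 1.0)
    return [sum(T[r][c] * v[c] for c in range(4)) for r in range(3)]

def compose(A, B):
    """Matrix product A @ B of two 4x4 homogeneous transforms."""
    return [[sum(A[r][k] * B[k][c] for k in range(4)) for c in range(4)]
            for r in range(4)]

# T_space_local: local (preset image) frame -> space coordinate system
# T_cam_space:   space coordinate system   -> camera frame
# Hypothetical translation-only poses for illustration.
T_space_local = [[1, 0, 0, 2.0], [0, 1, 0, 0.0], [0, 0, 1, 5.0], [0, 0, 0, 1]]
T_cam_space   = [[1, 0, 0, -1.0], [0, 1, 0, 0.0], [0, 0, 1, -3.0], [0, 0, 0, 1]]

# Chaining the two conversions maps a point of the preset image (here the
# local origin, e.g. an image corner) directly into the camera frame.
T_cam_local = compose(T_cam_space, T_space_local)
corner_cam = mat_vec(T_cam_local, (0.0, 0.0, 0.0))
```

Composing the transforms once and reusing the product mirrors how the preset image ends up rendered at the calibration position regardless of where the camera moves.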
In this embodiment, it is determined whether the current image is a key frame. If it is, the three-dimensional coordinates of the feature points in the key frame and the position information of the photographing equipment are calculated while the photographing equipment moves, and the position information is adjusted according to the three-dimensional coordinates of the feature points. After the position information of the photographing equipment has been adjusted, a calibration position requiring image mapping is determined from the key frame, a preset image is mapped to the calibration position, and the key frame in which the preset image is mapped to the calibration position is converted into a map coordinate system. Because the final image is converted into the map coordinate system, the pose of the image relative to the earth's horizontal plane (the world coordinate system) is aligned with gravity.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (7)

1. An image calibration method, comprising:
determining whether the current image is a key frame;
if the current image is a key frame, calculating the three-dimensional coordinates of the feature points in the key frame and the position information of the photographing equipment in the moving process of the photographing equipment, and adjusting the position information according to the three-dimensional coordinates of the feature points;
after the position information of the photographing equipment is adjusted, determining a calibration position needing image mapping from the key frame, and mapping a preset image to the calibration position;
performing coordinate system conversion on the key frame in which the preset image is mapped to the calibration position, and converting the key frame into a map coordinate system;
determining whether the current image is a key frame includes:
acquiring a current image;
performing feature point matching on the current image and the last key frame, and performing parallax calculation to obtain a parallax calculation result;
if the number of matched feature points is larger than a third preset threshold and the parallax calculation result meets a preset condition, taking the current image as a key frame;
the adjusting of the position information according to the three-dimensional coordinates of the feature points comprises:
establishing a measurement model of an inertial measurement unit, calculating a measurement error of the inertial measurement unit, and calculating a camera measurement error, wherein the camera measurement error comprises reprojection errors of the feature points;
performing a gradient-descent search using Bundle Adjustment to optimize the position of the photographing equipment in the space coordinate system in which the photographing equipment is located;
mapping a preset image to the calibration position comprises:
creating a local coordinate system corresponding to the preset image at the calibration position;
and converting the local coordinate system into the space coordinate system, and converting the space coordinate system into a camera coordinate system of the photographing equipment, so that the preset image is mapped to the calibration position.
2. The image calibration method of claim 1, further comprising, prior to determining whether the current image is a key frame:
acquiring two key frames meeting preset requirements; the preset requirements comprise that the number of matched feature points is larger than a first preset threshold and the parallax is larger than a second preset threshold;
calculating a first relative position relation of the two key frames;
acquiring a first actual moving distance of the two key frames and taking the first actual moving distance as an absolute scale;
and determining a space coordinate system according to the first relative position relation and the first actual moving distance, wherein the origin of the space coordinate system is an initial position.
3. The image calibration method according to claim 1, wherein after determining whether the current image is a key frame, further comprising:
calculating a second relative positional relationship between the current key frame and the previous key frame;
acquiring a second actual moving distance of the current key frame and the last key frame;
and correcting the position of the photographing equipment in the space coordinate system according to the second relative position relation and the second actual moving distance.
4. An image calibration device, comprising:
a frame determining module for determining whether the current image is a key frame;
the data processing module is used for, if the current image is a key frame, calculating the three-dimensional coordinates of the feature points in the key frame and the position information of the photographing equipment in the moving process of the photographing equipment, and adjusting the position information according to the three-dimensional coordinates of the feature points;
the image mapping module is used for determining a calibration position needing image mapping from the key frame after the position information of the photographing equipment is adjusted, and mapping a preset image to the calibration position;
the coordinate conversion module is used for performing coordinate system conversion on the key frame in which the preset image is mapped to the calibration position, and converting the key frame into a map coordinate system;
the frame determining module is used for determining whether the current image is a key frame or not, and is specifically used for:
obtaining a current image, performing feature point matching between the current image and the previous key frame, performing parallax calculation to obtain a parallax calculation result, and taking the current image as a key frame if the number of matched feature points is larger than a third preset threshold and the parallax calculation result meets a preset condition;
the data processing module is used for adjusting the position information according to the three-dimensional coordinates of the feature points, and is specifically used for:
establishing a measurement model of an inertial measurement unit, calculating a measurement error of the inertial measurement unit, and calculating a camera measurement error, wherein the camera measurement error comprises reprojection errors of the feature points;
performing a gradient-descent search using Bundle Adjustment to optimize the position of the photographing equipment in the space coordinate system in which the photographing equipment is located;
the image mapping module is used for mapping the preset image to the calibration position, and is specifically used for:
creating a local coordinate system corresponding to the preset image at the calibration position;
and converting the local coordinate system into the space coordinate system, and converting the space coordinate system into a camera coordinate system of the photographing equipment, so that the preset image is mapped to the calibration position.
5. The image calibration device of claim 4, further comprising:
the frame acquisition module is used for acquiring two key frames meeting preset requirements; the preset requirements comprise that the number of matched feature points is larger than a first preset threshold and the parallax is larger than a second preset threshold;
the first relation calculating module is used for calculating a first relative position relation of the two key frames;
the first distance acquisition module is used for acquiring first actual moving distances of the two key frames and taking the first actual moving distances as absolute scales;
and the position determining module is used for determining a space coordinate system according to the first relative position relation and the first actual moving distance, and the origin of the space coordinate system is an initial position.
6. The image calibration device of claim 4, further comprising:
a second relation calculating module, configured to calculate a second relative positional relation between the current key frame and the previous key frame;
a second distance obtaining module, configured to obtain a second actual moving distance of the current key frame and the previous key frame;
and the correction module is used for correcting the position of the photographing equipment in the space coordinate system according to the second relative position relation and the second actual moving distance.
7. An electronic device, comprising: a memory and a processor;
wherein the memory is used for storing programs;
the processor invokes the program and is configured to:
determining whether the current image is a key frame;
if the current image is a key frame, calculating the three-dimensional coordinates of the feature points in the key frame and the position information of the photographing equipment in the moving process of the photographing equipment, and adjusting the position information according to the three-dimensional coordinates of the feature points;
after the position information of the photographing equipment is adjusted, determining a calibration position needing image mapping from the key frame, and mapping a preset image to the calibration position;
performing coordinate system conversion on the key frame in which the preset image is mapped to the calibration position, and converting the key frame into a map coordinate system;
determining whether the current image is a key frame includes:
acquiring a current image;
performing feature point matching on the current image and the last key frame, and performing parallax calculation to obtain a parallax calculation result;
if the number of matched feature points is larger than a third preset threshold and the parallax calculation result meets a preset condition, taking the current image as a key frame;
the adjusting of the position information according to the three-dimensional coordinates of the feature points comprises:
establishing a measurement model of an inertial measurement unit, calculating a measurement error of the inertial measurement unit, and calculating a camera measurement error, wherein the camera measurement error comprises reprojection errors of the feature points;
performing a gradient-descent search using Bundle Adjustment to optimize the position of the photographing equipment in the space coordinate system in which the photographing equipment is located;
mapping a preset image to the calibration position comprises:
creating a local coordinate system corresponding to the preset image at the calibration position;
and converting the local coordinate system into the space coordinate system, and converting the space coordinate system into a camera coordinate system of the photographing equipment, so that the preset image is mapped to the calibration position.
CN201911294340.9A 2019-12-16 2019-12-16 Image calibration method and device and electronic equipment Active CN111105467B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911294340.9A CN111105467B (en) 2019-12-16 2019-12-16 Image calibration method and device and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911294340.9A CN111105467B (en) 2019-12-16 2019-12-16 Image calibration method and device and electronic equipment

Publications (2)

Publication Number Publication Date
CN111105467A CN111105467A (en) 2020-05-05
CN111105467B true CN111105467B (en) 2023-08-29

Family

ID=70422956

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911294340.9A Active CN111105467B (en) 2019-12-16 2019-12-16 Image calibration method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN111105467B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111754586A (en) * 2020-06-28 2020-10-09 苏州臻迪智能科技有限公司 External parameter calibration method and device, external parameter calibration system and computer storage medium
CN112651997B (en) * 2020-12-29 2024-04-12 咪咕文化科技有限公司 Map construction method, electronic device and storage medium
CN113932805B (en) * 2021-10-12 2024-02-23 天翼数字生活科技有限公司 Method for improving positioning accuracy and speed of AR virtual object

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2013120133A (en) * 2011-12-07 2013-06-17 Fujitsu Ltd Three-dimensional coordinate measuring instrument, three-dimensional coordinate measurement method, and program
WO2017022033A1 (en) * 2015-07-31 2017-02-09 富士通株式会社 Image processing device, image processing method, and image processing program
CN106846467A (en) * 2017-01-23 2017-06-13 阿依瓦(北京)技术有限公司 Entity scene modeling method and system based on the optimization of each camera position
CN108597036A (en) * 2018-05-03 2018-09-28 三星电子(中国)研发中心 Reality environment danger sense method and device
JP2019132664A (en) * 2018-01-30 2019-08-08 株式会社豊田中央研究所 Vehicle position estimating device, vehicle position estimating method, and vehicle position estimating program
JP2019133658A (en) * 2018-01-31 2019-08-08 株式会社リコー Positioning method, positioning device and readable storage medium
CN110490131A (en) * 2019-08-16 2019-11-22 北京达佳互联信息技术有限公司 A kind of localization method of capture apparatus, device, electronic equipment and storage medium



Similar Documents

Publication Publication Date Title
US10984554B2 (en) Monocular vision tracking method, apparatus and non-volatile computer-readable storage medium
CN108717712B (en) Visual inertial navigation SLAM method based on ground plane hypothesis
CN110383343B (en) Inconsistency detection system, mixed reality system, program, and inconsistency detection method
CN111105467B (en) Image calibration method and device and electronic equipment
CN110146099B (en) Synchronous positioning and map construction method based on deep learning
US20180075592A1 (en) Multi view camera registration
CN110728715A (en) Camera angle self-adaptive adjusting method of intelligent inspection robot
CN110176032B (en) Three-dimensional reconstruction method and device
JP6883608B2 (en) Depth data processing system that can optimize depth data by aligning images with respect to depth maps
CN111473739A (en) Video monitoring-based surrounding rock deformation real-time monitoring method for tunnel collapse area
CN111192235B (en) Image measurement method based on monocular vision model and perspective transformation
JPWO2018235163A1 (en) Calibration apparatus, calibration chart, chart pattern generation apparatus, and calibration method
CN108038886B (en) Binocular camera system calibration method and device and automobile
CN110189400B (en) Three-dimensional reconstruction method, three-dimensional reconstruction system, mobile terminal and storage device
EP3547260B1 (en) System and method for automatic calibration of image devices
WO2010133007A1 (en) Techniques for rapid stereo reconstruction from images
CN111091076B (en) Tunnel limit data measuring method based on stereoscopic vision
CN112686877B (en) Binocular camera-based three-dimensional house damage model construction and measurement method and system
CN111260720A (en) Target height measuring system based on deep learning method
CN111220120B (en) Moving platform binocular ranging self-calibration method and device
CN111307146B (en) Virtual reality wears display device positioning system based on binocular camera and IMU
US11288877B2 (en) Method for matching a virtual scene of a remote scene with a real scene for augmented reality and mixed reality
CN103900473A (en) Intelligent mobile device six-degree-of-freedom fused pose estimation method based on camera and gravity inductor
CN111998862A (en) Dense binocular SLAM method based on BNN
CN111524174B (en) Binocular vision three-dimensional construction method for moving platform moving target

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant