CN113345032A - Wide-angle camera large-distortion image based initial image construction method and system - Google Patents

Wide-angle camera large-distortion image based initial image construction method and system

Info

Publication number
CN113345032A
Authority
CN
China
Prior art keywords
model
camera
image
point
module
Prior art date
Legal status
Granted
Application number
CN202110767999.2A
Other languages
Chinese (zh)
Other versions
CN113345032B (en)
Inventor
刘志励
范圣印
李一龙
王璀
张煜东
Current Assignee
Beijing Yihang Yuanzhi Technology Co Ltd
Original Assignee
Beijing Yihang Yuanzhi Technology Co Ltd
Priority date
Filing date
2021-07-07
Publication date
2021-09-03
Application filed by Beijing Yihang Yuanzhi Technology Co Ltd
Priority to CN202110767999.2A
Publication of CN113345032A
Application granted
Publication of CN113345032B
Legal status: Active

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/80Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T3/02
    • G06T5/80
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • G06T7/75Determining position or orientation of objects or cameras using feature-based methods involving models
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30244Camera pose
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Abstract

The invention relates to an initial map construction method and system based on large-distortion images from a wide-angle camera. The method comprises the following steps: step S1, determining a specific camera model by tracing incident rays; step S2, extracting visual feature points of the distorted image and performing data association to generate incident-ray matching pairs; step S3, initializing the distortion map according to the geometric relationships of the incident-ray matches; step S4, selecting a geometric model through error comparison; and step S5, recursive scale recovery. The method provides a general camera incident-ray tracing scheme, so that any camera parameter model can be converted into the specific camera model provided by the invention; the specific camera model then traces the direction of the incident ray of each pixel point to complete the initialization of the visual map.

Description

Wide-angle camera large-distortion image based initial image construction method and system
Technical Field
The invention relates to camera modeling, computer-vision positioning and visual three-dimensional reconstruction technology, in particular to an initial mapping method and system based on large-distortion maps from a wide-angle camera, and more particularly to a method for realizing initial mapping for a robot or an autonomous vehicle based on a large-distortion wide-angle camera. The method can be used in fields such as autonomous driving, fully automatic robots, unmanned aerial vehicles, virtual reality and augmented reality.
Background
Visual initialization mapping is mainly applied in simultaneous localization and mapping (SLAM) and structure-from-motion (SfM) three-dimensional reconstruction, and the accuracy of the initial map directly influences the effect of subsequent localization, mapping and three-dimensional reconstruction.
Chinese patents CN110458885A and CN110411457A take the carrier pose change measured from wheel-encoder pulses as one of the constraints of the optimization equation and add it to a least-squares optimization to recover scale, while using the reprojection error as the dependent variable of the objective function, finally obtaining the optimal pose with recovered scale.
In their system initialization, the absolute pose change between two image frames is obtained by integrating the wheel-encoder pulses, or the IMU accelerations and angular velocities, measured between the two frames; meanwhile, the relative pose change without scale information is calculated from the visual geometric relationship between the two frames, and the scale of the visual map is recovered from the ratio of the displacements calculated by the two sensors. This system initialization considers the absolute scale of only the first two frames, and the pose error measure used is the reprojection error. In addition, the above patents support only the pinhole model.
Compared with a common pinhole camera, a fisheye or panoramic camera has a much larger viewing angle, acquires richer information about the surrounding scene at the same moment, and observes the scene more stably; at the same time, however, its camera model is more complex, i.e. light rays undergo more complex physical processes during imaging. The multi-view geometric theory and optimization theory commonly used for pinhole cameras are no longer applicable to visual positioning and three-dimensional reconstruction of scenes in images produced by a large-distortion wide-angle lens. Moreover, a visual map built with a single camera has scale uncertainty and cannot be used for controlling an actual carrier or measuring obstacles. Therefore, it is critical to solve the applicability, in wide-angle large-distortion images, of the multi-view geometric theory and optimization theory built on the pinhole observation model, and to recover the absolute scale of a visual map constructed with a single wide-angle camera.
Disclosure of Invention
The invention aims to provide an initial map construction method based on a large-distortion map of a wide-angle camera, which provides a general camera incident ray tracking scheme, so that any camera parameter model can be converted into a specific camera model provided by the invention, and the specific camera model tracks the direction of incident rays of corresponding pixel points to complete the initialization of a visual map.
In order to achieve the purpose, the invention adopts the following technical scheme:
step S1, determining a specific camera model by tracing the incident light: traversing all pixel points on the image, back-projecting each pixel point into a corresponding incident ray vector through a pre-calibrated camera model, and back-projecting the same pixel point into a corresponding incident ray vector through an initial specific camera model; constructing a least square error term between an incident ray vector obtained by back projection of a camera model calibrated in advance and an incident ray vector obtained by back projection of an initial specific camera model, adding error terms generated by all pixel points on an image, and iteratively adjusting the specific camera model to minimize the sum of the error terms so as to determine a final specific camera model;
step S2, extracting visual feature points of the distorted image, and performing data association to generate incident-ray matching pairs: extracting visual feature points on the distorted image, and judging whether two feature points form an effective matching point pair by calculating the distance between the descriptors of the image blocks corresponding to the feature points; then applying the specific camera model obtained in step S1 to each matching point pair, converting the visual feature points into the direction vectors of their corresponding incident rays, and generating the incident-ray matching pairs;
step S3, initializing the distortion map according to the corresponding geometric relationship of the incident ray matching: respectively calculating an epipolar geometric relationship and a homography geometric relationship corresponding to the incident ray matching pair according to the incident ray matching pair obtained in the step S2, decomposing the epipolar matrix and the homography matrix to obtain relative motion of two corresponding frames of images, and recovering points in a three-dimensional space corresponding to the incident ray matching pair under the condition of known relative motion; wherein, the homography geometric relation is referred to as an H model for short, and the epipolar geometric relation is referred to as an F model for short;
step S4, selecting a geometric model by error comparison: according to the calculation result of the step S3, errors of the relative motion of the two frames of images obtained by decomposing the epipolar matrix and the homography matrix are respectively calculated, and the optimal relative motion calculation result is selected through error comparison;
step S5, recursive scale restoration: in the video frame sequence, the pose with the absolute scale of each frame image is obtained through the input of a wheel type encoder and an inertial navigation system, the inter-frame optimal relative motion calculation result obtained in the step S4 is aligned with the pose, and the absolute scale of the three-dimensional map point is adjusted through multi-frame accumulation to obtain the coordinate with the absolute scale of the final three-dimensional space point.
Preferably, the initial specific camera model in step S1 is the specific camera model of a 4th-order polynomial camera, and the direction vector r of the incident ray corresponding to a pixel point is calculated by back projection as follows:

(1) First, a two-dimensional planar affine transformation is applied to the image pixel coordinates:

$\begin{pmatrix} u' \\ v' \end{pmatrix} = s\,A \begin{pmatrix} u - u_c \\ v - v_c \end{pmatrix}$

where $u_c$ and $v_c$ respectively denote the image center coordinates in the horizontal and vertical directions, and A denotes an affine transformation matrix

$A = \begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix}$

whose initial value is typically set to the identity matrix during calibration of the specific camera, i.e.

$A = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}$

and s denotes a scaling factor, whose initial value is typically set to 1 during calibration of the specific camera;

(2) the distance ρ of the affine-transformed pixel coordinates from the center of the 4th-order polynomial camera is calculated:

$\rho = \sqrt{u'^2 + v'^2}$

(3) the component of the incident ray along the z-axis of the camera coordinate system is calculated:

$z' = a_0 + a_1\rho + a_2\rho^2 + a_3\rho^3 + a_4\rho^4$

where $a_0, a_1, a_2, a_3, a_4$ denote the coefficients of the polynomial;

the direction vector of the incident ray corresponding to the image pixel point is finally obtained as

$r = \dfrac{(u',\, v',\, z')^T}{\left\|(u',\, v',\, z')^T\right\|}$
As a preferred aspect of the present invention, when each pixel point is back-projected to its corresponding incident-ray vector through the pre-calibrated camera model in step S1, the back-projection process of the pre-calibrated camera model may be abstractly expressed as a function f, so that the incident ray traced by the back projection of the pre-calibrated camera model for pixel point x is r', that is: $r' = f(x)$.

The sum of the least-square error terms constructed between the incident rays obtained by back projection of the pre-calibrated camera model and those obtained by back projection of the specific camera model is:

$g = \min_g \sum_{n} (r - r')^2$

where g denotes the back projection of the specific camera model that is finally used, and n denotes the number of pixel points on the image.
Preferably, in step S2, whether a matching point pair is valid is determined as follows:

(1) two temporally adjacent images generated in continuous time are acquired, feature points are extracted on each image, and feature-point descriptor matching is performed after the feature points are extracted;

(2) for a given descriptor in the first image, the Hamming distances to the descriptors of all feature points in the second image are calculated and sorted in ascending order; if the smallest descriptor distance is less than 60% of the second-smallest descriptor distance, and at the same time the smallest descriptor distance is less than 45, the feature point in the first image and the feature point in the second image corresponding to the smallest descriptor distance are judged to be an effective match.
As a preferable aspect of the present invention, when an incident-ray matching pair satisfies the homography geometric relationship in step S3, the relative motion relationship between the two frames of images is expressed as follows:

$g(x_1) \times \left[\left(R + \frac{t\,n^T}{d}\right) g(x_2)\right] = 0$

where the feature point obtained at the first camera position has coordinates $x_1$ and the feature point observed at the second camera position and matched with $x_1$ is $x_2$; R denotes the relative rotation of the two positions and t denotes the relative displacement of the two positions; n and d denote the normal vector and the constant term of the plane on which the corresponding space points lie.
As a preferable aspect of the present invention, when an incident-ray matching pair satisfies the epipolar geometric relationship in step S3, the relative motion relationship between the two frames of images is expressed as follows:

$g(x_1)^T\,\left(t \times R\,g(x_2)\right) = 0$

where the feature point obtained at the first camera position has coordinates $x_1$ and the feature point observed at the second camera position and matched with $x_1$ is $x_2$; R denotes the relative rotation of the two positions and t denotes the relative displacement of the two positions.
As a preferable aspect of the present invention, in step S3, when the point in three-dimensional space corresponding to an incident-ray matching pair is recovered, the matched feature points $x_1$ and $x_2$ and the world coordinate X of their corresponding point in three-dimensional space satisfy the following relations:

$g(x_1) \times (P_1 X) = 0$

$g(x_2) \times (P_2 X) = 0$

where $P_1 = [I\ \ 0]$ is the projection matrix of the camera at the first position and $P_2 = [R\ \ t]$ is the projection matrix of the camera at the second position;

from these formulas, the spatial position X of the same three-dimensional point observed in two consecutive frames of large-distortion images can be solved.
Preferably, in step S4, when the geometric model is selected, the model error is calculated as:

$e_{sum} = e_1 + e_2$

$e_1 = \arcsin\left(\dfrac{\left|n_\alpha^T\,g(x_1)\right|}{\|n_\alpha\|\;\|g(x_1)\|}\right)$

$e_2 = \arcsin\left(\dfrac{\left|n_\beta^T\,g(x_2)\right|}{\|n_\beta\|\;\|g(x_2)\|}\right)$

where $e_1$ is the angle error of the incident ray of $x_1$ relative to the plane α, and $e_2$ is the angle error of the incident ray of $x_2$ relative to the corresponding polar plane β; $n_\alpha$ is the normal vector of the plane α in which the incident ray of $x_2$ is coplanar with the displacement vector t, and $n_\beta$ is the normal vector of the plane β formed by the incident ray of $x_1$ and the displacement vector t:

$n_\alpha = t \times R\,g(x_2)$

$n_\beta = (-R^{-1}\,t) \times R^{-1}\,g(x_1)$

If $e_F > e_H$, the H model is selected for map initialization; if $e_H > e_F$, the F model is selected for map initialization.
In a preferred embodiment of the present invention, in the recursive scale recovery of step S5, the absolute scale of the visual map is calculated as:

$s = \dfrac{\sum_{0 \le j < i \le K} \left\| (R^o_{\mu_j})^{-1}\,(t^o_{\mu_i} - t^o_{\mu_j}) \right\|}{\sum_{0 \le j < i \le K} \left\| (R^c_{\mu_j})^{-1}\,(t^c_{\mu_i} - t^c_{\mu_j}) \right\|}$

where $R^o_{\mu_j}$ denotes the rotation measured by the wheel encoder or inertial navigation system at time $\mu_j$; $t^o_{\mu_i}$ and $t^o_{\mu_j}$ denote the displacements measured by the wheel encoder or inertial navigation system at times $\mu_i$ and $\mu_j$, respectively; $R^c_{\mu_j}$ denotes the rotation calculated from the image information at time $\mu_j$; and $t^c_{\mu_i}$ and $t^c_{\mu_j}$ denote the displacements calculated from the image information at times $\mu_i$ and $\mu_j$, respectively;

then the true position $X_r$ of a visual map point in the world coordinate system in three-dimensional space is recovered according to the absolute scale:

$X_r = s\,X$

where X is the coordinate value, in the world coordinate system, of the initial map point recovered in step S3.
The invention also aims to provide an initial mapping system based on the large distortion map of the wide-angle camera, which comprises a specific camera model construction module, a feature point extraction module, a feature point matching module, a data association module, an incident ray association module, an H model analysis module, an F model analysis module, a distortion map initialization module, a model selection module, an absolute scale calculation module and an initial mapping module;
the specific camera model building module is used for determining a specific camera model used by the system according to a least square error term between an incident ray vector obtained by back projection of a built pre-calibrated camera model and an incident ray vector obtained by back projection of an initial specific camera model, and converting the visual feature points into direction vectors of the corresponding incident rays according to the specific camera model;
the characteristic point extraction module is used for extracting visual characteristic points on the distorted image;
the characteristic point matching module is used for matching the characteristic point descriptors of the extracted characteristic points to ensure that the extracted characteristic points have corresponding descriptors;
the data correlation module is used for finding out feature points which can be effectively matched in two images which are generated in continuous time and are adjacent in time;
the incident light correlation module is used for converting the effectively matched feature point corresponding relation into the corresponding relation of incident light at different moments in two images adjacent in time generated in continuous time;
the H model resolving module is used for calculating the relative motion relation of two continuous frames of images when the matching pair of the incident light is in a homography geometric relation;
the F model resolving module is used for calculating the relative motion relationship of two continuous frames of images when the matching pair of incident rays satisfies the epipolar geometric relationship;
the distortion map initialization module is used for solving the value of the spatial position X of the same three-dimensional point observed in two continuous frames of large distortion images according to the data output by the H model solving module or the F model solving module;
the model selection module determines to select the model to be solved through error comparison;
the absolute scale calculation module is used for processing data measured by the wheel type encoder or the inertial navigation system and data obtained by image information calculation to obtain the absolute scale of the visual map;
and the initialized map building module is used for processing the X value obtained in the distorted map initializing module according to the absolute scale of the visual map to obtain the coordinate of the final three-dimensional space point with the absolute scale.
The invention has the advantages and technical effects that:
1. The initialization mapping method provided by the invention works directly on the distortion map produced by the wide-angle camera, extends the multi-view geometric theory of the pinhole camera model to a system for visual positioning and mapping on large-distortion maps, and establishes a general incident-ray model (the specific camera model), thereby eliminating the computation consumed by undistorting the whole image during projection, reducing the system's demands on the computing performance of the platform, and making it suitable for porting between platforms.
2. The method retains the texture information of the original distortion map and makes full use of the wide-angle camera's larger field of view; by using the information of the whole image, more durable and stable visual feature pixel blocks are obtained, so that initialization succeeds more easily during initial mapping and a more accurate initial map is obtained.
3. In the initial mapping process, the invention provides an error calculation scheme based on the angle between an incident ray and the polar plane under two-view geometry, and uses it to evaluate the stability and accuracy of the different two-view geometric structures, ensuring that the optimal calculation result is output.
4. The initialization mapping method adopts a recursive visual-map scale recovery method which, by recursively calculating scale factors between the visual map and the vehicle odometer over a period of time, effectively solves the scale uncertainty of single-camera positioning and mapping.
5. The invention combines a synchronized wheel encoder or inertial navigation system to recursively calculate and optimize the absolute scale of a single wide-angle camera during visual-map initialization, solving the problem that the absolute scale is undetermined when a visual map is built with a single wide-angle camera alone. In addition, the resulting visual map with absolute scale can be applied on robot or autonomous-driving vehicle carriers for obstacle recognition and path planning in real space.
Drawings
FIG. 1 is a flow chart of an initial map creation method of the present invention;
FIG. 2 is a block diagram of the initial mapping system of the present invention.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
It should be noted that the embodiments and features of the embodiments in the present disclosure may be combined with each other without conflict. Technical solutions of the present disclosure will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
As shown in fig. 1, the method for initializing a large distortion map based on a wide-angle camera provided by the present invention includes the following steps:
step S1, determining a specific camera model by tracing the incident light: traversing all pixel points on the image, back-projecting each pixel point into a corresponding incident ray vector through a pre-calibrated camera model, and back-projecting the same pixel point into a corresponding incident ray vector through an initial specific camera model; constructing a least square error term between an incident ray vector obtained by back projection of a camera model calibrated in advance and an incident ray vector obtained by back projection of an initial specific camera model, adding error terms generated by all pixel points on an image, and iteratively adjusting the specific camera model to minimize the sum of the error terms so as to determine a final specific camera model;
step S2, extracting visual feature points of the distorted image, and performing data association to generate incident-ray matching pairs: extracting visual feature points on the distorted image, and judging whether two feature points form an effective matching point pair by calculating the distance between the descriptors of the image blocks corresponding to the feature points; then applying the specific camera model obtained in step S1 to each matching point pair, converting the visual feature points into the direction vectors of their corresponding incident rays, and generating the incident-ray matching pairs;
step S3, initializing the distortion map according to the corresponding geometric relationship of the incident ray matching: respectively calculating an epipolar geometric relationship and a homography geometric relationship corresponding to the incident ray matching pair according to the incident ray matching pair obtained in the step S2, decomposing the epipolar matrix and the homography matrix to obtain relative motion of two corresponding frames of images, and recovering points in a three-dimensional space corresponding to the incident ray matching pair under the condition of known relative motion; wherein, the homography geometric relation is referred to as an H model for short, and the epipolar geometric relation is referred to as an F model for short;
step S4, selecting a geometric model by error comparison: according to the calculation result of the step S3, errors of the relative motion of the two frames of images obtained by decomposing the epipolar matrix and the homography matrix are respectively calculated, and the optimal relative motion calculation result is selected through error comparison;
step S5, recursive scale restoration: in the video frame sequence, the pose with the absolute scale of each frame image is obtained through the input of a wheel type encoder and an inertial navigation system, the inter-frame optimal relative motion calculation result obtained in the step S4 is aligned with the pose, and the absolute scale of the three-dimensional map point is adjusted through multi-frame accumulation to obtain the coordinate with the absolute scale of the final three-dimensional space point.
In order to make it clear to those skilled in the art how the above steps of the present application are specifically implemented, the above steps are described in detail below.
Step S1, determining a specific camera model by tracing incident light
(1) For any pixel point x in the image, whose corresponding incident ray has direction r = (x, y, z), the invention initially uses the back projection of the specific camera model of a 4th-order polynomial camera to calculate the direction of the incident ray corresponding to the image pixel point, as follows:

(1.1) First, a two-dimensional planar affine transformation is applied to the image coordinates:

$\begin{pmatrix} u' \\ v' \end{pmatrix} = s\,A \begin{pmatrix} u - u_c \\ v - v_c \end{pmatrix}$   (1)

where $u_c$ and $v_c$ respectively denote the image center coordinates in the horizontal and vertical directions, and A denotes an affine transformation matrix

$A = \begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix}$

whose initial value is usually set to the identity matrix during camera calibration, i.e.

$A = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}$

and s denotes a scaling factor, whose initial value is typically set to 1 during calibration of the specific camera. That is, in the initial setting of the camera calibration, the projection of the incident ray onto the image plane is considered consistent with the direction from the image center to the pixel point.

(1.2) The distance ρ of the affine-transformed pixel coordinates from the center of the 4th-order polynomial camera is calculated:

$\rho = \sqrt{u'^2 + v'^2}$   (2)

(1.3) The component of the incident ray along the z-axis of the camera coordinate system is calculated:

$z' = a_0 + a_1\rho + a_2\rho^2 + a_3\rho^3 + a_4\rho^4$   (3)

where $a_0, a_1, a_2, a_3, a_4$ denote the coefficients of the polynomial;

the direction vector of the incident ray corresponding to the image pixel point is finally obtained as

$r = \dfrac{(u',\, v',\, z')^T}{\left\|(u',\, v',\, z')^T\right\|}$   (4)

(2) When back projection through the pre-calibrated camera model yields the corresponding incident-ray vector, the back-projection process is abstractly expressed as a function f, and the incident ray traced by the back projection of the pre-calibrated camera model for the same pixel point x is r', that is:

$r' = f(x)$   (5)

(3) From the least-square error terms between the incident rays obtained by back projection of the pre-calibrated camera model and those obtained by back projection of the specific camera model, the sum of the least-square error terms is constructed by traversing all pixels on the image:

$g = \min_g \sum_{n} (r - r')^2$   (6)

where g represents the back projection of the specific camera model finally used in the present invention, and n represents the number of pixel points on the image.
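For illustration, the back projection of equations (1)-(4) and the fitting of equation (6) can be sketched in a few lines of Python. This is a minimal sketch under stated assumptions (NumPy/SciPy, a flat parameter vector [A, s, a0..a4], and scipy.optimize.least_squares as the iterative minimizer), not the implementation prescribed by the invention; here `f` stands for the back projection of any pre-calibrated camera model.

```python
import numpy as np
from scipy.optimize import least_squares

def backproject(x, uc, vc, A, s, a):
    """Back-project pixel x = (u, v) to a unit incident-ray direction using
    the 4th-order polynomial model of equations (1)-(4)."""
    p = s * (A @ (np.asarray(x, dtype=float) - np.array([uc, vc])))  # eq. (1)
    rho = np.hypot(p[0], p[1])                                       # eq. (2)
    z = a[0] + a[1]*rho + a[2]*rho**2 + a[3]*rho**3 + a[4]*rho**4    # eq. (3)
    r = np.array([p[0], p[1], z])
    return r / np.linalg.norm(r)                                     # eq. (4)

def fit_specific_model(pixels, f, uc, vc):
    """Fit (A, s, a0..a4) so the specific model reproduces the rays r' = f(x)
    of the pre-calibrated model over all sampled pixels (equations (5)-(6))."""
    def residuals(theta):
        A, s, a = theta[:4].reshape(2, 2), theta[4], theta[5:]
        return np.concatenate([backproject(x, uc, vc, A, s, a) - f(x)
                               for x in pixels])
    # initial values per the description: A = identity, s = 1
    # (a0 = 1 avoids a degenerate zero ray at the exact image center)
    theta0 = np.array([1.0, 0.0, 0.0, 1.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0])
    return least_squares(residuals, theta0).x
```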
Step S2, extracting visual feature points of the distorted image, and performing data association
Visual feature points are extracted directly on the distortion map obtained by the wide-angle camera; the visual feature points include, but are not limited to, the existing SIFT, SURF and ORB feature points, and feature-point descriptor matching is performed after the feature points are extracted. For example, the system uses the ORB algorithm to extract image feature points, and each extracted ORB feature point carries a corresponding 256-bit BRIEF binary descriptor. For two temporally adjacent images generated in continuous time, image one is generated at time $t_1$ and image two at time $t_2$ ($t_1 < t_2$). For a given descriptor in image one, the Hamming distances to the BRIEF descriptors of all feature points in image two are calculated and sorted in ascending order; if the smallest descriptor distance is less than 60% of the second-smallest descriptor distance, and at the same time the smallest descriptor distance is less than 45, the feature point in image one and the feature point in image two corresponding to the smallest descriptor distance are considered an effective matching point pair. Then the back projection of the specific camera model obtained in step S1 is used to convert the visual feature points into the direction vectors of their corresponding incident rays, and the matching relations of the incident rays are obtained from the matching relations of the two-dimensional plane feature points.
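As an illustration of this matching test, the following sketch uses OpenCV's ORB detector and brute-force Hamming matcher; the function name and the use of OpenCV are assumptions for illustration, while the 60% ratio and the distance threshold of 45 come from the description above.

```python
import cv2

def valid_matches(img1, img2, ratio=0.6, max_dist=45):
    """Extract ORB features in two adjacent images and keep pairs passing the
    two acceptance tests of step S2: best Hamming distance < 60% of the
    second best, and best Hamming distance < 45."""
    orb = cv2.ORB_create()
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    pairs = []
    for m in matcher.knnMatch(des1, des2, k=2):
        if len(m) == 2 and m[0].distance < ratio * m[1].distance \
                and m[0].distance < max_dist:
            pairs.append((kp1[m[0].queryIdx].pt, kp2[m[0].trainIdx].pt))
    return pairs
```

Each returned pixel pair would then be back-projected by the specific camera model into an incident-ray matching pair.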
Step S3, distortion map initialization
(1) The relative motion of two adjacent frames of images under different geometric relationships is solved through the matched unit incident rays and the corresponding geometric relationships satisfied between them.

The corresponding geometric relationships satisfied between unit incident rays mainly include two types.

The first is the homography geometric relationship, referred to as the H model for short: the points in physical space corresponding to the matched feature points all lie on one plane, whose plane equation is

$n^T X = d$

where n represents the normal vector of the plane and d represents the constant term of the plane equation.

In this case, the relative motion relationship between the two adjacent frames of images is expressed as follows:

$g(x_1) \times \left[\left(R + \frac{t\,n^T}{d}\right) g(x_2)\right] = 0$   (7)

where the feature point obtained at the first camera position has coordinates $x_1$ and the feature point observed at the second camera position and matched with $x_1$ is $x_2$; R represents the relative rotation of the two positions and t represents the relative displacement of the two positions.
the second is the epipolar geometry of unit incident rays, referred to as an F model for short, i.e., it is assumed that the point portions or all points in the physical space corresponding to the feature points on the matching are not on the same plane;
at this time, the relative motion relationship between two adjacent frames of images is expressed as follows:
g(x1)*t×R*g(x2)=0 (8)
wherein the coordinate of the feature point obtained at the first position of the camera is x1And x1Matching the feature point observed at the second position of the camera to x2R represents the relative rotation of the two positions,
Figure BDA0003151457910000095
representing the relative displacement of two positions;
and (4) through the relational expression, simultaneously establishing an equation set formed by a plurality of matching point pairs, and solving a rotation matrix R and a displacement vector t of two opposite camera positions.
(2) After the relative pose of two consecutive positions of the wide-angle camera has been calculated, the positions of the three-dimensional space points observed by two corresponding feature points are recovered by triangulating the unit incident rays corresponding to the feature points.

The feature point obtained at the first camera position has coordinates $x_1$ and the feature point observed at the second camera position and matched with $x_1$ is $x_2$; R represents the relative rotation of the two positions and t represents the relative displacement of the two positions. Meanwhile, the world coordinate of the point in three-dimensional space corresponding to $x_1$ and $x_2$ is X, and the projection matrix of the camera at the first position is

$P_1 = [I\ \ 0]$   (9)

where I denotes a 3×3 (3 rows and 3 columns) identity matrix and 0 denotes a 3×1 (3 rows and 1 column) zero matrix;

the projection matrix of the camera at the second position is

$P_2 = [R\ \ t]$   (10)

The matched feature points of the two frames and the point X in the corresponding physical space satisfy the following equations:

$g(x_1) \times (P_1 X) = 0$   (11)

$g(x_2) \times (P_2 X) = 0$   (12)

From the above formulas, the spatial position X of the same three-dimensional point observed in two consecutive frames of large-distortion images is solved.
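Equations (11)-(12) stack into a homogeneous linear system in the homogeneous coordinates of X, which can be solved by SVD. A minimal sketch, assuming this standard DLT-style triangulation (the patent does not name a specific solver):

```python
import numpy as np

def skew(v):
    """Cross-product matrix: skew(v) @ w equals np.cross(v, w)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def triangulate(r1, r2, R, t):
    """Solve equations (11)-(12) for the space point X, with P1 = [I 0] and
    P2 = [R t]; r1, r2 are the matched unit incident rays g(x1), g(x2)."""
    P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = np.hstack([R, t.reshape(3, 1)])
    A = np.vstack([skew(r1) @ P1, skew(r2) @ P2])  # 6 x 4 system, A @ X = 0
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # inhomogeneous 3-D coordinates
```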
Step S4, geometric model selection
From the rotation R and translation vector t obtained in step S3, the matching point pair $x_1$ and $x_2$, and the point X in three-dimensional space, the error calculation modes of the two geometric models are determined.

From the three-dimensional spatial geometric relationship of the images, there exists a plane α such that the incident ray of $x_2$ is coplanar with the displacement vector t; the normal vector of plane α is:

$n_\alpha = t \times R\,g(x_2)$   (13)

A plane β likewise exists such that the incident ray of $x_1$ is coplanar with the displacement vector t; the normal vector of plane β is then:

$n_\beta = (-R^{-1}\,t) \times R^{-1}\,g(x_1)$   (14)

Since t also lies in this plane, the optical centers of the two cameras lie in this plane as well, i.e. the plane equation has no constant term:

$n_\beta^T\,X = 0$   (15)

From the above, the angle Δθ between the incident ray of $x_1$ and the plane α is:

$e_1 = \Delta\theta = \arcsin\left(\dfrac{\left|n_\alpha^T\,g(x_1)\right|}{\|n_\alpha\|\;\|g(x_1)\|}\right)$   (16)

Similarly, the angle error of the incident ray of $x_2$ relative to the corresponding polar plane β is:

$e_2 = \arcsin\left(\dfrac{\left|n_\beta^T\,g(x_2)\right|}{\|n_\beta\|\;\|g(x_2)\|}\right)$   (17)

The error of a model is defined as:

$e_{sum} = e_1 + e_2$   (18)

If $e_F > e_H$, the H model is selected for map initialization; if $e_H > e_F$, the F model is selected for map initialization.
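Under the ray-plane angle definition of equations (16)-(17), the model error of equation (18) can be computed as in the following sketch; it assumes R is a rotation matrix (so that $R^{-1} = R^T$) and unit incident rays r1 = g(x1), r2 = g(x2):

```python
import numpy as np

def model_error(R, t, r1, r2):
    """Angle errors of equations (13)-(18): e1 is the angle between r1 and
    plane alpha, e2 the angle between r2 and plane beta; returns e_sum."""
    n_a = np.cross(t, R @ r2)           # eq. (13), normal of plane alpha
    n_b = np.cross(-R.T @ t, R.T @ r1)  # eq. (14), normal of plane beta
    e1 = np.arcsin(abs(n_a @ r1) / (np.linalg.norm(n_a) * np.linalg.norm(r1)))
    e2 = np.arcsin(abs(n_b @ r2) / (np.linalg.norm(n_b) * np.linalg.norm(r2)))
    return e1 + e2                      # eq. (18)
```

Summing this error over all matches for the H-model and F-model motion hypotheses gives $e_H$ and $e_F$, and the model with the smaller error is kept.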
Step S5, recursive scale recovery
The scale of the map is adjusted recursively, frame by frame, through the observations of multiple frames in the video sequence: when the scale information of a map point observed in the k-th frame is adjusted, the scale adjustment must be completed through the observations of that map point in frames 1 to k-1; likewise, when the scale of frame k-1 was adjusted, the scale adjustment of the map point was completed through its observations in frames 1 to k-2. Through steps S2 and S3, each image frame in the video sequence yields a rotation $R^c_{\mu_i}$ and a displacement $t^c_{\mu_i}$, i.e. the corresponding motion information $T^c_{\mu_i} = (R^c_{\mu_i},\, t^c_{\mu_i})$, where μ is the time corresponding to the current image, K is the serial number of the current image, and i is the serial number of an image in the video sequence collected by the camera; by the composition of three-dimensional motions, the relative motion from image j to image i is $(T^c_{\mu_j})^{-1}\, T^c_{\mu_i}$.

By odometric integration of the data of the wheel encoder or inertial navigation system, the absolute odometer pose $O_\mu$ of the carrier at the time μ of the corresponding image is obtained; matching it to the image serial numbers gives, for the images of different serial numbers at their corresponding times, the odometer pose of the carrier $O_{\mu_i} = (R^o_{\mu_i},\, t^o_{\mu_i})$.

For 0 ≤ i ≤ K and 0 ≤ j < i, the absolute scale of the visual map is calculated as:

$s = \dfrac{\sum_{0 \le j < i \le K} \left\| (R^o_{\mu_j})^{-1}\,(t^o_{\mu_i} - t^o_{\mu_j}) \right\|}{\sum_{0 \le j < i \le K} \left\| (R^c_{\mu_j})^{-1}\,(t^c_{\mu_i} - t^c_{\mu_j}) \right\|}$   (19)

where $R^o_{\mu_j}$ denotes the rotation measured by the wheel encoder or inertial navigation system at time $\mu_j$; $t^o_{\mu_i}$ and $t^o_{\mu_j}$ denote the displacements measured by the wheel encoder or inertial navigation system at times $\mu_i$ and $\mu_j$, respectively; $R^c_{\mu_j}$ denotes the rotation calculated from the image information at time $\mu_j$; and $t^c_{\mu_i}$ and $t^c_{\mu_j}$ denote the displacements calculated from the image information at times $\mu_i$ and $\mu_j$, respectively.

The true position $X_r$ of a visual map point in the world coordinate system in three-dimensional space is then recovered as

$X_r = s\,X$   (20)

where X is the coordinate value, in the world coordinate system, of the initial map point recovered in step S3.
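Equation (19) is read here as the ratio of odometer to visual relative displacements accumulated over all frame pairs j < i; that pairwise accumulation is an assumption of the following sketch, not the patented formula verbatim:

```python
import numpy as np

def absolute_scale(R_o, t_o, R_c, t_c):
    """Scale per equation (19), as reconstructed above: odometer relative
    displacement over visual relative displacement, summed over pairs j < i.
    R_o/t_o are odometer rotations/displacements, R_c/t_c the visual ones."""
    num = den = 0.0
    for i in range(len(t_o)):
        for j in range(i):
            num += np.linalg.norm(R_o[j].T @ (t_o[i] - t_o[j]))  # odometer
            den += np.linalg.norm(R_c[j].T @ (t_c[i] - t_c[j]))  # vision
    return num / den

# Scaled map point, equation (20): X_r = absolute_scale(...) * X
```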
Embodiment 2: an initialization mapping system based on a wide-angle camera large-distortion map
As shown in fig. 2, the initialization map building system based on the large distortion map of the wide-angle camera provided by the invention comprises a specific camera model building module 1, a feature point extracting module 2, a feature point matching module 3, a data association module 4, an incident ray association module 5, an H model analysis module 6, an F model analysis module 7, a distortion map initialization module 8, a model selecting module 9, an absolute scale calculating module 10 and an initialization map building module 11;
the specific camera model building module 1 is configured to determine a specific camera model used by the system according to a least square error term between an incident light vector obtained by back projection of a built pre-calibrated camera model and an incident light vector obtained by back projection of an initial specific camera model, and convert the visual feature points into direction vectors of the incident lights corresponding to the visual feature points according to the specific camera model;
the characteristic point extraction module 2 is used for extracting visual characteristic points on the distorted image;
the feature point matching module 3 is used for matching feature point descriptors of the extracted feature points, so that the extracted feature points have corresponding descriptors;
the data association module 4 is used for finding out feature points which can be effectively matched in two images which are generated on continuous time and are adjacent in time;
the incident light correlation module 5 is used for converting the effectively matched feature point corresponding relation into the corresponding relation of incident light rays at different moments in two images which are generated in continuous time and are adjacent in time;
the H model resolving module 6 is used for calculating the relative motion relation of two continuous frames of images when the matching pair of the incident light is in a homography geometric relation;
the F model resolving module 7 is used for calculating the relative motion relationship of two continuous frames of images when the matching pair of the incident light is an epipolar geometric relationship;
the distortion map initialization module 8 is used for solving the value of the spatial position X of the same three-dimensional point observed in two continuous frames of large distortion images according to the data output by the H model solving module or the F model solving module;
the model selection module 9 determines to select the model to be solved through error comparison;
the absolute scale calculation module 10 is configured to process data measured by the wheel encoder or the inertial navigation system and data obtained through image information calculation to obtain an absolute scale of the visual map;
and the initialized map building module 11 is configured to process the X value obtained in the distorted map initializing module according to the absolute scale of the visual map, so as to obtain a coordinate of a final three-dimensional space point with the absolute scale.
The initialization mapping method and system can be used in fields such as autonomous driving, fully automatic robots, unmanned aerial vehicles, virtual reality and augmented reality. When used for autonomous driving, initial mapping for an autonomous vehicle can be realized; when used in the field of fully automatic robots, initialization of robot mapping can be realized; when used in the field of unmanned aerial vehicles, initial mapping for the unmanned aerial vehicle can be realized. The method obtains a more accurate initialized map and thus guarantees the effect of subsequent mapping.
In addition, it should be noted that the mapping method finally obtains three-dimensional space point coordinates with absolute scale, and the resulting visual map with absolute scale can also be applied on robot or autonomous-driving vehicle carriers for obstacle recognition and path planning in real space; therefore, any field or application involving the technical solution of the present application falls within the protection scope of the present application.

Claims (10)

1. An initial map construction method based on large-distortion images from a wide-angle camera, characterized in that the method comprises the following steps:
step S1, determining a specific camera model by tracing the incident light: traversing all pixel points on the image, back-projecting each pixel point into a corresponding incident ray vector through a pre-calibrated camera model, and back-projecting the same pixel point into a corresponding incident ray vector through an initial specific camera model; constructing a least square error term between an incident ray vector obtained by back projection of a camera model calibrated in advance and an incident ray vector obtained by back projection of an initial specific camera model, adding error terms generated by all pixel points on an image, and iteratively adjusting the specific camera model to minimize the sum of the error terms so as to determine a final specific camera model;
step S2, extracting visual feature points of the distorted image, and performing data association to generate incident-ray matching pairs: extracting visual feature points on the distorted image, and judging whether two feature points form an effective matching point pair by calculating the distance between the descriptors of the image blocks corresponding to the feature points; then applying the specific camera model obtained in step S1 to each matching point pair, converting the visual feature points into the direction vectors of their corresponding incident rays, and generating the incident-ray matching pairs;
step S3, initializing the distortion map according to the corresponding geometric relationship of the incident ray matching: respectively calculating an epipolar geometric relationship and a homography geometric relationship corresponding to the incident ray matching pair according to the incident ray matching pair obtained in the step S2, decomposing the epipolar matrix and the homography matrix to obtain relative motion of two corresponding frames of images, and recovering points in a three-dimensional space corresponding to the incident ray matching pair under the condition of known relative motion; wherein, the homography geometric relation is referred to as an H model for short, and the epipolar geometric relation is referred to as an F model for short;
step S4, selecting a geometric model by error comparison: according to the calculation result of the step S3, errors of the relative motion of the two frames of images obtained by decomposing the epipolar matrix and the homography matrix are respectively calculated, and the optimal relative motion calculation result is selected through error comparison;
step S5, recursive scale restoration: in the video frame sequence, the pose with the absolute scale of each frame image is obtained through the input of a wheel type encoder and an inertial navigation system, the inter-frame optimal relative motion calculation result obtained in the step S4 is aligned with the pose, and the absolute scale of the three-dimensional map point is adjusted through multi-frame accumulation to obtain the coordinate with the absolute scale of the final three-dimensional space point.
2. The initial mapping method based on the large distortion map of the wide-angle camera as claimed in claim 1, wherein: the initial specific camera model in step S1 is the specific camera model of a 4th-order polynomial camera, and the direction vector r of the incident ray corresponding to any pixel point x(u, v) in the image is calculated by back projection as follows:

(1) first, a two-dimensional planar affine transformation is applied to the image pixel coordinates:

$\begin{pmatrix} u' \\ v' \end{pmatrix} = s\,A \begin{pmatrix} u - u_c \\ v - v_c \end{pmatrix}$

where $u_c$ and $v_c$ respectively denote the image center coordinates in the horizontal and vertical directions, and A denotes an affine transformation matrix

$A = \begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix}$

whose initial value is typically set to the identity matrix during calibration of the specific camera, i.e.

$A = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}$

and s denotes a scaling factor, whose initial value is typically set to 1 during calibration of the specific camera;

(2) the distance ρ of the affine-transformed pixel coordinates from the center of the 4th-order polynomial camera is calculated:

$\rho = \sqrt{u'^2 + v'^2}$

(3) the component of the incident ray along the z-axis of the camera coordinate system is calculated:

$z' = a_0 + a_1\rho + a_2\rho^2 + a_3\rho^3 + a_4\rho^4$

where $a_0, a_1, a_2, a_3, a_4$ denote the coefficients of the polynomial;

the direction vector of the incident ray corresponding to the image pixel point is finally obtained as

$r = \dfrac{(u',\, v',\, z')^T}{\left\|(u',\, v',\, z')^T\right\|}$
3. The initial mapping method based on the large distortion map of the wide-angle camera as claimed in claim 2, characterized in that: when each pixel point is back-projected to its corresponding incident-ray vector through the pre-calibrated camera model in step S1, the back-projection process of the pre-calibrated camera model can be abstractly expressed as a function f, so that the incident ray traced by the back projection of the pre-calibrated camera model for pixel point x is r', that is: $r' = f(x)$;

the sum of the least-square error terms constructed between the incident rays obtained by back projection of the pre-calibrated camera model and those obtained by back projection of the specific camera model is:

$g = \min_g \sum_{n} (r - r')^2$

where g denotes the back projection of the specific camera model that is finally used, and n denotes the number of pixel points on the image.
4. The initial mapping method based on the large distortion map of the wide-angle camera as claimed in claim 1, wherein: in step S2, whether a matching point pair is valid is determined as follows:

(1) two temporally adjacent images generated in continuous time are acquired, feature points are extracted on each image, and feature-point descriptor matching is performed after the feature points are extracted;

(2) for a given descriptor in the first image, the Hamming distances to the descriptors of all feature points in the second image are calculated and sorted in ascending order; if the smallest descriptor distance is less than 60% of the second-smallest descriptor distance, and at the same time the smallest descriptor distance is less than 45, the feature point in the first image and the feature point in the second image corresponding to the smallest descriptor distance are judged to be an effective match.
5. The initial mapping method based on the large distortion map of the wide-angle camera as claimed in claim 3, characterized in that: when an incident-ray matching pair satisfies the homography geometric relationship in step S3, the relative motion relationship between the two frames of images is expressed as follows:

$g(x_1) \times \left[\left(R + \frac{t\,n^T}{d}\right) g(x_2)\right] = 0$

where the feature point obtained at the first camera position has coordinates $x_1$ and the feature point observed at the second camera position and matched with $x_1$ is $x_2$; R denotes the relative rotation of the two positions and t denotes the relative displacement of the two positions; n and d denote the normal vector and the constant term of the plane on which the corresponding space points lie.
6. The initial mapping method based on the large distortion map of the wide-angle camera as claimed in claim 3, characterized in that: when an incident-ray matching pair satisfies the epipolar geometric relationship in step S3, the relative motion relationship between the two frames of images is expressed as follows:

$g(x_1)^T\,\left(t \times R\,g(x_2)\right) = 0$

where the feature point obtained at the first camera position has coordinates $x_1$ and the feature point observed at the second camera position and matched with $x_1$ is $x_2$; R denotes the relative rotation of the two positions and t denotes the relative displacement of the two positions.
7. The initial mapping method based on the large distortion map of the wide-angle camera as claimed in claim 3, characterized in that: in step S3, when the point in three-dimensional space corresponding to an incident-ray matching pair is recovered, the matched feature points $x_1$ and $x_2$ and the world coordinate X of their corresponding point in three-dimensional space satisfy the following relations:

$g(x_1) \times (P_1 X) = 0$

$g(x_2) \times (P_2 X) = 0$

where $P_1 = [I\ \ 0]$ is the projection matrix of the camera at the first position and $P_2 = [R\ \ t]$ is the projection matrix of the camera at the second position;

from these formulas, the spatial position X of the same three-dimensional point observed in two consecutive frames of large-distortion images can be solved.
8. The initial mapping method based on the large distortion map of the wide-angle camera as claimed in claim 3, characterized in that: in step S4, when the geometric model is selected, the model error is calculated as:

$e_{sum} = e_1 + e_2$

$e_1 = \arcsin\left(\dfrac{\left|n_\alpha^T\,g(x_1)\right|}{\|n_\alpha\|\;\|g(x_1)\|}\right)$

$e_2 = \arcsin\left(\dfrac{\left|n_\beta^T\,g(x_2)\right|}{\|n_\beta\|\;\|g(x_2)\|}\right)$

where $e_1$ is the angle error of the incident ray of $x_1$ relative to the plane α, and $e_2$ is the angle error of the incident ray of $x_2$ relative to the corresponding polar plane β; $n_\alpha$ is the normal vector of the plane α in which the incident ray of $x_2$ is coplanar with the displacement vector t, and $n_\beta$ is the normal vector of the plane β formed by the incident ray of $x_1$ and the displacement vector t:

$n_\alpha = t \times R\,g(x_2)$

$n_\beta = (-R^{-1}\,t) \times R^{-1}\,g(x_1)$

If $e_F > e_H$, the H model is selected for map initialization; if $e_H > e_F$, the F model is selected for map initialization.
9. The initial mapping method based on the large distortion map of the wide-angle camera as claimed in claim 3, characterized in that: in the recursive scale recovery of step S5, the absolute scale of the visual map is calculated as:

$s = \dfrac{\sum_{0 \le j < i \le K} \left\| (R^o_{\mu_j})^{-1}\,(t^o_{\mu_i} - t^o_{\mu_j}) \right\|}{\sum_{0 \le j < i \le K} \left\| (R^c_{\mu_j})^{-1}\,(t^c_{\mu_i} - t^c_{\mu_j}) \right\|}$

where $R^o_{\mu_j}$ denotes the rotation measured by the wheel encoder or inertial navigation system at time $\mu_j$; $t^o_{\mu_i}$ and $t^o_{\mu_j}$ denote the displacements measured by the wheel encoder or inertial navigation system at times $\mu_i$ and $\mu_j$, respectively; $R^c_{\mu_j}$ denotes the rotation calculated from the image information at time $\mu_j$; and $t^c_{\mu_i}$ and $t^c_{\mu_j}$ denote the displacements calculated from the image information at times $\mu_i$ and $\mu_j$, respectively;

then the true position $X_r$ of a visual map point in the world coordinate system in three-dimensional space is recovered according to the absolute scale:

$X_r = s\,X$

where X is the coordinate value, in the world coordinate system, of the initial map point recovered in step S3.
10. An initial mapping system based on a large distortion map of a wide-angle camera is characterized in that: the system comprises a specific camera model building module, a feature point extraction module, a feature point matching module, a data association module, an incident ray association module, an H model solving module, an F model solving module, a distortion map initialization module, a model selection module, an absolute scale calculation module and an initialized map building module;

the specific camera model building module is used for determining the specific camera model used by the system, according to a least-squares error term between the incident-ray vectors back-projected by the pre-calibrated camera model and the incident-ray vectors back-projected by the initial specific camera model, and for converting visual feature points into the direction vectors of their corresponding incident rays according to that specific camera model;

the feature point extraction module is used for extracting visual feature points from the distorted image;

the feature point matching module is used for matching the descriptors of the extracted feature points, ensuring that each extracted feature point has a corresponding descriptor;

the data association module is used for finding the feature points that can be effectively matched between two temporally adjacent images generated in continuous time;

the incident ray association module is used for converting the correspondences of the effectively matched feature points into correspondences of incident rays at different times between the two temporally adjacent images;

the H model solving module is used for calculating the relative motion between two consecutive frames of images when the incident-ray matching pairs satisfy a homography geometric relation;

the F model solving module is used for calculating the relative motion between two consecutive frames of images when the incident-ray matching pairs satisfy an epipolar geometric relation;

the distortion map initialization module is used for solving the spatial position X of the same three-dimensional point observed in two consecutive frames of large distortion images, according to the data output by the H model solving module or the F model solving module;

the model selection module selects the model to be solved by comparing the model errors;

the absolute scale calculation module is used for processing the data measured by the wheel encoder or inertial navigation system together with the data calculated from the image information to obtain the absolute scale of the visual map;

and the initialized map building module is used for processing the X values obtained by the distortion map initialization module according to the absolute scale of the visual map to obtain the coordinates of the final three-dimensional space points with absolute scale.
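To make the claim-10 module layout concrete, the sketch below wires the claimed modules together in a hypothetical Python skeleton; every identifier is illustrative rather than taken from the patent, and the stub bodies only mark where the claimed behavior would live.

```python
# Hypothetical skeleton of the claim-10 system; identifiers are illustrative.
class InitialMappingSystem:
    def build_specific_camera_model(self, calibrated_model):
        """Specific camera model building module: least-squares fit between
        incident rays back-projected by the pre-calibrated model and by the
        initial specific model."""
        ...

    def extract_features(self, distorted_image):
        """Feature point extraction module."""
        ...

    def match_and_associate(self, feats_prev, feats_curr):
        """Feature point matching + data association modules: effective
        matches between two temporally adjacent images."""
        ...

    def associate_rays(self, matches, camera_model):
        """Incident ray association module: pixel matches -> ray pairs."""
        ...

    def solve_H(self, ray_pairs):
        """H model solving module: relative motion plus its error e_H."""
        ...

    def solve_F(self, ray_pairs):
        """F model solving module: relative motion plus its error e_F."""
        ...

    def initialize_points(self, motion, ray_pairs):
        """Distortion map initialization module: solve positions X."""
        ...

    def absolute_scale(self, odometry, visual_motion):
        """Absolute scale calculation module (wheel encoder / INS vs vision)."""
        ...

    def run(self, img_prev, img_curr, camera_model, odometry):
        """Wire the modules together in the order the claim lists them."""
        matches = self.match_and_associate(self.extract_features(img_prev),
                                           self.extract_features(img_curr))
        rays = self.associate_rays(matches, camera_model)
        motion_H, e_H = self.solve_H(rays)
        motion_F, e_F = self.solve_F(rays)
        motion = motion_H if e_F > e_H else motion_F   # model selection module
        X = self.initialize_points(motion, rays)
        s = self.absolute_scale(odometry, motion)
        return [s * p for p in X]                      # initialized map building module
```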
CN202110767999.2A 2021-07-07 2021-07-07 Initialization map building method and system based on wide-angle camera large distortion map Active CN113345032B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110767999.2A CN113345032B (en) 2021-07-07 2021-07-07 Initialization map building method and system based on wide-angle camera large distortion map

Publications (2)

Publication Number Publication Date
CN113345032A (en) 2021-09-03
CN113345032B CN113345032B (en) 2023-09-15

Family

ID=77482901

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110767999.2A Active CN113345032B (en) 2021-07-07 2021-07-07 Initialization map building method and system based on wide-angle camera large distortion map

Country Status (1)

Country Link
CN (1) CN113345032B (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105654484A (en) * 2015-12-30 2016-06-08 西北工业大学 Light field camera external parameter calibration device and method
US20190102868A1 (en) * 2017-10-04 2019-04-04 Intel Corporation Method and system of image distortion correction for images captured by using a wide-angle lens
US20210082137A1 (en) * 2018-06-07 2021-03-18 Uisee Technologies (Beijing) Ltd. Method and device of simultaneous localization and mapping
CN109345471A (en) * 2018-09-07 2019-02-15 贵州宽凳智云科技有限公司北京分公司 High-precision map datum method is drawn based on the measurement of high-precision track data
CN110108258A (en) * 2019-04-09 2019-08-09 南京航空航天大学 A kind of monocular vision odometer localization method
CN110070615A (en) * 2019-04-12 2019-07-30 北京理工大学 A kind of panoramic vision SLAM method based on polyphaser collaboration

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
张剑华; 王燕燕; 王曾媛; 陈胜勇; 管秋: "Map recovery and fusion in monocular simultaneous localization and mapping", Journal of Image and Graphics (中国图象图形学报), no. 03 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113592865A (en) * 2021-09-29 2021-11-02 湖北亿咖通科技有限公司 Quality inspection method and equipment for three-dimensional map and storage medium
CN113899357A (en) * 2021-09-29 2022-01-07 北京易航远智科技有限公司 Incremental mapping method and device for visual SLAM, robot and readable storage medium
CN113592865B (en) * 2021-09-29 2022-01-25 湖北亿咖通科技有限公司 Quality inspection method and equipment for three-dimensional map and storage medium
CN113899357B (en) * 2021-09-29 2023-10-31 北京易航远智科技有限公司 Incremental mapping method and device for visual SLAM, robot and readable storage medium

Also Published As

Publication number Publication date
CN113345032B (en) 2023-09-15

Similar Documents

Publication Title
CN109166149B (en) Positioning and three-dimensional line frame structure reconstruction method and system integrating binocular camera and IMU
CN111210463B (en) Virtual wide-view visual odometer method and system based on feature point auxiliary matching
CN107564061B (en) Binocular vision mileage calculation method based on image gradient joint optimization
CN108717712B (en) Visual inertial navigation SLAM method based on ground plane hypothesis
Park et al. High-precision depth estimation using uncalibrated LiDAR and stereo fusion
US9613420B2 (en) Method for locating a camera and for 3D reconstruction in a partially known environment
JP2019536170A (en) Virtually extended visual simultaneous localization and mapping system and method
Liu et al. Direct visual odometry for a fisheye-stereo camera
CN113345032B (en) Initialization map building method and system based on wide-angle camera large distortion map
CN111798373A (en) Rapid unmanned aerial vehicle image stitching method based on local plane hypothesis and six-degree-of-freedom pose optimization
CN111932674A (en) Optimization method of line laser vision inertial system
CN110533719B (en) Augmented reality positioning method and device based on environment visual feature point identification technology
CN112085849A (en) Real-time iterative three-dimensional modeling method and system based on aerial video stream and readable medium
CN114708293A (en) Robot motion estimation method based on deep learning point-line feature and IMU tight coupling
CN113516692A (en) Multi-sensor fusion SLAM method and device
Fang et al. Self-supervised camera self-calibration from video
Huang et al. 360vo: Visual odometry using a single 360 camera
CN114494150A (en) Design method of monocular vision odometer based on semi-direct method
CN112556719A (en) Visual inertial odometer implementation method based on CNN-EKF
CN113744308A (en) Pose optimization method, pose optimization device, electronic device, pose optimization medium, and program product
CN113763481B (en) Multi-camera visual three-dimensional map construction and self-calibration method in mobile scene
CN116664621A (en) SLAM system based on vehicle-mounted multi-camera and deep neural network
CN112419411B (en) Realization method of vision odometer based on convolutional neural network and optical flow characteristics
CN115147344A (en) Three-dimensional detection and tracking method for parts in augmented reality assisted automobile maintenance
CN114964276A (en) Dynamic vision SLAM method fusing inertial navigation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant