CN117112043A - Initialization method and device of visual inertial system, electronic equipment and medium - Google Patents

Initialization method and device of visual inertial system, electronic equipment and medium

Info

Publication number
CN117112043A
CN117112043A (Application CN202311364806.4A)
Authority
CN
China
Prior art keywords
frame
coordinate system
key
inertial
visual
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202311364806.4A
Other languages
Chinese (zh)
Other versions
CN117112043B (en)
Inventor
熊伟成
张亮
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Smart Mapping Tech Co ltd
Original Assignee
Shenzhen Smart Mapping Tech Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Smart Mapping Tech Co ltd filed Critical Shenzhen Smart Mapping Tech Co ltd
Priority to CN202311364806.4A
Publication of CN117112043A
Application granted
Publication of CN117112043B
Legal status: Active

Classifications

    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01C - MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C25/00 - Manufacturing, calibrating, cleaning, or repairing instruments or devices referred to in the other groups of this subclass
    • G01C25/005 - Manufacturing, calibrating, cleaning, or repairing instruments or devices referred to in the other groups of this subclass; initial alignment, calibration or starting-up of inertial devices
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01C - MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/005 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00; with correlation of navigation data from several sources, e.g. map or contour matching
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01C - MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/10 - Navigation by using measurements of speed or acceleration
    • G01C21/12 - Navigation by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
    • G01C21/16 - Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
    • G01C21/165 - Inertial navigation combined with non-inertial navigation instruments
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F17/00 - Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F17/10 - Complex mathematical operations
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F17/00 - Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F17/10 - Complex mathematical operations
    • G06F17/11 - Complex mathematical operations for solving equations, e.g. nonlinear equations, general mathematical optimization problems
    • G06F17/12 - Simultaneous equations, e.g. systems of linear equations
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 - Arrangements for program control, e.g. control units
    • G06F9/06 - Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44 - Arrangements for executing specific programs
    • G06F9/4401 - Bootstrapping

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Theoretical Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Mathematical Optimization (AREA)
  • Computational Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Mathematical Analysis (AREA)
  • Pure & Applied Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Algebra (AREA)
  • Automation & Control Theory (AREA)
  • Databases & Information Systems (AREA)
  • Manufacturing & Machinery (AREA)
  • Computer Security & Cryptography (AREA)
  • Operations Research (AREA)
  • Image Analysis (AREA)

Abstract

The application relates to a method, an apparatus, an electronic device, and a medium for initializing a visual inertial system. The method comprises the following steps: acquiring visual image data, and determining a plurality of key frames and feature points in the visual image data; acquiring motion data corresponding to each key frame, and constructing an inertial measurement model according to the motion data; acquiring position information of the feature points, and constructing a multi-view geometric model according to the position information; combining the inertial measurement model and the multi-view geometric model to obtain a linear equation set corresponding to the key frames and the feature points; determining a solving result of the linear equation set; and determining, according to the solving result, the state quantity of each key frame in the global coordinate system and the position of each feature point in the global coordinate system. By combining the inertial measurement model and the multi-view geometric model, the method unites the advantages of the visual sensor and the inertial sensor, ensures the accuracy of the estimate of the system's running state, and improves the accuracy of visual inertial system initialization.

Description

Initialization method and device of visual inertial system, electronic equipment and medium
Technical Field
The present application relates to the field of system calibration, and in particular, to a method and apparatus for initializing a visual inertial system, an electronic device, and a medium.
Background
SLAM (Simultaneous Localization and Mapping) technology can provide high-precision position and attitude information in complex environments and model a scene accurately; it is one of the fundamental core technologies in the fields of intelligent robots, unmanned systems, and VR/AR. The current mainstream approach to SLAM integrates an IMU (Inertial Measurement Unit) and is called visual-inertial SLAM; the IMU can provide the system with accurate rotation, position, and velocity information over short time spans, and can greatly improve the precision and robustness of state estimation in a visual SLAM system.
Initialization provides the visual-inertial SLAM system with initial values of the state quantities and the feature-point positions in the global coordinate system, and is a precondition for the system to operate. Existing initialization methods suffer from inaccurate initialization.
Disclosure of Invention
The application provides an initialization method, an initialization device, electronic equipment and a medium of a visual inertial system, and aims to solve the technical problem that in the prior art, the initialization of an SLAM system is inaccurate.
To solve the above technical problems or at least partially solve the above technical problems, the present application provides an initialization method of a visual inertial system, the method comprising the steps of:
acquiring visual image data, and determining a plurality of key frames and feature points in the visual image data;
acquiring motion data corresponding to each key frame, and constructing an inertial measurement model according to the motion data;
acquiring the position information of the feature points, and constructing a multi-view geometric model according to the position information;
the inertial measurement model and the multi-view geometric model are combined to obtain a linear equation set corresponding to the key frame and the characteristic points;
determining a solving result of the linear equation set;
and determining the state quantity of each key frame in a global coordinate system and the position of each characteristic point in the global coordinate system according to the solving result.
Optionally, the step of acquiring visual image data and determining a plurality of key frames and feature points in the visual image data includes:
acquiring a plurality of image frames in the visual image data, and tracking characteristic points in each image frame;
setting a first frame image frame in the image frames as a key frame, and taking the number of the feature points tracked for the first time as a first number;
sequentially taking the number of common view feature points contained between the first frame image frame and the non-first frame image frame as a second number for each non-first frame image frame except the first frame image frame;
judging whether the second number is smaller than a preset multiple of the first number, and judging whether the number of frames of the image frames between the non-first frame image frame and the previous key frame is larger than a preset number of frames;
and if the second number is smaller than the preset multiple of the first number and the number of the image frames between the non-first frame image frame and the previous key frame is larger than the preset number of frames, the non-first frame image frame is the key frame.
Optionally, the step of obtaining motion data corresponding to each key frame and constructing an inertial measurement model according to the motion data includes:
acquiring corresponding acceleration data, angular velocity data and time intervals for two continuous key frames, wherein the acceleration data, the angular velocity data and the time intervals are the motion data;
pre-integrating the inertial state quantity of the previous key frame through the acceleration data, the angular velocity data and the time interval to obtain the inertial state quantity of the following key frame;
determining the inertial state quantity of each key frame according to the continuous integral of all the key frames;
and converting the inertial state quantity of each key frame into a first frame coordinate system to obtain the inertial measurement model.
Optionally, the position information includes a key position of the feature point in a key frame coordinate system of a key frame, and an initial position of the feature point in a first frame coordinate system; the step of obtaining the position information of the feature points and constructing a multi-view geometric model according to the position information comprises the following steps:
constructing a position conversion model through the key position and the initial position;
and converting the position conversion model into a first frame coordinate system to obtain the multi-view geometric model.
Optionally, the step of combining the inertial measurement model and the multi-view geometric model to obtain a linear equation set corresponding to the key frame and the feature points includes:
combining the inertial measurement model and the multi-view geometric model to obtain a basic linear equation;
determining, for each feature point, the basic linear equation of the observation key frame corresponding to the feature point, wherein the observation key frame is the key frame tracking the feature point;
combining the corresponding basic linear equations of each observation key frame to obtain a characteristic point equation set;
and combining the characteristic point equation sets corresponding to the characteristic points to obtain the linear equation set.
Optionally, the step of determining the solution of the system of linear equations includes:
converting the linear equation set into a linear least square optimization form by taking the gravity acceleration as constraint;
converting the linear least squares optimization form into a least squares optimization form by a Lagrangian multiplier method;
and calculating the solving result corresponding to the least Lagrangian multiplier in the least square optimization form.
Optionally, the solving result includes an initial position of each feature point in the first frame coordinate system, and a velocity vector formed by the velocity of the first frame image frame in the first frame coordinate system together with the gravitational acceleration; the step of determining, according to the solving result, the global state quantity of each key frame in the global coordinate system and the global position of each feature point in the global coordinate system includes:
calculating the initial state quantity of each key frame in the first frame coordinate system according to the velocity vector;
converting the initial state quantity into the global coordinate system to obtain the global state quantity;
and converting the initial position into the global coordinate system to obtain the global position.
In order to achieve the above object, the present invention further provides an initializing device of a visual inertial system, the initializing device of a visual inertial system comprising:
the first acquisition module is used for acquiring visual image data and determining a plurality of key frames and characteristic points in the visual image data;
the second acquisition module is used for acquiring motion data corresponding to each key frame and constructing an inertial measurement model according to the motion data;
the third acquisition module is used for acquiring the position information of the feature points and constructing a multi-view geometric model according to the position information;
the first simultaneous module is used for combining the inertial measurement model and the multi-view geometric model to obtain a linear equation set corresponding to the key frame and the characteristic points;
the first determining module is used for determining a solving result of the linear equation set;
and the second determining module is used for determining the state quantity of each key frame in a global coordinate system and the position of each characteristic point in the global coordinate system according to the solving result.
To achieve the above object, the present invention also provides an electronic device including a memory, a processor, and a computer program stored on the memory and executable on the processor, which when executed by the processor, implements the steps of the initialization method of the visual inertial system as described above.
To achieve the above object, the present invention also provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the initialization method of a visual inertial system as described above.
The invention provides an initialization method, apparatus, electronic device, and medium for a visual inertial system: acquiring visual image data, and determining a plurality of key frames and feature points in the visual image data; acquiring motion data corresponding to each key frame, and constructing an inertial measurement model according to the motion data; acquiring the position information of the feature points, and constructing a multi-view geometric model according to the position information; combining the inertial measurement model and the multi-view geometric model to obtain a linear equation set corresponding to the key frames and the feature points; determining a solving result of the linear equation set; and determining, according to the solving result, the state quantity of each key frame in a global coordinate system and the position of each feature point in the global coordinate system. By combining the inertial measurement model and the multi-view geometric model, the advantages of the visual sensor and the inertial sensor are united, the accuracy of the estimate of the system's running state is ensured, and the accuracy of visual inertial system initialization is improved.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the application and together with the description, serve to explain the principles of the application.
In order to more clearly illustrate the embodiments of the application or the technical solutions of the prior art, the drawings which are used in the description of the embodiments or the prior art will be briefly described, and it will be obvious to a person skilled in the art that other drawings can be obtained from these drawings without inventive effort.
FIG. 1 is a flow chart of a first embodiment of an initialization method for a visual inertial system according to the present application;
FIG. 2 is a schematic overall flow chart of an initialization method of the visual inertial system of the present application;
FIG. 3 is a schematic diagram of key frames and common view feature points in an embodiment of an initialization method of a visual inertial system according to the present application;
fig. 4 is a schematic block diagram of an electronic device according to the present application.
Detailed Description
It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the application. In order that those skilled in the art will better understand the present application, a technical solution in the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings in which it is apparent that the described embodiments are only some embodiments of the present application, not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the present application without making any inventive effort, shall fall within the scope of the present application.
The invention provides an initialization method of a visual inertial system, referring to fig. 1, fig. 1 is a flow chart of a first embodiment of the initialization method of the visual inertial system of the invention, the method comprises the steps of:
step S10, visual image data are acquired, and a plurality of key frames and characteristic points are determined in the visual image data;
the visual image data are image data acquired by a visual sensor; the vision sensor in this embodiment is a binocular camera.
The visual image data comprises a plurality of frames of image frames; the key frame is an image frame used for executing a subsequent initialization operation in the image frames.
The specific tracking method of the feature points can be set based on the actual application scene; for example, in this embodiment, the feature points in the visual image data are tracked with the KLT tracker (Kanade-Lucas-Tomasi tracker), a sparse optical-flow tracking algorithm.
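As one way to make the sparse optical-flow tracking concrete (an illustrative NumPy sketch, not the patent's implementation; the window size and notation are arbitrary choices), a single Lucas-Kanade step solves a small 2x2 linear system per feature window:

```python
import numpy as np

def lucas_kanade_step(I0, I1, x, y, win=7):
    """Estimate the displacement (dx, dy) of the window centred at (x, y)
    between images I0 and I1 by solving the 2x2 normal equations
    G d = b of the Lucas-Kanade method."""
    h = win // 2
    Iy, Ix = np.gradient(I0)                        # image gradients
    sl = (slice(y - h, y + h + 1), slice(x - h, x + h + 1))
    A = np.stack([Ix[sl].ravel(), Iy[sl].ravel()], axis=1)
    it = (I1[sl] - I0[sl]).ravel()                  # temporal difference
    G = A.T @ A                                     # structure tensor
    b = -A.T @ it
    return np.linalg.solve(G, b)                    # displacement (dx, dy)
```

In practice a pyramidal implementation (e.g. OpenCV's `calcOpticalFlowPyrLK` seeded with Harris corners) iterates this step over several image scales.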
Step S20, motion data corresponding to each key frame are obtained, and an inertial measurement model is constructed according to the motion data;
the motion data are used for reflecting the motion condition between key frames, and can be obtained through IMU detection; the inertial measurement model is used for reflecting the association relation between the motion states of different key frames, so that the inertial measurement model can be accurately constructed through the motion data corresponding to the key frames.
Step S30, obtaining the position information of the feature points, and constructing a multi-view geometric model according to the position information;
the position information is used for reflecting the position of the feature point in a coordinate system. It is understood that the system includes a plurality of coordinate systems, such as a global coordinate system {G}, a camera coordinate system {C}, and an inertial measurement unit coordinate system {I}. The global coordinate system in this embodiment is an inertial system whose Z axis is perpendicular to the earth's horizontal plane. Because the relative positions of the binocular camera and the inertial measurement unit are fixed, the transformation between the camera coordinate system and the inertial measurement unit coordinate system is a rigid-body transformation, and the transformation matrix ${}^{I}T_C$ from the camera coordinate system to the inertial measurement unit coordinate system is:

$${}^{I}T_C = \begin{bmatrix} {}^{I}R_C & {}^{I}P_C \\ \mathbf{0} & 1 \end{bmatrix}$$

where ${}^{I}R_C$ is the rotation component of the transformation and ${}^{I}P_C$ is the translation component.
It should be noted that, because the positions of the corresponding inertial measurement units are not necessarily the same for different key frames, the inertial measurement unit coordinate systems corresponding to different key frames differ. In this embodiment, the inertial measurement unit coordinate system corresponding to the first key frame is the first-frame coordinate system $\{I_0\}$, the one corresponding to the first key frame after it is $\{I_1\}$, and the one corresponding to the k-th key frame after the first is $\{I_k\}$.
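The camera-to-IMU extrinsic described above is an ordinary homogeneous rigid-body transform; a minimal sketch of assembling one from a rotation matrix and a translation vector and applying it to a point (the function names are illustrative, not from the patent):

```python
import numpy as np

def make_T(R, P):
    """Assemble the 4x4 homogeneous transform [R P; 0 1]."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = P
    return T

def transform(T, p):
    """Apply a homogeneous transform to a 3D point p."""
    return T[:3, :3] @ p + T[:3, 3]
```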
Step S40, the inertia measurement model and the multi-view geometric model are combined to obtain a linear equation set corresponding to the key frame and the characteristic points;
after the inertia measurement model and the multi-view geometric model are combined, the relation between key frames and the positions of characteristic points can be described by combining visual data and inertia data; by bringing the related parameters of the key frames and the feature points into the model after the combination, a linear equation set based on the key frames and the feature points can be obtained, and the linear equation set reflects the relation between the key frames and the positions of the feature points.
Step S50, determining a solving result of the linear equation set;
the calculation mode of the solving result of the specific linear equation set can be selected based on the actual application scene.
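As a hedged illustration of solving such a stacked linear system (a crude two-step stand-in for the gravity-constrained Lagrange-multiplier solution the application describes later, assuming purely for the sketch that the last three unknowns are the gravity vector):

```python
import numpy as np

def solve_with_gravity_norm(A, b, g_norm=9.80665):
    """Least-squares solution of the stacked linear system A x = b,
    followed by projecting the gravity block (assumed here to be the
    last three unknowns) onto the sphere of known gravity magnitude."""
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    g = x[-3:]
    x[-3:] = g * (g_norm / np.linalg.norm(g))   # enforce |g| = g_norm
    return x
```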
And step S60, determining the state quantity of each key frame in a global coordinate system and the position of each characteristic point in the global coordinate system according to the solving result.
After the solving result is determined, the state quantity of each key frame and the position of the characteristic point can be determined; it can be appreciated that the initialization of the visual inertial system requires the determination of relevant data in the global coordinate system; therefore, the state quantity of the key frame and the positions of the feature points need to be converted into a global coordinate system.
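Converting first-frame quantities into the global coordinate system requires the rotation from the first-frame coordinate system to the global one. A common construction (a sketch under the assumptions that gravity has been estimated in the first frame and that the global gravity direction is (0, 0, -1); the sign convention is an assumption) rotates the measured gravity direction onto the global one via the Rodrigues formula:

```python
import numpy as np

def rotation_from_gravity(g_i0):
    """Rotation that maps the gravity direction estimated in the
    first-frame coordinate system onto the global gravity direction
    (0, 0, -1), so the global z-axis is perpendicular to the
    horizontal plane as the text requires."""
    g_i0 = np.asarray(g_i0, dtype=float)
    a = g_i0 / np.linalg.norm(g_i0)          # measured gravity direction
    b = np.array([0.0, 0.0, -1.0])           # global gravity direction
    v = np.cross(a, b)
    c = a @ b
    if np.isclose(c, -1.0):                  # opposite vectors: 180 deg turn
        return np.diag([1.0, -1.0, -1.0])
    vx = np.array([[0.0, -v[2], v[1]],
                   [v[2], 0.0, -v[0]],
                   [-v[1], v[0], 0.0]])
    # Rodrigues formula for the rotation taking a onto b
    return np.eye(3) + vx + vx @ vx / (1.0 + c)
```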
According to the embodiment, through the simultaneous inertial measurement model and the multi-view geometric model, the advantages of the visual sensor and the inertial sensor can be combined, the accuracy of estimation of the running condition of the system is guaranteed, and therefore the accuracy of initialization of the visual inertial system is improved.
Further, referring to fig. 2, in a second embodiment of the initialization method of the visual inertial system according to the present invention, the step S10 includes the steps of:
step S11, a plurality of image frames in the visual image data are acquired, and feature points in the image frames are tracked;
step S12, setting a first frame image frame in the image frames as a key frame, and taking the number of the feature points tracked for the first time as a first number;
step S13, regarding each non-first frame image frame except the first frame image frame in sequence, taking the number of common view feature points contained between the first frame image frame and the non-first frame image frame as a second number;
step S14, judging whether the second number is smaller than a preset multiple of the first number, and whether the number of image frames between the non-first frame image frame and the previous key frame is larger than a preset number of frames;
Step S15, if the second number is smaller than the preset multiple of the first number, and the number of frames of the image frames between the non-first frame image frame and the previous key frame is greater than the preset number of frames, the non-first frame image frame is a key frame.
In this embodiment, the feature points are detected by performing Harris corner detection on the visual image data and are tracked with the KLT tracker algorithm.
The first frame of image frame is the image frame with earliest acquisition time in the visual image data; the first frame image frame reflects the initial condition of the system, and the subsequent image frames are obtained based on the change of the first frame image frame, so the first frame image frame is taken as a key frame.
The common-view feature points are feature points with a common-view relation among multiple image frames; specifically, in the present embodiment, when a feature point has been tracked more than twice across the image frames, it is considered a common-view feature point.
The second number is the number of common-view feature points detected up to the current frame. When the number of common-view feature points exceeds a certain amount, they are considered too many and the data volume too large; the number is therefore limited to a certain range, namely to a preset multiple of the first number.
It can be understood that the degree of information difference between consecutive image frames is small, so in this embodiment, when a key frame is selected, in order to avoid excessive repeated information, a preset frame number is set, that is, between two selected adjacent key frames, at least a preset frame number is spaced from the image frames, so as to ensure the differentiation of information between the key frames. Specifically, when a non-first frame image frame satisfies the following condition, the image frame is regarded as a key frame:
$$N_2 < a_1\,N_1,\qquad f_k > a_2$$

where $N_1$ is the first number, $N_2$ is the second number, $f_k$ is the number of image frames between the non-first image frame and the previous key frame, and $a_2$ is the preset number of frames; the coefficient $a_1$ of $N_1$ is the preset multiple. The specific values of $a_1$ and $a_2$ can be set based on practical application requirements; as given in this embodiment, $a_1$ is set to 0.7 and $a_2$ is set to 5.
The embodiment can accurately select the key frames.
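The selection rule of this embodiment can be sketched as follows, with each image frame modelled as the set of feature-point ids tracked in it (a simplification; whether the first number should be re-anchored at each new key frame is not specified in the text, so it is kept fixed here):

```python
def select_keyframes(frames, a1=0.7, a2=5):
    """Keyframe selection following the rule in the text: frame k becomes
    a key frame when the number of co-visible points N2 drops below
    a1 * N1 (N1 = points tracked in the first frame) AND more than a2
    image frames lie between it and the previous key frame.
    `frames` is a list of sets of feature-point ids."""
    keyframes = [0]                      # the first frame is always a key frame
    n1 = len(frames[0])                  # first number
    for k in range(1, len(frames)):
        n2 = len(frames[0] & frames[k])  # second number: common-view points
        gap = k - keyframes[-1] - 1      # image frames since previous key frame
        if n2 < a1 * n1 and gap > a2:
            keyframes.append(k)
    return keyframes
```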
Further, in a third embodiment of the initialization method for a visual inertial system according to the present invention set forth in the first embodiment of the present invention, the step S20 includes the steps of:
step S21, acquiring corresponding acceleration data, angular velocity data and time intervals aiming at two continuous key frames, wherein the acceleration data, the angular velocity data and the time intervals are the motion data;
Step S22, pre-integrating the inertial state quantity of the previous key frame through the acceleration data, the angular velocity data and the time interval to obtain the inertial state quantity of the following key frame;
step S23, determining the inertial state quantity of each key frame according to the continuous integral of all the key frames;
and step S24, converting the inertial state quantity of each key frame into a first frame coordinate system to obtain the inertial measurement model.
Acceleration data may be obtained by accelerometers in the IMU; angular velocity data may be obtained by a gyroscope in the IMU.
The state quantities in the present embodiment include a rotation quantity R, a translation quantity P, and a velocity quantity V. In the global coordinate system, the inertial state quantities of the preceding key frame comprise the preceding rotation ${}^{G}R_{I_k}$, the preceding translation ${}^{G}P_{I_k}$, and the preceding velocity ${}^{G}V_{I_k}$. According to the pre-integration principle, the inertial state quantities of the following key frame comprise the following rotation ${}^{G}R_{I_{k+1}}$, the following translation ${}^{G}P_{I_{k+1}}$, and the following velocity ${}^{G}V_{I_{k+1}}$:

$${}^{G}R_{I_{k+1}} = {}^{G}R_{I_k}\,\Delta R_{k,k+1}$$
$${}^{G}V_{I_{k+1}} = {}^{G}V_{I_k} + {}^{G}g\,\Delta t_{k,k+1} + {}^{G}R_{I_k}\,\Delta V_{k,k+1}$$
$${}^{G}P_{I_{k+1}} = {}^{G}P_{I_k} + {}^{G}V_{I_k}\,\Delta t_{k,k+1} + \tfrac{1}{2}\,{}^{G}g\,\Delta t_{k,k+1}^{2} + {}^{G}R_{I_k}\,\Delta P_{k,k+1}$$

where the pre-integrated increments over the IMU samples i between the two key frames are:

$$\Delta R_{k,k+1} = \prod_{i}\operatorname{Exp}\big((\omega_i - b_g - n_g)\,\delta t_i\big),\quad \Delta V_{k,k+1} = \sum_{i}\Delta R_{k,i}\,(a_i - b_a - n_a)\,\delta t_i,\quad \Delta P_{k,k+1} = \sum_{i}\Big[\Delta V_{k,i}\,\delta t_i + \tfrac{1}{2}\,\Delta R_{k,i}\,(a_i - b_a - n_a)\,\delta t_i^{2}\Big]$$

where $\operatorname{Exp}(\cdot)$ corresponds to the mapping of a rotation vector to the Lie group; $b_g$ is the bias of the gyroscope; $n_g$ is the noise term of the gyroscope; $b_a$ is the bias of the accelerometer; $n_a$ is the noise term of the accelerometer. The bias and noise terms are small and can be neglected in practical application, so they are set to zero here. $\Delta t_{k,k+1}$ is the time interval.
It can be appreciated that the inertial state quantities of the following key frame are expressed in the global coordinate system, while the subsequent calculation in this embodiment must be performed in the first-frame coordinate system, so the inertial state quantities need to be converted into the first-frame coordinate system. Specifically, for the following rotation:

$${}^{I_0}R_{I_{k+1}} = \big({}^{G}R_{I_0}\big)^{\top}\,{}^{G}R_{I_{k+1}}$$

where ${}^{G}R_{I_{k+1}}$ is the following rotation in the global coordinate system, ${}^{I_0}R_{I_{k+1}}$ is the following rotation in the first-frame coordinate system, and ${}^{G}R_{I_0}$ is the rotation matrix from the first-frame coordinate system to the global coordinate system.

Similarly, the following translation is:

$${}^{I_0}P_{I_{k+1}} = {}^{I_0}P_{I_k} + {}^{I_0}V_{I_k}\,\Delta t_{k,k+1} + \tfrac{1}{2}\,{}^{I_0}g\,\Delta t_{k,k+1}^{2} + {}^{I_0}R_{I_k}\,\Delta P_{k,k+1}$$

where ${}^{G}g$ is the gravitational acceleration in the global coordinate system and ${}^{I_0}g$ is the gravitational acceleration in the first-frame coordinate system, with:

$${}^{I_0}g = \big({}^{G}R_{I_0}\big)^{\top}\,{}^{G}g$$

Similarly, the following velocity is:

$${}^{I_0}V_{I_{k+1}} = {}^{I_0}V_{I_k} + {}^{I_0}g\,\Delta t_{k,k+1} + {}^{I_0}R_{I_k}\,\Delta V_{k,k+1}$$

After pre-integrating each pair of consecutive key frames, all key frames can be integrated continuously, i.e., the integration proceeds from the first image frame. Specifically, the state quantities of the (k+1)-th key frame in the first-frame coordinate system are:

$${}^{I_0}R_{I_{k+1}} = \prod_{j=0}^{k}\Delta R_{j,j+1}$$
$${}^{I_0}V_{I_{k+1}} = {}^{I_0}v_{I_0} + {}^{I_0}g\,\Delta t_{0,k+1} + \sum_{j=0}^{k}{}^{I_0}R_{I_j}\,\Delta V_{j,j+1}$$
$${}^{I_0}P_{I_{k+1}} = {}^{I_0}v_{I_0}\,\Delta t_{0,k+1} + \tfrac{1}{2}\,{}^{I_0}g\,\Delta t_{0,k+1}^{2} + \sum_{j=0}^{k}\Big[{}^{I_0}R_{I_j}\,\Delta P_{j,j+1} + {}^{I_0}R_{I_j}\,\Delta V_{j,j+1}\,\Delta t_{j+1,k+1}\Big]$$

where $\Delta t_{0,k+1}$ is the time interval between the first image frame and the (k+1)-th key frame, and ${}^{I_0}v_{I_0}$ is the velocity of the first image frame in the first-frame coordinate system; the inertial measurement model is thereby obtained.
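A minimal sketch of the per-step integration underlying the chained model, with biases and noise set to zero as the text allows (simple Euler steps and the gravity constant are assumptions of the sketch, not the patent's exact discretization):

```python
import numpy as np

def exp_so3(phi):
    """Exponential map from a rotation vector to a rotation matrix (Rodrigues)."""
    theta = np.linalg.norm(phi)
    if theta < 1e-10:
        return np.eye(3)
    a = phi / theta
    ax = np.array([[0.0, -a[2], a[1]],
                   [a[2], 0.0, -a[0]],
                   [-a[1], a[0], 0.0]])
    return np.eye(3) + np.sin(theta) * ax + (1.0 - np.cos(theta)) * (ax @ ax)

def propagate(R, P, V, acc, gyr, dt, g=np.array([0.0, 0.0, -9.81])):
    """One integration step of the IMU kinematics between key frames:
        R' = R Exp(w dt),  V' = V + (g + R a) dt,
        P' = P + V dt + 0.5 (g + R a) dt^2
    with zero biases and noise."""
    a_w = R @ acc + g                     # acceleration in the reference frame
    R_new = R @ exp_so3(gyr * dt)
    V_new = V + a_w * dt
    P_new = P + V * dt + 0.5 * a_w * dt * dt
    return R_new, P_new, V_new
```

For a stationary sensor the measured specific force cancels gravity, so velocity and position stay at zero, which is a quick sanity check on the sign conventions.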
Further, in a fourth embodiment of the initialization method for a visual inertial system according to the present invention set forth in the first embodiment of the present invention, the location information includes a key location of the feature point in a key frame coordinate system of a key frame, and an initial location of the feature point in a first frame coordinate system; the step S30 includes the steps of:
S31, constructing a position conversion model through the key position and the initial position;
and S32, converting the position conversion model into a first frame coordinate system to obtain the multi-view geometric model.
It should be noted that the position information in this embodiment includes the positions of the feature points in different key frames, so the feature points used to construct the multi-view geometric model are common-view (co-visible) feature points.
The key position is the observed value of the feature point in the key frame, namely the normalized pixel coordinate of the feature point obtained by correcting the raw pixel coordinate with the camera intrinsic parameters, which can be obtained in advance through camera calibration; the observation also carries a noise term $n_{n,k}$, which is negligible.
According to the principle of multi-view geometry, the key position $^{C_k}\bar{p}_{f_n}$ satisfies:

$$\lambda_{n,k}\,{}^{C_k}\bar{p}_{f_n} = {}^{C_k}R_{I_0}\,{}^{I_0}P_{f_n} + {}^{C_k}P_{I_0}$$

where $^{I_0}P_{f_n}$ is the initial position of the feature point in the first frame coordinate system, $\lambda_{n,k}$ is the depth of the feature point in the camera frame of key frame k, and $^{C_k}R_{I_0}$ and $^{C_k}P_{I_0}$ transform points from the first frame coordinate system into that camera frame.
Converting the above relation by substituting the camera-to-IMU extrinsics and the key frame pose gives:

$$\lambda_{n,k}\,{}^{C_k}\bar{p}_{f_n} = \big({}^{I}R_C\big)^{T}\Big[\big({}^{I_0}R_{I_k}\big)^{T}\big({}^{I_0}P_{f_n} - {}^{I_0}P_{I_k}\big) - {}^{I}P_C\Big]$$

where $^{I}R_C$ and $^{I}P_C$ are the rotation matrix and translation vector from the camera coordinate system to the inertial measurement unit coordinate system, and $^{I_0}R_{I_k}$ and $^{I_0}P_{I_k}$ are the rotation matrix and translation vector of key frame k in the first frame coordinate system. The multi-view geometric model is thereby obtained.
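Predicting a feature point's normalized coordinates in a key frame under this pinhole model can be sketched as follows. The function and argument names are illustrative assumptions, not from the patent.

```python
import numpy as np

def observe_point(P_f_I0, R_I0_Ik, P_I0_Ik, R_I_C, P_I_C):
    """Predict the normalized pixel coordinates of a feature point in
    keyframe k, given its initial position in the first-frame (I0) system.

    P_f_I0:           feature position in the I0 frame.
    R_I0_Ik, P_I0_Ik: rotation/translation of keyframe k's IMU frame in I0.
    R_I_C, P_I_C:     camera-to-IMU extrinsic rotation/translation.
    """
    P_Ik = R_I0_Ik.T @ (P_f_I0 - P_I0_Ik)   # point in keyframe k's IMU frame
    P_Ck = R_I_C.T @ (P_Ik - P_I_C)          # point in keyframe k's camera frame
    return P_Ck[:2] / P_Ck[2]                # perspective division -> normalized coords
```

Note that the depth appears only through the final division, which is why it can be eliminated when the model is rearranged into linear equations.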
Further, in a fifth embodiment of the initialization method for a visual inertial system according to the present invention set forth in the first embodiment of the present invention, the step S40 includes the steps of:
S41, combining the inertial measurement model and the multi-view geometric model to obtain a basic linear equation;
step S42, for each of the feature points, determining the basic linear equation of each observation key frame corresponding to the feature point, wherein an observation key frame is a key frame that tracks the feature point;
step S43, the corresponding basic linear equations of the observation key frames are combined to obtain a characteristic point equation set;
and S44, combining the characteristic point equation sets corresponding to the characteristic points to obtain the linear equation set.
Combining the inertial measurement model and the multi-view geometric model yields the basic linear equation, which is linear in the initial position of the feature point, the velocity of the first image frame, and the gravitational acceleration, all expressed in the first frame coordinate system. Its coefficients involve $\Delta R_{0,k}$ and $\Delta P_{0,k}$, which can be obtained from the inertial measurement model and are not described in detail here.
Referring to fig. 3, fig. 3 is a schematic diagram of key frames and common-view feature points in an embodiment of the initialization method of a visual inertial system according to the present invention. When a feature point is observed by a plurality of key frames, each key frame constructs a corresponding basic linear equation based on that feature point, and the basic linear equations corresponding to a single feature point can be combined to obtain the feature point equation set for that feature point. For example, for a feature point observed by key frames $I_1$, $I_2$, and $I_3$, the feature point equation set is obtained by stacking the three corresponding basic linear equations.
All the feature point equation sets are then combined to obtain the linear equation set.
It should be noted that in a specific application the number of key frames is not fixed, so in order to control the computational load, this embodiment sets a sliding window containing a number of consecutive key frames. The key frames are then ordered by their positions in the sliding window, i.e. the first key frame in the sliding window is treated as the first key frame, and only the data of the key frames inside the sliding window are used when generating the linear equation set.
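Assembling the linear equation set from per-observation blocks inside the sliding window can be sketched as follows. The data layout (one coefficient block and right-hand-side block per feature/key frame observation) is an illustrative assumption.

```python
import numpy as np

def stack_window_equations(observations, window_ids):
    """Stack per-observation basic linear equations into one system A X = b,
    keeping only observations whose keyframe lies in the sliding window.

    observations: list of (keyframe_id, A_block, b_block); every A_block has
                  the same column count (the shared unknown vector X).
    window_ids:   set of keyframe ids currently inside the sliding window.
    """
    kept = [(A_blk, b_blk) for kf, A_blk, b_blk in observations if kf in window_ids]
    A = np.vstack([A_blk for A_blk, _ in kept])
    b = np.concatenate([b_blk for _, b_blk in kept])
    return A, b
```

Grouping rows per feature point first (the feature point equation sets) and then stacking all groups produces the same final matrix, so the sketch stacks everything in one pass.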
Further, in a sixth embodiment of the initialization method for a visual inertial system according to the present invention set forth in the first embodiment of the present invention, the step S50 includes the steps of:
step S51, taking the gravity acceleration as constraint, and converting the linear equation set into a linear least square optimization form;
step S52, converting the linear least square optimization form into a least square optimization form through a Lagrangian multiplier method;
and step S53, calculating the solving result corresponding to the smallest Lagrange multiplier in the least squares optimization form.
First, the linear equation set is simplified to obtain:

$$A\,{}^{I_0}X = b$$

where $^{I_0}X$ is the unknown vector of the linear equation set, comprising the initial positions of the feature points in the first frame coordinate system $^{I_0}P_{f_1}, \cdots, {}^{I_0}P_{f_n}, \cdots, {}^{I_0}P_{f_N}$, the velocity $^{I_0}v_{I_0}$ of the first image frame in the first frame coordinate system, and the gravitational acceleration $^{I_0}g$, i.e. $^{I_0}X = \big({}^{I_0}P_{f_1} \cdots {}^{I_0}P_{f_N}\ \ {}^{I_0}v_{I_0}\ \ {}^{I_0}g\big)^{T}$; $A$ and $b$ are the coefficient matrix on the left of $^{I_0}X$ and the vector on the right, respectively.
It can be understood that, because the dimension of $A$ is large and $A$ is not a positive-definite matrix, solving directly takes too long and the solution is unstable. However, the norm of the gravitational acceleration is known, i.e. $\|{}^{I_0}g\|_2 = G$, where $G$ is typically 9.81 m/s² near sea level. By taking the gravitational acceleration as a constraint, the solution of the linear equation set can be converted into a constrained linear least squares optimization problem, specifically:

$$\min_{{}^{I_0}X}\ \big\|A\,{}^{I_0}X - b\big\|^{2}\qquad \text{s.t.}\quad \big\|{}^{I_0}g\big\|^{2} = G^{2}$$
The above problem is then converted into a least squares optimization problem by the Lagrange multiplier method, specifically:

$$L\big({}^{I_0}X, \lambda\big) = \big\|A\,{}^{I_0}X - b\big\|^{2} + \lambda\big(\big\|{}^{I_0}g\big\|^{2} - G^{2}\big)$$

where $\lambda$ is the Lagrange multiplier. Through the eigen-decomposition of a polynomial companion matrix, the smallest $\lambda$ that exactly satisfies the constraint can be obtained, and the corresponding solving result is:

$$^{I_0}X = \big(A^{T}A + \lambda\,S^{T}S\big)^{-1}A^{T}b$$

where $S$ is the selection matrix that extracts the gravitational-acceleration components from $^{I_0}X$.
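The constrained solve can be sketched numerically. The patent obtains the multiplier via eigen-decomposition of a polynomial companion matrix; the sketch below instead finds the multiplier by bisection on the same Lagrangian stationarity condition, under the assumption that the unconstrained solution over-estimates the gravity norm (so the multiplier is non-negative). All names are illustrative.

```python
import numpy as np

def solve_gravity_constrained(A, b, g_idx, G=9.81, iters=200):
    """Solve min ||A x - b||^2 subject to ||x[g_idx]||_2 = G.

    Stationarity of the Lagrangian gives
        (A^T A + lam * S^T S) x = A^T b,
    where S selects the gravity components of x. The scalar multiplier
    lam is found here by bisection rather than by the companion-matrix
    eigen-decomposition used in the patent.
    """
    M, c = A.T @ A, A.T @ b
    StS = np.zeros((A.shape[1], A.shape[1]))
    StS[g_idx, g_idx] = 1.0                     # S^T S: ones on gravity entries

    def x_of(lam):
        return np.linalg.solve(M + lam * StS, c)

    lo, hi = 0.0, 1.0
    while np.linalg.norm(x_of(hi)[g_idx]) > G:  # expand until constraint under-shoots
        hi *= 2.0
    for _ in range(iters):                      # bisect: gravity norm decreases in lam
        mid = 0.5 * (lo + hi)
        if np.linalg.norm(x_of(mid)[g_idx]) > G:
            lo = mid
        else:
            hi = mid
    return x_of(0.5 * (lo + hi))
```

The bisection exploits that the gravity-block norm is monotonically decreasing in the multiplier for non-negative values, so a simple bracket-and-halve search converges reliably.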
the embodiment realizes quick solution of unknown quantity in the linear equation set by converting the linear equation set into the least square problem with quadratic constraint.
Further, in a seventh embodiment of the method for initializing a visual inertial system according to the present invention, which is set forth based on the first embodiment of the present invention, the solution result includes an initial position of each of the feature points in a first frame coordinate system, and a velocity vector formed by a velocity of a first frame image frame in the first frame coordinate system and a gravitational acceleration; the step S60 includes the steps of:
step S61, calculating initial state quantity of each key frame in the first frame coordinate system according to the speed vector;
step S62, converting the initial state quantity into the global coordinate system to obtain the global state quantity;
and step S63, converting the initial position into the global coordinate system to obtain the global position.
As can be seen from the foregoing description, the solving result is expressed in the first frame coordinate system and needs to be converted into the global coordinate system. Specifically, from the foregoing description:

$$^{I_0}g = \big({}^{G}R_{I_0}\big)^{-1}\,{}^{G}g$$

where $^{G}g$ is the gravitational acceleration in the global coordinate system, which is a known quantity, $^{G}g = (0\ \ 0\ \ G)^{T}$. The rotation quantity $^{G}R_{I_0}$ from the first frame coordinate system to the global coordinate system is therefore:

$$^{G}R_{I_0} = \mathrm{Rot}\big({}^{I_0}g,\ {}^{G}g\big)$$

wherein:

$$\mathrm{Rot}(a, b) = I + \lfloor c\rfloor_{\times} + \frac{\lfloor c\rfloor_{\times}^{2}}{1 + a^{T}b},\qquad c = a \times b\quad (a,\ b\ \text{unit vectors})$$

i.e. the Rot(·) function returns the rotation matrix between two vectors, and $\lfloor c\rfloor_{\times}$ is the antisymmetric (skew-symmetric) matrix of the vector $c$.
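The Rot(·) construction (Rodrigues form built from the skew-symmetric matrix ⌊c⌋×) can be sketched as follows; the antiparallel special case, where the axis from the cross product degenerates, is handled explicitly. Names are illustrative.

```python
import numpy as np

def rot_between(a, b):
    """Return a rotation matrix R such that R @ a_hat = b_hat,
    using c = a_hat x b_hat and its skew-symmetric matrix [c]x."""
    a = a / np.linalg.norm(a)
    b = b / np.linalg.norm(b)
    c = np.cross(a, b)
    K = np.array([[0.0, -c[2], c[1]],
                  [c[2], 0.0, -c[0]],
                  [-c[1], c[0], 0.0]])   # skew-symmetric matrix [c]x
    cos_t = float(a @ b)
    if np.isclose(cos_t, -1.0):          # antiparallel: axis is degenerate
        axis = np.eye(3)[np.argmin(np.abs(a))]
        axis = axis - (axis @ a) * a     # any unit vector orthogonal to a
        axis = axis / np.linalg.norm(axis)
        return 2.0 * np.outer(axis, axis) - np.eye(3)  # 180-degree rotation
    return np.eye(3) + K + (K @ K) / (1.0 + cos_t)
```

Applied to the solved gravity, `rot_between(g_I0, g_G)` gives the rotation from the first frame coordinate system to the global coordinate system.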
The initial state quantity of key frame k in the first frame coordinate system comprises the rotation quantity $^{I_0}R_{I_k}$, the translation quantity $^{I_0}P_{I_k}$, and the velocity quantity $^{I_0}V_{I_k}$, specifically:

$$^{I_0}R_{I_k} = \Delta R_{0,k},\qquad {}^{I_0}P_{I_k} = {}^{I_0}v_{I_0}\,\Delta t_{0,k} + \tfrac{1}{2}\,{}^{I_0}g\,\Delta t_{0,k}^{2} + \Delta P_{0,k},\qquad {}^{I_0}V_{I_k} = {}^{I_0}v_{I_0} + {}^{I_0}g\,\Delta t_{0,k} + \Delta V_{0,k}$$

Through the rotation quantity $^{G}R_{I_0}$ from the first frame coordinate system to the global coordinate system, the global state quantity of key frame k in the global coordinate system is obtained, comprising the rotation quantity $^{G}R_{I_k}$, the translation quantity $^{G}P_{I_k}$, and the velocity quantity $^{G}V_{I_k}$, specifically:

$$^{G}R_{I_k} = {}^{G}R_{I_0}\,{}^{I_0}R_{I_k},\qquad {}^{G}P_{I_k} = {}^{G}R_{I_0}\,{}^{I_0}P_{I_k},\qquad {}^{G}V_{I_k} = {}^{G}R_{I_0}\,{}^{I_0}V_{I_k}$$

The global state quantities of all key frames in the global coordinate system can be obtained through the above formulas.
Similarly, for the initial position $^{I_0}P_{f_n}$ of a feature point in the first frame coordinate system, the global position $^{G}P_{f_n}$ of the feature point in the global coordinate system can be obtained through $^{G}R_{I_0}$:

$$^{G}P_{f_n} = {}^{G}R_{I_0}\,{}^{I_0}P_{f_n}$$

The global positions of all feature points in the global coordinate system can thereby be obtained.
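The final conversion of key frame states and feature positions into the global frame can be sketched as below, assuming the first frame sits at the global origin so that only the rotation from the first frame coordinate system to the global coordinate system is applied. Names are illustrative.

```python
import numpy as np

def states_to_global(R_G_I0, states_I0, feat_pos_I0):
    """Rotate keyframe states and feature positions from the first-frame
    (I0) system into the global system via R_G_I0.

    states_I0:   list of (R, P, V) triples in the I0 frame.
    feat_pos_I0: (N, 3) array of feature initial positions in the I0 frame.
    """
    states_G = [(R_G_I0 @ R, R_G_I0 @ P, R_G_I0 @ V) for R, P, V in states_I0]
    feat_pos_G = feat_pos_I0 @ R_G_I0.T   # rotate every row by R_G_I0
    return states_G, feat_pos_G
```

After this step the visual inertial system is initialized: every key frame has a globally expressed rotation, position, and velocity, and every feature point a global position.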
The embodiment accurately and rapidly realizes the initialization of the visual inertial system.
It should be noted that, for simplicity of description, the foregoing method embodiments are all described as a series of acts, but it should be understood by those skilled in the art that the present application is not limited by the order of acts described, as some steps may be performed in other orders or concurrently in accordance with the present application. Further, those skilled in the art will also appreciate that the embodiments described in the specification are all preferred embodiments, and that the acts and modules referred to are not necessarily required for the present application.
From the description of the above embodiments, it will be clear to a person skilled in the art that the method according to the above embodiments may be implemented by means of software plus the necessary general-purpose hardware platform, or entirely by hardware, though in many cases the former is the preferred embodiment. Based on such understanding, the technical solution of the present application may be embodied, in essence or in the part contributing to the prior art, in the form of a software product stored in a storage medium (e.g. ROM/RAM, magnetic disk, optical disk) comprising instructions for causing a terminal device (which may be a mobile phone, a computer, a server, or a network device, etc.) to perform the method according to the embodiments of the present application.
The application also provides an initializing device of the visual inertial system for implementing the initializing method of the visual inertial system, and the initializing device of the visual inertial system comprises the following components:
the first acquisition module is used for acquiring visual image data and determining a plurality of key frames and characteristic points in the visual image data;
the second acquisition module is used for acquiring motion data corresponding to each key frame and constructing an inertial measurement model according to the motion data;
The third acquisition module is used for acquiring the position information of the feature points and constructing a multi-view geometric model according to the position information;
the first simultaneous module is used for combining the inertial measurement model and the multi-view geometric model to obtain a linear equation set corresponding to the key frame and the characteristic points;
the first determining module is used for determining a solving result of the linear equation set;
and the second determining module is used for determining the state quantity of each key frame in a global coordinate system and the position of each characteristic point in the global coordinate system according to the solving result.
It should be noted that, the first acquiring module in this embodiment may be used to perform step S10 in the embodiment of the present application, the second acquiring module in this embodiment may be used to perform step S20 in the embodiment of the present application, the third acquiring module in this embodiment may be used to perform step S30 in the embodiment of the present application, the first simultaneous module in this embodiment may be used to perform step S40 in the embodiment of the present application, the first determining module in this embodiment may be used to perform step S50 in the embodiment of the present application, and the second determining module in this embodiment may be used to perform step S60 in the embodiment of the present application.
Further, the first acquisition module includes:
a first acquisition unit configured to acquire a plurality of image frames in the visual image data, and track feature points in each of the image frames;
a first setting unit, configured to set a first frame image frame of the image frames as a key frame, and set the number of feature points tracked for the first time as a first number;
a first execution unit, configured to sequentially take, as a second number, a number of common-view feature points included between the first frame image frame and the non-first frame image frame for each non-first frame image frame other than the first frame image frame;
a first judging unit, configured to judge whether the second number is smaller than a preset multiple of the first number, and whether the number of frames of the image frames between the non-first frame image frame and the previous key frame is larger than a preset number of frames;
and the second execution unit is used for taking the non-first frame image frame as a key frame if the second number is smaller than the preset multiple of the first number and the number of image frames between the non-first frame image frame and the previous key frame is larger than the preset frame number.
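The key frame decision carried out by the judging and execution units above can be sketched as a single predicate. The concrete threshold values here are illustrative assumptions, not values from the patent.

```python
def is_keyframe(second_number, first_number, frames_since_last_kf,
                preset_multiple=0.5, preset_frames=5):
    """A non-first image frame becomes a keyframe when the number of
    feature points co-visible with the first frame (second_number) drops
    below preset_multiple * first_number AND enough image frames have
    passed since the previous keyframe."""
    return (second_number < preset_multiple * first_number
            and frames_since_last_kf > preset_frames)
```

Both conditions together ensure key frames are added only when the view has changed enough and enough time has elapsed, which keeps the sliding window diverse without over-sampling.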
Further, the second acquisition module includes:
The second acquisition unit is used for acquiring corresponding acceleration data, angular velocity data and time intervals for two continuous key frames, wherein the acceleration data, the angular velocity data and the time intervals are the motion data;
the third execution unit is used for pre-integrating the inertial state quantity of the previous key frame through the acceleration data, the angular velocity data and the time interval to obtain the inertial state quantity of the following key frame;
a first determining unit, configured to determine an inertial state quantity of each key frame according to successive integration of all the key frames;
and the first conversion unit is used for converting the inertial state quantity of each key frame into a first frame coordinate system to obtain the inertial measurement model.
Further, the position information comprises key positions of the feature points in a key frame coordinate system of a key frame and initial positions of the feature points in a first frame coordinate system; the third acquisition module includes:
the first construction unit is used for constructing a position conversion model through the key position and the initial position;
and the second conversion unit is used for converting the position conversion model into a first frame coordinate system to obtain the multi-view geometric model.
Further, the first simultaneous module includes:
the first simultaneous unit is used for combining the inertial measurement model and the multi-view geometric model to obtain a basic linear equation;
a second determining unit, configured to determine, for each of the feature points, the basic linear equation of the observed keyframe corresponding to the feature point, where the observed keyframe is a keyframe that tracks to the feature point;
the second linkage unit is used for linking the corresponding basic linear equations of each observation key frame to obtain a characteristic point equation set;
and the third simultaneous unit is used for simultaneously combining the characteristic point equation sets corresponding to the characteristic points to obtain the linear equation set.
Further, the first determining module includes:
the third conversion unit is used for converting the linear equation set into a linear least square optimization form by taking the gravity acceleration as a constraint;
a fourth conversion unit for converting the linear least squares optimization form into a least squares optimization form by a lagrange multiplier method;
the first calculation unit is used for calculating the solving result corresponding to the smallest Lagrange multiplier in the least squares optimization form.
Further, the solving result comprises initial positions of the feature points in a first frame coordinate system and a speed vector formed by the speed and the gravity acceleration of the first frame image frame in the first frame coordinate system; the second determining module includes:
a second calculating unit, configured to calculate an initial state quantity of each key frame in the first frame coordinate system according to the velocity vector;
a fifth conversion unit, configured to convert the initial state quantity into the global coordinate system, to obtain the global state quantity;
and the sixth conversion unit is used for converting the initial position into the global coordinate system to obtain the global position.
It should be noted that the above modules and their corresponding steps implement the same examples and application scenarios, but are not limited to what is disclosed in the above embodiments. The above modules may be implemented in software as a part of the apparatus, or in hardware, where the hardware environment includes a network environment.
Referring to fig. 4, the electronic device may include components such as a communication module 10, a memory 20, and a processor 30 in its hardware configuration. In the electronic device, the processor 30 is connected to the memory 20 and the communication module 10, and the memory 20 stores a computer program which, when executed by the processor 30, implements the steps of the method embodiments described above.
The communication module 10 is connectable to an external communication device via a network. The communication module 10 may receive a request sent by an external communication device, and may also send a request, an instruction, and information to the external communication device, where the external communication device may be other electronic devices, a server, or an internet of things device, such as a television, and so on.
The memory 20 is used for storing software programs and various data. The memory 20 may mainly include a storage program area that may store an operating system, an application program required for at least one function (such as acquiring visual image data), and the like, and a storage data area; the storage data area may include a database, may store data or information created according to the use of the system, and the like. In addition, the memory 20 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid-state storage device.
The processor 30, which is a control center of the electronic device, connects various parts of the entire electronic device using various interfaces and lines, performs various functions of the electronic device and processes data by running or executing software programs and/or modules stored in the memory 20, and calling data stored in the memory 20, thereby performing overall monitoring of the electronic device. Processor 30 may include one or more processing units; alternatively, the processor 30 may integrate an application processor that primarily handles operating systems, user interfaces, applications, etc., with a modem processor that primarily handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 30.
Although not shown in fig. 4, the electronic device may further include a circuit control module, where the circuit control module is used to connect to a power source to ensure normal operation of other components. Those skilled in the art will appreciate that the electronic device structure shown in fig. 4 is not limiting of the electronic device and may include more or fewer components than shown, or may combine certain components, or may be arranged in different components.
The present invention also proposes a computer-readable storage medium on which a computer program is stored. The computer readable storage medium may be the Memory 20 in the electronic device of fig. 4, or may be at least one of ROM (Read-Only Memory)/RAM (Random Access Memory ), magnetic disk, or optical disk, and the computer readable storage medium includes several instructions for causing a terminal device (which may be a television, an automobile, a mobile phone, a computer, a server, a terminal, or a network device) having a processor to perform the method according to the embodiments of the present invention.
In the present invention, the terms "first", "second", "third", "fourth", "fifth" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance, and the specific meaning of the above terms in the present invention will be understood by those of ordinary skill in the art depending on the specific circumstances.
In the description of the present specification, a description referring to terms "one embodiment," "some embodiments," "examples," "specific examples," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present invention. In this specification, schematic representations of the above terms are not necessarily directed to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, the different embodiments or examples described in this specification and the features of the different embodiments or examples may be combined and combined by those skilled in the art without contradiction.
Although embodiments of the present invention have been shown and described above, the scope of the present invention is not limited thereto, and it should be understood that the above embodiments are exemplary and should not be construed as limiting the present invention, and that variations, modifications and substitutions of the above embodiments may be made by those skilled in the art within the scope of the present invention, and are intended to be included in the scope of the present invention. Therefore, the protection scope of the invention is subject to the protection scope of the claims.

Claims (10)

1. A method for initializing a visual inertial system, the method comprising:
acquiring visual image data, and determining a plurality of key frames and feature points in the visual image data;
acquiring motion data corresponding to each key frame, and constructing an inertial measurement model according to the motion data;
acquiring the position information of the feature points, and constructing a multi-view geometric model according to the position information;
the inertial measurement model and the multi-view geometric model are combined to obtain a linear equation set corresponding to the key frame and the characteristic points;
determining a solving result of the linear equation set;
and determining the state quantity of each key frame in a global coordinate system and the position of each characteristic point in the global coordinate system according to the solving result.
2. The method of initializing a visual inertial system of claim 1, wherein the steps of acquiring visual image data and determining a plurality of key frames and feature points in the visual image data comprise:
acquiring a plurality of image frames in the visual image data, and tracking characteristic points in each image frame;
Setting a first frame image frame in the image frames as a key frame, and taking the number of the feature points tracked for the first time as a first number;
sequentially taking the number of common view feature points contained between the first frame image frame and the non-first frame image frame as a second number for each non-first frame image frame except the first frame image frame;
judging whether the second number is smaller than a preset multiple of the first number, and judging whether the number of frames of the image frames between the non-first frame image frame and the previous key frame is larger than a preset number of frames;
and if the second number is smaller than the preset multiple of the first number and the number of the image frames between the non-first frame image frame and the previous key frame is larger than the preset number of frames, the non-first frame image frame is the key frame.
3. The method of initializing a visual inertial system of claim 1, wherein the step of obtaining motion data corresponding to each of the key frames and constructing an inertial measurement model from the motion data comprises:
acquiring corresponding acceleration data, angular velocity data and time intervals for two continuous key frames, wherein the acceleration data, the angular velocity data and the time intervals are the motion data;
Pre-integrating the inertial state quantity of the previous key frame through the acceleration data, the angular velocity data and the time interval to obtain the inertial state quantity of the following key frame;
determining the inertial state quantity of each key frame according to the continuous integral of all the key frames;
and converting the inertial state quantity of each key frame into a first frame coordinate system to obtain the inertial measurement model.
4. The method of initializing a visual inertial system of claim 1, wherein the location information includes a key location of the feature point in a key frame coordinate system of a key frame and an initial location of the feature point in a first frame coordinate system; the step of obtaining the position information of the feature points and constructing a multi-view geometric model according to the position information comprises the following steps:
constructing a position conversion model through the key position and the initial position;
and converting the position conversion model into a first frame coordinate system to obtain the multi-view geometric model.
5. The method of initializing a visual inertial system of claim 1, wherein the step of combining the inertial measurement model with the multiview geometric model to obtain a set of linear equations corresponding to the keyframes and the feature points comprises:
Combining the inertial measurement model and the multi-view geometric model to obtain a basic linear equation;
for each of the feature points, determining the basic linear equation of each observation key frame corresponding to the feature point, wherein an observation key frame is a key frame that tracks the feature point;
combining the corresponding basic linear equations of each observation key frame to obtain a characteristic point equation set;
and combining the characteristic point equation sets corresponding to the characteristic points to obtain the linear equation set.
6. The method of initializing a visual inertial system of claim 1, wherein the step of determining the solution to the system of linear equations comprises:
converting the linear equation set into a linear least square optimization form by taking the gravity acceleration as constraint;
converting the linear least squares optimization form into a least squares optimization form by a Lagrangian multiplier method;
and calculating the solving result corresponding to the smallest Lagrange multiplier in the least squares optimization form.
7. The method for initializing a visual inertial system according to claim 1, wherein the solving result includes an initial position of each of the feature points in a first frame coordinate system, and a velocity vector formed by a velocity of a first frame image frame in the first frame coordinate system and a gravitational acceleration; the step of determining the global state quantity of each key frame in the global coordinate system according to the solving result and the global position of each feature point in the global coordinate system comprises the following steps:
calculating the initial state quantity of each of the key frames in the first frame coordinate system according to the velocity vector;
converting the initial state quantity into the global coordinate system to obtain the global state quantity;
and converting the initial position into the global coordinate system to obtain the global position.
8. An initialization apparatus for a visual inertial system, the initialization apparatus comprising:
the first acquisition module is used for acquiring visual image data and determining a plurality of key frames and characteristic points in the visual image data;
the second acquisition module is used for acquiring motion data corresponding to each key frame and constructing an inertial measurement model according to the motion data;
the third acquisition module is used for acquiring the position information of the feature points and constructing a multi-view geometric model according to the position information;
the first simultaneous module is used for combining the inertial measurement model and the multi-view geometric model to obtain a linear equation set corresponding to the key frame and the characteristic points;
the first determining module is used for determining a solving result of the linear equation set;
and the second determining module is used for determining the state quantity of each key frame in a global coordinate system and the position of each characteristic point in the global coordinate system according to the solving result.
9. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, which when executed by the processor, implements the steps of the method of initializing a visual inertial system according to any one of claims 1 to 7.
10. A computer-readable storage medium, on which a computer program is stored which, when executed by a processor, implements the steps of the method of initializing a visual inertial system according to any one of claims 1 to 7.
CN202311364806.4A 2023-10-20 2023-10-20 Initialization method and device of visual inertial system, electronic equipment and medium Active CN117112043B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311364806.4A CN117112043B (en) 2023-10-20 2023-10-20 Initialization method and device of visual inertial system, electronic equipment and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311364806.4A CN117112043B (en) 2023-10-20 2023-10-20 Initialization method and device of visual inertial system, electronic equipment and medium

Publications (2)

Publication Number Publication Date
CN117112043A true CN117112043A (en) 2023-11-24
CN117112043B CN117112043B (en) 2024-01-30

Family

ID=88805879

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311364806.4A Active CN117112043B (en) 2023-10-20 2023-10-20 Initialization method and device of visual inertial system, electronic equipment and medium

Country Status (1)

Country Link
CN (1) CN117112043B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109816696A (en) * 2019-02-01 2019-05-28 西安全志科技有限公司 A kind of robot localization and build drawing method, computer installation and computer readable storage medium
CN109993113A (en) * 2019-03-29 2019-07-09 东北大学 A kind of position and orientation estimation method based on the fusion of RGB-D and IMU information
US20200226782A1 (en) * 2018-05-18 2020-07-16 Boe Technology Group Co., Ltd. Positioning method, positioning apparatus, positioning system, storage medium, and method for constructing offline map database
CN112734839A (en) * 2020-12-31 2021-04-30 浙江大学 Monocular vision SLAM initialization method for improving robustness
CN112749665A (en) * 2021-01-15 2021-05-04 东南大学 Visual inertia SLAM method based on image edge characteristics
US20230010105A1 (en) * 2021-07-12 2023-01-12 Midea Group Co., Ltd. Fast and Robust Initialization Method for Feature-Based Monocular Visual SLAM Using Inertial Odometry Assistance
CN115615424A (en) * 2022-08-25 2023-01-17 中国人民解放军火箭军工程大学 Crane inertia vision combination positioning method


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
SUN Nan et al.: "Quadrotor UAV Navigation Algorithm Based on Stereo Vision-Inertial SLAM", Microelectronics & Computer, no. 05, pages 37 - 42 *

Also Published As

Publication number Publication date
CN117112043B (en) 2024-01-30

Similar Documents

Publication Publication Date Title
CN108765498B (en) Monocular vision tracking method, device and storage medium
US20210041236A1 (en) Method and system for calibration of structural parameters and construction of affine coordinate system of vision measurement system
CN110880189B (en) Combined calibration method and combined calibration device thereof and electronic equipment
JP6534664B2 (en) Method for camera motion estimation and correction
CN111007530B (en) Laser point cloud data processing method, device and system
US10247556B2 (en) Method for processing feature measurements in vision-aided inertial navigation
CN110411476B (en) Calibration adaptation and evaluation method and system for visual inertial odometer
KR100855657B1 (en) System for estimating self-position of the mobile robot using monocular zoom-camara and method therefor
CN111156998A (en) Mobile robot positioning method based on RGB-D camera and IMU information fusion
GB2498177A (en) Apparatus for determining a floor plan of a building
CN110969665B (en) External parameter calibration method, device, system and robot
KR20150119337A (en) Generation of 3d models of an environment
JP6782903B2 (en) Self-motion estimation system, control method and program of self-motion estimation system
IL182799A (en) Method for estimating the pose of a ptz camera
CN113034594A (en) Pose optimization method and device, electronic equipment and storage medium
JP6525148B2 (en) Trajectory estimation method, trajectory estimation apparatus and trajectory estimation program
CN112729109B (en) Point cloud data correction method and device
CN117112043B (en) Initialization method and device of visual inertial system, electronic equipment and medium
Qian et al. Optical flow based step length estimation for indoor pedestrian navigation on a smartphone
CN117241142A (en) Dynamic correction method and device for pitch angle of pan-tilt camera, equipment and storage medium
Qian et al. Optical flow-based gait modeling algorithm for pedestrian navigation using smartphone sensors
JP3512894B2 (en) Relative moving amount calculating apparatus and relative moving amount calculating method
CN111522441B (en) Space positioning method, device, electronic equipment and storage medium
CN113483762A (en) Pose optimization method and device
CN112414407A (en) Positioning method, positioning device, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant