CN110118554A - SLAM method, apparatus, storage medium and device based on visual inertia - Google Patents

SLAM method, apparatus, storage medium and device based on visual inertia

Info

Publication number
CN110118554A
CN110118554A (application number CN201910411404.2A)
Authority
CN
China
Prior art keywords
pose
initial
optimization
local map
moment
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910411404.2A
Other languages
Chinese (zh)
Other versions
CN110118554B (en)
Inventor
高军强
林义闽
廉士国
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Cloudminds Shanghai Robotics Co Ltd
Original Assignee
Cloudminds Shenzhen Robotics Systems Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Cloudminds Shenzhen Robotics Systems Co Ltd
Priority to CN201910411404.2A
Publication of CN110118554A
Application granted
Publication of CN110118554B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00-G01C19/00
    • G01C21/10 Navigation by using measurements of speed or acceleration
    • G01C21/12 Navigation by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
    • G01C21/16 Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
    • G01C21/165 Inertial navigation combined with non-inertial navigation instruments
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00-G01C19/00
    • G01C21/20 Instruments for performing navigational calculations

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Automation & Control Theory (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Image Analysis (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The present disclosure relates to a visual-inertial SLAM method, apparatus, storage medium, and device in the technical field of wireless positioning. The method includes: fusing image information collected by an image acquisition unit at a first moment with motion information collected by an inertial measurement unit (IMU) according to an extended Kalman filter algorithm to obtain an initial pose; determining an initial local map according to the image information and the initial pose; optimizing the initial pose and the initial local map according to a preset nonlinear optimization algorithm to obtain an optimized pose and an optimized local map; and updating a target initial pose and a target initial local map according to the optimized pose and the optimized local map. This improves both the computational efficiency and the positioning accuracy of SLAM.

Description

SLAM method, apparatus, storage medium, and device based on visual inertia
Technical field
The present disclosure relates to the field of wireless positioning technology, and in particular to a visual-inertial SLAM method, apparatus, storage medium, and device.
Background
With the continuous development of terminal technology, robots have entered many technical fields to help people complete various tasks. A robot that must move continuously during its work needs accurate positioning in order to perform its tasks well. The robot collects external information about its surrounding physical environment with various information acquisition devices and uses it to perform SLAM (Simultaneous Localization and Mapping). The external information generally includes two kinds: visual information (such as images) and motion information (acceleration, orientation, etc.), and the two must be combined to realize SLAM. In the prior art, there are usually two ways of combining visual information and motion information: Kalman filtering and nonlinear optimization. Kalman filtering is simple to implement and computationally efficient, but its positioning accuracy is relatively low, so it is difficult to apply to scenarios that require accurate positioning. Nonlinear optimization is implemented iteratively; it is more accurate but computationally heavy and therefore less efficient, so it is difficult to apply to scenarios that require real-time positioning.
Summary of the invention
An object of the present disclosure is to provide a visual-inertial SLAM method, apparatus, storage medium, and device, so as to solve the prior-art problem that the computational efficiency and the positioning accuracy of SLAM are difficult to achieve at the same time.
To achieve the above object, according to a first aspect of the embodiments of the present disclosure, a visual-inertial SLAM method is provided, the method comprising:
fusing image information collected by an image acquisition unit at a first moment with motion information collected by an inertial measurement unit (IMU) according to an extended Kalman filter algorithm, to obtain an initial pose;
determining an initial local map according to the image information and the initial pose;
optimizing the initial pose and the initial local map according to a preset nonlinear optimization algorithm, to obtain an optimized pose and an optimized local map;
updating a target initial pose and a target initial local map according to the optimized pose and the optimized local map, where the target initial pose is the initial pose obtained by fusing the image information collected by the image acquisition unit at a second moment with the motion information collected by the IMU according to the extended Kalman filter algorithm, the target initial local map is the initial local map determined according to the image information collected by the image acquisition unit at the second moment and the target initial pose, and the second moment is the moment at which the optimized pose and the optimized local map are obtained.
Optionally, the fusing image information collected by the image acquisition unit at the first moment with motion information collected by the inertial measurement unit IMU according to the extended Kalman filter algorithm, to obtain the initial pose, comprises:
taking the motion information and a historical pose as inputs of an IMU motion model, to obtain a predicted pose output by the IMU motion model, where the historical pose is the initial pose obtained by fusing the image information collected by the image acquisition unit at a third moment with the motion information collected by the IMU according to the extended Kalman filter algorithm, and the third moment is a moment before the first moment;
processing the image information according to a preset image processing algorithm, to obtain a visual constraint relationship contained in the image information;
updating the predicted pose according to the visual constraint relationship, to obtain the initial pose, where the visual constraint relationship includes minimum reprojection error or minimum photometric error.
Optionally, the determining the initial local map according to the image information and the initial pose comprises:
determining pixels in the image information that meet a preset condition as feature points;
performing three-dimensional reconstruction according to the feature points and the initial pose, to obtain the initial local map.
Optionally, the optimizing the initial pose and the initial local map according to the preset nonlinear optimization algorithm, to obtain the optimized pose and the optimized local map, comprises:
when the image information is a key frame, establishing an objective function according to the initial pose, the initial local map, and a preset number of key pose information before the first moment, where the objective function includes a measurement residual of the IMU and a measurement residual of the image acquisition unit, and the key pose information is the pose information contained in historical image information collected by the image acquisition unit before the first moment when that historical image information is a key frame;
iterating the objective function according to a preset algorithm, to obtain the optimized pose and the optimized local map corresponding to the condition under which the objective function is minimized.
Optionally, the updating the target initial pose and the target initial local map according to the optimized pose and the optimized local map comprises:
updating the target initial local map to the optimized local map;
when the visual constraint relationship is minimum reprojection error, fusing the optimized pose with a preset number of key pose information before the second moment according to a Kalman filter algorithm, and updating the target initial pose to the pose output by the Kalman filter algorithm, where the key pose information is the pose information contained in historical image information collected by the image acquisition unit before the second moment when that historical image information is a key frame;
when the visual constraint relationship is minimum photometric error, obtaining an offset between the initial pose and the optimized pose;
updating the target initial pose according to the offset.
Optionally, the updating the target initial pose and the target initial local map according to the optimized pose and the optimized local map further comprises:
storing the updated target initial local map into a global map.
According to a second aspect of the embodiments of the present disclosure, a visual-inertial SLAM device is provided, the device comprising:
a fusion module, configured to fuse image information collected by an image acquisition unit at a first moment with motion information collected by an inertial measurement unit (IMU) according to an extended Kalman filter algorithm, to obtain an initial pose;
a determining module, configured to determine an initial local map according to the image information and the initial pose;
an optimization module, configured to optimize the initial pose and the initial local map according to a preset nonlinear optimization algorithm, to obtain an optimized pose and an optimized local map;
an update module, configured to update a target initial pose and a target initial local map according to the optimized pose and the optimized local map, where the target initial pose is the initial pose obtained by fusing the image information collected by the image acquisition unit at a second moment with the motion information collected by the IMU according to the extended Kalman filter algorithm, the target initial local map is the initial local map determined according to the image information collected by the image acquisition unit at the second moment and the target initial pose, and the second moment is the moment at which the optimized pose and the optimized local map are obtained.
Optionally, the fusion module includes:
a prediction submodule, configured to take the motion information and a historical pose as inputs of an IMU motion model, to obtain a predicted pose output by the IMU motion model, where the historical pose is the initial pose obtained by fusing the image information collected by the image acquisition unit at a third moment with the motion information collected by the IMU according to the extended Kalman filter algorithm, and the third moment is a moment before the first moment;
an image processing submodule, configured to process the image information according to a preset image processing algorithm, to obtain a visual constraint relationship contained in the image information;
an update submodule, configured to update the predicted pose according to the visual constraint relationship, to obtain the initial pose, where the visual constraint relationship includes minimum reprojection error or minimum photometric error.
Optionally, the determining module includes:
a determining submodule, configured to determine pixels in the image information that meet a preset condition as feature points;
a mapping submodule, configured to perform three-dimensional reconstruction according to the feature points and the initial pose, to obtain the initial local map.
Optionally, the optimization module includes:
an establishing submodule, configured to, when the image information is a key frame, establish an objective function according to the initial pose, the initial local map, and a preset number of key pose information before the first moment, where the objective function includes a measurement residual of the IMU and a measurement residual of the image acquisition unit, and the key pose information is the pose information contained in historical image information collected by the image acquisition unit before the first moment when that historical image information is a key frame;
an iteration submodule, configured to iterate the objective function according to a preset algorithm, to obtain the optimized pose and the optimized local map corresponding to the condition under which the objective function is minimized.
Optionally, the update module includes:
a map update submodule, configured to update the target initial local map to the optimized local map;
a pose update submodule, configured to, when the visual constraint relationship is minimum reprojection error, fuse the optimized pose with a preset number of key pose information before the second moment according to a Kalman filter algorithm, and update the target initial pose to the pose output by the Kalman filter algorithm, where the key pose information is the pose information contained in historical image information collected by the image acquisition unit before the second moment when that historical image information is a key frame;
the pose update submodule is further configured to, when the visual constraint relationship is minimum photometric error, obtain an offset between the initial pose and the optimized pose, and update the target initial pose according to the offset.
Optionally, the map update submodule is further configured to store the updated target initial local map into a global map.
According to a third aspect of the embodiments of the present disclosure, a computer-readable storage medium is provided, on which a computer program is stored, and when the program is executed by a processor, the steps of the visual-inertial SLAM method provided in the first aspect are implemented.
According to a fourth aspect of the embodiments of the present disclosure, an electronic device is provided, comprising:
a memory on which a computer program is stored; and
a processor, configured to execute the computer program in the memory to implement the steps of the visual-inertial SLAM method provided in the first aspect.
Through the above technical solution, the present disclosure first fuses the image information and the motion information collected by the image acquisition unit and the IMU at the first moment according to the extended Kalman filter algorithm to obtain an initial pose, then determines an initial local map according to the image information and the initial pose, then optimizes the initial pose and the initial local map according to a preset nonlinear optimization algorithm to obtain an optimized pose and an optimized local map, and finally updates the target initial pose and the target initial local map corresponding to the second moment according to the optimized pose and the optimized local map. By taking the output of the extended Kalman filter algorithm as the input of the nonlinear optimization algorithm, and correcting the output of the extended Kalman filter algorithm with the output of the nonlinear optimization algorithm, both the computational efficiency and the positioning accuracy of SLAM are improved.
Other features and advantages of the present disclosure will be described in detail in the detailed description section that follows.
Brief description of the drawings
The accompanying drawings are provided for a further understanding of the present disclosure and constitute a part of the specification. Together with the following detailed description, they serve to explain the present disclosure, but do not limit it. In the drawings:
Fig. 1 is a flowchart of a visual-inertial SLAM method according to an exemplary embodiment;
Fig. 2 is a flowchart of another visual-inertial SLAM method according to an exemplary embodiment;
Fig. 3 is a flowchart of another visual-inertial SLAM method according to an exemplary embodiment;
Fig. 4 is a flowchart of another visual-inertial SLAM method according to an exemplary embodiment;
Fig. 5 is a flowchart of another visual-inertial SLAM method according to an exemplary embodiment;
Fig. 6 is a flowchart of another visual-inertial SLAM method according to an exemplary embodiment;
Fig. 7 is a block diagram of a visual-inertial SLAM device according to an exemplary embodiment;
Fig. 8 is a block diagram of another visual-inertial SLAM device according to an exemplary embodiment;
Fig. 9 is a block diagram of another visual-inertial SLAM device according to an exemplary embodiment;
Fig. 10 is a block diagram of another visual-inertial SLAM device according to an exemplary embodiment;
Fig. 11 is a block diagram of another visual-inertial SLAM device according to an exemplary embodiment;
Fig. 12 is a block diagram of an electronic device according to an exemplary embodiment.
Detailed description of the embodiments
Exemplary embodiments are described in detail here, and examples thereof are shown in the accompanying drawings. In the following description, when the drawings are referred to, the same numbers in different drawings denote the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present disclosure; rather, they are merely examples of devices and methods consistent with some aspects of the present disclosure, as detailed in the appended claims.
Before introducing the visual-inertial SLAM method, apparatus, storage medium, and device provided by the present disclosure, the application scenarios involved in the embodiments of the present disclosure are first described. The application scenario may be any terminal that needs SLAM (for example, a robot), in which two kinds of information acquisition devices may be provided: an image acquisition unit and an IMU (Inertial Measurement Unit). The image acquisition unit may be, for example, a camera, and the IMU may include sensors such as a gyroscope and an accelerometer. The image acquisition unit and the IMU may be fixed together (i.e., the image acquisition unit is at the same position as the IMU), so as to ensure that the image information collected by the image acquisition unit and the motion information collected by the IMU are consistent with each other.
Fig. 1 is a flowchart of a visual-inertial SLAM method according to an exemplary embodiment. As shown in Fig. 1, the method includes the following steps:
Step 101: fuse the image information collected by the image acquisition unit at a first moment with the motion information collected by the inertial measurement unit (IMU) according to an extended Kalman filter algorithm, to obtain an initial pose.
Step 102: determine an initial local map according to the image information and the initial pose.
For example, first obtain the image information of the physical environment around the terminal collected by the image acquisition unit at the first moment and the motion information of the terminal collected by the IMU at the first moment. The motion information may include, for example, the orientation, position, velocity, and bias amounts of the terminal. A pose of the terminal can be predicted from the motion information by inertial measurement, and the predicted pose is then updated according to the visual constraint relationship contained in the image information (which can be understood as an observation equation) to determine the pose of the terminal. The image information and the motion information are fused according to the extended Kalman filter (EKF) algorithm to obtain the initial pose and initial local map of the terminal at the first moment. The extended Kalman filter computes quickly and occupies little CPU and memory, so its computational efficiency is high. First, the motion information can be taken as the input of the motion equation of the extended Kalman filter to make a rough prediction of the terminal pose, obtaining the predicted pose output by the motion equation. Since the sensors in the IMU are usually consumer-grade sensors with large errors that easily cause drift, the terminal pose can then be further updated using the visual constraint relationship contained in the image information as the observation equation of the extended Kalman filter, to obtain the initial pose.
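For illustration only, the following is a minimal Python sketch of the predict/update structure of step 101; it is not the patented implementation. The two-part state, the noise values, and the linear "visual observation" of position are assumptions chosen for brevity, whereas a real system would propagate a full IMU motion model and use reprojection or photometric constraints as the observation.

```python
import numpy as np

# Minimal illustrative EKF: state = [position(3), velocity(3)] in the world frame.
# Prediction uses IMU acceleration (the "motion equation"); the update uses a
# simplified position observation standing in for the visual constraint.

def ekf_predict(x, P, accel, dt, q_noise=1e-3):
    """Propagate state and covariance with a constant-acceleration IMU model."""
    F = np.eye(6)
    F[0:3, 3:6] = dt * np.eye(3)                  # position += velocity * dt
    x_pred = F @ x
    x_pred[3:6] += accel * dt                     # velocity += acceleration * dt
    Q = q_noise * np.eye(6)                       # process noise (assumed value)
    P_pred = F @ P @ F.T + Q
    return x_pred, P_pred

def ekf_update(x_pred, P_pred, z_pos, r_noise=1e-2):
    """Correct the predicted state with a position observation from vision."""
    H = np.zeros((3, 6)); H[0:3, 0:3] = np.eye(3) # observe position only
    R = r_noise * np.eye(3)                       # measurement noise (assumed value)
    y = z_pos - H @ x_pred                        # innovation
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)           # Kalman gain
    x = x_pred + K @ y
    P = (np.eye(6) - K @ H) @ P_pred
    return x, P

# One fusion step at the "first moment":
x, P = np.zeros(6), np.eye(6)
x_pred, P_pred = ekf_predict(x, P, accel=np.array([0.1, 0.0, 0.0]), dt=0.01)
initial_pose, P = ekf_update(x_pred, P_pred, z_pos=np.array([0.001, 0.0, 0.0]))
print("initial position estimate:", initial_pose[0:3])
```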
Further, the image information can be processed according to a preset image processing method to extract the texture information it contains, which, combined with the initial pose, yields an initial local map capable of reflecting the local environment of the terminal at the first moment. It should be noted that the initial local map is a local map composed of the landmark points observable in a preset number of key frames collected by the image acquisition unit before the first moment. It can be understood as a preset sliding time window that stores the N-1 key frames before the first moment and the image information corresponding to the first moment. The initial local map corresponds to the sliding time window; when the key frames in the sliding time window are updated, the initial local map is updated accordingly.
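A minimal sketch of such a sliding window, assuming a fixed-size buffer of key frames whose visible landmarks make up the local map; the class name, the landmark representation, and the window size are illustrative assumptions, not details specified by the patent.

```python
from collections import deque

N = 5  # window size: N-1 past key frames plus the current frame (assumed)

class LocalMapWindow:
    """Sliding time window of key frames; the local map is the union of their landmarks."""
    def __init__(self, size=N):
        self.frames = deque(maxlen=size)      # each entry: (pose, {landmark_id: xyz})

    def push(self, pose, observed_landmarks):
        self.frames.append((pose, observed_landmarks))

    def local_map(self):
        landmarks = {}
        for _, obs in self.frames:
            landmarks.update(obs)             # landmarks visible from frames in the window
        return landmarks

window = LocalMapWindow()
window.push(pose=(0.0, 0.0, 0.0), observed_landmarks={1: (1.0, 0.2, 3.0)})
window.push(pose=(0.1, 0.0, 0.0), observed_landmarks={2: (0.5, -0.1, 2.0)})
print(window.local_map())                     # oldest entries drop out automatically
```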
Step 103: optimize the initial pose and the initial local map according to a preset nonlinear optimization algorithm, to obtain an optimized pose and an optimized local map.
For example, since the positioning accuracy of the EKF algorithm is relatively low, after the initial pose and the initial local map are obtained, a preset nonlinear optimization algorithm may be selected to perform BA (Bundle Adjustment) optimization on the initial pose and the initial local map, to obtain an optimized pose and an optimized local map with higher positioning accuracy. This can be understood as taking the initial pose and the initial local map as the initial values of the nonlinear optimization algorithm, which makes the nonlinear optimization converge faster (i.e., with higher computational efficiency) and yields more accurate positioning.
For example, an objective function can first be established from the measurement residual of the IMU and the measurement residual of the image acquisition unit. The initial pose and the initial local map are substituted into the objective function, and a nonlinear optimization method is then used to iterate the objective function until it reaches the optimal state (i.e., the objective function is minimized). The pose and local map determined by the objective function at this point are the optimized pose and the optimized local map. The nonlinear optimization algorithm may include the Gauss-Newton method, LM (Levenberg-Marquardt) optimization, and the like; the present disclosure does not specifically limit the nonlinear optimization algorithm. A sketch of such an iteration appears below.
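The residual in the following sketch is a toy stand-in (it is not the patent's IMU or camera residuals); it only illustrates how an objective built from residuals is minimized iteratively by Gauss-Newton, starting from an initial value such as the one supplied by the EKF. The landmark positions and range measurements are made-up numbers.

```python
import numpy as np

# Toy Gauss-Newton: estimate a 2-D "pose" p that best explains range measurements
# to known landmarks; the residuals play the role of the measurement residuals
# in the objective function.

landmarks = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 3.0]])
ranges = np.array([2.236, 2.236, 2.828])        # assumed measurements (true pose ~ (2, 1))

def residuals(p):
    return np.linalg.norm(landmarks - p, axis=1) - ranges

def jacobian(p):
    diff = p - landmarks
    return diff / np.linalg.norm(diff, axis=1, keepdims=True)

p = np.array([1.5, 0.5])                        # initial value, e.g. from the EKF
for _ in range(10):                             # iterate until the step becomes negligible
    r, J = residuals(p), jacobian(p)
    delta = np.linalg.solve(J.T @ J, -J.T @ r)  # normal equations of Gauss-Newton
    p = p + delta
    if np.linalg.norm(delta) < 1e-8:
        break
print("optimized pose:", p, "cost:", 0.5 * residuals(p) @ residuals(p))
```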
Step 104: update a target initial pose and a target initial local map according to the optimized pose and the optimized local map, where the target initial pose is the initial pose obtained by fusing the image information collected by the image acquisition unit at a second moment with the motion information collected by the IMU according to the extended Kalman filter algorithm, the target initial local map is the initial local map determined according to the image information collected by the image acquisition unit at the second moment and the target initial pose, and the second moment is the moment at which the optimized pose and the optimized local map are obtained.
For example, after the optimized pose and optimized local map with higher positioning accuracy have been obtained, they can be fed back into step 101 and step 102, so as to optimize the target initial pose determined by the extended Kalman filter in step 101 and the target initial local map determined in step 102, thereby improving the positioning accuracy of SLAM. It should be noted that, because the extended Kalman filter algorithm and the nonlinear optimization algorithm differ in computational efficiency, step 101, step 102, and step 103 may be executed simultaneously. They can be understood as corresponding to two threads that run in parallel: for example, step 101 and step 102 form the front-end thread and step 103 forms the back-end thread, and the front-end thread does not need to wait for the back-end thread and can keep running. After determining the initial pose and initial local map corresponding to the first moment, the front-end thread sends them to the back-end thread, which determines the optimized pose and optimized local map; at the second moment, the back-end thread feeds the optimized pose and optimized local map back to the front-end thread to optimize the target initial pose and target initial local map that the front-end thread determined at the second moment.
After step 101 has determined the initial pose and initial local map corresponding to the first moment, it can continue to fuse the image information and motion information collected by the image acquisition unit and the IMU at the next moment after the first moment (which can be understood as next moment = first moment + T), to obtain the initial pose and initial local map corresponding to that next moment. Meanwhile, step 103 optimizes the initial pose and initial local map corresponding to the first moment to obtain the optimized pose and optimized local map. Because the iterative process of the nonlinear optimization algorithm takes a long time, by the time the optimized pose and optimized local map are obtained it is already the second moment; multiple intervals T (usually tens of milliseconds) may have passed between the first moment and the second moment, i.e., second moment = first moment + x*T. Step 101 keeps running, so at the second moment the image information and motion information collected by the image acquisition unit and the IMU at the second moment have already been fused, yielding the target initial pose and target initial local map corresponding to the second moment. At this point, the initial pose and initial local map corresponding to the first moment are outdated and no longer meaningful, so the target initial pose and target initial local map corresponding to the second moment can be updated according to the optimized pose and optimized local map, thereby improving the positioning accuracy. A sketch of this front-end/back-end interaction follows.
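The sketch below, assuming Python threads and queues, only illustrates the timing relationship: the front end keeps running at period T, the back end works on key frames more slowly, and its result is fed back to correct the front end's latest estimate. The scalar "estimate", the placeholder delays, and the correction rule are all toy assumptions standing in for steps 101 to 104.

```python
import queue
import threading
import time

to_backend = queue.Queue()   # front end -> back end: (timestamp, initial estimate)
feedback = queue.Queue()     # back end -> front end: (timestamp, optimized estimate)

def front_end():
    estimate = 0.0
    for t in range(20):                          # one iteration per "moment" (period T)
        estimate += 1.0                          # stand-in for EKF fusion (steps 101/102)
        while not feedback.empty():              # apply the back-end correction (step 104)
            _, optimized = feedback.get()
            estimate = optimized + 1.0           # re-base the current estimate
        if t % 5 == 0:                           # pretend every 5th frame is a key frame
            to_backend.put((t, estimate))
        time.sleep(0.01)                         # T is on the order of tens of milliseconds
    to_backend.put(None)                         # stop signal

def back_end():
    while (item := to_backend.get()) is not None:
        t, initial = item
        time.sleep(0.05)                         # slow nonlinear optimization (step 103)
        feedback.put((t, initial - 0.1))         # pretend the optimizer refined the value

f = threading.Thread(target=front_end)
b = threading.Thread(target=back_end)
f.start(); b.start(); f.join(); b.join()
print("done: the front end never blocked on the back end")
```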
In conclusion image information and movement that the disclosure first acquires image acquisition units and IMU at the first moment Information is merged according to expanded Kalman filtration algorithm, to obtain initial pose, is come further according to image information and initial pose Initial local map is determined, later according to preset nonlinear optimization algorithm to the initial pose and the initial local map It optimizes, to obtain optimization pose and optimization local map, finally according to the optimization pose and the optimization local map, Update the initial pose of the second moment corresponding target and target initial local map.By by the defeated of expanded Kalman filtration algorithm Out as the input of nonlinear optimization algorithm, and expanded Kalman filtration algorithm is corrected using the output of nonlinear optimization algorithm Output, to improve the computational efficiency and positioning accuracy of SLAM.
Fig. 2 is a flowchart of another visual-inertial SLAM method according to an exemplary embodiment. As shown in Fig. 2, step 101 can be implemented as follows:
Step 1011: take the motion information and a historical pose as inputs of an IMU motion model, to obtain a predicted pose output by the IMU motion model, where the historical pose is the initial pose obtained by fusing the image information collected by the image acquisition unit at a third moment with the motion information collected by the IMU according to the extended Kalman filter algorithm, and the third moment is a moment before the first moment.
Step 1012: process the image information according to a preset image processing algorithm, to obtain the visual constraint relationship contained in the image information.
Step 1013: update the predicted pose according to the visual constraint relationship, to obtain the initial pose, where the visual constraint relationship includes minimum reprojection error or minimum photometric error.
The extended Kalman filter algorithm in the present disclosure may include two kinds: the MSCKF (Multi-State Constraint Kalman Filter) algorithm and the ROVIO (Robust Visual Inertial Odometry) algorithm. Both algorithms use the IMU motion model as the motion equation of the extended Kalman filter to obtain the predicted pose. The visual constraint relationship in the MSCKF algorithm is minimum reprojection error, which serves as the observation equation for updating the predicted pose; the visual constraint relationship in the ROVIO algorithm is minimum photometric error, which serves as the observation equation for updating the predicted pose.
Fig. 3 is a flowchart of another visual-inertial SLAM method according to an exemplary embodiment. As shown in Fig. 3, step 102 may include the following steps:
Step 1021: determine pixels in the image information that meet a preset condition as feature points.
Step 1022: perform three-dimensional reconstruction according to the feature points and the initial pose, to obtain the initial local map.
For example, the initial local map can be obtained by first selecting pixels in the image information that meet a preset condition as feature points. There may be multiple feature points (e.g., 100), and the preset condition may be, for example, that the brightness difference between a pixel and its neighbouring pixels is greater than a preset brightness threshold. The feature points that are matched across multiple moments and the initial pose determined in step 101 are then used to perform three-dimensional reconstruction, to obtain a three-dimensional initial local map that reflects the local environment of the terminal at the first moment.
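The following sketch illustrates the two sub-steps in a deliberately simplified form: selecting pixels whose brightness differs from their four neighbours by more than a threshold, and triangulating a matched point from two viewing rays by the midpoint method. The tiny image, the threshold, and the camera geometry are assumptions for illustration only and are not the patent's feature detector or reconstruction pipeline.

```python
import numpy as np

def select_feature_points(image, threshold=30):
    """Pixels whose brightness differs from the 4-neighbour mean by more than a threshold."""
    pts = []
    for r in range(1, image.shape[0] - 1):
        for c in range(1, image.shape[1] - 1):
            neigh = (int(image[r - 1, c]) + int(image[r + 1, c]) +
                     int(image[r, c - 1]) + int(image[r, c + 1])) / 4.0
            if abs(int(image[r, c]) - neigh) > threshold:
                pts.append((r, c))
    return pts

def triangulate_midpoint(o1, d1, o2, d2):
    """Midpoint of the closest points on two viewing rays (origins o, unit directions d)."""
    # Minimize ||(o1 + t1*d1) - (o2 + t2*d2)|| over the ray parameters t1, t2.
    A = np.array([[d1 @ d1, -(d1 @ d2)], [d1 @ d2, -(d2 @ d2)]])
    b = np.array([(o2 - o1) @ d1, (o2 - o1) @ d2])
    t1, t2 = np.linalg.solve(A, b)
    return 0.5 * ((o1 + t1 * d1) + (o2 + t2 * d2))

img = np.zeros((5, 5), dtype=np.uint8)
img[2, 2] = 200                                   # one bright, corner-like pixel
print("feature points:", select_feature_points(img))

# Two camera centres 0.5 m apart observing the same landmark at roughly (0, 0, 2).
o1, o2 = np.zeros(3), np.array([0.5, 0.0, 0.0])
d1 = np.array([0.0, 0.0, 1.0])
d2 = np.array([0.0, 0.0, 2.0]) - o2
d2 = d2 / np.linalg.norm(d2)
print("triangulated point:", triangulate_midpoint(o1, d1, o2, d2))
```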
The implementation of the two algorithms is described separately below.
In the MSCKF algorithm, i.e., when the visual constraint relationship is minimum reprojection error, the pose information obtained in step 1011, the motion information, and a preset number of key pose information before the first moment (for example, the most recent key pose information before the first moment) are taken as the inputs of the IMU motion model, to obtain the predicted pose output by the IMU motion model.
For example, the state vector to be estimated that is input to the MSCKF algorithm can be written as X = [X_IMU, X_C, X_C1, ..., X_C(N-1)], where X_IMU = [q_GI, b_g, v_GI, b_a, p_GI] is the motion information of the IMU obtained from the MSCKF measurements: q_GI is the orientation of the IMU in the global coordinate system, b_g is the bias of the gyroscope, v_GI is the velocity of the IMU in the global coordinate system, b_a is the bias of the accelerometer, and p_GI is the position of the IMU in the global coordinate system; X_C is the pose information obtained in step 1011; and X_Ci is the i-th key pose information among the N-1 key pose information before the first moment, in which q_GCi is the orientation of the image acquisition unit in the global coordinate system and p_GCi is the position of the image acquisition unit in the global coordinate system. The output of the IMU motion model is taken as the predicted pose; since the IMU drifts, the predicted pose is not accurate enough, so it is further updated according to the visual constraint relationship of minimum reprojection error, and the resulting pose is the initial pose.
It should be noted that the IMU motion model describes the constraint relationship between pose information and motion information. Since the image acquisition unit and the IMU are fixed together, the image information and the motion information are consistent; for example, the double integral over time of the acceleration measured by the accelerometer should match the position in the pose information, and the integral over time of the angular rate measured by the gyroscope should match the orientation in the pose information.
Key pose information can be understood as the pose information contained in the image information at a certain moment when the image information collected by the image acquisition unit at that moment is a key frame. The preset number of key pose information before the first moment can be understood as a preset sliding time window that stores N pieces of pose information, including the N-1 key pose information before the first moment and the pose information corresponding to the first moment. The rule for judging whether image information is a key frame may be to compare its pose with that of the image information obtained at the previous key-frame moment: if the change in position exceeds a preset distance threshold and/or the change in orientation exceeds a preset angle threshold, the image information is determined to be a key frame, as sketched below.
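A minimal sketch of this key-frame rule, assuming the pose is given as a position vector plus a yaw angle; the threshold values are illustrative assumptions, not values specified by the patent.

```python
import numpy as np

DIST_THRESHOLD = 0.3      # metres (assumed)
ANGLE_THRESHOLD = 15.0    # degrees (assumed)

def is_key_frame(pose, last_key_pose):
    """pose = (position xyz, yaw in degrees). Key frame if it moved or turned enough."""
    d_pos = np.linalg.norm(np.asarray(pose[0]) - np.asarray(last_key_pose[0]))
    d_ang = abs(pose[1] - last_key_pose[1]) % 360.0
    d_ang = min(d_ang, 360.0 - d_ang)               # wrap the angle difference
    return d_pos > DIST_THRESHOLD or d_ang > ANGLE_THRESHOLD

last_key = ((0.0, 0.0, 0.0), 0.0)
print(is_key_frame(((0.1, 0.0, 0.0), 5.0), last_key))   # False: small motion
print(is_key_frame(((0.5, 0.0, 0.0), 2.0), last_key))   # True: translation exceeds threshold
```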
In the ROVIO algorithm, i.e., when the visual constraint relationship is minimum photometric error (the photometric error of feature points), the pose information obtained in step 1011, the motion information, and landmark points are taken as the inputs of the IMU motion model, to obtain the predicted pose output by the IMU motion model, where a landmark point is the coordinate corresponding to a feature point in the local map.
For example, the state vector to be estimated that is input to ROVIO can be χ = (r, v, q, b_f, b_w, c, z, μ_0, ..., μ_M, ρ_0, ..., ρ_M), where r is the coordinate of the IMU in the global coordinate system, v is the velocity of the IMU in the global coordinate system, q is the orientation of the IMU in the global coordinate system, b_f is the bias of the accelerometer, b_w is the bias of the gyroscope, c is the translational component of the extrinsic parameters between the image acquisition unit and the IMU, z is the rotational component of the extrinsic parameters between the image acquisition unit and the IMU, μ_0, ..., μ_M are the bearing vectors of the M feature points in the image acquisition unit coordinate system, and ρ_0, ..., ρ_M are the inverse depth parameters of the M feature points in the image acquisition unit coordinate system, the inverse depth being the reciprocal of the distance parameter. The output of the IMU motion model is taken as the predicted pose; since the IMU drifts, the predicted pose is not accurate enough, so it is further updated according to the visual constraint relationship of minimum photometric error, and the resulting pose is the initial pose.
Fig. 4 is a flowchart of another visual-inertial SLAM method according to an exemplary embodiment. As shown in Fig. 4, step 103 includes:
Step 1031: when the image information is a key frame, establish an objective function according to the initial pose, the initial local map, and a preset number of key pose information before the first moment, where the objective function includes the measurement residual of the IMU and the measurement residual of the image acquisition unit, and the key pose information is the pose information contained in historical image information collected by the image acquisition unit before the first moment when that historical image information is a key frame; the key pose information may also include information such as the velocity and bias amounts obtained from IMU measurements.
Step 1032: iterate the objective function according to a preset algorithm, to obtain the optimized pose and the optimized local map corresponding to the condition under which the objective function is minimized.
For step 103, optimizing the initial pose and the initial local map requires establishing an objective function and performing loop-closure detection, to obtain the optimized pose and the optimized local map. Since the nonlinear optimization algorithm is slow and occupies a large amount of CPU and memory, the initial pose and initial local map obtained in step 101 can be filtered: only when the image information is a key frame are the initial pose and initial local map optimized. That is, the front-end thread sends the initial pose and initial local map to the back-end thread only when it determines that the image information is a key frame, and the back-end thread then determines the optimized pose and optimized local map. When the image information is not a key frame, it can be understood that the image information at the first moment differs little from the image information at the previous moment, so the initial pose and initial local map do not need to be optimized. In general, the time interval between two consecutive key frames obtained in step 101 is relatively long (e.g., 100 ms), which reduces the computational load of step 103 and further improves the computational efficiency and positioning accuracy of SLAM.
When the image information is a key frame, the objective function is established according to the initial pose, the initial local map, and a preset number of key pose information before the first moment. The objective function may include the measurement residual of the IMU and the measurement residual of the image acquisition unit, for example the IMU pre-integration residual and the visual photometric error. After the objective function is selected, it is iterated according to a preset algorithm (e.g., the Gauss-Newton method) until it reaches the optimal state (i.e., the objective function is minimized); the pose and local map determined by the objective function at this point are the optimized pose and the optimized local map.
Fig. 5 is a flowchart of another visual-inertial SLAM method according to an exemplary embodiment. As shown in Fig. 5, step 104 includes:
Step 1041: update the target initial local map to the optimized local map.
For example, step 104 can be divided into two parts: local map update and pose update. For the target initial local map determined in step 101 at the second moment, the optimized local map can directly replace the target initial local map. For the target initial pose determined in step 101 at the second moment, the processing can be divided into two modes according to which visual constraint is used in the extended Kalman filter algorithm in step 101.
Step 1042: when the visual constraint relationship is minimum reprojection error, fuse the optimized pose with a preset number of key pose information before the second moment according to a Kalman filter algorithm, and update the target initial pose to the pose output by the Kalman filter algorithm, where the key pose information is the pose information contained in historical image information collected by the image acquisition unit before the second moment when that historical image information is a key frame.
For example, when the Kalman filter algorithm is used to fuse the optimized pose with the preset number of key pose information before the second moment, this may be done by fusing an optimized state quantity with key state quantities, where the optimized state quantity includes the optimized pose and may also include the velocity and IMU bias obtained by the optimization in step 103, and a key state quantity includes the key pose information and the corresponding velocity and IMU bias. The target initial state quantity is then updated to the state quantity output by the Kalman filter algorithm, where the target initial state quantity includes the target initial pose and the corresponding velocity and IMU bias.
Further, the update of the target initial state quantity at the second moment may include: using the optimized local map, the key frame corresponding to the second moment, and the state quantities of the N-1 key frames to establish visual reprojection residual and IMU pre-integration residual terms, and performing nonlinear iterative optimization (e.g., the Gauss-Newton method) to obtain a high-precision state quantity for the second moment. The difference from the back-end optimization of step 103 is that only the state quantity at the second moment is optimized here; the optimized local map and the state quantities of the N-1 key frames do not need to be optimized (because these two quantities are fed back by the back-end step 103 and are already highly accurate). A toy illustration of this kind of correction follows.
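Whichever flavour of update is used, the core idea is to correct the front end's state with the back end's more accurate estimate, weighted by their uncertainties. The sketch below shows this as a one-dimensional Kalman-style measurement update; the scalar state and all the numbers are toy assumptions, not the full key-frame and bias states described above.

```python
import numpy as np  # kept for consistency with the other sketches; not strictly needed here

# Toy 1-D illustration: the front end's target initial state x (covariance P) is
# corrected by the back end's optimized value z (smaller covariance R), exactly
# as a Kalman measurement update with a direct observation of the state.
x, P = 10.3, 0.5          # target initial state and its uncertainty (front end, EKF)
z, R = 10.0, 0.05         # optimized state and its uncertainty (back end)

K = P / (P + R)           # Kalman gain for a direct observation of the state
x_updated = x + K * (z - x)
P_updated = (1.0 - K) * P
print(f"updated state {x_updated:.3f}, covariance {P_updated:.3f}")
```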
Step 1043: when the visual constraint relationship is minimum photometric error, obtain the offset between the initial pose and the optimized pose.
Step 1044: update the target initial pose according to the offset.
Specifically, in the MSCKF algorithm, i.e., when the visual constraint relationship is minimum reprojection error, the optimized pose and the preset number of key pose information before the second moment are taken as the inputs of a Kalman filter (KF) algorithm, which uses the state covariance matrix and the process covariance matrix to obtain the output pose; the target initial pose and its covariance matrix are then updated to the pose and covariance matrix output by the Kalman filter algorithm. The preset number of key pose information before the second moment can be understood as a preset sliding time window that stores the N-1 key pose information before the second moment.
In the ROVIO algorithm, i.e., when the visual constraint relationship is minimum photometric error, the offset between the initial pose corresponding to the first moment and the optimized pose can first be obtained, and the offset is then used to correct the target initial pose. Taking the first moment as t and the second moment as t+M as an example, the offset is X_diff = X_bac(t) * X_fro(t)^(-1), where X_fro(t) is the initial pose corresponding to the first moment and X_bac(t) is the optimized pose. The updated target initial pose is then X(t+M) = X_diff * X_fro(t+M), where X_fro(t+M) is the target initial pose corresponding to the second moment before the update.
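A minimal sketch of this offset correction on homogeneous transform matrices (shown in 2-D for brevity as a stand-in for full 6-DoF poses), assuming that the formula above uses the matrix inverse of the initial pose; the example poses are made-up numbers.

```python
import numpy as np

def pose(yaw_deg, tx, ty):
    """Homogeneous 2-D rigid transform used as a stand-in for a full pose."""
    a = np.deg2rad(yaw_deg)
    return np.array([[np.cos(a), -np.sin(a), tx],
                     [np.sin(a),  np.cos(a), ty],
                     [0.0,        0.0,       1.0]])

X_fro_t  = pose(10.0, 1.0, 0.0)    # front-end initial pose at the first moment t
X_bac_t  = pose(12.0, 1.1, 0.05)   # back-end optimized pose at t
X_fro_tM = pose(25.0, 2.0, 0.3)    # front-end target initial pose at t + M

X_diff = X_bac_t @ np.linalg.inv(X_fro_t)   # offset between optimized and initial pose
X_tM   = X_diff @ X_fro_tM                  # corrected target initial pose at t + M
print(np.round(X_tM, 3))
```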
Fig. 6 is a flowchart of another visual-inertial SLAM method according to an exemplary embodiment. As shown in Fig. 6, step 104 may further include:
Step 1045: store the updated target initial local map into the global map.
Further, after the target initial local map has been updated, the updated target initial local map can also be stored into the global map, i.e., the global map stored on the terminal is updated according to the target initial local map. For example, according to the location information contained in the target initial local map, the map of the region indicated by that location information in the global map can be replaced with the target initial local map.
In summary, the present disclosure first fuses the image information and the motion information collected by the image acquisition unit and the IMU at the first moment according to the extended Kalman filter algorithm to obtain an initial pose, then determines an initial local map according to the image information and the initial pose, then optimizes the initial pose and the initial local map according to a preset nonlinear optimization algorithm to obtain an optimized pose and an optimized local map, and finally updates the target initial pose and the target initial local map corresponding to the second moment according to the optimized pose and the optimized local map. By taking the output of the extended Kalman filter algorithm as the input of the nonlinear optimization algorithm, and correcting the output of the extended Kalman filter algorithm with the output of the nonlinear optimization algorithm, both the computational efficiency and the positioning accuracy of SLAM are improved.
Fig. 7 is a block diagram of a visual-inertial SLAM device according to an exemplary embodiment. As shown in Fig. 7, the device includes:
a fusion module 201, configured to fuse image information collected by an image acquisition unit at a first moment with motion information collected by an inertial measurement unit (IMU) according to an extended Kalman filter algorithm, to obtain an initial pose;
a determining module 202, configured to determine an initial local map according to the image information and the initial pose;
an optimization module 203, configured to optimize the initial pose and the initial local map according to a preset nonlinear optimization algorithm, to obtain an optimized pose and an optimized local map;
an update module 204, configured to update a target initial pose and a target initial local map according to the optimized pose and the optimized local map, where the target initial pose is the initial pose obtained by fusing the image information collected by the image acquisition unit at a second moment with the motion information collected by the IMU according to the extended Kalman filter algorithm, the target initial local map is the initial local map determined according to the image information collected by the image acquisition unit at the second moment and the target initial pose, and the second moment is the moment at which the optimized pose and the optimized local map are obtained.
Fig. 8 is a block diagram of another visual-inertial SLAM device according to an exemplary embodiment. As shown in Fig. 8, the fusion module 201 includes:
a prediction submodule 2011, configured to take the motion information and a historical pose as inputs of an IMU motion model, to obtain a predicted pose output by the IMU motion model, where the historical pose is the initial pose obtained by fusing the image information collected by the image acquisition unit at a third moment with the motion information collected by the IMU according to the extended Kalman filter algorithm, and the third moment is a moment before the first moment;
an image processing submodule 2012, configured to process the image information according to a preset image processing algorithm, to obtain the visual constraint relationship contained in the image information;
an update submodule 2013, configured to update the predicted pose according to the visual constraint relationship, to obtain the initial pose, where the visual constraint relationship includes minimum reprojection error or minimum photometric error.
Fig. 9 is a block diagram of another visual-inertial SLAM device according to an exemplary embodiment. As shown in Fig. 9, the determining module 202 includes:
a determining submodule 2021, configured to determine pixels in the image information that meet a preset condition as feature points;
a mapping submodule 2022, configured to perform three-dimensional reconstruction according to the feature points and the initial pose, to obtain the initial local map.
Fig. 10 is a block diagram of another visual-inertial SLAM device according to an exemplary embodiment. As shown in Fig. 10, the optimization module 203 includes:
an establishing submodule 2031, configured to, when the image information is a key frame, establish an objective function according to the initial pose, the initial local map, and a preset number of key pose information before the first moment, where the objective function includes the measurement residual of the IMU and the measurement residual of the image acquisition unit, and the key pose information is the pose information contained in historical image information collected by the image acquisition unit before the first moment when that historical image information is a key frame;
an iteration submodule 2032, configured to iterate the objective function according to a preset algorithm, to obtain the optimized pose and the optimized local map corresponding to the condition under which the objective function is minimized.
Fig. 11 is a block diagram of another visual-inertial SLAM device according to an exemplary embodiment. As shown in Fig. 11, the update module 204 includes:
a map update submodule 2041, configured to update the target initial local map to the optimized local map;
a pose update submodule 2042, configured to, when the visual constraint relationship is minimum reprojection error, fuse the optimized pose with a preset number of key pose information before the second moment according to a Kalman filter algorithm, and update the target initial pose to the pose output by the Kalman filter algorithm, where the key pose information is the pose information contained in historical image information collected by the image acquisition unit before the second moment when that historical image information is a key frame.
The pose update submodule 2042 is further configured to, when the visual constraint relationship is minimum photometric error, obtain the offset between the initial pose and the optimized pose, and update the target initial pose according to the offset.
Further, the map update submodule 2041 may also be configured to store the updated target initial local map into the global map.
With regard to the device in the above embodiments, the specific manner in which each module performs its operations has been described in detail in the embodiments of the related method, and will not be elaborated here.
In summary, the present disclosure first fuses the image information and the motion information collected by the image acquisition unit and the IMU at the first moment according to the extended Kalman filter algorithm to obtain an initial pose, then determines an initial local map according to the image information and the initial pose, then optimizes the initial pose and the initial local map according to a preset nonlinear optimization algorithm to obtain an optimized pose and an optimized local map, and finally updates the target initial pose and the target initial local map corresponding to the second moment according to the optimized pose and the optimized local map. By taking the output of the extended Kalman filter algorithm as the input of the nonlinear optimization algorithm, and correcting the output of the extended Kalman filter algorithm with the output of the nonlinear optimization algorithm, both the computational efficiency and the positioning accuracy of SLAM are improved.
Figure 12 is the block diagram of a kind of electronic equipment 300 shown according to an exemplary embodiment.As shown in figure 12, the electronics Equipment 300 may include: processor 301, memory 302.The electronic equipment 300 can also include multimedia component 303, defeated Enter/export one or more of (I/O) interface 304 and communication component 305.
Wherein, processor 301 is used to control the integrated operation of the electronic equipment 300, used to complete above-mentioned view-based access control model All or part of the steps in the SLAM method of property.Memory 302 is for storing various types of data to support in the electronics The operation of equipment 300, these data for example may include any application program or side for operating on the electronic equipment 300 The instruction of method and the relevant data of application program, such as contact data, the message of transmitting-receiving, picture, audio, video etc.. The memory 302 can realize by any kind of volatibility or non-volatile memory device or their combination, such as quiet State random access memory (Static Random Access Memory, abbreviation SRAM), the read-only storage of electrically erasable Device (Electrically Erasable Programmable Read-Only Memory, abbreviation EEPROM), it is erasable to compile Journey read-only memory (Erasable Programmable Read-Only Memory, abbreviation EPROM), may be programmed read-only storage Device (Programmable Read-Only Memory, abbreviation PROM), and read-only memory (Read-Only Memory, referred to as ROM), magnetic memory, flash memory, disk or CD.Multimedia component 303 may include screen and audio component.Wherein Screen for example can be touch screen, and audio component is used for output and/or input audio signal.For example, audio component may include One microphone, microphone is for receiving external audio signal.The received audio signal can be further stored in storage Device 302 is sent by communication component 305.Audio component further includes at least one loudspeaker, is used for output audio signal.I/O Interface 304 provides interface between processor 301 and other interface modules, other above-mentioned interface modules can be keyboard, mouse, Button etc..These buttons can be virtual push button or entity button.Communication component 305 is for the electronic equipment 300 and other Wired or wireless communication is carried out between equipment.Wireless communication, such as Wi-Fi, bluetooth, near-field communication (Near Field Communication, abbreviation NFC), 2G, 3G or 4G or they one or more of combination, therefore corresponding communication Component 305 may include: Wi-Fi module, bluetooth module, NFC module.
In an exemplary embodiment, the electronic device 300 may be implemented by one or more application specific integrated circuits (ASIC), digital signal processors (DSP), digital signal processing devices (DSPD), programmable logic devices (PLD), field programmable gate arrays (FPGA), controllers, microcontrollers, microprocessors or other electronic components, so as to perform the above SLAM method based on visual inertia.
In another exemplary embodiment, a computer-readable storage medium including program instructions is further provided. When executed by a processor, the program instructions implement the steps of the above SLAM method based on visual inertia. For example, the computer-readable storage medium may be the above memory 302 including program instructions, and the program instructions may be executed by the processor 301 of the electronic device 300 to complete the above SLAM method based on visual inertia.
The preferred embodiments of the present disclosure have been described in detail above with reference to the accompanying drawings. The present disclosure is not, however, limited to the specific details of the above embodiments; within the scope of the technical concept of the present disclosure, various simple variations may be made to the technical solution of the present disclosure, and these simple variations all fall within the protection scope of the present disclosure.
It should be further noted that the specific technical features described in the above specific embodiments may be combined in any suitable manner, provided that no contradiction arises; to avoid unnecessary repetition, the present disclosure does not further describe the various possible combinations.
In addition, the various embodiments of the present disclosure may also be combined with one another arbitrarily; as long as such combinations do not depart from the idea of the present disclosure, they should likewise be regarded as content disclosed by the present disclosure.

Claims (14)

1. A SLAM method based on visual inertia, characterized in that the method comprises:
fusing image information acquired by an image acquisition unit at a first moment and motion information acquired by an inertial measurement unit (IMU) according to an extended Kalman filter algorithm, to obtain an initial pose;
determining an initial local map according to the image information and the initial pose;
optimizing the initial pose and the initial local map according to a preset nonlinear optimization algorithm, to obtain an optimized pose and an optimized local map;
updating a target initial pose and a target initial local map according to the optimized pose and the optimized local map, wherein the target initial pose is an initial pose obtained by fusing, according to the extended Kalman filter algorithm, image information acquired by the image acquisition unit at a second moment and motion information acquired by the IMU, the target initial local map is an initial local map determined according to the image information acquired by the image acquisition unit at the second moment and the target initial pose, and the second moment is the moment at which the optimized pose and the optimized local map are obtained.
2. The method according to claim 1, characterized in that fusing the image information acquired by the image acquisition unit at the first moment and the motion information acquired by the inertial measurement unit (IMU) according to the extended Kalman filter algorithm, to obtain the initial pose, comprises:
taking the motion information and a history pose as inputs of an IMU motion model, to obtain a predicted pose output by the IMU motion model, wherein the history pose is an initial pose obtained by fusing, according to the extended Kalman filter algorithm, image information acquired by the image acquisition unit at a third moment and motion information acquired by the IMU, and the third moment is a moment before the first moment;
processing the image information according to a preset image processing algorithm, to obtain a visual constraint relationship contained in the image information;
updating the predicted pose according to the visual constraint relationship, to obtain the initial pose, wherein the visual constraint relationship comprises: minimum reprojection error or minimum photometric error.
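Claim 2 above separates the estimate into an IMU-driven prediction and a vision-driven update. The sketch below illustrates that split under heavy simplifications: the pose is reduced to a 3-D position, the IMU motion model is a plain double integration of acceleration, and the measurement update is a single Gauss-Newton step on the reprojection error of known landmarks seen through a pinhole camera with identity rotation. The function names, the intrinsics fx and fy, and the Gauss-Newton step standing in for the filter's update are assumptions made only for illustration.

import numpy as np

def imu_predict(position, velocity, accel, dt):
    """IMU motion model: propagate the history pose with the measured motion information."""
    new_position = position + velocity * dt + 0.5 * accel * dt ** 2
    new_velocity = velocity + accel * dt
    return new_position, new_velocity

def reprojection_update(position, landmarks, observations, fx=500.0, fy=500.0):
    """One Gauss-Newton step that moves the predicted position to reduce reprojection error."""
    J, r = [], []
    for p_w, uv in zip(landmarks, observations):
        x, y, z = p_w - position                    # landmark in the camera frame (rotation assumed identity)
        u, v = fx * x / z, fy * y / z               # pinhole projection
        r.extend([uv[0] - u, uv[1] - v])            # reprojection residual
        # Jacobian of the residual with respect to the camera position.
        J.append([fx / z, 0.0, -fx * x / z ** 2])
        J.append([0.0, fy / z, -fy * y / z ** 2])
    J, r = np.asarray(J), np.asarray(r)
    delta = np.linalg.lstsq(J, r, rcond=None)[0]    # least-squares correction
    return position - delta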
3. The method according to claim 1, characterized in that determining the initial local map according to the image information and the initial pose comprises:
determining pixels in the image information that satisfy a preset condition as feature points;
performing three-dimensional reconstruction according to the feature points and the initial pose, to obtain the initial local map.
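As an illustration of claim 3, the sketch below picks feature points with a simple gradient threshold and triangulates each feature observed from two known poses into a 3-D map point. The threshold test and the linear (DLT-style) triangulation are stand-ins for whatever preset condition and reconstruction the disclosure intends, and the function names are assumptions.

import numpy as np

def select_features(gray, threshold=30.0):
    """Treat pixels whose horizontal gradient exceeds a threshold as feature points."""
    grad = np.abs(np.diff(gray.astype(float), axis=1))
    rows, cols = np.where(grad > threshold)
    return list(zip(rows, cols))

def triangulate(P1, P2, uv1, uv2):
    """Linear (DLT) triangulation of one feature from two 3x4 projection matrices."""
    A = np.vstack([
        uv1[0] * P1[2] - P1[0],
        uv1[1] * P1[2] - P1[1],
        uv2[0] * P2[2] - P2[0],
        uv2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]          # homogeneous coordinates -> 3-D map point

The projection matrices P1 and P2 would normally be built from the camera intrinsics and the initial poses of the two frames in which the feature was observed.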
4. The method according to claim 1, characterized in that optimizing the initial pose and the initial local map according to the preset nonlinear optimization algorithm, to obtain the optimized pose and the optimized local map, comprises:
when the image information is a key frame, establishing an objective function according to the initial pose, the initial local map and a preset quantity of key pose information before the first moment, wherein the objective function comprises a measurement residual of the IMU and a measurement residual of the image acquisition unit, and the key pose information is the pose information contained in history image information acquired by the image acquisition unit before the first moment when that history image information is a key frame;
iterating the objective function according to a preset algorithm, to obtain the optimized pose and the optimized local map corresponding to the minimum of the objective function.
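The objective function of claim 4 combines an IMU measurement residual and a camera measurement residual over a small window of key poses and is then minimized iteratively. The sketch below shows one hedged way to assemble and minimize such a cost, with poses reduced to 3-D positions, unit weights, and scipy.optimize.least_squares used as a generic iterative solver; none of these choices is prescribed by the disclosure.

import numpy as np
from scipy.optimize import least_squares

def joint_residual(x, imu_deltas, landmarks, observations, fx=500.0, fy=500.0):
    """Stack the IMU measurement residual and the camera (reprojection) residual."""
    poses = x.reshape(-1, 3)                    # window of simplified 3-D poses
    res = []
    # IMU residual: consecutive poses should differ by the IMU-predicted relative motion.
    for i, delta in enumerate(imu_deltas):
        res.extend(poses[i + 1] - poses[i] - delta)
    # Camera residual: reprojection error of the landmarks in the newest pose.
    cam = poses[-1]
    for p_w, uv in zip(landmarks, observations):
        x_c, y_c, z_c = p_w - cam
        res.extend([uv[0] - fx * x_c / z_c, uv[1] - fy * y_c / z_c])
    return np.asarray(res)

def optimize_window(initial_poses, imu_deltas, landmarks, observations):
    """Iterate the objective until the stacked residual is minimized."""
    result = least_squares(joint_residual, np.ravel(initial_poses),
                           args=(imu_deltas, landmarks, observations))
    return result.x.reshape(-1, 3)              # optimized window of poses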
5. The method according to claim 2, characterized in that updating the target initial pose and the target initial local map according to the optimized pose and the optimized local map comprises:
updating the target initial local map to the optimized local map;
when the visual constraint relationship is minimum reprojection error, fusing the optimized pose and a preset quantity of key pose information before the second moment according to a Kalman filter algorithm, and updating the target initial pose to the pose output by the Kalman filter algorithm, wherein the key pose information is the pose information contained in history image information acquired by the image acquisition unit before the second moment when that history image information is a key frame;
when the visual constraint relationship is minimum photometric error, obtaining an offset between the initial pose and the optimized pose;
updating the target initial pose according to the offset.
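Claim 5 distinguishes two update paths depending on the visual constraint relationship. The sketch below mirrors that branching with everything reduced to 3-D positions: a simple scalar-covariance Kalman blend stands in for the full Kalman filter fusion of the reprojection-error branch, and the photometric-error branch applies the offset between the initial and optimized poses to the target initial pose. All names, the fixed variances and the blending scheme are illustrative assumptions rather than the disclosure's implementation.

import numpy as np

def kalman_blend(optimized_pose, keyframe_poses, var_opt=0.1, var_kf=1.0):
    """Sequentially fuse the optimized pose with recent key poses (reprojection branch)."""
    fused, var = np.asarray(optimized_pose, dtype=float), var_opt
    for kf in keyframe_poses:
        gain = var / (var + var_kf)          # scalar Kalman gain for this measurement
        fused = fused + gain * (np.asarray(kf, dtype=float) - fused)
        var = (1.0 - gain) * var
    return fused

def update_target_pose(constraint, target_pose, initial_pose, optimized_pose, keyframe_poses):
    if constraint == "reprojection":
        return kalman_blend(optimized_pose, keyframe_poses)
    if constraint == "photometric":
        offset = np.asarray(optimized_pose) - np.asarray(initial_pose)   # offset between initial and optimized pose
        return np.asarray(target_pose) + offset                          # shift the target initial pose by the offset
    raise ValueError("unknown visual constraint relationship")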
6. The method according to claim 5, characterized in that updating the target initial pose and the target initial local map according to the optimized pose and the optimized local map further comprises:
storing the updated target initial local map into a global map.
7. A SLAM apparatus based on visual inertia, characterized in that the apparatus comprises:
a fusion module, configured to fuse image information acquired by an image acquisition unit at a first moment and motion information acquired by an inertial measurement unit (IMU) according to an extended Kalman filter algorithm, to obtain an initial pose;
a determining module, configured to determine an initial local map according to the image information and the initial pose;
an optimization module, configured to optimize the initial pose and the initial local map according to a preset nonlinear optimization algorithm, to obtain an optimized pose and an optimized local map;
an update module, configured to update a target initial pose and a target initial local map according to the optimized pose and the optimized local map, wherein the target initial pose is an initial pose obtained by fusing, according to the extended Kalman filter algorithm, image information acquired by the image acquisition unit at a second moment and motion information acquired by the IMU, the target initial local map is an initial local map determined according to the image information acquired by the image acquisition unit at the second moment and the target initial pose, and the second moment is the moment at which the optimized pose and the optimized local map are obtained.
8. The apparatus according to claim 7, characterized in that the fusion module comprises:
a prediction submodule, configured to take the motion information and a history pose as inputs of an IMU motion model, to obtain a predicted pose output by the IMU motion model, wherein the history pose is an initial pose obtained by fusing, according to the extended Kalman filter algorithm, image information acquired by the image acquisition unit at a third moment and motion information acquired by the IMU, and the third moment is a moment before the first moment;
an image processing submodule, configured to process the image information according to a preset image processing algorithm, to obtain a visual constraint relationship contained in the image information;
an update submodule, configured to update the predicted pose according to the visual constraint relationship, to obtain the initial pose, wherein the visual constraint relationship comprises: minimum reprojection error or minimum photometric error.
9. The apparatus according to claim 7, characterized in that the determining module comprises:
a determining submodule, configured to determine pixels in the image information that satisfy a preset condition as feature points;
a mapping submodule, configured to perform three-dimensional reconstruction according to the feature points and the initial pose, to obtain the initial local map.
10. The apparatus according to claim 7, characterized in that the optimization module comprises:
an establishing submodule, configured to, when the image information is a key frame, establish an objective function according to the initial pose, the initial local map and a preset quantity of key pose information before the first moment, wherein the objective function comprises a measurement residual of the IMU and a measurement residual of the image acquisition unit, and the key pose information is the pose information contained in history image information acquired by the image acquisition unit before the first moment when that history image information is a key frame;
an iteration submodule, configured to iterate the objective function according to a preset algorithm, to obtain the optimized pose and the optimized local map corresponding to the minimum of the objective function.
11. The apparatus according to claim 8, characterized in that the update module comprises:
a map update submodule, configured to update the target initial local map to the optimized local map;
a pose update submodule, configured to, when the visual constraint relationship is minimum reprojection error, fuse the optimized pose and a preset quantity of key pose information before the second moment according to a Kalman filter algorithm, and update the target initial pose to the pose output by the Kalman filter algorithm, wherein the key pose information is the pose information contained in history image information acquired by the image acquisition unit before the second moment when that history image information is a key frame;
the pose update submodule being further configured to, when the visual constraint relationship is minimum photometric error, obtain an offset between the initial pose and the optimized pose, and update the target initial pose according to the offset.
12. The apparatus according to claim 11, characterized in that the map update submodule is further configured to store the updated target initial local map into a global map.
13. A computer-readable storage medium on which a computer program is stored, characterized in that the program, when executed by a processor, implements the steps of the method according to any one of claims 1 to 6.
14. An electronic device, characterized by comprising:
a memory on which a computer program is stored;
a processor, configured to execute the computer program in the memory, so as to implement the steps of the method according to any one of claims 1 to 6.
CN201910411404.2A 2019-05-16 2019-05-16 SLAM method, apparatus, storage medium and device based on visual inertia Active CN110118554B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910411404.2A CN110118554B (en) 2019-05-16 2019-05-16 SLAM method, apparatus, storage medium and device based on visual inertia

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910411404.2A CN110118554B (en) 2019-05-16 2019-05-16 SLAM method, apparatus, storage medium and device based on visual inertia

Publications (2)

Publication Number Publication Date
CN110118554A true CN110118554A (en) 2019-08-13
CN110118554B CN110118554B (en) 2021-07-16

Family

ID=67522698

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910411404.2A Active CN110118554B (en) 2019-05-16 2019-05-16 SLAM method, apparatus, storage medium and device based on visual inertia

Country Status (1)

Country Link
CN (1) CN110118554B (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3451288A1 (en) * 2017-09-04 2019-03-06 Universität Zürich Visual-inertial odometry with an event camera
CN108401461A (en) * 2017-12-29 2018-08-14 深圳前海达闼云端智能科技有限公司 Three-dimensional mapping method, device and system, cloud platform, electronic equipment and computer program product
CN108665540A (en) * 2018-03-16 2018-10-16 浙江工业大学 Robot localization based on binocular vision feature and IMU information and map structuring system
CN109307508A (en) * 2018-08-29 2019-02-05 中国科学院合肥物质科学研究院 A kind of panorama inertial navigation SLAM method based on more key frames
CN109211241A (en) * 2018-09-08 2019-01-15 天津大学 The unmanned plane autonomic positioning method of view-based access control model SLAM
CN109465832A (en) * 2018-12-18 2019-03-15 哈尔滨工业大学(深圳) High-precision vision and the tight fusion and positioning method of IMU and system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
刘毅 (Liu Yi): "Research on Real-time High-precision Positioning Methods Based on Monocular Vision", China Doctoral Dissertations Full-text Database, Information Science and Technology Series *

Cited By (53)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112394720A (en) * 2019-08-15 2021-02-23 纳恩博(北京)科技有限公司 Robot control method and apparatus, storage medium, and electronic apparatus
CN112400122A (en) * 2019-08-26 2021-02-23 北京航迹科技有限公司 System and method for locating target object
CN110647609A (en) * 2019-09-17 2020-01-03 上海图趣信息科技有限公司 Visual map positioning method and system
CN110647609B (en) * 2019-09-17 2023-07-18 上海图趣信息科技有限公司 Visual map positioning method and system
CN110704562B (en) * 2019-09-27 2022-07-19 Oppo广东移动通信有限公司 Map fusion method and device, equipment and storage medium
CN110704562A (en) * 2019-09-27 2020-01-17 Oppo广东移动通信有限公司 Map fusion method and device, equipment and storage medium
CN110967018B (en) * 2019-11-25 2024-04-12 斑马网络技术有限公司 Parking lot positioning method and device, electronic equipment and computer readable medium
CN110967018A (en) * 2019-11-25 2020-04-07 斑马网络技术有限公司 Parking lot positioning method and device, electronic equipment and computer readable medium
CN110986930B (en) * 2019-11-29 2022-07-15 北京三快在线科技有限公司 Equipment positioning method and device, electronic equipment and storage medium
CN110986930A (en) * 2019-11-29 2020-04-10 北京三快在线科技有限公司 Equipment positioning method and device, electronic equipment and storage medium
CN111060113B (en) * 2019-12-31 2022-04-08 歌尔股份有限公司 Map updating method and device
CN111060113A (en) * 2019-12-31 2020-04-24 歌尔股份有限公司 Map updating method and device
WO2021135645A1 (en) * 2019-12-31 2021-07-08 歌尔股份有限公司 Map updating method and device
CN111275769A (en) * 2020-01-17 2020-06-12 联想(北京)有限公司 Monocular vision parameter correction method and device
CN111275769B (en) * 2020-01-17 2023-10-24 联想(北京)有限公司 Monocular vision parameter correction method and device
WO2021169420A1 (en) * 2020-02-27 2021-09-02 北京三快在线科技有限公司 Visual positioning on basis of multiple image frames
US11972523B2 (en) 2020-06-12 2024-04-30 Cloudminds Robotics Co., Ltd. Grid map generation method and device, and computer-readable storage medium
CN111683203A (en) * 2020-06-12 2020-09-18 达闼机器人有限公司 Grid map generation method and device and computer readable storage medium
WO2021249469A1 (en) * 2020-06-12 2021-12-16 达闼机器人有限公司 Grid map generation method, device, and computer readable storage medium
CN111683203B (en) * 2020-06-12 2021-11-09 达闼机器人有限公司 Grid map generation method and device and computer readable storage medium
CN111879306A (en) * 2020-06-17 2020-11-03 杭州易现先进科技有限公司 Visual inertial positioning method, device, system and computer equipment
CN111623773A (en) * 2020-07-17 2020-09-04 国汽(北京)智能网联汽车研究院有限公司 Target positioning method and device based on fisheye vision and inertial measurement
CN111983635A (en) * 2020-08-17 2020-11-24 浙江商汤科技开发有限公司 Pose determination method and device, electronic equipment and storage medium
CN111983635B (en) * 2020-08-17 2022-03-29 浙江商汤科技开发有限公司 Pose determination method and device, electronic equipment and storage medium
CN112347840B (en) * 2020-08-25 2022-12-02 天津大学 Vision sensor laser radar integrated unmanned aerial vehicle positioning and image building device and method
CN112347840A (en) * 2020-08-25 2021-02-09 天津大学 Vision sensor laser radar integrated unmanned aerial vehicle positioning and image building device and method
CN112150550A (en) * 2020-09-23 2020-12-29 华人运通(上海)自动驾驶科技有限公司 Fusion positioning method and device
CN112150550B (en) * 2020-09-23 2021-07-27 华人运通(上海)自动驾驶科技有限公司 Fusion positioning method and device
CN112067007A (en) * 2020-11-12 2020-12-11 湖北亿咖通科技有限公司 Map generation method, computer storage medium, and electronic device
CN112067007B (en) * 2020-11-12 2021-01-29 湖北亿咖通科技有限公司 Map generation method, computer storage medium, and electronic device
CN112577494A (en) * 2020-11-13 2021-03-30 上海航天控制技术研究所 SLAM method, electronic device and storage medium suitable for lunar vehicle
WO2022116154A1 (en) * 2020-12-04 2022-06-09 深圳市优必选科技股份有限公司 Map library establishment method, computer device, and storage medium
CN112712018A (en) * 2020-12-29 2021-04-27 东软睿驰汽车技术(沈阳)有限公司 Identification map establishing method, visual positioning method and device
CN112712018B (en) * 2020-12-29 2024-06-28 东软睿驰汽车技术(沈阳)有限公司 Identification map building method, visual positioning method and device
CN112923919A (en) * 2021-01-21 2021-06-08 湖南格纳微信息科技有限公司 Pedestrian positioning method and system based on graph optimization
CN112936269A (en) * 2021-02-04 2021-06-11 珠海市一微半导体有限公司 Robot control method based on intelligent terminal
CN113362387A (en) * 2021-04-12 2021-09-07 深圳大学 Environment map construction method of dynamic environment mobile robot
WO2022217882A1 (en) * 2021-04-15 2022-10-20 深圳市慧鲤科技有限公司 Pose data processing method, and interface, apparatus, system, device and medium
CN112948411A (en) * 2021-04-15 2021-06-11 深圳市慧鲤科技有限公司 Pose data processing method, interface, device, system, equipment and medium
CN112948411B (en) * 2021-04-15 2022-10-18 深圳市慧鲤科技有限公司 Pose data processing method, interface, device, system, equipment and medium
CN113514058A (en) * 2021-04-23 2021-10-19 北京华捷艾米科技有限公司 Visual SLAM positioning method and device integrating MSCKF and graph optimization
CN113190568A (en) * 2021-05-12 2021-07-30 上海快仓自动化科技有限公司 Map updating method, device and related components
CN113390422B (en) * 2021-06-10 2022-06-10 奇瑞汽车股份有限公司 Automobile positioning method and device and computer storage medium
CN113390422A (en) * 2021-06-10 2021-09-14 奇瑞汽车股份有限公司 Automobile positioning method and device and computer storage medium
CN114353781A (en) * 2021-12-31 2022-04-15 广州小鹏自动驾驶科技有限公司 Map updating method, map updating device, electronic device and storage medium
CN114827727B (en) * 2022-04-25 2024-05-07 深圳创维-Rgb电子有限公司 Television control method, television control device, television and computer readable storage medium
CN114827727A (en) * 2022-04-25 2022-07-29 深圳创维-Rgb电子有限公司 Television control method and device, television and computer readable storage medium
CN115342806A (en) * 2022-07-14 2022-11-15 歌尔股份有限公司 Positioning method and device of head-mounted display equipment, head-mounted display equipment and medium
WO2024087066A1 (en) * 2022-10-26 2024-05-02 北京小米移动软件有限公司 Image localization method, apparatus, electronic device, and storage medium
CN115859212A (en) * 2022-11-17 2023-03-28 广东智能无人系统研究院 Method and system for autonomous deployment and recovery of marine equipment
CN117687772A (en) * 2023-07-31 2024-03-12 荣耀终端有限公司 Algorithm scheduling method and electronic equipment
CN117705125A (en) * 2024-02-05 2024-03-15 安徽蔚来智驾科技有限公司 Positioning method, readable storage medium and intelligent device
CN117705125B (en) * 2024-02-05 2024-04-30 安徽蔚来智驾科技有限公司 Positioning method, readable storage medium and intelligent device

Also Published As

Publication number Publication date
CN110118554B (en) 2021-07-16

Similar Documents

Publication Publication Date Title
CN110118554A (en) SLAM method, apparatus, storage medium and device based on visual inertia
US11295456B2 (en) Visual-inertial odometry with an event camera
CN109084732B (en) Positioning and navigation method, device and processing equipment
Cvišić et al. Stereo odometry based on careful feature selection and tracking
CN110246182A (en) Vision-based global map positioning method and device, storage medium and equipment
CN110118556A (en) A kind of robot localization method and device based on covariance mixing together SLAM
JP5987823B2 (en) Method and system for fusing data originating from image sensors and motion or position sensors
CN111210477B (en) Method and system for positioning moving object
WO2018159177A1 (en) Vehicle automated parking system and method
CN108700947A (en) For concurrent ranging and the system and method for building figure
US20220051031A1 (en) Moving object tracking method and apparatus
JP2015532077A (en) Method for determining the position and orientation of an apparatus associated with an imaging apparatus that captures at least one image
CN108700946A (en) System and method for parallel ranging and fault detect and the recovery of building figure
CN109461208A (en) Three-dimensional map processing method, device, medium and calculating equipment
JPWO2005038710A1 (en) Moving body motion calculation method and apparatus, and navigation system
CN109443348A (en) It is a kind of based on the underground garage warehouse compartment tracking for looking around vision and inertial navigation fusion
KR101985344B1 (en) Sliding windows based structure-less localization method using inertial and single optical sensor, recording medium and device for performing the method
CN109059907A (en) Track data processing method, device, computer equipment and storage medium
CN110310304A (en) Monocular vision builds figure and localization method, device, storage medium and mobile device
CN110260861A (en) Pose determines method and device, odometer
CN110648363A (en) Camera posture determining method and device, storage medium and electronic equipment
CN110260866A (en) A kind of robot localization and barrier-avoiding method of view-based access control model sensor
CN110465942A (en) Pose compensation method, device, storage medium and electronic equipment
Seiskari et al. HybVIO: Pushing the limits of real-time visual-inertial odometry
JP7145770B2 (en) Inter-Vehicle Distance Measuring Device, Error Model Generating Device, Learning Model Generating Device, Methods and Programs Therefor

Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
TA01: Transfer of patent application right
Effective date of registration: 20210301
Address after: 201111 2nd floor, building 2, no.1508, Kunyang Road, Minhang District, Shanghai
Applicant after: Dalu Robot Co.,Ltd.
Address before: 518000 Room 201, building A, No. 1, Qian Wan Road, Qianhai Shenzhen Hong Kong cooperation zone, Shenzhen, Guangdong (Shenzhen Qianhai business secretary Co., Ltd.)
Applicant before: Shenzhen Qianhaida Yunyun Intelligent Technology Co.,Ltd.
GR01: Patent grant
CP03: Change of name, title or address
Address after: 201111 Building 8, No. 207, Zhongqing Road, Minhang District, Shanghai
Patentee after: Dayu robot Co.,Ltd.
Address before: 201111 2nd floor, building 2, no.1508, Kunyang Road, Minhang District, Shanghai
Patentee before: Dalu Robot Co.,Ltd.