CN109147058A - Initialization method and device, and storage medium, for visual-inertial navigation information fusion - Google Patents

Initialization method and device, and storage medium, for visual-inertial navigation information fusion

Info

Publication number
CN109147058A
CN109147058A (application number CN201811012768.5A; granted publication CN109147058B)
Authority
CN
China
Prior art keywords
terminal
inertial
relative quantity
guidance data
initializing variable
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201811012768.5A
Other languages
Chinese (zh)
Other versions
CN109147058B (en)
Inventor
凌永根 (Ling Yonggen)
暴林超 (Bao Linchao)
刘威 (Liu Wei)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN201811012768.5A priority Critical patent/CN109147058B/en
Publication of CN109147058A publication Critical patent/CN109147058A/en
Application granted granted Critical
Publication of CN109147058B publication Critical patent/CN109147058B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/006Mixed reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F17/00Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F17/10Complex mathematical operations
    • G06F17/16Matrix or vector computation, e.g. matrix-matrix or matrix-vector multiplication, matrix factorization
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/70Denoising; Smoothing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/277Analysis of motion involving stochastic approaches, e.g. using Kalman filters


Abstract

The present invention discloses an initialization method for visual-inertial navigation information fusion, comprising: obtaining image data and inertial data respectively from an image sensor and an inertial sensor of a terminal; and obtaining initializing variables of the terminal based on geometric constraint conditions between adjacent images in the image data and on integral relative quantities of the inertial data between the acquisition times of the adjacent images, wherein, when the initializing variables of the terminal are obtained, an uncertainty measure generated by the noise of the inertial data on the integral relative quantities is introduced. According to the initialization scheme for visual-inertial navigation information fusion provided by the embodiments of the present invention, introducing the uncertainty measure generated by the noise of the inertial data on the integral relative quantities greatly improves the ease of use and the stability of initialization.

Description

Initialization method and device, and storage medium, for visual-inertial navigation information fusion
Technical field
The present invention relates to the field of computer technology, and in particular to an initialization method and device for visual-inertial navigation information fusion, as well as a computer storage medium and an electronic device.
Background technique
Visual-inertial information fusion generally denotes a class of methods that fuse visual information with inertial information and are used for simultaneous localization and mapping. Visual information here generally refers to two-dimensional images captured by a camera, and inertial information generally refers to information such as the angular velocity and acceleration output by an IMU (Inertial Measurement Unit).
For example, an image captured by the camera of a mobile terminal is two-dimensional, that is, a dimensionality-reduced representation of the three-dimensional environment. However, combined with the inertial data output by the IMU, the three-dimensional environment around the mobile terminal can be reconstructed from the images captured by the camera at different times and positions, and the terminal's historical positions at those times can be inferred. This process is simultaneous localization and mapping based on visual-inertial information fusion.
Once the position of the mobile terminal and the environment information are obtained, the terminal has the ability to interact with the environment. For example, in VR (Virtual Reality) and AR (Augmented Reality) applications, virtual objects can be placed in the real environment based on the known environment information. Meanwhile, combined with the known position information of the terminal, the real environment and the virtual environment can be rendered, with the correct relative placement, into the image shown on the terminal screen. For example, in shopping-mall navigation, the known environment information can help the user recognize the surroundings; meanwhile, combined with the known position information, virtual navigation hints can be overlaid as needed on the real environment shown on the terminal screen, guiding the user to a nearby restaurant, shop, restroom, and so on.
With the rapid development of technologies such as VR and AR, simultaneous localization and mapping has become a very important research direction in the field of computer vision, with wide-ranging applications. On the other hand, as described above, simultaneous localization and mapping is a process of inferring high-dimensional information from low-dimensional information; this process is highly nonlinear and therefore difficult to compute. In order for simultaneous localization and mapping based on visual-inertial information fusion to be carried out smoothly, appropriate initial values usually need to be chosen for the variables of this process. In the related art, this often requires the user to perform specific motions in a specific manner, which imposes additional operational constraints on the user; the start-up procedure is also rather complex, and the overall ease of use is poor.
Summary of the invention
In order to solve the problem in the related art that the variable initialization process for visual-inertial information fusion is complex and hard to use, the present invention provides an initialization method and device for visual-inertial navigation information fusion, as well as a computer storage medium and an electronic device.
According to an embodiment of the present invention, an initialization method for visual-inertial navigation information fusion is provided, comprising: obtaining image data and inertial data respectively from an image sensor and an inertial sensor of a terminal; and obtaining initializing variables of the terminal based on geometric constraint conditions between adjacent images in the image data and on integral relative quantities of the inertial data between the acquisition times of the adjacent images, wherein, when the initializing variables of the terminal are obtained, an uncertainty measure generated by the noise of the inertial data on the integral relative quantities is introduced.
According to an embodiment of the present invention, an initialization device for visual-inertial navigation information fusion is provided, comprising: an obtaining module, configured to obtain image data and inertial data respectively from an image sensor and an inertial sensor of a terminal; and an initialization module, configured to obtain initializing variables of the terminal based on geometric constraint conditions between adjacent images in the image data and on integral relative quantities of the inertial data between the acquisition times of the adjacent images, wherein the initialization module, when obtaining the initializing variables of the terminal, introduces an uncertainty measure generated by the noise of the inertial data on the integral relative quantities.
According to an embodiment of the present invention, a computer-readable storage medium is provided, on which a computer program is stored; when executed by a processor, the computer program implements the initialization method for visual-inertial navigation information fusion described above.
According to an embodiment of the present invention, an electronic device is provided, comprising: a processor; and a memory on which computer-readable instructions are stored; when executed by the processor, the computer-readable instructions implement the initialization method for visual-inertial navigation information fusion described above.
The technical solutions provided by the embodiments of the present invention can include the following beneficial effects:
According to the initialization scheme for visual-inertial navigation information fusion provided by the embodiments of the present invention, introducing the uncertainty measure generated by the noise of the inertial data on the integral relative quantities greatly improves the ease of use and the stability of initialization.
It should be understood that the above general description and the following detailed description are merely exemplary and do not limit the present invention.
Detailed description of the invention
The accompanying drawings, which are incorporated into and constitute a part of this specification, show embodiments consistent with the present invention and, together with the specification, serve to explain the principles of the present invention.
Fig. 1 shows a schematic structural diagram of an electronic device suitable for implementing an embodiment of the present invention.
Fig. 2 is a flowchart of an initialization method for visual-inertial navigation information fusion according to an exemplary embodiment.
Fig. 3 is a flowchart of an initialization method for visual-inertial navigation information fusion according to another exemplary embodiment.
Fig. 4 is a block diagram of an initialization device for visual-inertial navigation information fusion according to an exemplary embodiment.
Fig. 5 is a block diagram of an initialization device for visual-inertial navigation information fusion according to another exemplary embodiment.
Specific embodiment
Example embodiments will now be described more fully with reference to the accompanying drawings. However, the example embodiments can be implemented in many forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that the present invention will be more thorough and complete, and the concepts of the example embodiments will be fully conveyed to those skilled in the art.
In addition, the described features, structures or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, many specific details are provided to give a full understanding of the embodiments of the present invention. However, those skilled in the art will appreciate that the technical solutions of the present invention may be practiced without one or more of the specific details, or with other methods, components, devices, steps, and so on. In other cases, well-known methods, devices, implementations or operations are not shown or described in detail, to avoid obscuring aspects of the present invention.
The block diagrams shown in the drawings are merely functional entities and do not necessarily correspond to physically separate entities. That is, these functional entities may be implemented in software, or in one or more hardware modules or integrated circuits, or in different networks and/or processor devices and/or microcontroller devices.
The flowcharts shown in the drawings are merely illustrative; they need not include all contents and operations/steps, nor must they be executed in the described order. For example, some operations/steps may be decomposed, and some may be merged or partially merged, so the order actually executed may change according to the actual situation.
Fig. 1 shows a schematic structural diagram of an electronic device suitable for implementing an embodiment of the present disclosure. It should be noted that the electronic device 100 shown in Fig. 1 is only an example and should not impose any limitation on the functions and scope of use of the embodiments of the present disclosure. The electronic device 100 may be a mobile terminal such as a mobile phone or a tablet computer. Referring to Fig. 1, the terminal 100 may include one or more of the following components: a processing component 102, a memory 104, a power supply component 106, a multimedia component 108, an input/output (I/O) interface 112, a sensor component 114 and a communication component 116.
The processing component 102 typically controls the overall operations of the terminal 100, such as operations associated with display, telephone calls, data communication, camera operation and recording. The processing component 102 may include one or more processors to execute instructions, so as to perform all or part of the steps of the methods described above. In addition, the processing component 102 may include one or more modules to facilitate interaction between the processing component 102 and other components. For example, the processing component 102 may include a multimedia module to facilitate interaction between the multimedia component 108 and the processing component 102.
The memory 104 is configured to store various types of data to support operation at the terminal 100. Examples of such data include instructions for any application or method operated on the terminal 100, contact data, phonebook data, messages, pictures, videos, and so on. The memory 104 may be implemented by any type of volatile or non-volatile storage device, or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic disk or optical disk.
The power supply component 106 provides power for the various components of the terminal 100. The power supply component 106 may include a power management system, one or more power supplies, and other components associated with generating, managing and distributing power for the terminal 100.
The multimedia component 108 includes a screen that provides an output interface between the terminal 100 and the user. In some embodiments, the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, swipes and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or swipe action, but also detect the duration and pressure associated with the touch or swipe operation. In some embodiments, the multimedia component 108 includes a front camera and/or a rear camera. When the terminal 100 is in an operating mode, such as a shooting mode or a video mode, the front camera and/or the rear camera can receive external multimedia data. Each front or rear camera may be a fixed optical lens system or have focusing and optical zoom capabilities.
The I/O interface 112 provides an interface between the processing component 102 and peripheral interface modules, which may be a keyboard, a click wheel, buttons, and the like.
The sensor component 114 includes one or more sensors for providing state assessments of various aspects of the terminal 100. For example, the sensor component 114 can detect the open/closed state of the terminal 100 and the relative positioning of components (for example, the display and the keypad of the terminal 100); the sensor component 114 can also detect a change in position of the terminal 100 or of a component of the terminal 100, the presence or absence of contact between the user and the terminal 100, the orientation or acceleration/deceleration of the terminal 100, and a change in the temperature of the terminal 100. The sensor component 114 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact. The sensor component 114 may also include an optical sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor component 114 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor or a temperature sensor.
The communication component 116 is configured to facilitate wired or wireless communication between the terminal 100 and other devices. The terminal 100 can access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof. In an exemplary embodiment, the communication component 116 receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 116 further includes a near-field communication (NFC) module to facilitate short-range communication. For example, the NFC module may be implemented based on radio frequency identification (RFID) technology, Infrared Data Association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology and other technologies.
In an exemplary embodiment, the terminal 100 may be implemented by one or more application-specific integrated circuits (ASIC), digital signal processors (DSP), digital signal processing devices (DSPD), programmable logic devices (PLD), field-programmable gate arrays (FPGA), controllers, microcontrollers, microprocessors or other electronic components, for performing the methods of the above embodiments.
As another aspect, the present invention also provides a computer-readable medium, which may be included in the electronic device described in the above embodiments, or may exist separately without being assembled into the electronic device. The computer-readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to implement the methods described in the following embodiments. For example, the electronic device can implement the steps shown in Fig. 2 and Fig. 3.
Before elaborating the technical solutions of the embodiments of the present invention, some related technical solutions and principles are introduced below.
The background section described, taking a mobile terminal as an example, the scenario of simultaneous localization and mapping based on visual-inertial information fusion. In the following, the present invention is still illustrated with a mobile terminal as the carrier implementing the initialization, but this is only an example and does not limit the protection scope of the present invention.
When the initialization method for visual-inertial navigation information fusion of the present invention is implemented on a mobile terminal, it can be realized by running an application installed on the mobile terminal (for example, a VR or AR application). Correspondingly, such mobile-terminal applications often have requirements in terms of real-time performance, scale mapping and ease of use, which are described in turn below.
First, when the user uses the above applications on a mobile terminal, the process of simultaneous localization and mapping is online; that is, the position of the mobile terminal at different times needs to be computed in real time, and the surrounding environment needs to be reconstructed, to meet the needs of the application.
Second, the location information and the map obtained by simultaneous localization and mapping are, in general, inconsistent in scale with the real world. Simply put, this is because objects of different sizes at different distances can have the same projection in the camera, as the sketch below illustrates. To solve this scale ambiguity, inertial data from an inertial sensor can be introduced, for example the three-axis acceleration measured by an accelerometer and the three-axis angular velocity measured by a gyroscope. Since these inertial data are measurements in the real-world coordinate system, fusing them with the image data obtained by the camera makes it possible to map the scale of the resulting localization and map to be consistent with the scale of the real world.
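To make the scale ambiguity concrete, here is a minimal Python sketch (purely illustrative; the focal length and point coordinates are made-up values) showing that a pinhole camera projects a point and any scaled copy of it to the same pixel:

```python
import numpy as np

f = 500.0                               # assumed focal length in pixels

def project(X):
    """Pinhole projection; X and c*X (c > 0) give the same pixel."""
    return f * X[:2] / X[2]

X_near = np.array([0.2, 0.1, 1.0])      # a point 1 m in front of the camera
X_far = 2.0 * X_near                    # twice as large and twice as far

print(project(X_near), project(X_far))  # identical -> monocular scale ambiguity
```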
Finally, as already mentioned in the background section, since simultaneous localization and mapping is a highly nonlinear process that derives high-dimensional information from low-dimensional information, the computation is extremely difficult. In order for simultaneous localization and mapping to be carried out smoothly, appropriate initial values usually need to be chosen for the variables of this process. In the related art, this often requires the user to perform specific motions in a specific manner, which imposes additional operational constraints on the user; the start-up procedure is also rather complex, and the overall ease of use is poor.
In view of the above related art, the embodiments of the present invention provide an initialization method and device for visual-inertial navigation information fusion, a computer-readable storage medium, and an electronic device.
The principles and implementation details of the technical solutions of the embodiments of the present invention are described in detail below.
Fig. 2 is a flowchart of an initialization method for visual-inertial navigation information fusion according to an exemplary embodiment. As shown in Fig. 2, the method can be executed by the electronic device shown in Fig. 1, and may include the following steps 210-230.
In step 210, image data and inertial data are obtained respectively from the image sensor and the inertial sensor of the terminal.
The image sensor here includes imaging devices capable of detecting visible light, infrared light or ultraviolet light, for example a built-in or external camera of the terminal. Correspondingly, the image data here includes visible-light, infrared or ultraviolet images of the photographed objects obtained by the image sensor. For simplicity, the following description takes the image sensor to be the terminal camera and the image data to be the images captured by the terminal camera.
In one embodiment, the terminal here may include more than one image sensor, so as to provide more image data and achieve enhanced stability in case some of the image sensors fail. In addition, by combining the data from multiple image sensors, the spatial relationship between the different image sensors may be taken into account to provide a more accurate fusion result.
The inertial sensor here may include, for example, an accelerometer and a gyroscope; the former can measure the acceleration of the terminal along three axes, and the latter can measure the angular velocity of the terminal about three axes. The inertial sensor may also include other inertial measurement units (IMU, Inertial Measurement Unit) capable of measuring acceleration and angular velocity. Correspondingly, the inertial data here may include the acceleration and/or angular velocity measurements output by the inertial sensor. For simplicity, the following description assumes that the inertial sensor includes an accelerometer and a gyroscope, and that the inertial data includes both the acceleration and the angular velocity obtained from them (collectively referred to as IMU data).
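As a purely illustrative sketch of how such data might be represented and grouped for the steps below, consider the following Python fragment; the type names and the helper `imu_between_frames` are hypothetical and not part of the embodiments:

```python
from dataclasses import dataclass
from typing import List
import numpy as np

@dataclass
class ImuSample:
    t: float              # timestamp in seconds
    a: np.ndarray         # 3-axis acceleration (accelerometer)
    w: np.ndarray         # 3-axis angular velocity (gyroscope)

@dataclass
class Frame:
    t: float              # capture timestamp of the image
    image: np.ndarray     # 2D image from the terminal camera

def imu_between_frames(imu: List[ImuSample], t_k: float, t_k1: float) -> List[ImuSample]:
    """Collect the IMU samples falling between the capture times of two
    adjacent images; these are the samples integrated into the 'integral
    relative quantities' used in step 230."""
    return [s for s in imu if t_k <= s.t < t_k1]
```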
Continuing with Fig. 2, in step 230, the initializing variables of the terminal are obtained based on the geometric constraint conditions between adjacent images in the image data and on the integral relative quantities of the inertial data between the acquisition times of the adjacent images.
In some embodiments, initializing the fusion algorithm with certain variables may be necessary or beneficial when performing visual-inertial information fusion. Such variables may be referred to herein as "initializing variables", and may include initial values at the first moment of the mobile terminal (for example, when the mobile terminal starts running a certain VR or AR application). The accuracy of the initializing variables can affect the accuracy of the subsequent visual-inertial information fusion. The process of determining these initializing variables may be referred to herein as "initialization".
In some embodiments, the above variables include the representation of the velocity of the terminal in the inertial sensor coordinate system, the scale factor between the visual observations and the IMU integral results, and the representation of gravity (also referred to as the gravitational acceleration) in an image coordinate system. By estimating and solving these variables, the position, rotation and velocity of the terminal in real-world coordinates become available. Therefore, the initialization method of the embodiments of the present invention can be applied to a series of optimization methods for visual-inertial localization and reconstruction, such as nonlinear optimization, graph optimization, Kalman filtering, extended Kalman filtering and unscented Kalman filtering.
An example algorithm by which step 230 obtains the initializing variables is described below.
First, let $(\cdot)^w$ denote the coordinate system of the real world, and let $b_k$ and $c_k$ denote the IMU coordinate system and the camera coordinate system at the time the k-th frame image is captured. Let $p^X_Y$, $v^X_Y$ and $R^X_Y$ denote the three-dimensional position, velocity and rotation from coordinate system Y to coordinate system X, respectively. A rotation $R^X_Y$ is represented by the corresponding Hamilton quaternion $q^X_Y$. In addition, it is assumed in this example that the images obtained by the camera have been corrected for distortion, and that the intrinsic parameters (for example, focal length and principal point) are known. The displacement and rotation between the IMU and the camera are $p^b_c$ and $q^b_c$, respectively.
Assume that the rotations and positions of K consecutive frames are $\{R^{c_0}_{c_k},\ \bar p^{\,c_0}_{c_k}\},\ k = 0, \dots, K-1$ (up to scale). In one embodiment, these variables can be computed by monocular bundle adjustment. For the specific computation, refer to "Bundle Adjustment — A Modern Synthesis", in Proceedings of the ICCV 1999 International Workshop on Vision Algorithms: Theory and Practice; the details are not repeated here.
However, the scale of these computed variables is inconsistent with the real world. Therefore, a scale factor s is needed to map them to real-world scale. Meanwhile, in order to relate them to the integral of the IMU linear acceleration and angular velocity, the position $\bar p^{\,c_0}_{b_k}$ and rotation $R^{c_0}_{b_k}$ of the terminal at true scale, its velocity $v^{b_k}_{b_k}$, and the representation $g^{c_0}$ of gravity in the camera coordinate system can be introduced.
According to the equations of motion, the geometric constraints among the position, rotation and velocity of the terminal at the times when the k-th and (k+1)-th frame images are obtained can be expressed as follows:

$$\hat\alpha^{b_k}_{b_{k+1}} = R^{b_k}_{c_0}\Big(s\big(\bar p^{\,c_0}_{b_{k+1}} - \bar p^{\,c_0}_{b_k}\big) + \tfrac{1}{2}\, g^{c_0}\, \Delta t_k^2 - R^{c_0}_{b_k}\, v^{b_k}_{b_k}\, \Delta t_k\Big) \qquad (1)$$

$$\hat\beta^{b_k}_{b_{k+1}} = R^{b_k}_{c_0}\Big(R^{c_0}_{b_{k+1}}\, v^{b_{k+1}}_{b_{k+1}} + g^{c_0}\, \Delta t_k - R^{c_0}_{b_k}\, v^{b_k}_{b_k}\Big) \qquad (2)$$

where, for a rotation R, the corresponding quaternion representation q can be used for convenience of computation; $\Delta t_k$ is the time interval between the k-th and (k+1)-th frame images; s is the scale factor between the camera coordinate system and the real-world coordinate system; and $p^b_c$ and $q^b_c$, the displacement and rotation between the camera and the IMU, relate the camera poses to the IMU poses via $s\,\bar p^{\,c_0}_{b_k} = s\,\bar p^{\,c_0}_{c_k} - R^{c_0}_{b_k}\, p^b_c$.
$\hat\alpha^{b_k}_{b_{k+1}}$ and $\hat\beta^{b_k}_{b_{k+1}}$ denote the integral relative quantities of the IMU data between the capture times of the k-th and (k+1)-th frame images (with $\hat\gamma^{b_k}_{b_{k+1}}$ the corresponding relative rotation):

$$\hat\alpha^{b_k}_{b_{k+1}} = \iint_{t\in[t_k,\,t_{k+1}]} R^{b_k}_{b_t}\,\big(\hat a_t - b_a\big)\,dt^2 \qquad (3)$$

$$\hat\beta^{b_k}_{b_{k+1}} = \int_{t\in[t_k,\,t_{k+1}]} R^{b_k}_{b_t}\,\big(\hat a_t - b_a\big)\,dt \qquad (4)$$

$$\hat\gamma^{b_k}_{b_{k+1}} = \int_{t\in[t_k,\,t_{k+1}]} \tfrac{1}{2}\,\Omega\big(\hat\omega_t - b_\omega\big)\,\gamma^{b_k}_{b_t}\,dt \qquad (5)$$

where $\hat a_t$ and $\hat\omega_t$ are the acceleration and angular velocity at time $b_t$, and $\Omega(\cdot)$ denotes the quaternion-rate matrix of an angular velocity. $b_a$ and $b_\omega$ are sensor-related measurement offsets; these two quantities can be read from the sensor configuration file or parameter file, and can be set to 0 if they are not available.
Based on the geometric constraint expressions (1) and (2) and the integral relative quantity expressions (3)-(5) of the IMU data, an objective function for the initializing variables can be constructed as follows:

$$f_1(X) = \sum_{k}\left\|\begin{bmatrix}\hat\alpha^{b_k}_{b_{k+1}} - R^{b_k}_{c_0}\big(s(\bar p^{\,c_0}_{b_{k+1}} - \bar p^{\,c_0}_{b_k}) + \tfrac{1}{2}\, g^{c_0}\Delta t^2 - R^{c_0}_{b_k}\, v^{b_k}_{b_k}\Delta t\big)\\[2pt]\hat\beta^{b_k}_{b_{k+1}} - R^{b_k}_{c_0}\big(R^{c_0}_{b_{k+1}}\, v^{b_{k+1}}_{b_{k+1}} + g^{c_0}\Delta t - R^{c_0}_{b_k}\, v^{b_k}_{b_k}\big)\end{bmatrix}\right\|^2 \qquad (6)$$

where X denotes the initializing variables, for example $X = \big[v^{b_0}_{b_0},\ \dots,\ v^{b_{K-1}}_{b_{K-1}},\ g^{c_0},\ s\big]$; $\Delta t$ denotes the time interval between the k-th and (k+1)-th frame image data; $\hat\alpha^{b_k}_{b_{k+1}}$ and $\hat\beta^{b_k}_{b_{k+1}}$ denote the integral relative quantities of the inertial data between the k-th and (k+1)-th frame image data; $b_k$ and $c_k$ respectively denote the inertial sensor coordinate system and the image sensor coordinate system when the k-th frame image data is obtained; $p^X_Y$, $v^X_Y$ and $R^X_Y$ respectively denote the three-dimensional position, velocity and rotation from coordinate system Y to coordinate system X; and $g^{c_0}$ denotes the gravitational acceleration in the image sensor coordinate system.
By solving for the X that minimizes the above objective function $f_1(X)$, the initializing variables of the terminal can be obtained.
The above objective function only considers the algebraic properties of expressions (1) and (2), without considering the uncertainty of the variables or the noise of the IMU measurements, so the quality of the computed initializing variables X is poor.
For this reason, in the embodiments of the present invention, when step 230 obtains the initializing variables, the uncertainty measure generated by the noise of the inertial data on the integral relative quantities is further introduced.
In some embodiments, the uncertainty that the noise of the IMU measurements brings to $\hat\alpha^{b_k}_{b_{k+1}}$ and $\hat\beta^{b_k}_{b_{k+1}}$ can be computed during the integration of expressions (3)-(5). For the derivation, expressions (3)-(5) can first be discretized as follows:

$$\hat\alpha_{i+1} = \hat\alpha_i + \hat\beta_i\,\delta t + \tfrac{1}{2}\,R(\hat\gamma_i)\,(\hat a_i - b_a)\,\delta t^2 \qquad (7)$$

$$\hat\beta_{i+1} = \hat\beta_i + R(\hat\gamma_i)\,(\hat a_i - b_a)\,\delta t \qquad (8)$$

$$\hat\gamma_{i+1} = \hat\gamma_i \otimes \begin{bmatrix}1\\ \tfrac{1}{2}\,(\hat\omega_i - b_\omega)\,\delta t\end{bmatrix} \qquad (9)$$

where i indexes the i-th IMU sample between $t_k$ and $t_{k+1}$, $\delta t$ is the time difference between two IMU samples, and $\hat a_i$ and $\hat\omega_i$ are respectively the acceleration and angular velocity when the i-th inertial sample is obtained. The initial conditions of the iteration are set to $\hat\alpha_0 = 0$, $\hat\beta_0 = 0$ and $\hat\gamma_0 = [1\ 0\ 0\ 0]^T$ (the identity quaternion).
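The iteration (7)-(9) translates directly into code. Below is a minimal NumPy sketch, assuming Hamilton quaternions stored as [w, x, y, z]; the helper names are illustrative rather than taken from the embodiments:

```python
import numpy as np

def quat_mul(q, r):
    """Hamilton quaternion product, quaternions stored as [w, x, y, z]."""
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def quat_to_rot(q):
    """Rotation matrix of a unit Hamilton quaternion [w, x, y, z]."""
    w, x, y, z = q
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

def preintegrate(acc, gyr, dt, ba=np.zeros(3), bw=np.zeros(3)):
    """Iterate expressions (7)-(9) over the IMU samples between two images.
    acc, gyr: (N, 3) arrays of accelerations and angular velocities;
    dt: time difference between consecutive IMU samples.
    Returns alpha, beta (integral relative quantities) and gamma (quaternion)."""
    alpha = np.zeros(3)                       # initial condition: alpha_0 = 0
    beta = np.zeros(3)                        # beta_0 = 0
    gamma = np.array([1.0, 0.0, 0.0, 0.0])    # gamma_0 = identity rotation
    for a_i, w_i in zip(acc, gyr):
        R_i = quat_to_rot(gamma)
        a_corr = a_i - ba                      # remove accelerometer offset
        alpha = alpha + beta * dt + 0.5 * R_i @ a_corr * dt**2   # (7)
        beta = beta + R_i @ a_corr * dt                          # (8)
        dq = np.concatenate(([1.0], 0.5 * (w_i - bw) * dt))      # (9)
        gamma = quat_mul(gamma, dq)
        gamma = gamma / np.linalg.norm(gamma)  # keep the quaternion normalized
    return alpha, beta, gamma
```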
To derive the uncertainty of $\hat\alpha^{b_k}_{b_{k+1}}$ and $\hat\beta^{b_k}_{b_{k+1}}$, error states $\delta\alpha_i$, $\delta\beta_i$ and $\delta\theta_i$ can be defined to represent the difference between the current discretized approximation and the true value:

$$\delta\alpha_i = \alpha_i - \hat\alpha_i, \qquad \delta\beta_i = \beta_i - \hat\beta_i$$

For the rotation, the minimal representation of the rotation error can be used, $\gamma_i = \hat\gamma_i \otimes \begin{bmatrix}1\\ \tfrac{1}{2}\,\delta\theta_i\end{bmatrix}$, and the initial conditions of the iteration are set to $\delta\alpha_0 = \delta\beta_0 = \delta\theta_0 = 0$.
In this way, the integrals $\hat\alpha$, $\hat\beta$ and $\hat\gamma$ can be computed iteratively based on expressions (7)-(9), and at the same time the covariance matrix P of the error state $\delta x_i = [\delta\alpha_i^T\ \ \delta\beta_i^T\ \ \delta\theta_i^T\ \ \delta b_a^T\ \ \delta b_\omega^T]^T$ can be computed iteratively, as shown in expressions (10) and (11):

$$\delta x_{i+1} = F_i\,\delta x_i + G_i\, n_i \qquad (10)$$

$$P_{i+1} = F_i\, P_i\, F_i^T + G_i\, Q\, G_i^T, \qquad P_0 = 0 \qquad (11)$$

where $n_i = [n_a^T\ \ n_\omega^T\ \ n_{b_a}^T\ \ n_{b_\omega}^T]^T$, and $\lfloor\omega\rfloor_\times$ (which appears inside $F_i$ and $G_i$) denotes the operation that turns a vector into its skew-symmetric matrix, so that $\lfloor\omega\rfloor_\times v = \omega \times v$. Here $n_a$, $n_\omega$, $n_{b_a}$ and $n_{b_\omega}$ are respectively the measurement noise of the accelerometer, the measurement noise of the gyroscope, the noise of the accelerometer bias and the noise of the gyroscope bias. Q is a diagonal matrix, as shown in expression (12):

$$Q = \mathrm{diag}\big(\sigma_a^2 I_3,\ \sigma_\omega^2 I_3,\ \sigma_{b_a}^2 I_3,\ \sigma_{b_\omega}^2 I_3\big) \qquad (12)$$

Expression (12) contains the noise variance $\sigma_a^2$ of the acceleration, the noise variance $\sigma_\omega^2$ of the angular velocity, the Gaussian random-walk noise variance $\sigma_{b_a}^2$ of the acceleration, and the Gaussian random-walk noise variance $\sigma_{b_\omega}^2$ of the angular velocity.
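The propagation (10)-(11) can likewise be sketched in code. The text above does not reproduce the Jacobians $F_i$ and $G_i$, so the block below fills them in with the standard first-order error-state Jacobians for this 15-dimensional state — an assumption for illustration, not a quotation from the patent (R_i can be supplied by `quat_to_rot` from the previous sketch):

```python
import numpy as np

def skew(v):
    """Map a vector to its skew-symmetric matrix: skew(w) @ x == np.cross(w, x)."""
    return np.array([[0, -v[2], v[1]],
                     [v[2], 0, -v[0]],
                     [-v[1], v[0], 0]])

def propagate_covariance(P, R_i, a_i, w_i, ba, bw, dt, Q):
    """One step of expressions (10)-(11) for the 15-dim error state
    [d_alpha, d_beta, d_theta, d_ba, d_bw], first-order discretization.
    P: 15x15 covariance; R_i: rotation matrix of gamma_i; Q: 12x12 matrix (12)."""
    I3 = np.eye(3)
    Fc = np.zeros((15, 15))               # continuous-time error dynamics
    Fc[0:3, 3:6] = I3                     # d(d_alpha)/dt = d_beta
    Fc[3:6, 6:9] = -R_i @ skew(a_i - ba)  # velocity error from rotation error
    Fc[3:6, 9:12] = -R_i                  # velocity error from accel bias
    Fc[6:9, 6:9] = -skew(w_i - bw)        # rotation error dynamics
    Fc[6:9, 12:15] = -I3                  # rotation error from gyro bias
    Gc = np.zeros((15, 12))               # noise input matrix
    Gc[3:6, 0:3] = -R_i                   # accelerometer noise n_a
    Gc[6:9, 3:6] = -I3                    # gyroscope noise n_w
    Gc[9:12, 6:9] = I3                    # accel bias random walk n_ba
    Gc[12:15, 9:12] = I3                  # gyro bias random walk n_bw
    F = np.eye(15) + Fc * dt              # first-order discrete F_i
    G = Gc * dt                           # first-order discrete G_i
    return F @ P @ F.T + G @ Q @ G.T      # expression (11)
```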
Based on the obtained $P^{b_k}_{b_{k+1}}$, the 6×6 upper-left sub-matrix $\Lambda^{b_k}_{b_{k+1}}$ is taken out; it corresponds to the uncertainty of $\hat\alpha^{b_k}_{b_{k+1}}$ and $\hat\beta^{b_k}_{b_{k+1}}$, as shown below:

$$P^{b_k}_{b_{k+1}} = \begin{bmatrix}\Lambda^{b_k}_{b_{k+1}} & P_a\\ P_b & P_c\end{bmatrix}$$

where $\Lambda^{b_k}_{b_{k+1}} \in \mathbb{R}^{6\times 6}$, $P_a \in \mathbb{R}^{6\times 9}$, $P_b \in \mathbb{R}^{9\times 6}$, $P_c \in \mathbb{R}^{9\times 9}$.
So far, the following objective function, modified relative to expression (6), can be obtained:

$$f_2(X) = \sum_{k}\left\|\begin{bmatrix}\hat\alpha^{b_k}_{b_{k+1}} - R^{b_k}_{c_0}\big(s(\bar p^{\,c_0}_{b_{k+1}} - \bar p^{\,c_0}_{b_k}) + \tfrac{1}{2}\, g^{c_0}\Delta t^2 - R^{c_0}_{b_k}\, v^{b_k}_{b_k}\Delta t\big)\\[2pt]\hat\beta^{b_k}_{b_{k+1}} - R^{b_k}_{c_0}\big(R^{c_0}_{b_{k+1}}\, v^{b_{k+1}}_{b_{k+1}} + g^{c_0}\Delta t - R^{c_0}_{b_k}\, v^{b_k}_{b_k}\big)\end{bmatrix}\right\|^2_{\left(\Lambda^{b_k}_{b_{k+1}}\right)^{-1}} \qquad (13)$$
Next, by solving for the X that minimizes the objective function $f_2(X)$, the initializing variables of the terminal can be obtained, including: the scale factor s between the visual observations and the integral results of the inertial data, the representation $g^{c_0}$ of the gravitational acceleration at camera time 0, and the representation $v^{b_k}_{b_k}$ of the velocity in the IMU coordinate system. Meanwhile, compared with expression (6), the objective function of expression (13) not only takes into account the uncertainty of the IMU integral terms $\hat\alpha^{b_k}_{b_{k+1}}$ and $\hat\beta^{b_k}_{b_{k+1}}$, but also couples them to each other, which can greatly improve the stability of initialization.
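Since the residual in expression (13) is linear in $X = [v_0, \dots, v_{K-1}, g^{c_0}, s]$, minimizing $f_2$ reduces to a weighted linear least-squares problem. The following sketch assembles and solves its normal equations with NumPy; the block layout and argument names are illustrative, and the per-frame rotations and up-to-scale positions are assumed to come from the vision front end as described above:

```python
import numpy as np

def solve_initialization(R_c0_b, p_c0_b, alphas, betas, Lambdas, dts):
    """Minimize f_2(X) in expression (13) for X = [v_0..v_{K-1}, g^{c0}, s].
    R_c0_b: list of K rotations R^{c0}_{b_k}; p_c0_b: list of K up-to-scale
    positions; alphas, betas: integral relative quantities per adjacent pair;
    Lambdas: their 6x6 covariances; dts: frame time intervals."""
    K = len(R_c0_b)
    n = 3 * K + 4                       # unknowns: K velocities, g (3), s (1)
    A = np.zeros((n, n))
    b = np.zeros(n)
    for k in range(K - 1):
        dt = dts[k]
        R_bk_c0 = R_c0_b[k].T           # R^{b_k}_{c0}
        H = np.zeros((6, n))
        z = np.zeros(6)
        # alpha rows of the residual in (13):
        H[0:3, 3*k:3*k+3] = -np.eye(3) * dt                   # velocity v_k
        H[0:3, 3*K:3*K+3] = 0.5 * R_bk_c0 * dt**2             # gravity g^{c0}
        H[0:3, 3*K+3] = R_bk_c0 @ (p_c0_b[k+1] - p_c0_b[k])   # scale s
        z[0:3] = alphas[k]
        # beta rows of the residual in (13):
        H[3:6, 3*k:3*k+3] = -np.eye(3)                        # velocity v_k
        H[3:6, 3*(k+1):3*(k+1)+3] = R_bk_c0 @ R_c0_b[k+1]     # velocity v_{k+1}
        H[3:6, 3*K:3*K+3] = R_bk_c0 * dt                      # gravity g^{c0}
        z[3:6] = betas[k]
        W = np.linalg.inv(Lambdas[k])   # information matrix (Lambda^{-1})
        A += H.T @ W @ H
        b += H.T @ W @ z
    X = np.linalg.solve(A, b)           # normal equations of the weighted LS
    v = X[:3*K].reshape(K, 3)           # velocities in the IMU frames
    g_c0 = X[3*K:3*K+3]                 # gravity in the first camera frame
    s = X[3*K+3]                        # metric scale
    return v, g_c0, s
```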
In conclusion the initialization scheme for the fusion of vision inertial navigation information provided according to embodiments of the present invention, passes through The uncertainty measure generated due to the noise of inertial guidance data to integral relative quantity is introduced, the ease for use of initialization is substantially increased And stability.
Fig. 3 is a flowchart of an initialization method for visual-inertial navigation information fusion according to another exemplary embodiment. As shown in Fig. 3, the method can be executed by the electronic device shown in Fig. 1, and may include the following steps 310-350.
In step 310, image data and inertial data are obtained respectively from the image sensor and the inertial sensor of the terminal.
In step 330, the initializing variables of the terminal are obtained based on the geometric constraint conditions between adjacent images in the image data and on the integral relative quantities of the inertial data between the acquisition times of the adjacent images.
When step 330 obtains the initializing variables of the terminal, the uncertainty measure generated by the noise of the inertial data on the integral relative quantities is introduced.
For the details of steps 310 and 330, see the detailed descriptions of steps 210 and 230 in the embodiment of Fig. 2, which are not repeated here.
In step 350, the position, rotation and velocity of the terminal in the real-world coordinate system are obtained according to the initializing variables.
Continuing the example of step 230 in Fig. 2, the initializing variables obtained by solving expression (13) may include the scale factor s between the visual observations and the integral results of the inertial data, the representation $g^{c_0}$ of the gravitational acceleration at camera time 0, and the representation $v^{b_k}_{b_k}$ of the velocity in the IMU coordinate system. From these, the displacement, rotation and velocity of the terminal in the world coordinate system can be computed.
A simple example of the computation process is described below; it does not limit the protection scope of the present invention.
First, based on expressions (1) and (2), $R^{c_0}_{b_k}$ and $\bar p^{\,c_0}_{b_k}$ can be computed. Meanwhile, since the known gravitational acceleration can be represented in the world coordinate system as $g^w = [0\ 0\ 9.8]^T$, from the two variables $g^{c_0}$ and $g^w$ we can compute the rotation $R^w_{c_0}$ that aligns them.
Next, from the description of expressions (1) and (2) above, it can be seen that for any time k, $R^w_{b_k} = R^w_{c_0}\, R^{c_0}_{b_k}$ can be computed; similarly, $v^w_{b_k} = R^w_{b_k}\, v^{b_k}_{b_k}$ can be obtained.
In this way, the position of the terminal at the first moment can be defined as the origin of the world coordinate system, i.e. $p^w_{b_0} = 0$. Accordingly, for any time k, the position and velocity of the terminal in the real world can be obtained, as shown in expressions (14) and (15) respectively:

$$p^w_{b_k} = R^w_{c_0}\big(s\,\bar p^{\,c_0}_{b_k} - s\,\bar p^{\,c_0}_{b_0}\big) \qquad (14)$$

$$v^w_{b_k} = R^w_{b_k}\, v^{b_k}_{b_k} \qquad (15)$$
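The gravity alignment and expressions (14)-(15) can be sketched as follows; the axis-angle construction of $R^w_{c_0}$ is one common choice and is an assumption here, not a construction prescribed by the text:

```python
import numpy as np

def rot_between(u, v):
    """Rotation matrix taking unit vector u to unit vector v (axis-angle form)."""
    axis = np.cross(u, v)
    s, c = np.linalg.norm(axis), np.dot(u, v)
    if s < 1e-12:
        # (anti)parallel vectors; for the antiparallel case pick a
        # 180-degree rotation about the x-axis as one valid choice
        return np.eye(3) if c > 0 else np.diag([1.0, -1.0, -1.0])
    K = np.array([[0, -axis[2], axis[1]],
                  [axis[2], 0, -axis[0]],
                  [-axis[1], axis[0], 0]]) / s
    return np.eye(3) + s * K + (1 - c) * (K @ K)

def to_world(R_c0_b, p_c0_b, v_body, g_c0, s):
    """Recover world-frame pose and velocity per expressions (14) and (15).
    The first body position is defined as the world origin: p^w_{b_0} = 0."""
    g_w = np.array([0.0, 0.0, 9.8])
    R_w_c0 = rot_between(g_c0 / np.linalg.norm(g_c0), g_w / np.linalg.norm(g_w))
    poses, vels = [], []
    for R_k, p_k, v_k in zip(R_c0_b, p_c0_b, v_body):
        R_w_bk = R_w_c0 @ R_k                          # rotation at time k
        p_w_bk = R_w_c0 @ (s * p_k - s * p_c0_b[0])    # expression (14)
        v_w_bk = R_w_bk @ v_k                          # expression (15)
        poses.append((R_w_bk, p_w_bk))
        vels.append(v_w_bk)
    return poses, vels
```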
In conclusion the initialization scheme for the fusion of vision inertial navigation information provided according to embodiments of the present invention, passes through The uncertainty measure generated due to the noise of inertial guidance data to integral relative quantity is introduced, the ease for use of initialization is substantially increased And stability, terminal displacement, rotation and the speed obtained based on initializing variable are also more accurate and reliable.
The following are device embodiments of the present invention, which can be used to execute the above initialization method embodiments for visual-inertial navigation information fusion. For details not disclosed in the device embodiments, please refer to the above initialization method embodiments for visual-inertial navigation information fusion of the present invention.
Fig. 4 is a block diagram of an initialization device for visual-inertial navigation information fusion according to an exemplary embodiment. As shown in Fig. 4, the device can be realized by the electronic device shown in Fig. 1, and may include an obtaining module 410 and an initialization module 430.
The obtaining module 410 is configured to obtain image data and inertial data respectively from the image sensor and the inertial sensor of the terminal. The initialization module 430 is configured to obtain the initializing variables of the terminal based on the geometric constraint conditions between adjacent images in the image data and on the integral relative quantities of the inertial data between the acquisition times of the adjacent images; when obtaining the initializing variables of the terminal, the initialization module introduces the uncertainty measure generated by the noise of the inertial data on the integral relative quantities.
In one embodiment, the initialization module 430 includes an objective-function unit and a solving unit. The objective-function unit is configured to construct the objective function of the initializing variables based on the geometric constraints and the integral relative quantities; the solving unit is configured to obtain the initializing variables by maximizing or minimizing the value of the objective function.
In one embodiment, the objective-function unit can also be configured to iteratively obtain the discretized representation of the integral relative quantities, obtain the covariance matrix of the integral relative quantities based on the error state, and obtain the updated objective function of the initializing variables according to the covariance matrix. For a worked example, see the specific descriptions of expressions (7)-(12) above, which are not repeated here.
In one embodiment, the objective function of the initializing variables constructed by the objective-function unit based on the geometric constraints and the integral relative quantities is as shown in expression (6) above.
In one embodiment, objective function unit can also be according to the covariance matrix by the target letter of initializing variable Number is updated to as shown in above formula (13).
In conclusion the initialization scheme for the fusion of vision inertial navigation information provided according to embodiments of the present invention, passes through The uncertainty measure generated due to the noise of inertial guidance data to integral relative quantity is introduced, the ease for use of initialization is substantially increased And stability.
Fig. 5 is a block diagram of an initialization device for visual-inertial navigation information fusion according to another exemplary embodiment. As shown in Fig. 5, the device can be realized by the electronic device shown in Fig. 1, and may include an obtaining module 510, an initialization module 530 and a computing module 550.
For the operations of the obtaining module 510 and the initialization module 530, see the specific descriptions of the obtaining module 410 and the initialization module 430 in the embodiment of Fig. 4, which are not repeated here.
In one embodiment, the computing module 550 is configured to obtain the position, rotation and velocity of the terminal in the real-world coordinate system according to the initializing variables. The initializing variables here may include, but are not limited to, the representation of the velocity of the terminal in the inertial sensor coordinate system, the scale factor between the visual observations and the integral results of the inertial data, and the gravitational acceleration in the image sensor coordinate system.
In conclusion the initialization scheme for the fusion of vision inertial navigation information provided according to embodiments of the present invention, passes through The uncertainty measure generated due to the noise of inertial guidance data to integral relative quantity is introduced, the ease for use of initialization is substantially increased And stability, terminal displacement, rotation and the speed obtained based on initializing variable are also more accurate and reliable.
Regarding the devices in the above embodiments, the specific manner in which each module performs its operations has been described in detail in the embodiments of the related method, and will not be elaborated here.
It should be noted that although several modules or units of the device for performing actions are mentioned in the above detailed description, this division is not mandatory. In fact, according to embodiments of the present disclosure, the features and functions of two or more modules or units described above may be embodied in one module or unit; conversely, the features and functions of one module or unit described above may be further divided and embodied by multiple modules or units. The components shown as modules or units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the disclosed solutions.
Through the above description of the embodiments, those skilled in the art can readily understand that the example embodiments described herein may be implemented by software, or by software in combination with the necessary hardware. Therefore, the technical solutions according to the embodiments of the present invention can be embodied in the form of a software product, which can be stored in a non-volatile storage medium (which can be a CD-ROM, a USB flash drive, a removable hard disk, etc.) or on a network, and includes several instructions to cause a computing device (which can be a personal computer, a server, a touch terminal, a network device, etc.) to execute the method according to the embodiments of the present invention.
Those skilled in the art will readily conceive of other embodiments of the present invention after considering the specification and practicing the invention disclosed herein. This application is intended to cover any variations, uses or adaptations of the present invention that follow the general principles of the present invention and include common knowledge or conventional techniques in the art not disclosed herein. The specification and examples are to be regarded as exemplary only, with the true scope and spirit of the present invention being indicated by the following claims.
It should be understood that the present invention is not limited to the precise structures described above and shown in the accompanying drawings, and various modifications and changes may be made without departing from its scope. The scope of the present invention is limited only by the appended claims.

Claims (11)

1. An initialization method for visual-inertial navigation information fusion, characterized in that the method comprises:
obtaining image data and inertial data respectively from an image sensor and an inertial sensor of a terminal; and
obtaining initializing variables of the terminal based on geometric constraint conditions between adjacent images in the image data and on integral relative quantities of the inertial data between the acquisition times of the adjacent images,
wherein, when the initializing variables of the terminal are obtained, an uncertainty measure generated by the noise of the inertial data on the integral relative quantities is introduced.
2. The method according to claim 1, characterized in that obtaining the initializing variables of the terminal comprises:
constructing an objective function of the initializing variables based on the geometric constraints and the integral relative quantities; and
obtaining the initializing variables by maximizing or minimizing the value of the objective function.
3. The method according to claim 2, characterized in that obtaining the initializing variables of the terminal further comprises:
iteratively obtaining a discretized representation of the integral relative quantities, and obtaining a covariance matrix of the integral relative quantities based on an error state; and
obtaining an updated objective function of the initializing variables according to the covariance matrix.
4. The method according to claim 3, characterized in that constructing the objective function of the initializing variables based on the geometric constraints and the integral relative quantities comprises:
constructing the objective function as:

$$f_1(X) = \sum_{k}\left\|\begin{bmatrix}\hat\alpha^{b_k}_{b_{k+1}} - R^{b_k}_{c_0}\big(s(\bar p^{\,c_0}_{b_{k+1}} - \bar p^{\,c_0}_{b_k}) + \tfrac{1}{2}\, g^{c_0}\Delta t^2 - R^{c_0}_{b_k}\, v^{b_k}_{b_k}\Delta t\big)\\[2pt]\hat\beta^{b_k}_{b_{k+1}} - R^{b_k}_{c_0}\big(R^{c_0}_{b_{k+1}}\, v^{b_{k+1}}_{b_{k+1}} + g^{c_0}\Delta t - R^{c_0}_{b_k}\, v^{b_k}_{b_k}\big)\end{bmatrix}\right\|^2$$

wherein $\Delta t$ indicates the time interval between the k-th and (k+1)-th frame image data; $\hat\alpha^{b_k}_{b_{k+1}}$ and $\hat\beta^{b_k}_{b_{k+1}}$ indicate the integral relative quantities of the inertial data between the k-th and (k+1)-th frame image data; $b_k$ and $c_k$ respectively indicate the inertial sensor coordinate system and the image sensor coordinate system when the k-th frame image data is obtained; $p^X_Y$, $v^X_Y$ and $R^X_Y$ respectively indicate the three-dimensional position, velocity and rotation from coordinate system Y to coordinate system X; and $g^{c_0}$ indicates the gravitational acceleration in the image sensor coordinate system.
5. The method according to claim 4, characterized in that obtaining the updated objective function of the initializing variables according to the covariance matrix comprises:
obtaining the updated objective function as:

$$f_2(X) = \sum_{k}\left\|\begin{bmatrix}\hat\alpha^{b_k}_{b_{k+1}} - R^{b_k}_{c_0}\big(s(\bar p^{\,c_0}_{b_{k+1}} - \bar p^{\,c_0}_{b_k}) + \tfrac{1}{2}\, g^{c_0}\Delta t^2 - R^{c_0}_{b_k}\, v^{b_k}_{b_k}\Delta t\big)\\[2pt]\hat\beta^{b_k}_{b_{k+1}} - R^{b_k}_{c_0}\big(R^{c_0}_{b_{k+1}}\, v^{b_{k+1}}_{b_{k+1}} + g^{c_0}\Delta t - R^{c_0}_{b_k}\, v^{b_k}_{b_k}\big)\end{bmatrix}\right\|^2_{\left(\Lambda^{b_k}_{b_{k+1}}\right)^{-1}}$$
6. The method according to claim 4, characterized in that iteratively obtaining the discretized representation of the integral relative quantities comprises:
obtaining the discretized representation of the integral relative quantities based on the following formulas:

$$\hat\alpha_{i+1} = \hat\alpha_i + \hat\beta_i\,\delta t + \tfrac{1}{2}\,R(\hat\gamma_i)\,(\hat a_i - b_a)\,\delta t^2$$

$$\hat\beta_{i+1} = \hat\beta_i + R(\hat\gamma_i)\,(\hat a_i - b_a)\,\delta t$$

$$\hat\gamma_{i+1} = \hat\gamma_i \otimes \begin{bmatrix}1\\ \tfrac{1}{2}\,(\hat\omega_i - b_\omega)\,\delta t\end{bmatrix}$$

wherein i indicates the i-th inertial datum between the k-th and (k+1)-th frame image data, $\delta t$ is the time difference between adjacent inertial data, $\hat a_i$ and $\hat\omega_i$ are respectively the acceleration and the angular velocity when the i-th inertial datum is obtained, and the initial conditions of the iteration are set to $\hat\alpha_0 = 0$, $\hat\beta_0 = 0$ and $\hat\gamma_0 = [1\ 0\ 0\ 0]^T$.
7. The method according to claim 6, characterized in that obtaining the covariance matrix $P^{b_k}_i$ of the integral relative quantities based on the error state comprises:
obtaining the covariance matrix based on the following formula:

$$P_{i+1} = F_i\, P_i\, F_i^T + G_i\, Q\, G_i^T, \qquad P_0 = 0$$

wherein Q is a diagonal matrix representing the noise and the Gaussian random walk of the inertial data, the inertial data including acceleration and angular velocity.
8. The method according to any one of claims 1 to 7, characterized in that the method further comprises:
obtaining the position, rotation and velocity of the terminal in the real-world coordinate system according to the initializing variables,
wherein the initializing variables include a representation of the velocity of the terminal in the inertial sensor coordinate system, a scale factor between visual observations and the integral results of the inertial data, and the gravitational acceleration in the image sensor coordinate system.
9. An initialization device for visual-inertial navigation information fusion, characterized in that the device comprises:
an obtaining module, configured to obtain image data and inertial data respectively from an image sensor and an inertial sensor of a terminal; and
an initialization module, configured to obtain initializing variables of the terminal based on geometric constraint conditions between adjacent images in the image data and on integral relative quantities of the inertial data between the acquisition times of the adjacent images,
wherein the initialization module, when obtaining the initializing variables of the terminal, introduces an uncertainty measure generated by the noise of the inertial data on the integral relative quantities.
10. A computer-readable storage medium on which a computer program is stored, characterized in that, when executed by a processor, the computer program implements the initialization method for visual-inertial navigation information fusion according to any one of claims 1 to 8.
11. An electronic device, characterized by comprising:
a processor; and
a memory on which computer-readable instructions are stored, wherein, when executed by the processor, the computer-readable instructions implement the initialization method for visual-inertial navigation information fusion according to any one of claims 1 to 8.
CN201811012768.5A 2018-08-31 2018-08-31 Initialization method and device for visual and inertial navigation information fusion and storage medium Active CN109147058B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811012768.5A CN109147058B (en) 2018-08-31 2018-08-31 Initialization method and device for visual and inertial navigation information fusion and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811012768.5A CN109147058B (en) 2018-08-31 2018-08-31 Initialization method and device for visual and inertial navigation information fusion and storage medium

Publications (2)

Publication Number Publication Date
CN109147058A 2019-01-04
CN109147058B CN109147058B (en) 2022-09-20

Family

ID=64826070

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811012768.5A Active CN109147058B (en) 2018-08-31 2018-08-31 Initialization method and device for visual and inertial navigation information fusion and storage medium

Country Status (1)

Country Link
CN (1) CN109147058B (en)



Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090248304A1 (en) * 2008-03-28 2009-10-01 Regents Of The University Of Minnesota Vision-aided inertial navigation
US20160327395A1 (en) * 2014-07-11 2016-11-10 Regents Of The University Of Minnesota Inverse sliding-window filters for vision-aided inertial navigation systems
CN105492985A (en) * 2014-09-05 2016-04-13 深圳市大疆创新科技有限公司 Multi-sensor environment map building
US20170336220A1 (en) * 2016-05-20 2017-11-23 Daqri, Llc Multi-Sensor Position and Orientation Determination System and Device
US20170343356A1 (en) * 2016-05-25 2017-11-30 Regents Of The University Of Minnesota Resource-aware large-scale cooperative 3d mapping using multiple mobile devices
US20180018787A1 (en) * 2016-07-18 2018-01-18 King Abdullah University Of Science And Technology System and method for three-dimensional image reconstruction using an absolute orientation sensor
CN107255476A * 2017-07-06 2017-10-17 青岛海通胜行智能科技有限公司 Indoor positioning method and device based on inertial data and visual features
CN107869989A * 2017-11-06 2018-04-03 东北大学 Localization method and system based on visual-inertial navigation information fusion
CN108051002A * 2017-12-04 2018-05-18 上海文什数据科技有限公司 Spatial localization method and system for a transport vehicle based on inertial-measurement-aided vision
CN108427479A * 2018-02-13 2018-08-21 腾讯科技(深圳)有限公司 Wearable device, and processing system, method and readable medium for ambient image data

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
C. Guo: "Efficient Visual-Inertial Navigation using a Rolling-Shutter Camera with Inaccurate Timestamps", Robotics: Science and Systems *
Riccardo Antonello: "Motion reconstruction with a low-cost MEMS IMU for the automation of human operated specimen manipulation", 2011 IEEE International Symposium on Industrial Electronics *
Yao Erliang et al.: "Vision-IMU based simultaneous localization and mapping algorithm for robots", Chinese Journal of Scientific Instrument *
Wang Cong et al.: "Simultaneous localization and mapping method for an air-duct cleaning robot based on inertial navigation and stereo vision", Journal of Mechanical Engineering *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109767470A * 2019-01-07 2019-05-17 浙江商汤科技开发有限公司 Tracking system initialization method and terminal device
CN109767470B (en) * 2019-01-07 2021-03-02 浙江商汤科技开发有限公司 Tracking system initialization method and terminal equipment
CN111539982A (en) * 2020-04-17 2020-08-14 北京维盛泰科科技有限公司 Visual inertial navigation initialization method based on nonlinear optimization in mobile platform
CN111539982B (en) * 2020-04-17 2023-09-15 北京维盛泰科科技有限公司 Visual inertial navigation initialization method based on nonlinear optimization in mobile platform
WO2022228056A1 (en) * 2021-04-30 2022-11-03 华为技术有限公司 Human-computer interaction method and device
CN113465596A (en) * 2021-06-25 2021-10-01 电子科技大学 Four-rotor unmanned aerial vehicle positioning method based on multi-sensor fusion
CN114323010A (en) * 2021-12-30 2022-04-12 北京达佳互联信息技术有限公司 Initial feature determination method and device, electronic equipment and storage medium
CN114323010B (en) * 2021-12-30 2024-03-01 北京达佳互联信息技术有限公司 Initial feature determination method, device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN109147058B (en) 2022-09-20

Similar Documents

Publication Publication Date Title
CN109147058A (en) Initial method and device and storage medium for the fusion of vision inertial navigation information
US10674142B2 (en) Optimized object scanning using sensor fusion
US10852847B2 (en) Controller tracking for multiple degrees of freedom
CN109186592A Method and apparatus, and storage medium, for visual-inertial navigation information fusion
CN108700947A System and method for concurrent odometry and mapping
US10733798B2 (en) In situ creation of planar natural feature targets
CN110599549A (en) Interface display method, device and storage medium
CN109074154A Hover touch input compensation in augmented and/or virtual reality
CN109074149A Object tracking in a head-mounted reference frame for an augmented or virtual reality environment
CN108008817B Method for realizing virtual-real fusion
WO2022005717A1 (en) Generating ground truth datasets for virtual reality experiences
US20160210761A1 (en) 3d reconstruction
JP2009278456A (en) Video display device
US20230140737A1 (en) Drift cancelation for portable object detection and tracking
JP7182020B2 (en) Information processing method, device, electronic device, storage medium and program
CN114812609A (en) Parameter calibration method and device for visual inertial system, electronic equipment and medium
CN109040525A (en) Image processing method, device, computer-readable medium and electronic equipment
US10931926B2 (en) Method and apparatus for information display, and display device
CN116348916A (en) Azimuth tracking for rolling shutter camera
Kim et al. Oddeyecam: A sensing technique for body-centric peephole interaction using wfov rgb and nfov depth cameras
WO2023140990A1 (en) Visual inertial odometry with machine learning depth
KR101743888B1 (en) User Terminal and Computer Implemented Method for Synchronizing Camera Movement Path and Camera Movement Timing Using Touch User Interface
US20220335638A1 (en) Depth estimation using a neural network
US20240126369A1 (en) Information processing system and information processing method
US20230421717A1 (en) Virtual selfie stick

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant