CN110163909A - Method, apparatus and storage medium for obtaining device pose - Google Patents
Method, apparatus and storage medium for obtaining device pose Download PDF Info
- Publication number
- CN110163909A CN110163909A CN201810148359.1A CN201810148359A CN110163909A CN 110163909 A CN110163909 A CN 110163909A CN 201810148359 A CN201810148359 A CN 201810148359A CN 110163909 A CN110163909 A CN 110163909A
- Authority
- CN
- China
- Prior art keywords
- pose
- equipment
- image frame
- imu
- bundle adjustment
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Links
- 238000000034 method Methods 0.000 title claims abstract description 86
- 238000005259 measurement Methods 0.000 claims abstract description 23
- 230000008878 coupling Effects 0.000 claims abstract description 20
- 238000010168 coupling process Methods 0.000 claims abstract description 20
- 238000005859 coupling reaction Methods 0.000 claims abstract description 20
- 238000013135 deep learning Methods 0.000 claims abstract description 9
- 238000005457 optimization Methods 0.000 claims description 41
- 230000033001 locomotion Effects 0.000 claims description 34
- 238000001914 filtration Methods 0.000 claims description 12
- 230000007774 longterm Effects 0.000 claims description 6
- 210000005252 bulbus oculi Anatomy 0.000 claims description 5
- 238000012545 processing Methods 0.000 claims description 5
- 210000003128 head Anatomy 0.000 claims description 4
- 238000010586 diagram Methods 0.000 description 21
- 238000004590 computer program Methods 0.000 description 17
- 238000001514 detection method Methods 0.000 description 14
- 230000003068 static effect Effects 0.000 description 12
- 230000008569 process Effects 0.000 description 9
- 230000006870 function Effects 0.000 description 7
- 238000004804 winding Methods 0.000 description 7
- 230000005540 biological transmission Effects 0.000 description 5
- 230000001133 acceleration Effects 0.000 description 4
- 238000004364 calculation method Methods 0.000 description 4
- 230000004886 head movement Effects 0.000 description 4
- 230000008859 change Effects 0.000 description 3
- 210000001508 eye Anatomy 0.000 description 3
- 230000011664 signaling Effects 0.000 description 3
- 238000004422 calculation algorithm Methods 0.000 description 2
- 238000012937 correction Methods 0.000 description 2
- 230000004927 fusion Effects 0.000 description 2
- 239000011521 glass Substances 0.000 description 2
- 238000001093 holography Methods 0.000 description 2
- 238000005286 illumination Methods 0.000 description 2
- 210000001525 retina Anatomy 0.000 description 2
- 238000009825 accumulation Methods 0.000 description 1
- 238000004458 analytical method Methods 0.000 description 1
- 230000008901 benefit Effects 0.000 description 1
- 238000005540 close-coupling method Methods 0.000 description 1
- 238000001816 cooling Methods 0.000 description 1
- 230000000694 effects Effects 0.000 description 1
- 239000000284 extract Substances 0.000 description 1
- 238000000605 extraction Methods 0.000 description 1
- 230000005484 gravity Effects 0.000 description 1
- 238000012986 modification Methods 0.000 description 1
- 230000004048 modification Effects 0.000 description 1
- 238000000465 moulding Methods 0.000 description 1
- 230000008447 perception Effects 0.000 description 1
- 238000004321 preservation Methods 0.000 description 1
- 230000009467 reduction Effects 0.000 description 1
- 238000009877 rendering Methods 0.000 description 1
- 238000005070 sampling Methods 0.000 description 1
- 230000001360 synchronised effect Effects 0.000 description 1
- 230000000007 visual effect Effects 0.000 description 1
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/03—Arrangements for converting the position or the displacement of a member into a coded form
- G06F3/033—Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor
- G06F3/0346—Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/10—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration
- G01C21/12—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
- G01C21/16—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
-
- G—PHYSICS
- G03—PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
- G03H—HOLOGRAPHIC PROCESSES OR APPARATUS
- G03H1/00—Holographic processes or apparatus using light, infrared or ultraviolet waves for obtaining holograms or for obtaining an image from them; Details peculiar thereto
- G03H1/04—Processes or apparatus for producing holograms
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F1/00—Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
- G06F1/16—Constructional details or arrangements
- G06F1/1613—Constructional details or arrangements for portable computers
- G06F1/163—Wearable computers, e.g. on a belt
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/002—Specific input/output arrangements not covered by G06F3/01 - G06F3/16
- G06F3/005—Input arrangements through a video camera
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/03—Arrangements for converting the position or the displacement of a member into a coded form
- G06F3/033—Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor
- G06F3/038—Control and interface arrangements therefor, e.g. drivers or device-embedded control circuitry
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0481—Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N5/00—Computing arrangements using knowledge-based models
- G06N5/04—Inference or reasoning models
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Human Computer Interaction (AREA)
- Computer Hardware Design (AREA)
- Radar, Positioning & Navigation (AREA)
- Remote Sensing (AREA)
- Evolutionary Computation (AREA)
- Artificial Intelligence (AREA)
- Computational Linguistics (AREA)
- Data Mining & Analysis (AREA)
- Computing Systems (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Multimedia (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Automation & Control Theory (AREA)
- Image Analysis (AREA)
- Analysing Materials By The Use Of Radiation (AREA)
Abstract
The present disclosure proposes a method for obtaining the pose of a device. The method comprises: obtaining inertial measurement unit (IMU) data; and estimating a base pose of the device and IMU parameters from the IMU data using bundle adjustment, where any one or any combination of the following operations is applied: constraining the base pose of the device using the device's stationary state; optimizing the IMU parameters using loosely-coupled bundle adjustment; integrating the IMU data to obtain a pose change, obtaining a second pose of the device from the pose change and the base pose, and, after the base pose is updated, compensating for the difference between the second pose and the updated base pose; predicting the future pose of the device using a deep learning network; and warping a history image frame to obtain the current image frame according to the device poses corresponding to the history and current frames. The disclosure also proposes a corresponding apparatus and storage medium.
Description
Technical field
This disclosure relates to the field of simultaneous localization and mapping (SLAM), and in particular to a method, apparatus and storage medium for obtaining the pose of a device.
Background
Existing tightly-coupled multi-sensor methods can obtain a high-precision device pose (a device pose being the position and orientation of the device). However, the many constraints they must consider make computation slow and latency high. A pose determination method that is fast to compute and low in latency is therefore needed.
Summary of the invention
To at least partially solve or mitigate the above problem, embodiments of the present disclosure propose a method, apparatus and computer storage medium for obtaining the pose of a device.
According to a first aspect of the disclosure, a method for obtaining the pose of a device is proposed. The method comprises: obtaining inertial measurement unit (IMU) data; and estimating a first pose of the device and IMU parameters from the IMU data, where any one or any combination of the following operations is applied: during bundle adjustment, constraining the first pose of the device using the device's stationary state; during bundle adjustment, optimizing the IMU parameters using loosely-coupled bundle adjustment; integrating the IMU data to obtain a pose change, obtaining a second pose of the device from the pose change and the first pose, and, after the first pose is updated, compensating for the difference between the second pose and the updated first pose; predicting the pose of the device at a future moment, where a deep learning network is used to predict the device's future pose; and drawing a virtual object according to the device's pose at the future moment, where a history image frame is warped according to the device poses corresponding to the history frame and the current frame to obtain the current image frame.
In some embodiments, optimizing the IMU parameters using loosely-coupled bundle adjustment includes: fixing the poses of the device over multiple image frames, and estimating the IMU parameters from the IMU data between those frames and the device's poses in them.
In some embodiments, constraining the pose of the device using its stationary state includes: keeping the pose of the device unchanged during the stationary period.
In some embodiments, compensating for the difference between the second pose and the updated first pose includes: obtaining the difference between the second pose and the updated first pose; obtaining the number of integrations required between this first-pose update and the next first-pose update and/or perception-related information; deriving a compensation amount from that number and/or the perception-related information together with the difference; and adjusting the pose changes after the first-pose update according to the compensation amount.
In some embodiments, the perception-related information includes at least one of: the movement speed of the eyeballs and the rotation speed of the head.
In some embodiments, predicting the future pose of the device using a deep learning network includes: predicting the pose of the device at a future moment using a multi-layer long short-term memory (LSTM) network.
In some embodiments, predicting the device's pose at a future moment using the multi-layer LSTM network includes: taking the device's current state information as the input of the LSTM network to obtain the device's state information at the future moment, and obtaining the device's future pose from that state information.
In some embodiments, the method may further include correcting the current state information with an extended Kalman filter (EKF) before inputting it into the LSTM network.
In some embodiments, correcting the current state information with the EKF includes: taking the position and speed of the device as the input of the EKF, and taking the speed of the device as the output of the EKF.
The state information in the above embodiments can be in any format; for example, in some embodiments it may be provided in the form of a state vector.
In some embodiments, during the drawing of the virtual object, the history image frame is warped to obtain the current image frame according to the device pose corresponding to the history image frame and the device pose corresponding to the current image frame. Warping the history frame to obtain the current frame includes: based on the depth information of each pixel of the current frame and the device poses corresponding to the current and history frames, computing the position of each pixel of the current frame in the history frame; and copying the pixel at each such position in the history frame to the corresponding position in the current frame, thereby drawing the current frame.
The above image frames may be holographic image frames.
According to a second aspect of the disclosure, an apparatus for obtaining the pose of a device is proposed. The apparatus includes a pose estimation module and a parameter acquisition module. The parameter acquisition module is used to obtain inertial measurement unit (IMU) data. The pose estimation module is used to estimate a first pose of the device and IMU parameters from the IMU data using bundle adjustment, where any one or any combination of the following operations is applied: during bundle adjustment, constraining the first pose of the device using the device's stationary state; during bundle adjustment, optimizing the IMU parameters using loosely-coupled bundle adjustment; integrating the IMU data to obtain a pose change, obtaining a second pose of the device from the pose change and the first pose, and, after the first pose is updated, compensating for the difference between the second pose and the updated first pose; predicting the pose of the device at a future moment, where a deep learning network is used to predict the device's future pose; and drawing a virtual object according to the device's pose at the future moment, where a history image frame is warped according to the device poses corresponding to the history frame and the current frame to obtain the current image frame.
According to a third aspect of the disclosure, a device for obtaining the pose of a device is proposed, comprising a processor and a memory. The memory stores instructions which, when executed by the processor, cause the processor to perform any of the above methods.
According to a fourth aspect of the disclosure, a computer-readable storage medium storing instructions is proposed, the instructions, when executed by a processor, enabling the processor to perform the method according to the first aspect of the disclosure.
Based on the schemes of the embodiments of the present disclosure, inertial measurement unit (IMU) motion data can be integrated to obtain a pose output that is fast to compute and low in latency.
Description of the drawings
These and/or other aspects and advantages of the disclosure will become apparent and more readily understood from the following description of some embodiments, provided in conjunction with the accompanying drawings, in which:
Fig. 1 shows a schematic diagram of an example of bundle adjustment.
Fig. 2 is a flowchart of a method 200 for obtaining the pose of a device according to an embodiment of the disclosure.
Fig. 3 is a schematic diagram of perception-driven error compensation according to an embodiment of the disclosure.
Fig. 4 is a block diagram of an apparatus 400 for obtaining the pose of a device according to an embodiment of the disclosure.
Fig. 5 shows an overall flow diagram of a specific implementation according to an embodiment of the invention.
Fig. 6 shows a schematic diagram of the tightly-coupled bundle adjustment scheme with added static detection in the above implementation.
Fig. 7 shows a schematic diagram of a specific implementation of the static detection.
Fig. 8 shows a schematic diagram of the secondary bundle adjustment strategy according to an embodiment of the invention.
Fig. 9 shows a schematic diagram of a scene in which loop closure detection causes jitter.
Fig. 10 shows a structural schematic diagram of a deep EKF unit according to an embodiment of the invention.
Fig. 11 shows a schematic diagram of the state regression process of the deep EKF used herein.
Fig. 12 shows a schematic diagram of a method, according to an embodiment of the invention, for reducing latency by adjusting the drawn result.
Fig. 13 is a block diagram of an exemplary hardware arrangement of an exemplary apparatus according to an embodiment of the disclosure.
Detailed description of the embodiments
Preferred embodiments of the present disclosure are described in detail below with reference to the accompanying drawings; details and functions unnecessary for the disclosure are omitted to avoid obscuring its understanding. In this specification, the following description serves only to illustrate the principles of the various embodiments of the disclosure and should not be construed as limiting its scope in any way. Reference is made to the accompanying drawings and the exemplary embodiments described below to aid a comprehensive understanding of the disclosure as defined by the claims and their equivalents. The description includes various details to aid understanding, but these should be regarded as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications can be made to the embodiments described herein without departing from the scope and spirit of the disclosure. In addition, descriptions of well-known functions and structures are omitted for clarity and brevity, and throughout the drawings the same reference numerals are used for the same or similar functions and operations. Furthermore, all or part of the functions, features, units, modules, etc. described in the following different embodiments may be combined, deleted and/or modified to constitute new embodiments, and such embodiments still fall within the scope of the disclosure. In the present disclosure, the terms "include" and "comprise" and their derivatives mean inclusion without limitation.
The background of the embodiments of the invention is illustrated below with AR glasses equipped with a visual sensor (a binocular camera) and an inertial sensor (IMU). Note, however, that the technical solutions of the embodiments may apply to any application that renders a virtual image of an object in a virtual environment, without being limited by the following examples; for instance, they may be used in scenarios such as localization, autonomous driving and user tracking, and are not limited to the AR scenario of the embodiments below. In the following example, the pose of the AR glasses in the virtual environment (for example, a map) can be obtained through the following flow.
First, feature points are extracted from the image captured by the camera. Feature points are relatively salient points in the image, and methods such as BRISK and SIFT can be used to extract them. If the captured image is the first frame of the video, the device position is set as the origin, the feature points on the images of the left and right cameras of the binocular pair are matched, and map points are established and stored in the map. If it is not the first frame, the feature points are matched against the saved map points to obtain one-to-one correspondences. The matches are then verified using random sample consensus (RANSAC) to remove mismatches. Finally, the still-unmatched feature points on the left and right images are matched against each other to establish new map points, which are stored in the map.
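As a rough illustration of the match-verification step, the following numpy sketch runs RANSAC with a hypothetical pure-translation motion model (the patent does not fix the model); matched pairs inconsistent with the best hypothesis are discarded as mismatches.

```python
import numpy as np

def ransac_translation(matches_a, matches_b, n_iters=100, thresh=1.0, seed=0):
    """RANSAC match verification, sketched for a pure-translation model.

    Hypothesis: a single 2-D offset between the matched point sets. Pairs
    farther than `thresh` from the best hypothesis are rejected as mismatches.
    """
    rng = np.random.default_rng(seed)
    a = np.asarray(matches_a, dtype=float)
    b = np.asarray(matches_b, dtype=float)
    best_inliers = np.zeros(len(a), dtype=bool)
    for _ in range(n_iters):
        k = rng.integers(len(a))               # minimal sample: one correspondence
        offset = b[k] - a[k]                   # candidate translation
        err = np.linalg.norm((a + offset) - b, axis=1)
        inliers = err < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return best_inliers
```

A real visual front end would use a fundamental-matrix or PnP hypothesis instead of a translation, but the sample-score-keep-best loop is the same.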
After the correspondences are obtained, the algorithm uses bundle adjustment to estimate the device pose, the positions of the new map points, and the IMU parameters (i.e., the bias of the noise in the IMU data; the bias may be regarded as the mean of the noise). Since pose constraints exist among the map points, the device pose and the IMU data (the acceleration and angular velocity of the device), an optimization method can be used to find the optimal device pose, new map-point positions and IMU parameters such that these constraints are satisfied as far as possible. Fig. 1 shows a schematic diagram of an example of bundle adjustment. Specifically, the constraints are as follows: 1. The pose of the device can be obtained from the position of a map point in space (its 3D position) and the position of its corresponding feature point on the image (left and/or right camera). 2. The position of a new map point in space can be computed from the pose of the device and the position of the corresponding feature point on the image. 3. The change in device pose between two consecutive image frames can be computed from the IMU data and IMU parameters between them. 4. The IMU parameters can be computed from the IMU data between multiple image frames and the corresponding device poses. Of the above constraints 1, 2, 3 and 4, only some or all may be applied (e.g., applied in parallel). The pose output frequency of this method equals the image frame rate.
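Constraints 1 and 2 tie map points, feature locations and the device pose together through the camera projection. A minimal sketch of the reprojection residual that a bundle-adjustment solver would minimize over all such constraints, assuming a pinhole camera model (the patent does not specify one):

```python
import numpy as np

def reprojection_residual(point_w, R, t, K, observed_px):
    """Residual for the map-point/feature constraint in bundle adjustment.

    Projects a 3D map point `point_w` through the device pose (R, t) and
    pinhole intrinsics K, then compares with the detected feature location.
    Bundle adjustment minimizes the sum of squared residuals like this one
    over poses, map points and IMU parameters.
    """
    p_cam = R @ point_w + t                    # world -> camera frame
    uvw = K @ p_cam                            # project with intrinsics
    u, v = uvw[0] / uvw[2], uvw[1] / uvw[2]    # perspective division
    return np.array([u, v]) - np.asarray(observed_px, dtype=float)
```

A solver such as Levenberg-Marquardt drives these residuals toward zero jointly for all frames and points.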
When practicing the above scheme, the inventors found that the tightly-coupled multi-sensor bundle adjustment method above can obtain an accurate device pose, but the many constraints it must consider make computation slow and latency high. For this reason, in the technical solutions of the embodiments of the invention, the device pose is obtained by integrating high-frame-rate IMU data using the IMU parameters. Because integration is fast, a high-frame-rate, low-latency device pose can be obtained. IMU data generally comprise the acceleration and angular velocity of the device; integrating these two kinds of data over time yields the change of the device pose over that period. Superimposing this pose change on the initial pose of the device gives the current pose, thereby realizing a pose determination method that is fast to compute and low in latency.
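The integration step just described can be sketched as follows; this is a simplified 2-D, bias-free illustration of dead reckoning, not the patent's full 6-DOF formulation:

```python
import numpy as np

def integrate_imu(pose_p, pose_v, accel_samples, gyro_samples, dt):
    """Dead-reckon a pose change from raw IMU samples (2-D sketch).

    Angular velocity is integrated into heading, body-frame acceleration is
    rotated into the world frame and integrated twice into position; the
    result is superimposed on the initial pose to give the current pose.
    """
    theta = 0.0
    p = np.array(pose_p, dtype=float)
    v = np.array(pose_v, dtype=float)
    for a, w in zip(accel_samples, gyro_samples):
        theta += w * dt                        # gyro -> orientation
        c, s = np.cos(theta), np.sin(theta)    # body -> world rotation
        a_world = np.array([c * a[0] - s * a[1], s * a[0] + c * a[1]])
        v += a_world * dt                      # accel -> velocity
        p += v * dt                            # velocity -> position
    return p, v, theta
```

Because each step is a handful of multiply-adds, this loop can run at the IMU rate (hundreds of Hz), which is what gives the high-frame-rate, low-latency pose output.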
A method for obtaining a device pose according to an embodiment of the disclosure is first described in detail below in conjunction with Fig. 2 and the other drawings. Fig. 2 is a flowchart of a method 200 for obtaining a device pose according to an embodiment of the disclosure.
As shown in Fig. 2, the method includes operation S210: obtaining inertial measurement unit (IMU) data.
In operation S220, the first pose of the device and the IMU parameters are estimated from the IMU data using bundle adjustment, where any one or any combination of the following operations is applied:
1) during bundle adjustment, the first pose of the device is constrained using the device's stationary state;
2) during bundle adjustment, the IMU parameters are optimized using loosely-coupled bundle adjustment;
3) the IMU data are integrated to obtain a pose change, a second pose of the device is obtained from the pose change and the first pose, and, after the first pose is updated, the difference between the second pose and the updated first pose is compensated;
4) the pose of the device at a future moment is predicted, where a deep learning network is used to predict the device's future pose;
5) a virtual object is drawn according to the device's pose at the future moment, where a history image frame is warped according to the device poses corresponding to the history frame and the current frame to obtain the current image frame.
In some embodiments, optimizing the IMU parameters using loosely-coupled bundle adjustment includes: fixing the poses of the device over multiple image frames, and estimating the IMU parameters from the IMU data between those frames and the device's poses in them.
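A toy illustration of this loosely-coupled estimation, reduced to a single gyroscope bias with the per-frame headings held fixed; the scalar model and the least-squares form are assumptions made for illustration, not the patent's formulation:

```python
def estimate_gyro_bias(pose_headings, gyro_samples, dt):
    """Loosely-coupled IMU parameter estimation, sketched for a 1-D gyro bias.

    With the per-frame poses held fixed, the bias is the constant b that best
    explains the gap between the integrated gyro readings and the heading
    change implied by the poses; in the least-squares sense this is the mean
    per-sample discrepancy.
    """
    heading_change = pose_headings[-1] - pose_headings[0]
    integrated = sum(gyro_samples) * dt        # heading implied by raw gyro
    n = len(gyro_samples)
    return (integrated - heading_change) / (n * dt)
```

Because the poses are treated as fixed inputs rather than jointly re-optimized, this loosely-coupled step is far cheaper than the tightly-coupled bundle adjustment it refines.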
In some embodiments, constraining the pose of the device using its stationary state includes: keeping the pose of the device unchanged during the stationary period.
In some embodiments, compensating for the difference between the second pose and the updated first pose includes: obtaining the difference between the second pose and the updated first pose; obtaining the number of integrations required between this first-pose update and the next first-pose update and/or perception-related information; deriving a compensation amount from that number and/or the perception-related information together with the difference; and adjusting the pose changes after the first-pose update according to the compensation amount.
In some embodiments, the perception-related information includes at least one of: the movement speed of the eyeballs and the rotation speed of the head.
For example, when compensating the error caused by loop closure detection, the compensation speed is related to the movement speed of the human eye and/or to the head movement of the person: the faster the head moves, the faster the compensation.
In some embodiments, predicting the future pose of the device using a deep learning network includes: predicting the device's pose at a future moment using a multi-layer long short-term memory (LSTM) network.
In some embodiments, predicting the device's pose at a future moment using the multi-layer LSTM network includes: taking the device's current state information as the input of the LSTM network to obtain the device's state information at the future moment, and obtaining the device's future pose from that state information.
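For reference, a single LSTM cell step in plain numpy; the patent stacks several such layers and trains them on pose/state sequences, whereas the weights here are untrained placeholders:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_cell(x, h, c, W, U, b):
    """One LSTM cell step (numpy-only sketch of the pose predictor).

    x: input state vector; h, c: hidden and cell state of size n;
    W (4n x m), U (4n x n), b (4n,): stacked gate weights. The current device
    state goes in as x, and the output h of the final layer would be decoded
    into the predicted future state.
    """
    n = h.shape[0]
    z = W @ x + U @ h + b                      # all four gate pre-activations
    i = sigmoid(z[0:n])                        # input gate
    f = sigmoid(z[n:2 * n])                    # forget gate
    o = sigmoid(z[2 * n:3 * n])                # output gate
    g = np.tanh(z[3 * n:4 * n])                # candidate cell state
    c_new = f * c + i * g
    h_new = o * np.tanh(c_new)
    return h_new, c_new
```

A practical predictor would use a framework LSTM (e.g. a stacked recurrent network) trained on recorded head-motion sequences; this sketch only shows the gate arithmetic.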
In some embodiments, the method may further include correcting the current state information with an extended Kalman filter (EKF) before inputting it into the LSTM network.
In some embodiments, correcting the current state information with the EKF includes: taking the position and speed of the device as the input of the EKF, and taking the speed of the device as the output of the EKF.
The state information in the above embodiments can be in any format; for example, in some embodiments it may be provided in the form of a state vector.
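A 1-D sketch of such a correction, with position and velocity going in and a corrected velocity coming out; the noise variances and the position-difference measurement model are illustrative assumptions:

```python
def kalman_correct_velocity(v_pred, p_meas, p_prev, dt, var_pred, var_meas):
    """One Kalman-style correction of the device velocity (1-D sketch).

    The velocity implied by consecutive positions is blended with the
    predicted velocity by the Kalman gain, yielding the corrected velocity
    that would be fed into the LSTM predictor as part of the state vector.
    """
    v_meas = (p_meas - p_prev) / dt            # velocity implied by positions
    gain = var_pred / (var_pred + var_meas)    # Kalman gain
    v_corr = v_pred + gain * (v_meas - v_pred)
    var_corr = (1.0 - gain) * var_pred         # reduced uncertainty
    return v_corr, var_corr
```

An EKF proper would linearize a nonlinear motion model around the current estimate; the gain-weighted blend shown here is the correction step common to both.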
In some embodiments, during the drawing of the virtual object, the history image frame is warped to obtain the current image frame according to the device pose corresponding to the history image frame and the device pose corresponding to the current image frame.
In some embodiments, warping the history frame to obtain the current frame includes: based on the depth information of each pixel of the current frame and the device poses corresponding to the current and history frames, computing the position of each pixel of the current frame in the history frame; and copying the pixel at each such position in the history frame to the corresponding position in the current frame, thereby drawing the current frame.
The above image frames may be holographic image frames or non-holographic image frames; the embodiments of the invention focus on the drawn image and are not limited by its specific form.
Fig. 4 is a block diagram showing a device 400 for obtaining a device pose according to an embodiment of the present disclosure.
As shown in Fig. 4, the device includes a parameter acquisition module 410 and a pose estimation module 420.
The parameter acquisition module 410 is configured to obtain inertial measurement unit (IMU) data.
The pose estimation module 420 is configured to estimate a first pose of the device and IMU parameters based on the IMU data using a bundle optimization method, wherein any one or any combination of the following operations is used:
1) during the bundle optimization, constraining the first pose of the device using a stationary state of the device;
2) during the bundle optimization, optimizing the IMU parameters using a loosely-coupled bundle optimization method;
3) integrating the IMU data to obtain a pose change, obtaining a second pose of the device from the pose change and the first pose, and, after the first pose is updated, compensating the difference between the second pose and the updated first pose;
4) predicting the pose of the device at a future moment, wherein a deep learning network is used to predict the future pose of the device;
5) drawing a virtual object according to the pose of the device at the future moment, wherein, during the drawing of the virtual object, a history image frame is deformed to obtain a current image frame according to the pose of the device corresponding to the history image frame and the pose of the device corresponding to the current image frame.
In some embodiments, optimizing the IMU parameters using the loosely-coupled bundle optimization method includes: fixing the poses of the device in multiple image frames, and estimating the IMU parameters from the IMU data between the multiple image frames and the poses of the device in those frames.
In some embodiments, constraining the pose of the device using its stationary state includes: keeping the pose of the device unchanged during the stationary period.
In some embodiments, compensating the difference between the second pose and the updated first pose includes: obtaining the difference between the second pose and the updated first pose; obtaining the number of integrations and/or perception-related information required from the update of the first pose until the next update of the first pose; and obtaining a compensation amount according to that number and/or the perception-related information and the difference, and adjusting the change of the updated first pose according to the compensation amount.
In some embodiments, the perception-related information includes at least one of: the movement speed of the eyeball, and the rotation speed of the head.
For example, when compensating the error caused by loop-closure detection, the compensation speed is related to the movement speed of the human eye and/or the head movement of the person: the faster the head moves, the faster the compensation.
In some embodiments, predicting the future pose of the device using a deep learning network includes: predicting the pose of the device at a future moment using a multi-layer long short-term memory (LSTM) network.
In some embodiments, predicting the pose of the device at a future moment using the multi-layer LSTM network includes: taking current state information of the device as the input of the LSTM network to obtain state information of the device at the future moment, and obtaining the future pose of the device based on the state information at the future moment.
In some embodiments, before the current state information is input into the LSTM network, the current state information is corrected using an extended Kalman filter (EKF).
In some embodiments, correcting the current state information using the EKF includes: taking the position and velocity of the device as the input of the EKF, and taking the velocity of the device as the output of the EKF.
The state information in the above embodiments may have any format; for example, in some embodiments the state information may be provided in the form of a state vector.
In some embodiments, during the drawing of a virtual object, a history image frame is deformed to obtain a current image frame according to the pose of the device corresponding to the history image frame and the pose of the device corresponding to the current image frame.
In some embodiments, deforming the history image frame to obtain the current image frame includes: calculating the position of each pixel of the current image frame in the history image frame, based on the depth information of each pixel of the current image frame and the device poses corresponding to the current and history image frames; and copying the pixel at each such position in the history image frame to the corresponding position of that pixel in the current image frame, thereby drawing the current image frame.
The above image frames may be holographic image frames or non-holographic image frames; the embodiments of the present invention focus on the drawing of images and are not limited by the concrete form of the image. Below, specific schemes of the embodiments of the present invention are described taking holographic images as an example; it should be noted that these schemes can likewise be applied to images/image frames of any other type.
The technical solutions of Fig. 2 and Fig. 4 are described in detail below in conjunction with Fig. 5 to Fig. 12. It should be noted that the schemes shown in Fig. 5 to Fig. 12 are merely examples for realizing the technical solutions of the embodiments of the present invention and should not be construed as limiting the protection scope of the present invention. For example, some of the steps/modules shown in the figures may be added, removed, reordered, or changed in detail without departing from the scope of the present invention.
Fig. 5 shows an overall flow diagram of a specific implementation according to an embodiment of the present invention. Its structure corresponds to that of the device shown in Fig. 4. As shown in Fig. 5, the implementation may consist of three functional modules: a SLAM module, a motion prediction module, and a drawing module. The SLAM module uses multi-sensor fusion to output the current pose of the device at a high frame rate; the motion prediction module predicts the device pose a short time ahead based on the output of the SLAM module and state filtering; the drawing module adjusts the drawn image according to the change of the device pose during the drawing of the virtual object, further reducing the system delay.
Fig. 6 shows a diagram of the tightly-coupled bundle optimization scheme with added static detection in the above implementation.
As shown in Fig. 6, the process may include:
1) Obtaining camera data and IMU data.
2) Feature point extraction. Feature points are extracted from the image and matched against the saved map points to obtain the correspondence between the two. If there is no match, or there are map points not yet saved, new map points are established from the feature points.
3) Static detection. The IMU data are analyzed to detect whether the device is stationary. If it is, the stationary state is passed to the bundle optimization part.
4) Two-stage multi-sensor optimization. First, tightly-coupled bundle optimization is used to optimize the device pose, the map points of a small number of frames, and the IMU parameters; then loosely-coupled bundle optimization is used to optimize the IMU parameters over more frames.
5) IMU integration. Using the IMU parameters, the IMU data are integrated to provide high-frame-rate, low-latency pose estimates for the AR system.
6) Perception-driven pose filtering. Because there is a difference between the pose obtained by each IMU integration and the pose updated by the two-stage optimization, the perception-driven error compensation method of the embodiment of the present invention gradually reduces the deviation between the two.
7) Pose prediction. The current device pose output by the SLAM system is taken as an observation, and an extended Kalman filter (Extended Kalman Filter, EKF) is used to estimate the kinematic parameters of the device. The current kinematic parameters of the device and the prediction parameters of the EKF motion equations are taken as input to an LSTM-based deep learning network to predict the pose of the device at a future moment.
8) Drawing result adjustment. The holographic image is drawn according to the input pose; then, as the device pose changes, the drawn holographic result is deformed, improving the holographic drawing speed and reducing the delay.
Fig. 7 shows a diagram of a specific implementation of the static detection in this realization.
IMU data generally contain noise, which leads to inaccurate device pose estimation. To reduce the noise, the embodiment of the present invention proposes a method that uses static detection to reduce the influence of the noise. The method detects whether the device is stationary by analyzing the local variance of the IMU data. If the device is stationary, its pose is fixed during that period; when the bundle optimization is performed, this adds a constraint and improves the pose estimation accuracy. The constraints in the method are as follows: 1. From the positions of the map points in space (e.g., 3-D positions) and the positions of their corresponding feature points in the image (left-eye and/or right-eye), the pose of the device can be obtained. 2. From the pose of the device and the positions in the image of the feature points corresponding to new map points, the positions of the new map points in space can be calculated. 3. From the IMU data between two consecutive image frames and the IMU parameters, the change of the device pose between the two frames can be calculated. 4. From the IMU data between multiple image frames and the device poses corresponding to those frames, the IMU parameters can be calculated. 5. From the detected stationary period, the pose change between two frames can be determined to be 0.
As shown in Fig. 7, the thicker line in the figure is the variance curve of the IMU data while a person is walking. Due to the repetitive nature of walking, the variance of the IMU data varies periodically. If the local variance of the IMU data (the variance within a time window) is less than a predefined threshold, it corresponds to the phase in which the center of gravity falls on the foot in contact with the ground; this phase is also called the zero-velocity interval. The embodiment of the present invention uses the threshold to extract the zero-velocity intervals during the movement of the device and adds these intervals as constraints to the bundle optimization.
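The zero-velocity extraction described above can be sketched as a sliding-window variance test. The window length and threshold below are illustrative placeholders, not values from the patent.

```python
# Hedged sketch of the variance-threshold static (zero-velocity) detector:
# a sample is flagged stationary when the variance of the IMU readings in a
# trailing window falls below a threshold.
from statistics import pvariance

def zero_velocity_intervals(imu_samples, window=5, threshold=0.01):
    """Return per-sample flags: True where the local IMU variance is below threshold."""
    flags = []
    for i in range(len(imu_samples)):
        lo = max(0, i - window + 1)
        # need at least two samples for a meaningful variance
        flags.append(pvariance(imu_samples[lo:i + 1]) < threshold if i - lo >= 1 else False)
    return flags
```

On a trace that is flat and then oscillates, the flat region is flagged stationary and the oscillating region is not.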
Fig. 8 shows a diagram of the two-stage bundle optimization strategy according to an embodiment of the present invention. The second optimization can better estimate the IMU parameters, i.e., the noise biases.
The first bundle optimization is the tightly-coupled bundle optimization with added static detection, as shown in Fig. 8. Compared with existing tightly-coupled bundle optimization methods, the detected static intervals add constraints on the device pose, which can improve the accuracy of the estimation result.
The second bundle optimization is a loosely-coupled bundle optimization. Unlike the tightly-coupled bundle optimization, in this optimization process the device pose of each frame is fixed, and the IMU parameters are optimized using these device poses and the IMU data. There is thus no need to consider map-point constraints, and the amount of computation is smaller; more frames can therefore be included in the optimization, yielding more accurate IMU parameters, as shown in Fig. 8. The constraint in this method is as follows: the IMU parameters are calculated from the IMU data between multiple image frames and the device poses corresponding to those frames.
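Under the simplifying assumptions of 1-D rotation and a constant gyro bias, the loosely-coupled stage reduces to a closed-form least-squares fit: with the per-frame poses held fixed, the bias is the average discrepancy between the integrated gyro readings and the known inter-frame rotations. A hedged sketch, not the patent's solver:

```python
# 1-D, constant-bias illustration of the loosely-coupled stage: poses are
# fixed, and only the IMU parameter (a single gyro bias b) is solved for.
def estimate_gyro_bias(rate_segments, pose_deltas, dt):
    """rate_segments[k]: gyro samples between frames k and k+1;
    pose_deltas[k]: rotation between those frames taken from the fixed poses."""
    residuals = []
    for rates, true_delta in zip(rate_segments, pose_deltas):
        integrated = sum(rates) * dt                     # raw (biased) rotation estimate
        residuals.append((integrated - true_delta) / (len(rates) * dt))
    return sum(residuals) / len(residuals)               # mean bias over all frame pairs
```

Given segments whose readings carry a 0.05 rad/s offset relative to the fixed-pose rotations, the estimator recovers that offset.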
The motion data obtained by integrating the IMU data are continuous, but since the IMU integration result is a relative pose change, it must be added on top of the basic pose obtained by the two-stage optimization. Therefore, each time the two-stage optimization finishes and the basic pose is updated, the inaccuracy of the IMU integration causes the result to jump: at the moment of the update there is an error between the result of integrating the IMU onto the new basic pose and the new basic pose itself. To address this problem, a pose filtering method is proposed: after each basic pose update, the difference between the IMU-integrated pose and the new basic pose is first calculated, and this difference is added to the new basic pose as a bias. At the same time, it is estimated how many more IMU integrations will occur before the next basic pose update, and the bias is reduced according to this number, smoothly moving the device pose back to the correct position and orientation.
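The pose filtering described above can be sketched as a bias decay: the jump between the integrated pose and the new basic pose is stored and shrunk over the estimated number of remaining integration steps, so the displayed pose glides instead of snapping. The linear schedule here is an illustrative assumption; the patent's perception-driven variant modulates the reduction speed.

```python
# Minimal sketch of the pose-filtering idea: ease out the optimization jump
# over the predicted number of IMU integration steps until the next update.
def smoothed_poses(integrated_pose, new_basic_pose, steps_until_next_update):
    """Yield one displayed pose per IMU integration step, easing out the jump."""
    bias = integrated_pose - new_basic_pose
    out = []
    for k in range(1, steps_until_next_update + 1):
        remaining = bias * (1 - k / steps_until_next_update)  # shrink bias each step
        out.append(new_basic_pose + remaining)
    return out
```

For a jump of 1.0 absorbed over four steps, the displayed pose walks down 0.75, 0.5, 0.25, 0.0 relative to the new basic pose.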
Besides the error in the IMU integration, loop closure on the same map can also cause differences between poses. Fig. 9 shows a diagram of a scene in which loop-closure detection causes jitter. As shown in Fig. 9, when the device has moved in a circle and returns to a place it passed before, there may be a certain amount of deviation between the map points saved in the past and the map points saved recently; at this point the loop-closure detection function is needed to update the positions of the current map points. That is, both the pose differences caused by IMU integration error and the pose differences caused by loop closure need to be compensated, for example by the method of the embodiment of the present invention. For this purpose, the present invention proposes a perception-driven error compensation method: the projected motion speed of an object on the retina is inversely proportional to the sensitivity of the human eye to the perception of that object's motion. The movement speed H_S of the eyeball and the movement speed P of the virtual object are therefore detected, and from them the speed H_T perceived by the person is calculated; this speed determines the speed at which the bias is reduced. The larger the perceived speed, the faster the bias is reduced. The calculation of H_T involves a constant taken as 0.6 in most cases, f_T, the size of the object, and the projected size of the object on the retina.
In an AR system, after the device pose at the "current" moment is obtained by the SLAM algorithm, the virtual object needs to be drawn according to the device pose, and the drawing result is displayed superimposed on the real object. However, when the virtual object is complex or involves illumination calculations, drawing it takes a certain amount of time; this causes a certain delay in displaying the virtual object and affects the AR effect. Traditional motion prediction methods often describe the motion state with linear equations of motion and predict the motion at a future moment; they struggle with complex nonlinear motion and cannot meet the prediction accuracy requirements of AR devices. The embodiment of the present invention designs an LSTM-based deep learning network to learn a general nonlinear equation of motion, which can predict the motion state at a future moment more accurately.
Since the IMU observation data contain noise, it is difficult to obtain accurate prediction results if the pose directly integrated from the IMU is used as the input of the LSTM unit. Here an EKF is used to smooth the IMU observation data and obtain an accurate motion state, which is then fed into the LSTM unit. Fig. 10 shows a structural diagram of the deep EKF unit according to an embodiment of the present invention. The output on the right side of Fig. 10 serves as two inputs on the left side of Fig. 10.
The motion state is denoted by S, where:
S = {v, a_v, w, a_w}
where v is the movement velocity, a_v is the movement acceleration, w is the angular velocity, and a_w is the angular acceleration.
Taking any moment i as an example, the device pose P_i estimated by SLAM is compared with the device pose P′_i predicted by the EKF in the previous stage (in the lower left corner of Fig. 10, the output P′ of the previous moment and the current input P both enter the Kg unit and are compared; the error is used to correct the state vector S — this is the basic principle of Kalman filtering), and the state vector S_i of the EKF is corrected by the Kalman gain Kg_i of the current moment. After the state vector of the current moment is obtained, S_i and the motion model M of the EKF are used on the one hand to predict the device pose P′_{i+1} at the next moment; on the other hand, S_i is taken as the input of the LSTM network to predict the device state vector S′_{i+N} after a future period of time. (In Fig. 10, the state vector S in the EKF is corrected according to the observed pose P_i and then input into the "LSTM" unit, whose output is S′. Here S includes information such as the camera position, velocity, orientation, and angular velocity.)
A state vector is used here; it should be noted, however, that any form of device state information usable by the LSTM network may be used in the embodiments of the present invention, which are not limited to a state vector.
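The Kalman-gain correction in Fig. 10 can be illustrated in scalar form: the SLAM observation P_i is blended with the prediction P′_i through the gain Kg. This is a one-dimensional didactic sketch, not the patent's multi-dimensional filter.

```python
# Scalar Kalman update illustrating the Kg unit of Fig. 10: the gain weighs
# the prediction variance against the observation variance, and the error
# (observed - predicted) is blended back into the state.
def kalman_correct(predicted, observed, p_var, r_var):
    """One scalar Kalman update: returns (corrected state, updated variance, gain)."""
    kg = p_var / (p_var + r_var)                    # Kalman gain Kg_i
    corrected = predicted + kg * (observed - predicted)
    return corrected, (1 - kg) * p_var, kg
```

With equal prediction and observation variances the gain is 0.5 and the corrected state is the midpoint; a noisier observation pulls the gain toward zero.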
To predict the pose P_{i+x} at any moment within a future period, a sliding window is proposed to record the motion state sequence of the device between the current moment and the future moment i+N, so that the pose P′_{i+x} at moment i+x can be smoothly predicted by recursion. Considering that S′_{i+1} is estimated from the IMU observation data before moment i+1−N, it does not account for the motion of the device between moments i+1−N and i. Therefore, the motion model M is likewise used to predict the motion state S′_{i+N} within the sliding window:
S′_{i+N} = {v′_{i+N}, a′_{v,i+N}, w′_{i+N}, a′_{w,i+N}}
v′_{i+N} = v_i + a_{v,i} · (N · Δt)
a′_{v,i+N} = a_{v,i}
a′_{w,i+N} = a_{w,i}
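Written as code, the constant-acceleration extrapolation above advances the velocity by a·(N·Δt) and holds the accelerations constant. The angular-velocity update line is not spelled out in the text and is filled in here as an assumption under the same model.

```python
# Constant-acceleration extrapolation of the motion state S = (v, a_v, w, a_w)
# over a horizon of N steps of length dt. The w update is an assumed analogue
# of the v update; the source only states the v, a_v, a_w lines.
def extrapolate_state(state, n_steps, dt):
    v, a_v, w, a_w = state
    horizon = n_steps * dt
    return (v + a_v * horizon,   # v'_{i+N}
            a_v,                 # a'_{v,i+N} held constant
            w + a_w * horizon,   # w'_{i+N} (assumed, same model)
            a_w)                 # a'_{w,i+N} held constant
```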
This prediction is then superimposed, in a weighted-average manner, on the state sliding window generated by the LSTM prediction. Fig. 11 shows a diagram of the state regression process using the deep EKF. As shown in the "calculation" part of Fig. 11, the final pose estimate P″_{i+x} is obtained. In the "merging" part of Fig. 11, a new state sequence {S″} is generated by weighting: when the predicted time is farther from the current moment, the newest predicted state is given a larger weight; when the predicted time is closer to the current moment, the historical predicted state is given a larger weight.
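The merging step can be sketched as a per-slot blend of the two state windows, with the weight on the newest prediction growing with the prediction horizon. The linear weight schedule below is an illustrative assumption; the patent's actual weight formula is not given in the text.

```python
# Hedged sketch of the "merging" part of Fig. 11: blend the newest predicted
# state window with the historical one, weighting the newest prediction more
# the farther the slot lies in the future.
def merge_state_windows(lstm_states, model_states):
    n = len(lstm_states)
    merged = []
    for k, (s_new, s_old) in enumerate(zip(lstm_states, model_states)):
        w = (k + 1) / n                       # farther ahead -> heavier weight on s_new
        merged.append(w * s_new + (1 - w) * s_old)
    return merged
```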
Holographic drawing is time-consuming and increases the delay of the AR system. To reduce this delay, the embodiment of the present invention proposes a holographic image drawing method. Unlike conventional holographic drawing methods, this method takes the holographic image and depth image drawn in the previous frame as input. Since the depth information of each point in that frame and the device poses corresponding to the two holographic image frames are known, the position of each pixel of the current frame in the previous holographic image frame can be calculated; the image of the current frame is then obtained simply by copying the pixels at the corresponding positions in the previous frame to the current positions. Compared with common holographic drawing methods, this method only needs to process two-dimensional images, without projecting three-dimensional models or performing illumination calculations; the amount of computation is smaller, and the drawing delay can be effectively reduced. Fig. 12 shows a diagram of the method of reducing delay by adjusting the drawing result according to an embodiment of the present invention.
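The warping step can be sketched in one dimension: each current-frame pixel derives its source position in the previous holographic frame from its depth and the camera motion, and the pixel is copied over. The integer-disparity model below is purely illustrative; a real implementation reprojects with full 3-D poses and camera intrinsics.

```python
# Simplified 1-D backward warp: for each current pixel, look up the previous
# frame at a depth-dependent offset (closer points shift more) and copy it.
def warp_previous_frame(prev_pixels, depths, camera_shift):
    """For each current pixel x, fetch prev_pixels[x + round(shift/depth)] (clamped)."""
    cur = []
    for x, d in enumerate(depths):
        disparity = round(camera_shift / d)              # closer points move more
        src = min(max(x + disparity, 0), len(prev_pixels) - 1)
        cur.append(prev_pixels[src])                     # copy pixel from previous frame
    return cur
```

A unit camera shift over a uniform unit-depth scene shifts every pixel by one, with the edge clamped.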
The present invention proposes a vision SLAM method based on prediction and multi-sensor fusion, which is robust to low camera frame rates, error accumulation, and delays in data transmission and computation. The IMU data are integrated to obtain the motion between two camera frames, so that position information can be output at the rate of the IMU. During system operation, the motion parameters are corrected by detecting the stationary state of the camera, reducing the accumulated error of the system. When there is a large delay in data transmission and computation, on the one hand the LSTM model predicts the camera motion parameters to estimate the pose of the camera at a future moment in advance, offsetting the delay introduced by data transmission and computation; on the other hand, when the virtual object is displayed, the pose change during the rendering of the virtual object is obtained by integrating the IMU data, and image deformation is used to reduce the error caused by the system delay.
Fig. 13 is a block diagram of an exemplary hardware arrangement of an example network node and/or user equipment according to an embodiment of the present disclosure. The hardware arrangement 1300 may include a processor 1306. The processor 1306 may be a single processing unit or multiple processing units that execute the different actions of the processes described herein. The arrangement 1300 may also include an input unit 1310 for receiving signals from other entities and an output unit 1304 for providing signals to other entities. The input unit 1310 and the output unit 1304 may be arranged as a single entity or as separate entities.
In addition, the arrangement 1300 may include at least one readable storage medium 1308 in the form of non-volatile or volatile memory, e.g., an electrically erasable programmable read-only memory (EEPROM), a flash memory, an optical disc, a Blu-ray disc, and/or a hard drive. The readable storage medium 1308 may include a computer program 1310, which may include code/computer-readable instructions that, when executed by the processor 1306 in the arrangement 1300, cause the hardware arrangement 1300 and/or the device including the hardware arrangement 1300 to execute, for example, the processes described above in conjunction with Fig. 1 and/or Fig. 2 and any variations thereof.
The computer program 1310 may be configured as computer program code having, for example, an architecture of computer program modules 1310A–1310C. Accordingly, in an example embodiment in which the hardware arrangement 1300 is used as a base station, the code in the computer program of the arrangement 1300 may be used to execute the method according to Fig. 2. However, the computer program 1310 may also include other modules for executing the steps of the various methods described herein.
In addition, in an example embodiment in which the hardware arrangement 1300 is used as a user equipment, the code in the computer program of the arrangement 1300 may include: a module 1310A for receiving downlink control signaling. The code in the computer program may also include: a module 1310B for generating a HARQ-ACK codebook according to the downlink control signaling and the decoding result of the downlink transmission corresponding to the downlink control signaling. The code in the computer program may also include: a module 1310C for feeding back HARQ-ACK corresponding to the downlink transmission according to the generated HARQ-ACK codebook. However, the computer program 1310 may also include other modules for executing the steps of the various methods described herein.
The computer program modules may substantially execute each action in the processes shown in Fig. 1 and/or Fig. 2 to simulate various devices. In other words, when different computer program modules are executed in the processor 1306, they may correspond to the various different units of the various devices mentioned herein.
Although the code means in the embodiments disclosed above in conjunction with Fig. 13 are implemented as computer program modules that, when executed, cause the hardware arrangement 1300 to execute the actions described above in conjunction with Fig. 1 and/or Fig. 2, in alternative embodiments at least one of the code means may be at least partly implemented as a hardware circuit.
The processor may be a single CPU (central processing unit), but may also include two or more processing units. For example, the processor may include a general-purpose microprocessor, an instruction-set processor and/or a related chipset, and/or a special-purpose microprocessor (e.g., an application-specific integrated circuit (ASIC)). The processor may also include onboard memory for caching purposes. The computer program may be carried by a computer program product connected to the processor. The computer program product may include a computer-readable medium on which the computer program is stored. For example, the computer program product may be a flash memory, a random access memory (RAM), a read-only memory (ROM), or an EEPROM, and in alternative embodiments the above computer program modules may be distributed, in the form of memory within the UE, into different computer program products.
The disclosure has thus far been described in conjunction with preferred embodiments. It should be understood that those skilled in the art can make various other changes, replacements, and additions without departing from the spirit and scope of the present disclosure. Therefore, the scope of the present disclosure is not limited to the above specific embodiments but should be defined by the appended claims.
In addition, functions described herein as being realized by pure hardware, pure software, and/or firmware can also be realized by means of dedicated hardware, a combination of general-purpose hardware and software, and the like. For example, functions described as being realized by dedicated hardware (e.g., a field-programmable gate array (FPGA), an application-specific integrated circuit (ASIC), etc.) can be realized by a combination of general-purpose hardware (e.g., a central processing unit (CPU), a digital signal processor (DSP)) and software, and vice versa.
Claims (13)
1. A method for obtaining a device pose, comprising:
obtaining inertial measurement unit (IMU) data;
estimating a first pose of the device and IMU parameters based on the IMU data using a bundle optimization method;
wherein any one or any combination of the following operations is used:
during the bundle optimization, constraining the first pose of the device using a stationary state of the device;
during the bundle optimization, optimizing the IMU parameters using a loosely-coupled bundle optimization method;
integrating the IMU data to obtain a pose change, obtaining a second pose of the device from the pose change and the first pose, and, after the first pose is updated, compensating a difference between the second pose and the updated first pose;
predicting the pose of the device at a future moment, wherein a deep learning network is used to predict the future pose of the device; and
drawing a virtual object according to the pose of the device at the future moment, wherein, during the drawing of the virtual object, a history image frame is deformed to obtain a current image frame according to a pose of the device corresponding to the history image frame and a pose of the device corresponding to the current image frame.
2. The method according to claim 1, wherein optimizing the IMU parameters using the loosely-coupled bundle optimization method comprises: fixing poses of the device in multiple image frames, and estimating the IMU parameters from the IMU data between the multiple image frames and the poses of the device in the multiple image frames.
3. The method according to claim 1, wherein constraining the pose of the device using the stationary state of the device comprises: keeping the pose of the device unchanged during a stationary period.
4. The method according to claim 1, wherein compensating the difference between the second pose and the updated first pose comprises:
obtaining the difference between the second pose and the updated first pose;
obtaining the number of integrations and/or perception-related information required from the update of the first pose until the next update of the first pose; and
obtaining a compensation amount according to the number and/or the perception-related information and the difference, and adjusting a change of the updated first pose according to the compensation amount.
5. The method according to claim 4, wherein the perception-related information comprises at least one of: a movement speed of an eyeball and a rotation speed of a head.
6. The method according to claim 1, wherein predicting the future pose of the device using the deep learning network comprises: predicting the pose of the device at the future moment using a multi-layer long short-term memory (LSTM) network.
7. The method according to claim 6, wherein predicting the pose of the device at the future moment using the multi-layer LSTM network comprises: taking current state information of the device as an input of the LSTM network to obtain state information of the device at the future moment, and obtaining the future pose of the device based on the state information at the future moment.
8. The method according to claim 7, further comprising: before the current state information is input into the LSTM network, correcting the current state information using an extended Kalman filter (EKF).
9. The method according to claim 8, wherein correcting the current state information using the EKF comprises: taking a position and a velocity of the device as an input of the EKF, and taking the velocity of the device as an output of the EKF.
10. The method according to claim 1, wherein deforming the history image frame to obtain the current image frame comprises:
calculating a position of each pixel of the current image frame in the history image frame, based on depth information of each pixel of the current image frame and the poses of the device corresponding to the current image frame and the history image frame; and
copying a pixel at each such position in the history image frame to a corresponding position of the pixel in the current image frame to draw the current image frame.
11. An apparatus for obtaining a device pose, comprising:
a parameter acquisition module configured to obtain inertial measurement unit (IMU) data; and
a pose estimation module configured to estimate, using bundle adjustment, a first pose of the device and IMU parameters based on the IMU data;
wherein processing is performed using any one of, or any combination of, the following operations:
during bundle adjustment, constraining the first pose of the device using a stationary state of the device;
during bundle adjustment, optimizing the IMU parameters using a loosely coupled bundle adjustment method;
integrating the IMU data to obtain a pose change, obtaining a second pose of the device according to the pose change and the first pose, and, after the first pose is updated, compensating for the difference between the second pose and the updated first pose;
predicting the pose of the device at a future time, wherein a deep learning network is used to predict the future pose of the device; and
rendering a virtual object according to the pose of the device at the future time, wherein, during rendering of the virtual object, a historical image frame is deformed to obtain the current image frame according to the pose of the device corresponding to the historical image frame and the pose of the device corresponding to the current image frame.
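The IMU-integration branch of claim 11 can be sketched as follows: IMU samples are integrated into a pose change, that change is applied to the latest first (bundle-adjusted) pose to give a fast second pose, and when bundle adjustment later updates the first pose, the same correction is applied to the second pose. Poses are simplified here to 1-D positions, and the sample values, time step, and update are made-up numbers; the real method operates on full 6-DoF poses.

```python
def integrate_imu(accels, v0, dt):
    """Integrate accelerometer samples into a position change (1-D sketch)."""
    dp, v = 0.0, v0
    for a in accels:
        v += a * dt      # velocity update from acceleration
        dp += v * dt     # position change from velocity
    return dp

first_pose = 1.0                                  # latest bundle-adjusted pose
delta = integrate_imu([0.1, 0.1, 0.1], v0=0.5, dt=0.01)
second_pose = first_pose + delta                  # fast IMU-propagated pose

# Bundle adjustment later refines the first pose; compensate the second pose
# with the same correction so the two stay consistent.
updated_first = 1.02
second_pose += updated_first - first_pose
```

This pattern lets a high-rate IMU track motion between (slower) bundle-adjustment updates without the propagated pose drifting away from the optimized estimate.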
12. An apparatus for obtaining a device pose, comprising:
a processor; and
a memory storing instructions which, when executed by the processor, cause the processor to perform the method according to any one of claims 1 to 10.
13. A computer-readable storage medium storing instructions which, when executed by a processor, enable the processor to perform the method according to any one of claims 1 to 10.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810148359.1A CN110163909A (en) | 2018-02-12 | 2018-02-12 | For obtaining the method, apparatus and storage medium of equipment pose |
KR1020180050747A KR102442780B1 (en) | 2018-02-12 | 2018-05-02 | Method for estimating pose of device and thereof |
US16/114,622 US10964030B2 (en) | 2018-02-12 | 2018-08-28 | Device and method with pose estimator based on current predicted motion state array |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110163909A true CN110163909A (en) | 2019-08-23 |
Family
ID=67635306
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810148359.1A Pending CN110163909A (en) | 2018-02-12 | 2018-02-12 | For obtaining the method, apparatus and storage medium of equipment pose |
Country Status (2)
Country | Link |
---|---|
KR (1) | KR102442780B1 (en) |
CN (1) | CN110163909A (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11100713B2 (en) * | 2018-08-17 | 2021-08-24 | Disney Enterprises, Inc. | System and method for aligning virtual objects on peripheral devices in low-cost augmented reality/virtual reality slip-in systems |
US11430179B2 (en) * | 2020-02-24 | 2022-08-30 | Microsoft Technology Licensing, Llc | Depth buffer dilation for remote rendering |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101826206A (en) * | 2010-03-31 | 2010-09-08 | 北京交通大学 | Camera self-calibration method |
CN106840151A (en) * | 2017-01-23 | 2017-06-13 | 厦门大学 | Model-free deformation of hull measuring method based on delay compensation |
US20170206712A1 (en) * | 2014-11-16 | 2017-07-20 | Eonite Perception Inc. | Optimizing head mounted displays for augmented reality |
CN107065902A (en) * | 2017-01-18 | 2017-08-18 | 中南大学 | UAV Attitude fuzzy adaptive predictive control method and system based on nonlinear model |
CN107193279A (en) * | 2017-05-09 | 2017-09-22 | 复旦大学 | Robot localization and map structuring system based on monocular vision and IMU information |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9652893B2 (en) * | 2014-04-29 | 2017-05-16 | Microsoft Technology Licensing, Llc | Stabilization plane determination based on gaze location |
2018
- 2018-02-12: CN application CN201810148359.1A, patent CN110163909A (status: Pending)
- 2018-05-02: KR application KR1020180050747, patent KR102442780B1 (status: Active, IP Right Grant)
Cited By (25)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110910447A (en) * | 2019-10-31 | 2020-03-24 | 北京工业大学 | Visual odometer method based on dynamic and static scene separation |
CN112785682A (en) * | 2019-11-08 | 2021-05-11 | 华为技术有限公司 | Model generation method, model reconstruction method and device |
CN110954134B (en) * | 2019-12-04 | 2022-03-25 | 上海有个机器人有限公司 | Gyro offset correction method, correction system, electronic device, and storage medium |
CN110954134A (en) * | 2019-12-04 | 2020-04-03 | 上海有个机器人有限公司 | Gyro offset correction method, correction system, electronic device, and storage medium |
CN113807124B (en) * | 2020-06-11 | 2023-12-12 | Oppo广东移动通信有限公司 | Image processing method, device, storage medium and electronic equipment |
CN113807124A (en) * | 2020-06-11 | 2021-12-17 | Oppo广东移动通信有限公司 | Image processing method, image processing device, storage medium and electronic equipment |
CN111862288A (en) * | 2020-07-29 | 2020-10-30 | 北京小米移动软件有限公司 | Pose rendering method, device and medium |
WO2022077284A1 (en) * | 2020-10-14 | 2022-04-21 | 深圳市大疆创新科技有限公司 | Position and orientation determination method for movable platform and related device and system |
CN112489224A (en) * | 2020-11-26 | 2021-03-12 | 北京字跳网络技术有限公司 | Image drawing method and device, readable medium and electronic equipment |
CN112734852A (en) * | 2021-03-31 | 2021-04-30 | 浙江欣奕华智能科技有限公司 | Robot mapping method and device and computing equipment |
CN113218389A (en) * | 2021-05-24 | 2021-08-06 | 北京航迹科技有限公司 | Vehicle positioning method, device, storage medium and computer program product |
CN113218389B (en) * | 2021-05-24 | 2024-05-17 | 北京航迹科技有限公司 | Vehicle positioning method, device, storage medium and computer program product |
CN113674412A (en) * | 2021-08-12 | 2021-11-19 | 浙江工商大学 | Pose fusion optimization-based indoor map construction method and system and storage medium |
CN113674412B (en) * | 2021-08-12 | 2023-08-29 | 浙江工商大学 | Pose fusion optimization-based indoor map construction method, system and storage medium |
CN113847907A (en) * | 2021-09-29 | 2021-12-28 | 深圳市慧鲤科技有限公司 | Positioning method and device, equipment and storage medium |
CN113838135A (en) * | 2021-10-11 | 2021-12-24 | 重庆邮电大学 | Pose estimation method, system and medium based on LSTM double-current convolution neural network |
CN113838135B (en) * | 2021-10-11 | 2024-03-19 | 重庆邮电大学 | Pose estimation method, system and medium based on LSTM double-flow convolutional neural network |
CN114543797A (en) * | 2022-02-18 | 2022-05-27 | 北京市商汤科技开发有限公司 | Pose prediction method and apparatus, device, and medium |
CN114543797B (en) * | 2022-02-18 | 2024-06-07 | 北京市商汤科技开发有限公司 | Pose prediction method and device, equipment and medium |
CN114419259B (en) * | 2022-03-30 | 2022-07-12 | 中国科学院国家空间科学中心 | Visual positioning method and system based on physical model imaging simulation |
CN114419259A (en) * | 2022-03-30 | 2022-04-29 | 中国科学院国家空间科学中心 | Visual positioning method and system based on physical model imaging simulation |
WO2024108394A1 (en) * | 2022-11-22 | 2024-05-30 | 北京小米移动软件有限公司 | Posture acquisition method, apparatus, virtual reality device, and readable storage medium |
CN117726678A (en) * | 2023-12-12 | 2024-03-19 | 中山大学·深圳 | Unmanned system pose estimation method, unmanned system pose estimation device, computer equipment and storage medium |
CN117889853A (en) * | 2024-03-15 | 2024-04-16 | 歌尔股份有限公司 | SLAM positioning method, terminal device and readable storage medium |
CN117889853B (en) * | 2024-03-15 | 2024-06-04 | 歌尔股份有限公司 | SLAM positioning method, terminal device and readable storage medium |
Also Published As
Publication number | Publication date |
---|---|
KR102442780B1 (en) | 2022-09-14 |
KR20190098003A (en) | 2019-08-21 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110163909A (en) | Method, apparatus and storage medium for obtaining a device pose | |
CN111133747B (en) | Method and device for stabilizing video | |
JP6534664B2 (en) | Method for camera motion estimation and correction | |
CN104982027B (en) | For the system and method by the smooth digital video stabilization of rotation based on constraint | |
WO2021135827A1 (en) | Line-of-sight direction determination method and apparatus, electronic device, and storage medium | |
CN102779334B (en) | Correction method and device of multi-exposure motion image | |
EP2544445A1 (en) | Image processing device, image processing method, image processing program and storage medium | |
US20130076915A1 (en) | Framework for reference-free drift-corrected planar tracking using lucas-kanade optical flow | |
JP2003196661A (en) | Appearance model for visual motion analysis and visual tracking | |
CN113077516B (en) | Pose determining method and related equipment | |
CN110390685B (en) | Feature point tracking method based on event camera | |
KR20150011938A (en) | Method and apparatus for stabilizing panorama video captured based multi-camera platform | |
CN111016887A (en) | Automatic parking device and method for motor vehicle | |
CN111507132A (en) | Positioning method, device and equipment | |
CN113899364A (en) | Positioning method and device, equipment and storage medium | |
CN111798484B (en) | Continuous dense optical flow estimation method and system based on event camera | |
CN110874569B (en) | Unmanned aerial vehicle state parameter initialization method based on visual inertia fusion | |
CN112233149A (en) | Scene flow determination method and device, storage medium and electronic device | |
US20230076331A1 (en) | Low motion to photon latency rapid target acquisition | |
CN111866492A (en) | Image processing method, device and equipment based on head-mounted display equipment | |
CN111866493B (en) | Image correction method, device and equipment based on head-mounted display equipment | |
JP6602089B2 (en) | Image processing apparatus and control method thereof | |
CN115546876B (en) | Pupil tracking method and device | |
JP6437811B2 (en) | Display device and display method | |
JP6352182B2 (en) | How to correct stereo film frame zoom settings and / or vertical offset |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||