CN108106614A - Inertial sensor and visual sensor data fusion algorithm - Google Patents

Inertial sensor and visual sensor data fusion algorithm

Info

Publication number
CN108106614A
CN108106614A (Application CN201711398804.1A)
Authority
CN
China
Prior art keywords
location data
data
location
anchor point
location information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201711398804.1A
Other languages
Chinese (zh)
Other versions
CN108106614B (en)
Inventor
郭*
郭
黄永鑫
路晗
张莹
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
BEIJING LIGHT TECHNOLOGY Co Ltd
Original Assignee
BEIJING LIGHT TECHNOLOGY Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by BEIJING LIGHT TECHNOLOGY Co Ltd filed Critical BEIJING LIGHT TECHNOLOGY Co Ltd
Priority to CN201711398804.1A priority Critical patent/CN108106614B/en
Publication of CN108106614A publication Critical patent/CN108106614A/en
Application granted granted Critical
Publication of CN108106614B publication Critical patent/CN108106614B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/10 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration
    • G01C21/12 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
    • G01C21/16 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
    • G01C21/165 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/20 Instruments for performing navigational calculations
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/25 Fusion techniques

Abstract

The embodiment of the present application provides an inertial sensor and visual sensor data fusion algorithm. With the inertial sensor and visual sensor data fusion positioning method provided by the embodiment of the present invention, location information is determined according to the first location data of an anchor point collected by the inertial sensor and the second location data of the anchor point collected by the visual sensor, or according to the first location data alone when the second location data collected by the visual sensor is unavailable. When determining two-dimensional and/or three-dimensional space coordinates, it is therefore no longer necessary to always determine location information from the second location data collected by the visual sensor: location information can still be output when the visual sensor frequency is low or the acquired image cannot yield accurate location data, which increases the location information output frequency and reduces positioning delay.

Description

Inertial sensor and visual sensor data fusion algorithm
Technical field
The present invention relates to the field of positioning technology, and in particular to an inertial sensor and visual sensor data fusion algorithm.
Background technology
With the rapid development of science and technology, the demand for richer dimensions of human-computer interaction keeps growing. For a long time, human-computer interaction took place mainly in two-dimensional space, for example through flat-panel displays, mice and trackpads. Describing three-dimensional objects in two-dimensional space is inconvenient and inherently limited; in today's era of three-dimensional design and three-dimensional entertainment, users cannot interact through natural actions with the three-dimensional space they see in a design.
The emergence of virtual reality (VR) has pushed human-computer interaction into three dimensions. People can interact with a computer in a real three-dimensional space using their own movement, limb actions, body posture, external input devices and so on. This requires three-dimensional coordinate positioning of key parts of the human body or of external input devices.
In existing three-dimensional coordinate positioning schemes, multiple groups of cameras are erected. All cameras are first calibrated to obtain their intrinsic and extrinsic parameters, then all cameras are controlled to expose synchronously; the two-dimensional coordinates of specified points in each image are computed, the two-dimensional points of the tracked marker are matched across two or more images, and finally the precise position of the target marker point in three-dimensional space is solved from the intrinsic and extrinsic parameters of the corresponding cameras.
However, because the image acquisition frame rate of image sensors such as existing cameras is limited, the frequency at which three-dimensional coordinates can be output from the acquired images is limited and the positioning delay of the three-dimensional coordinates increases. For example, when an industrial-grade global-shutter camera running at 60 FPS (frames per second) is used for three-dimensional coordinate positioning, the data output period (i.e. output delay) is about 16 milliseconds, i.e. one three-dimensional coordinate is output roughly every 16 milliseconds under normal circumstances. Adding the transmission delay and a display output delay of about 11 milliseconds, the actual VR display delay exceeds 20 milliseconds, and a VR display delay above 20 ms during virtual reality interaction causes users obvious dizziness.
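Spelling out the arithmetic behind these figures (a reading of the numbers above, not text from the patent):

```latex
t_{\text{frame}} = \frac{1000\ \text{ms}}{60\ \text{frames}} \approx 16.7\ \text{ms},\qquad
t_{\text{VR}} \approx t_{\text{frame}} + \underbrace{11\ \text{ms}}_{\text{transmission + display}} \approx 27.7\ \text{ms} > 20\ \text{ms}.
```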
To sum up, existing three-dimensional coordinate positioning schemes output three-dimensional coordinates at a low frequency and with excessive delay, and therefore cannot adapt to application scenarios such as VR that place high demands on positioning delay and frequency.
Summary of the invention
The present invention provides an inertial sensor and visual sensor data fusion algorithm, to solve the problem that existing three-dimensional coordinate positioning schemes output three-dimensional coordinates at a low frequency and with excessive delay, and cannot adapt to application scenarios such as VR that place high demands on positioning delay and frequency.
A method of inertial sensor and visual sensor data fusion positioning provided by an embodiment of the present invention includes:
determining the first location data of the anchor point collected by the inertial sensor;
judging whether the second location data of the anchor point collected by the visual sensor can be determined;
if so, determining location information according to the first location data and the second location data;
otherwise, determining location information according to the first location data.
Optionally, before determining the location information according to the first location data and the second location data, the method further includes:
correcting the first location data according to the second location data.
Optionally, before correcting the first location data according to the second location data, the method further includes:
determining that the second location data is valid data;
wherein the valid data is second location data meeting all or part of the following conditions:
when the positioning image of the anchor point is shot, the anchor point is not located at a shooting dead angle of the image sensor shooting the positioning image, the positioning image being used to determine the second location data;
when the positioning image of the anchor point is shot, there is no obstruction between the anchor point and the image sensor shooting the positioning image, the positioning image being used to determine the second location data;
when the positioning image of the anchor point is shot, the distance between the anchor point and the image sensor shooting the positioning image does not exceed the effective shooting distance of the image sensor, the positioning image being used to determine the second location data.
Optionally, determining the location information according to the first location data and the second location data includes:
determining the weight of the first location data through a probabilistic model according to the first location data and the second location data, and determining the weight of the second location data through the probabilistic model according to the first location data and the second location data;
determining the location information according to the first location data, the second location data, the weight of the first location data and the weight of the second location data.
Optionally, determining the location information according to the first location data, the second location data, the weight of the first location data and the weight of the second location data includes:
carrying out data fusion according to the first location data, the second location data, the weight of the first location data and the weight of the second location data, so as to realize correction of the first location data by the second location data; and determining, according to the data fusion result, the current location information of the anchor point and the predicted location information of the anchor point through an extended Kalman filter (EKF) algorithm.
Optionally, determining the location information according to the first location data includes:
determining, through the extended Kalman filter algorithm according to the first location data, the current location information of the anchor point and the predicted location information of the anchor point.
A device for inertial sensor and visual sensor data fusion provided by an embodiment of the present invention includes an inertial sensor, a visual sensor and a processor;
the inertial sensor is configured to obtain the first location data of the anchor point;
the visual sensor is configured to obtain the second location data of the anchor point;
the processor is configured to implement the above method.
Using the method for a kind of inertial sensor provided in an embodiment of the present invention and visual sensor data fusion positioning, energy The second of the anchor point that the first location data and visual sensor of enough anchor points gathered according to inertial sensor gather Location data determines location information or determines when not collecting the second location data of visual sensor acquisition according to first Position data determine location information, and this method can still export positioning letter when visual sensor does not collect location data Breath improves the output frequency of location information, reduces positioning delay.
Description of the drawings
To describe the technical solutions in the embodiments of the present invention more clearly, the accompanying drawings needed for describing the embodiments are briefly introduced below. Apparently, the accompanying drawings in the following description show only some embodiments of the present invention, and a person of ordinary skill in the art may derive other drawings from them without creative effort.
Fig. 1 is a schematic flowchart of a method of inertial sensor and visual sensor data fusion positioning provided by an embodiment of the present invention;
Fig. 2 is a detailed flowchart of a method of inertial sensor and visual sensor data fusion positioning provided by an embodiment of the present invention;
Fig. 3 is a schematic structural diagram of a device for inertial sensor and visual sensor data fusion provided by an embodiment of the present invention.
Detailed description of the embodiments
The present invention is described in detail below.
Unless otherwise defined, all technical and scientific terms used in this application have the same meaning as commonly understood by those of ordinary skill in the art to which the present invention belongs. All patents and publications referred to in this application are incorporated herein by reference.
In the inertial sensor and visual sensor data fusion algorithm according to the embodiment of the present invention, the location information of a specified point is determined from the first location data of the anchor point collected by the inertial sensor and the second location data of the anchor point collected by the visual sensor. In addition, when the visual sensor has not collected the second location data, the algorithm can determine the location information of the specified point according to the first location data alone, so that location information is determined at a frequency higher than the image acquisition frequency of the visual sensor, reducing positioning delay.
As shown in Fig. 1, a method of inertial sensor and visual sensor data fusion positioning provided by an embodiment of the present invention comprises the following steps:
S101: determining the first location data of the anchor point collected by the inertial sensor;
S102: judging whether the second location data of the anchor point collected by the visual sensor can be determined;
S103: if so, determining location information according to the first location data and the second location data;
S104: otherwise, determining location information according to the first location data.
With the above method, location information is determined according to the first location data of the anchor point collected by the inertial sensor and the second location data of the anchor point collected by the visual sensor, or according to the first location data when the second location data collected by the visual sensor is unavailable. When determining two-dimensional and/or three-dimensional space coordinates, it is therefore no longer necessary to always determine location information from the second location data collected by the visual sensor; location information can still be output when the visual sensor frequency is low or the acquired image cannot yield accurate location data, which increases the location information output frequency and reduces positioning delay. In implementation, the two-dimensional and/or three-dimensional location information of the anchor point determined through S101~S104 may be used for virtual reality interaction.
In the embodiment of the present invention, the inertial sensor refers to a positioning sensor whose sensitive devices are a gyroscope and an accelerometer. In use, the inertial sensor requires no external information and radiates no information outward; it can continuously perform two-dimensional and/or three-dimensional positioning and orientation by itself under all-weather conditions. The gyroscope measures the three-dimensional rotational motion components, and the accelerometer measures the three-dimensional translational motion components; by integration, the position, velocity and rotation information of the device at any time can be obtained. In general, an inertial sensor has high positioning accuracy and good stability over short periods, but its measurement error accumulates over time and causes large positioning errors over long periods, so the location data it produces generally cannot be used directly for virtual reality interaction. In implementation, inertial sensor data at a 1 kHz rate may be used, polling for new inertial sensor data with a 1 millisecond query period. In implementation, if it is judged that the first location data of the inertial sensor has not been collected, the inertial sensor may be initialized and the first location data collected again.
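Purely as an illustration of the dead-reckoning integration just described, the Python sketch below shows one inertial update step; the 1 kHz period matches the polling rate mentioned above, while the first-order integration scheme, the gravity constant, and the `gyro`/`accel` argument conventions are assumptions rather than the patent's implementation.

```python
import numpy as np

DT = 0.001  # 1 kHz inertial sample period (1 ms), per the polling rate above

def integrate_imu(pos, vel, rot, gyro, accel):
    """One dead-reckoning step: integrate gyroscope and accelerometer readings.

    pos, vel: 3-vectors in the world frame; rot: 3x3 world-from-body rotation;
    gyro: body angular rate in rad/s; accel: body specific force in m/s^2.
    """
    wx, wy, wz = gyro
    # Skew-symmetric matrix of the angular rate (first-order rotation update).
    omega = np.array([[0.0, -wz,  wy],
                      [ wz, 0.0, -wx],
                      [-wy,  wx, 0.0]])
    rot = rot @ (np.eye(3) + omega * DT)
    # Rotate specific force into the world frame and remove gravity.
    acc_world = rot @ np.asarray(accel) - np.array([0.0, 0.0, 9.81])
    vel = vel + acc_world * DT
    pos = pos + vel * DT
    return pos, vel, rot
```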
The visual sensor in the embodiment of the present invention collects at least one positioning image of the anchor point through an image sensor, identifies the position of the anchor point in the positioning image, and determines the two-dimensional and/or three-dimensional coordinates of the anchor point. In implementation, when it is judged that the visual sensor has not collected a positioning image, it may be judged that the second location data cannot be collected.
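The patent does not say how the anchor point is identified in a positioning image. As a hypothetical sketch only, a bright-marker pipeline could extract the 2D coordinates as the centroid of thresholded pixels; the `locate_anchor_2d` helper and the threshold value are assumptions:

```python
import cv2

def locate_anchor_2d(image_gray, threshold=200):
    """Return the (x, y) pixel centroid of a bright marker, or None if absent."""
    _, mask = cv2.threshold(image_gray, threshold, 255, cv2.THRESH_BINARY)
    m = cv2.moments(mask)
    if m["m00"] == 0:
        return None  # no marker found: treat the second location data as unavailable
    return (m["m10"] / m["m00"], m["m01"] / m["m00"])
```

Returning None here corresponds to the branch above in which the second location data cannot be collected.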
Optionally, before S103, the first location data may be corrected according to the second location data.
Since the measurement error of the inertial sensor accumulates over time, in the embodiment of the present invention the first location data may be corrected according to the second location data before the location information is determined; for example, the first location data may be fused with the second location data according to the weight of the first location data and the weight of the second location data determined through the probabilistic model, thereby correcting the first location data.
Optionally, before correcting the first location data according to the second location data, it may also be determined that the second location data is valid data;
wherein the valid data is second location data meeting all or part of the following conditions: when the positioning image of the anchor point is shot, the anchor point is not located at a shooting dead angle of the image sensor shooting the positioning image, the positioning image being used to determine the second location data; when the positioning image of the anchor point is shot, there is no obstruction between the anchor point and the image sensor shooting the positioning image, the positioning image being used to determine the second location data; and when the positioning image of the anchor point is shot, the distance between the anchor point and the image sensor shooting the positioning image does not exceed the effective shooting distance of the image sensor, the positioning image being used to determine the second location data.
In the embodiment of the present invention, when the visual sensor collects the positioning image of the anchor point, the anchor point may be located at a shooting dead angle of the image sensor, there may be an obstruction between the anchor point and the image sensor, or the distance between the anchor point and the image sensor may exceed the effective shooting distance of the image sensor. Any of these situations can make the location data determined from the positioning image excessively erroneous, so before the first location data is corrected, the second location data is confirmed to be valid data free of all of the above situations, in order to exclude positioning errors caused by problems in the positioning image.
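A minimal sketch of such a validity gate follows; since the patent only names the three conditions, the predicate inputs and the `max_range` value are illustrative assumptions supplied by the surrounding vision pipeline:

```python
import math

def second_data_is_valid(anchor_cam_xyz, in_dead_angle, is_occluded, max_range=5.0):
    """Check the three validity conditions for one positioning image.

    anchor_cam_xyz: anchor point position in the camera frame (meters);
    in_dead_angle, is_occluded: booleans reported by the vision pipeline;
    max_range: assumed effective shooting distance of the image sensor.
    """
    distance = math.sqrt(sum(c * c for c in anchor_cam_xyz))
    return (not in_dead_angle) and (not is_occluded) and (distance <= max_range)
```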
In implementation, if it is judged that the second location data is not valid data, the location information is determined according to the first location data; the method of determining the location information in this case may refer to the method of determining location information according to the first location data when the visual sensor cannot collect the second location data.
Optionally, in S103, determining the location information according to the first location data and the second location data includes:
determining the weight of the first location data through a probabilistic model according to the first location data and the second location data, and determining the weight of the second location data through the probabilistic model according to the first location data and the second location data;
determining the location information according to the first location data, the second location data, the weight of the first location data and the weight of the second location data.
Optionally, determining the location information according to the first location data, the second location data, the weight of the first location data and the weight of the second location data includes: fusing the first location data and the second location data according to the first location data, the second location data, the weight of the first location data and the weight of the second location data; and determining, according to the data fusion result, the current location information of the anchor point and the predicted location information of the anchor point through the extended Kalman filter algorithm.
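The patent does not disclose the probabilistic model used for the weights. One common choice, shown here only as an illustrative sketch, is inverse-variance weighting, where each source's weight comes from an assumed measurement variance:

```python
import numpy as np

def fuse_weighted(first_data, second_data, var_first, var_second):
    """Fuse the two location estimates with inverse-variance weights.

    first_data, second_data: position estimates as 3-vectors;
    var_first, var_second: assumed scalar variances of the two sources.
    """
    w_first, w_second = 1.0 / var_first, 1.0 / var_second
    fused = w_first * np.asarray(first_data) + w_second * np.asarray(second_data)
    return fused / (w_first + w_second)
```

A small visual variance (for example, when the anchor point is well inside the camera's effective shooting distance) pulls the fused result toward the second location data, which is one way to realize the correction of the first location data described above.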
In the embodiment of the present invention, the current location information of the anchor point and the predicted location information of the anchor point may also be determined through the extended Kalman filter algorithm. Kalman filtering is a recursive state and parameter estimation method that takes the minimum mean square error as the criterion for the posterior probability; it mainly comprises two equations, a time update and a measurement update, and is suited to non-stationary problems over a finite observation interval. The purpose of the filtering is to estimate the past and present state of the target and at the same time predict the target's motion state at future times, including parameters such as the target's position, angle, velocity and acceleration. When the system model and the observation model are linear and the system noise and observation noise are Gaussian, the Kalman filter yields the optimal estimator.
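For reference, the time-update and measurement-update equations of the standard linear Kalman filter, which the extended variant applies to linearized models, are (conventional notation, not reproduced from the patent):

```latex
\begin{aligned}
\textbf{Time update:}\quad
\hat{x}_k^- &= A\,\hat{x}_{k-1} + B\,u_{k-1}, &
P_k^- &= A\,P_{k-1}A^{\mathsf T} + Q,\\[2pt]
\textbf{Measurement update:}\quad
K_k &= P_k^- H^{\mathsf T}\bigl(H P_k^- H^{\mathsf T} + R\bigr)^{-1}, &
\hat{x}_k &= \hat{x}_k^- + K_k\bigl(z_k - H\hat{x}_k^-\bigr),\qquad
P_k = (I - K_k H)\,P_k^-.
\end{aligned}
```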
Optionally, in S104, determining the location information according to the first location data includes: determining the current location information of the anchor point and/or the predicted location information of the anchor point through the extended Kalman filter algorithm according to the first location data.
In the embodiment of the present invention, when the second location data determined by the visual sensor has not been collected, the current location information of the anchor point and/or the predicted location information of the anchor point may be determined through the extended Kalman filter algorithm according to the first location data.
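The following single-axis sketch shows how such a filter can run the time update from inertial data alone between camera frames and apply the measurement update only when visual data arrives. The constant-velocity model and the noise parameters are assumptions, and for simplicity the sketch is a linear Kalman filter rather than the extended form named in the patent:

```python
import numpy as np

class FusionFilter:
    """One-axis constant-velocity filter: state x = [position, velocity]."""

    def __init__(self, dt=0.001, q=1e-3, r=1e-2):
        self.x = np.zeros(2)
        self.P = np.eye(2)
        self.A = np.array([[1.0, dt], [0.0, 1.0]])   # state transition
        self.B = np.array([0.5 * dt * dt, dt])       # acceleration input
        self.Q = q * np.eye(2)                       # assumed process noise
        self.H = np.array([[1.0, 0.0]])              # vision observes position only
        self.R = np.array([[r]])                     # assumed vision noise

    def predict(self, accel):
        """Time update driven by the inertial (first) data; yields the predicted position."""
        self.x = self.A @ self.x + self.B * accel
        self.P = self.A @ self.P @ self.A.T + self.Q
        return self.x[0]

    def update(self, z):
        """Measurement update with the visual (second) position measurement z."""
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (np.array([z]) - self.H @ self.x)
        self.P = (np.eye(2) - K @ self.H) @ self.P
        return self.x[0]
```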
As shown in Fig. 2, the method of inertial sensor and visual sensor data fusion positioning provided by the embodiment of the present invention may comprise the following steps:
Step 201: determining the first location data of the anchor point collected by the inertial sensor;
Step 202: judging whether the second location data of the anchor point collected by the visual sensor can be determined; if so, performing step 203, otherwise performing step 205;
Step 203: judging whether the second location data is valid data; if so, performing step 204, otherwise performing step 205;
Step 204: correcting the first location data according to the second location data, and then performing step 206;
Step 205: determining location information according to the first location data, and then ending this flow;
Step 206: determining location information according to the first location data and the second location data, and then ending this flow.
With the above steps 201~206, the location information of the anchor point can be determined according to the inertial sensor and the visual sensor, including the current location information of the anchor point and/or the predicted location information of the anchor point. With an inertial sensor at a 1 kHz rate and a visual sensor at 60 FPS, the delay of the finally obtained location information is 1 ms, which increases the output frequency of the location information and reduces positioning delay; the location information can also be predicted, further reducing positioning delay.
Before step 201, if it is judged that the inertial sensor has not collected the first location data, the first location data may be re-determined after the inertial sensor is initialized.
After step 205 and/or step 206, the above steps 201~206 may be repeated to continuously determine multiple pieces of location information of the anchor point, as the sketch below illustrates.
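Putting steps 201~206 together, an overall loop might be organized as sketched below; every callable passed in is a placeholder for a component described above (IMU readout, vision readout, validity gate, correction, and the EKF steps), so this is a structural sketch rather than the patent's implementation:

```python
def fusion_positioning_loop(read_imu, try_read_vision, is_valid, correct,
                            ekf_update, ekf_predict):
    """Yield location information once per inertial sample (e.g. at 1 kHz)."""
    while True:
        first_data = read_imu()                # step 201: inertial (first) location data
        second_data = try_read_vision()        # step 202: None between camera frames
        if second_data is not None and is_valid(second_data):  # steps 202-203
            first_data = correct(first_data, second_data)      # step 204
            yield ekf_update(first_data, second_data)          # step 206
        else:
            yield ekf_predict(first_data)      # step 205: first location data only
```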
Based on the same inventive concept, an embodiment of the present invention also provides a device for inertial sensor and visual sensor data fusion. Since the principle by which the device solves the technical problem is the same as that of the above method, the implementation of the device may refer to the implementation of the above method, and repeated parts are not described again.
As shown in Fig. 3, a device 300 for inertial sensor and visual sensor data fusion provided by an embodiment of the present invention includes an inertial sensor 301, a visual sensor 302 and a processor 303, wherein the inertial sensor 301 is configured to obtain the first location data of the anchor point, the visual sensor 302 is configured to obtain the second location data of the anchor point, and the processor 303 is configured to implement the method described in the embodiment of the present invention.
Based on the device 300 described in Fig. 3, the processor 303 can determine location information according to the first location data of the anchor point collected by the inertial sensor 301 and the second location data of the anchor point collected by the visual sensor 302, or according to the first location data when the visual sensor 302 has not collected the second location data. When determining two-dimensional and/or three-dimensional space coordinates, it is therefore no longer necessary to always determine location information from the second location data collected by the visual sensor 302; location information can still be output when the frequency of the visual sensor 302 is low or the acquired image cannot yield accurate location data, which increases the location information output frequency and reduces positioning delay.
The preferred embodiments of the present invention are described above, but they are not intended to limit the invention. Those skilled in the art may make improvements and variations to the embodiments disclosed herein without departing from the scope and spirit of the invention.

Claims (7)

1. A method of inertial sensor and visual sensor data fusion positioning, characterized in that the method comprises:
determining first location data of an anchor point collected by an inertial sensor;
judging whether second location data of the anchor point collected by a visual sensor can be determined;
if so, determining location information according to the first location data and the second location data;
otherwise, determining location information according to the first location data.
2. The method according to claim 1, characterized in that before determining the location information according to the first location data and the second location data, the method further comprises:
correcting the first location data according to the second location data.
3. The method according to claim 2, characterized in that before correcting the first location data according to the second location data, the method further comprises:
determining that the second location data is valid data;
wherein the valid data is second location data meeting all or part of the following conditions:
when the positioning image of the anchor point is shot, the anchor point is not located at a shooting dead angle of the image sensor shooting the positioning image, the positioning image being used to determine the second location data;
when the positioning image of the anchor point is shot, there is no obstruction between the anchor point and the image sensor shooting the positioning image, the positioning image being used to determine the second location data;
when the positioning image of the anchor point is shot, the distance between the anchor point and the image sensor shooting the positioning image does not exceed the effective shooting distance of the image sensor, the positioning image being used to determine the second location data.
4. The method according to claim 1, characterized in that determining the location information according to the first location data and the second location data comprises:
determining the weight of the first location data through a probabilistic model according to the first location data and the second location data, and determining the weight of the second location data through the probabilistic model according to the first location data and the second location data;
determining the location information according to the first location data, the second location data, the weight of the first location data and the weight of the second location data.
5. The method according to claim 4, characterized in that determining the location information according to the first location data, the second location data, the weight of the first location data and the weight of the second location data comprises:
carrying out data fusion according to the first location data, the second location data, the weight of the first location data and the weight of the second location data, so as to realize correction of the first location data by the second location data;
determining, according to the data fusion result, the current location information of the anchor point and the predicted location information of the anchor point through the extended Kalman filter algorithm.
6. The method according to claim 1, characterized in that determining the location information according to the first location data comprises:
determining, through the extended Kalman filter algorithm according to the first location data, the current location information of the anchor point and the predicted location information of the anchor point.
7. A device for inertial sensor and visual sensor data fusion, characterized in that the device comprises: an inertial sensor, a visual sensor and a processor;
the inertial sensor is configured to obtain first location data of an anchor point;
the visual sensor is configured to obtain second location data of the anchor point;
the processor is configured to implement the method according to any one of claims 1-6.
CN201711398804.1A 2017-12-22 2017-12-22 Inertial sensor and visual sensor data fusion algorithm Active CN108106614B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711398804.1A CN108106614B (en) 2017-12-22 2017-12-22 Inertial sensor and visual sensor data fusion algorithm

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711398804.1A CN108106614B (en) 2017-12-22 2017-12-22 Inertial sensor and visual sensor data fusion algorithm

Publications (2)

Publication Number Publication Date
CN108106614A true CN108106614A (en) 2018-06-01
CN108106614B (en) 2019-02-19

Family

ID=62212350

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711398804.1A Active CN108106614B (en) 2017-12-22 2017-12-22 Inertial sensor and visual sensor data fusion algorithm

Country Status (1)

Country Link
CN (1) CN108106614B (en)



Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102768042A (en) * 2012-07-11 2012-11-07 清华大学 Visual-inertial combined navigation method
CN103411621A (en) * 2013-08-09 2013-11-27 东南大学 Indoor-mobile-robot-oriented optical flow field vision/inertial navigation system (INS) combined navigation method
CN103941270A * 2014-02-28 2014-07-23 北京邮电大学 Multi-system fusion positioning method and device
CN103983263A (en) * 2014-05-30 2014-08-13 东南大学 Inertia/visual integrated navigation method adopting iterated extended Kalman filter and neural network
US9805512B1 (en) * 2015-11-13 2017-10-31 Oculus Vr, Llc Stereo-based calibration apparatus
CN106093853A * 2016-06-07 2016-11-09 北京邮电大学 Mobile station location measurement method and device
CN107223275A * 2016-11-14 2017-09-29 深圳市大疆创新科技有限公司 Method and system for multi-channel sensor data fusion
CN107274438A * 2017-06-28 2017-10-20 山东大学 Single-Kinect multi-person tracking system and method supporting mobile virtual reality applications

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109166181A * 2018-08-12 2019-01-08 苏州炫感信息科技有限公司 A hybrid motion capture system based on deep learning
CN109945890A * 2018-11-21 2019-06-28 财团法人车辆研究测试中心 Multi-positioning system switching and fusion correction method and device
CN109945890B (en) * 2018-11-21 2022-01-25 财团法人车辆研究测试中心 Multi-positioning system switching and fusion correction method and device
CN110186450A * 2019-05-13 2019-08-30 深圳市普渡科技有限公司 Robot positioning deviation repair method and system
CN110132280A * 2019-05-20 2019-08-16 广州小鹏汽车科技有限公司 Vehicle positioning method, vehicle positioning device and vehicle in an indoor scene
CN111323069A (en) * 2020-03-23 2020-06-23 清华大学 Multi-sensor online calibration method and system based on deep reinforcement learning
WO2022228056A1 (en) * 2021-04-30 2022-11-03 华为技术有限公司 Human-computer interaction method and device

Also Published As

Publication number Publication date
CN108106614B (en) 2019-02-19

Similar Documents

Publication Publication Date Title
CN108106614A (en) A kind of inertial sensor melts algorithm with visual sensor data
US9401025B2 (en) Visual and physical motion sensing for three-dimensional motion capture
US8705799B2 (en) Tracking an object with multiple asynchronous cameras
CN104658012B (en) Motion capture method based on inertia and optical measurement fusion
JP6734940B2 (en) Three-dimensional measuring device
CN108846867A A SLAM system based on multi-camera panoramic inertial navigation
CN105094335B Situation extraction method, object positioning method and system thereof
CN109211277B (en) State determination method and device of visual inertial odometer and electronic equipment
US20100194879A1 (en) Object motion capturing system and method
RU2572637C2 (en) Parallel or serial reconstructions in online and offline modes for 3d measurements of rooms
CN104704384A (en) Image processing method, particularly used in a vision-based localization of a device
JP6288858B2 (en) Method and apparatus for estimating position of optical marker in optical motion capture
CN104964685A (en) Judgment method for moving state of mobile phone
CN109284006B (en) Human motion capturing device and method
CN106708037A (en) Autonomous mobile equipment positioning method and device, and autonomous mobile equipment
US20150139505A1 (en) Method and apparatus for predicting human motion in virtual environment
CA2694123A1 (en) Instant calibration of multi-sensor 3d motion capture system
CN109040525B (en) Image processing method, image processing device, computer readable medium and electronic equipment
CN111194122A (en) Somatosensory interactive light control system
JP2017524932A (en) Video-assisted landing guidance system and method
GB2466714A (en) Hybrid visual and physical object tracking for virtual (VR) system
CN113610702B (en) Picture construction method and device, electronic equipment and storage medium
CN110099206B (en) Robot-based photographing method, robot and computer-readable storage medium
CN107449403B (en) Time-space four-dimensional joint imaging model and application
WO2019119597A1 (en) Method for implementing planar recording and panoramic recording by coordination between mobile terminal and lens assembly and lens assembly

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant