CN110186458A - Indoor positioning method based on OS-ELM fusion of visual and inertial information - Google Patents

Indoor positioning method based on OS-ELM fusion of visual and inertial information

Info

Publication number
CN110186458A
Authority
CN
China
Prior art keywords
vector
output
elm
training
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910415446.3A
Other languages
Chinese (zh)
Inventor
徐岩
李宁宁
安卫凤
崔媛媛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tianjin University
Original Assignee
Tianjin University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tianjin University
Priority to CN201910415446.3A
Publication of CN110186458A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01C: MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00: Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/10: Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration
    • G01C21/12: Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
    • G01C21/16: Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
    • G01C21/165: Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments
    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01C: MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00: Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/20: Instruments for performing navigational calculations
    • G01C21/206: Instruments for performing navigational calculations specially adapted for indoor navigation

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Automation & Control Theory (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an indoor positioning method based on OS-ELM fusion of visual and inertial information, comprising: preprocessing the collected inertial and visual sensor data to generate training feature vectors, and modeling the training data consisting of the training feature vectors and the target outputs; inputting the training data into an OS-ELM model to initialize the output weight vector, with the training feature vectors as input vectors and the corresponding target displacements as training output vectors; updating the output weight vector of the OS-ELM model with new training data through online sequential learning, and obtaining the final weight vector through iteration as the optimal weight vector; modeling the test data to generate test feature vectors, and obtaining the corresponding test output vectors from the test data through the optimal weight vector; and introducing a turn judgment to optimize the test output vectors, then computing the results after the turn judgment to obtain the final positioning result.

Description

Indoor positioning method based on OS-ELM fusion of visual and inertial information
Technical field
The present invention relates to the fields of indoor positioning, information fusion and signal processing, and more particularly to an indoor positioning method fusing visual and inertial information based on OS-ELM (online sequential extreme learning machine).
Background art
In recent years, with the rapid growth of demand for indoor positioning services, indoor positioning systems have become increasingly important. The Global Positioning System (GPS) is the most popular system for positioning and navigation; it can provide accurate location information anywhere in the world, day and night. Indoors, however, wall obstruction and multipath effects make it difficult for GPS to receive enough satellite signals, so the positioning accuracy drops sharply and cannot match that achievable outdoors. Many alternatives to GPS have therefore been proposed to solve the indoor positioning problem. These solutions fall broadly into two classes: positioning techniques based on a single information source, and positioning techniques based on multi-source information fusion.
Among the techniques based on a single information source, methods based on received signal strength (RSS) are currently the most widely used. Common signal sources include WiFi[1], FM[2] and Bluetooth. These methods usually involve two stages: a fingerprint database associated with the true positions is first built, and the received signal fingerprints are then matched against the database by a matching algorithm to obtain the corresponding location. In complex spaces with interference from other signals, however, the stability of these methods is poor. Although some special devices (such as radio frequency identification (RFID), ultra-wideband (UWB) and ultrasound) can provide fairly good positioning accuracy using only a single information source, deploying and maintaining their hardware is usually very expensive.
With the continuous improvement of computer vision technology, visual navigation systems (VNS) have become a research focus. A VNS uses visual data to understand and perceive the indoor environment; compared with non-visual navigation systems, it offers rich information, immunity to signal noise and high positioning accuracy, and can achieve high precision in scenes with abundant, distinctive features for matching[3]. C. Piciarelli[4] proposed a visual indoor positioning technique (referred to here as the VL algorithm) that localizes by comparing a reference model of images with position-tagged visual features. The experimental results show that although the VL algorithm can localize precisely most of the time, it performs poorly under occlusion, lighting changes, interference from people walking through, and similar conditions. A VNS can therefore be fused with other navigation devices to provide higher positioning accuracy.
Summary of the invention
The present invention provides an indoor positioning method based on OS-ELM fusion of visual and inertial information. Inertial and visual information are fused using OS-ELM, so that the positioning process retains both the short-term high accuracy of the inertial navigation system (INS) and the high precision a VNS achieves in feature-rich scenes, while reducing the large positioning errors a VNS produces under abrupt environmental changes (for example lighting changes and interference from people walking through). Details are described below:
An indoor positioning method based on OS-ELM fusion of visual and inertial information, the method comprising the following steps:
1) preprocessing the collected inertial and visual sensor data to generate training feature vectors, and modeling the training data consisting of the training feature vectors and the target outputs;
2) inputting the training data into an OS-ELM model to initialize the output weight vector, with the training feature vectors as input vectors and the corresponding target displacements as training output vectors;
3) updating the output weight vector of the OS-ELM model with new training data through online sequential learning, and obtaining the final weight vector through iteration as the optimal weight vector;
4) modeling the test data to generate test feature vectors, and obtaining the corresponding test output vectors from the test data through the optimal weight vector;
5) introducing a turn judgment to optimize the test output vectors, so as to reduce the positioning error caused by blurred images captured at turns; and computing the results after the turn judgment to obtain the final positioning result.
Wherein, modeling the training data consisting of the training feature vectors and the target outputs specifically comprises:
representing the input data of each frame to be positioned in the model by M vectors of several dimensions, computing the difference between the true two-dimensional coordinates of image I_i and image I_(i+m), and composing the target output vector of each frame from it.
Further, the optimal weight vector is specifically:
where H is the hidden-layer output matrix, W is the target vector, β is the output weight vector, T denotes the transpose, and k is the iteration index of the sequential update.
Wherein, obtaining the corresponding test output vectors from the test data through the optimal weight vector specifically comprises:
where G(input_i) is the input vector used for testing, i is the picture index, and L is the number of pictures.
Further, introducing the turn judgment to optimize the test output vectors specifically comprises:
when the following two conditions are met, the current frame is considered to be in a turning state, and the displacement between pictures I_i and I_(i+m) is set to zero;
1) when m = 1, the number of matched pairs N_0 is still less than N_a;
2) when the angle difference between two consecutive frames is greater than the threshold:
angle(I_(i+1)) - angle(I_i) > threshold
where angle(I_i) is the angle of the i-th frame and the threshold is set to 0.001.
Wherein, computing the results after the turn judgment to obtain the final positioning result specifically comprises:
computing the moving distance ΔS_i between each frame and the next,
and then integrating ΔS_i to obtain the final positioning result;
where S_i is the displacement between pictures I_i and I_(i+m), m is the interval between pictures I_i and I_(i+m), and i is the picture index.
The beneficial effects of the technical solution provided by the present invention are:
1) the present invention models the visual, inertial and target position information and fuses the inertial and visual information using OS-ELM, so that the inertial and visual information complement each other in accuracy and frequency response, improving positioning performance;
2) the invention introduces a turn judgment to reduce the large positioning errors caused by blurred images captured at turns, and can still provide accurate positioning results in scenes with interference from people walking through and with rapid rotation of the carrier;
3) experiments verify that every evaluation index of the present invention is better than that of the VL algorithm, which can meet the requirements of location-based services in real life.
Brief description of the drawings
Fig. 1 is a flowchart of the indoor positioning method fusing visual and inertial information based on OS-ELM;
Fig. 2 compares the positioning results of this method and the positioning algorithm proposed in reference [4];
wherein (a) is the positioning result for the X coordinate and (b) is the positioning result for the Y coordinate;
Fig. 3 compares the cumulative error distributions of this method and the positioning algorithm proposed in reference [4];
Fig. 4 shows the relationship between the positioning accuracy and the number of hidden nodes and the activation function of OS-ELM.
Detailed description of the embodiments
To make the objects, technical solutions and advantages of the present invention clearer, the embodiments of the present invention are described in further detail below.
To overcome the problems of the positioning techniques based on a single information source described above, fusing multi-source information with a fusion algorithm to obtain a complementary, reliable and stable positioning technique has become the development trend of current positioning technology. A large number of research results show that positioning techniques based on multi-source information fusion can improve the overall positioning accuracy. S. Knauth et al.[5] used a particle filter (PF) to integrate the measurements of an INS, WiFi and floor-plan information; for cases that do not require real-time positioning, it can provide relatively high accuracy. S. Papaioannou et al.[6] combined infrastructure cameras with radio for positioning; the results show that the system can solve the problems a VNS encounters in challenging indoor environments.
The indoor positioning algorithm based on OS-ELM fusion of visual and inertial information provided by this embodiment of the present invention consists mainly of four parts: establishing the data model, initializing the OS-ELM model, updating the OS-ELM model online, and online adaptive positioning.
One, establishing the data model:
The collected inertial and visual sensor data are preprocessed to generate feature vectors, and the training data consisting of the feature vectors and the target outputs are modeled. The input data of each frame to be positioned is represented in the model by M 13-dimensional vectors. The difference between the true two-dimensional coordinates of image I_i and image I_(i+m), i.e. the moving distance ΔS_i from image I_i to image I_(i+m), is computed and composed into the target output vector of each frame.
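As a concrete illustration of this data model, the sketch below (a minimal Python sketch with hypothetical variable names, assuming the ground-truth two-dimensional coordinates of every frame are available) builds the per-frame target output as the coordinate difference between frame i and frame i+m:

```python
import numpy as np

def build_targets(coords, m):
    """Per-frame target output: displacement between frame i and frame i+m.

    coords : (N, 2) array of ground-truth 2D positions, one row per frame.
    m      : matching step, i.e. the interval between the compared frames.
    Returns an (N-m, 2) array whose i-th row is Delta S_i = coords[i+m] - coords[i].
    """
    coords = np.asarray(coords, dtype=float)
    return coords[m:] - coords[:-m]

# Example: a straight 15 m walk sampled every metre, matching step m = 5
coords = np.column_stack([np.linspace(0.0, 15.0, 16), np.zeros(16)])
targets = build_targets(coords, m=5)   # each row equals (5.0, 0.0)
```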
Two, initializing the OS-ELM model:
OS-ELM is used to fit the training input data and the corresponding target outputs, yielding the initial OS-ELM model and the corresponding initial output weight vector.
Three, updating the OS-ELM model online:
The output weight vector is updated with new training data through the online sequential learning of OS-ELM, so as to obtain the best OS-ELM model and the optimal weight vector.
Four, online adaptive positioning:
The output vectors corresponding to the test data are obtained with the trained OS-ELM model. A turn judgment is introduced to reduce the large positioning errors caused by blurred images captured at turns, and the results after the turn judgment are then computed to obtain the final positioning result.
Embodiment 1
The technical solution of the present invention is further described below with reference to specific formulas and the drawings, as detailed in the following description:
The collected inertial and visual sensor data are preprocessed to generate feature vectors, and the training data consisting of the feature vectors and the target outputs are modeled:
The visual information is preprocessed first: SURF (speeded-up robust features) features are extracted from every training frame, and the image I_i numbered i is matched with the image I_(i+m) numbered i+m. Mismatches are then removed with a bidirectional matching algorithm, the N_a matched pairs with the highest matching scores are retained, and the affine transformation matrix P is computed from these matched pairs.
Each affine transformation matrix P is calculated according to formula (1):
In the formula, r represents the rotation angle, A is the scale vector, and T_x and T_y represent the translation.
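The visual preprocessing described above can be sketched roughly as follows. This is a minimal sketch, assuming opencv-contrib-python built with the non-free SURF module; the cross-check brute-force matcher stands in for the bidirectional matching step, and cv2.estimateAffinePartial2D stands in for the affine fit to the retained match pairs, from which r, A, T_x and T_y are recovered:

```python
import cv2
import numpy as np

def visual_features(img_i, img_im, n_best=24):
    """Match SURF features between frame i and frame i+m and fit an affine model."""
    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
    kp1, des1 = surf.detectAndCompute(img_i, None)
    kp2, des2 = surf.detectAndCompute(img_im, None)

    # crossCheck=True keeps only mutual best matches (a bidirectional check)
    bf = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True)
    matches = sorted(bf.match(des1, des2), key=lambda mt: mt.distance)[:n_best]

    src = np.float32([kp1[mt.queryIdx].pt for mt in matches])
    dst = np.float32([kp2[mt.trainIdx].pt for mt in matches])

    # 2x3 matrix encoding rotation r, scale A and translations Tx, Ty
    P, _ = cv2.estimateAffinePartial2D(src, dst)
    r = np.arctan2(P[1, 0], P[0, 0])      # rotation angle
    A = np.hypot(P[0, 0], P[1, 0])        # scale
    tx, ty = P[0, 2], P[1, 2]             # translations
    return np.array([r, A, tx, ty]), len(matches)
```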
Secondly, the inertial information is preprocessed: the acceleration components, angular velocity components and timestamps corresponding to the interval between image I_i and image I_(i+m) are differenced according to formulas (2)-(5), of which the time difference is
Δtime_(i) = time_(i) - time_(i-m)   (5)
where the four differences are, respectively, the x-axis acceleration difference, the y-axis acceleration difference, the z-axis angular velocity difference and the time difference for image I_i.
Then the difference between the true two-dimensional coordinates of image I_i and image I_(i+m), i.e. the moving distance ΔS_i from image I_i to image I_(i+m), is computed and composed into the target output vector of each frame. Finally, the affine transformation matrix and the corresponding acceleration, angular velocity and time differences form the input vector.
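A corresponding sketch of the inertial differencing in formulas (2)-(5) and of the assembly of the input vector is given below. The exact composition of the per-frame input vector is not fully spelled out in the extracted text, so the concatenation of the affine parameters with the inertial differences is an assumption; the indices follow formula (5), which differences frame i against frame i-m:

```python
import numpy as np

def inertial_diffs(acc_x, acc_y, gyro_z, timestamps, i, m):
    """Differences of formulas (2)-(5): sensor readings at frame i minus frame i-m."""
    d_ax = acc_x[i] - acc_x[i - m]            # x-axis acceleration difference
    d_ay = acc_y[i] - acc_y[i - m]            # y-axis acceleration difference
    d_wz = gyro_z[i] - gyro_z[i - m]          # z-axis angular velocity difference
    d_t = timestamps[i] - timestamps[i - m]   # time difference, formula (5)
    return np.array([d_ax, d_ay, d_wz, d_t])

def make_input_vector(affine_params, inertial_delta):
    """Concatenate the affine parameters (r, A, Tx, Ty) with the inertial differences."""
    return np.concatenate([np.ravel(affine_params), np.ravel(inertial_delta)])
```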
In the initialization phase, the feature vectors in the first batch of training data are used as the training inputs and the corresponding target displacements as the training outputs, which are input into OS-ELM to initialize the output weight vector β^(0).
Formula (6) gives the definition of the weight vector β^(0), and formula (7) gives its explicit solution.
In the formulas, H is the hidden-layer output matrix, W is the target vector, β is the output weight vector, T denotes the transpose, T_0 is the first-batch training output data, and H_0 is the initial hidden-layer output matrix.
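Under the standard OS-ELM formulation, the initialization phase amounts to drawing random input weights, computing the first-batch hidden-layer matrix H_0, and solving for β^(0) from H_0 and the first-batch targets T_0. The sketch below is a generic OS-ELM initialization written under that assumption, not code taken from the patent:

```python
import numpy as np

class OSELM:
    """Minimal OS-ELM with a sigmoid hidden layer (generic formulation)."""

    def __init__(self, n_input, n_hidden=150, seed=0):
        rng = np.random.default_rng(seed)
        self.W_in = rng.uniform(-1.0, 1.0, (n_input, n_hidden))  # fixed input weights
        self.b = rng.uniform(-1.0, 1.0, n_hidden)                # fixed hidden biases

    def hidden(self, X):
        # sigmoid activation, the choice used in the experiments reported below
        return 1.0 / (1.0 + np.exp(-(X @ self.W_in + self.b)))

    def init_phase(self, X0, T0):
        """First batch: K0 = H0^T H0 and beta(0) = K0^{-1} H0^T T0."""
        H0 = self.hidden(X0)
        self.K = H0.T @ H0
        self.beta = np.linalg.pinv(self.K) @ H0.T @ T0
        return self.beta
```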
In the online sequential learning phase, the output weight vector is updated with the training data arriving online, so as to obtain the optimal weight vector;
where the update matrix is obtained recursively starting from K_0.
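The recursion referred to here matches the standard OS-ELM sequential update, in which each new chunk of training data refines K and the output weights without retraining from scratch; the sketch below states that generic recursion (H_k is the hidden-layer output of the new chunk and T_k its targets):

```python
import numpy as np

def oselm_sequential_update(K, beta, Hk, Tk):
    """One OS-ELM chunk update:
    K_{k+1}    = K_k + Hk^T Hk
    beta_{k+1} = beta_k + K_{k+1}^{-1} Hk^T (Tk - Hk beta_k)
    """
    K_new = K + Hk.T @ Hk
    beta_new = beta + np.linalg.pinv(K_new) @ Hk.T @ (Tk - Hk @ beta)
    return K_new, beta_new
```

Iterating this update over all incoming training chunks yields the final weight vector that is used as the optimal weight vector in the test phase.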
The output vectors corresponding to the test data are obtained with the optimal output weight vector;
where G(input_i) is the input vector used for testing.
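In the test phase, each test feature vector is pushed through the same fixed hidden layer and multiplied by the final output weights; a minimal sketch under the same generic OS-ELM assumptions:

```python
import numpy as np

def oselm_predict(x, W_in, b, beta):
    """Predicted displacement for one test feature vector x:
    h = G(x W_in + b) with a sigmoid G, output = h @ beta."""
    h = 1.0 / (1.0 + np.exp(-(np.asarray(x) @ W_in + b)))
    return h @ beta
```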
A turn judgment is introduced to reduce the large positioning errors caused by blurred images captured at turns.
When the following two conditions are met, the current frame is considered to be in a turning state, and the displacement between pictures I_i and I_(i+m) is set to zero:
(1) when m = 1, the number of matched pairs N_0 is still less than N_a;
(2) when the angle difference between two consecutive frames is greater than the threshold:
angle(I_(i+1)) - angle(I_i) > threshold
where angle(I_i) is the angle of the i-th frame and the threshold is set to 0.001.
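The two conditions can be written directly as a predicate. The values N_a = 24 and threshold = 0.001 follow the text, angle(I_i) is taken here to be the per-frame angle recovered from the affine fit, and the wording is read as requiring both conditions to hold simultaneously:

```python
def is_turning(n_matches, angle_i, angle_next, m, n_a=24, angle_threshold=0.001):
    """Turn judgment: the current frame is treated as turning (displacement set
    to zero) when, even with matching step m = 1, fewer than N_a pairs match
    and the angle change between consecutive frames exceeds the threshold."""
    too_few_matches = (m == 1) and (n_matches < n_a)
    large_rotation = (angle_next - angle_i) > angle_threshold
    return too_few_matches and large_rotation
```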
The results after the turn judgment are then computed to obtain the final positioning result.
The moving distance ΔS_i between each frame and the next is first computed by formula (11),
and ΔS_i is then integrated by formula (12) to obtain the final positioning result.
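Formula (12) is read here as a cumulative sum of the per-frame displacements starting from a known initial position; a minimal sketch:

```python
import numpy as np

def accumulate_positions(deltas, start=(0.0, 0.0)):
    """Integrate the per-frame displacements Delta S_i into absolute 2D positions."""
    deltas = np.asarray(deltas, dtype=float)
    return np.asarray(start, dtype=float) + np.cumsum(deltas, axis=0)
```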
In conclusion the embodiment of the present invention merges inertia by OS-ELM and visual information is more robust, more smart to provide Quasi- positioning result, the embodiment of the present invention introduce corner judgement, reduce and on the corner produce since the image of acquisition is fuzzy Raw larger position error.
Embodiment 2
The feasibility of the solution in Embodiment 1 is verified below with reference to Figs. 2-4, Tables 1-2 and a specific example, as described below:
To evaluate this method, the algorithm steps of Embodiment 1 are applied to an experiment with a total duration of 56 seconds and a travel distance of 15 m; the experiment includes interference scenarios such as people walking through at will and abrupt scene changes. The parameters are set as follows: the number of hidden nodes is 150, the number of SURF features is 40, the match-pair threshold N_a is 24, and the initial matching step of the online adaptive positioning phase is 5.
Qualitatively, Fig. 2 compares the positioning results of this method and the positioning algorithm proposed in reference [4], and Fig. 3 compares their cumulative error distributions. To let the VL algorithm reach its best positioning performance, the number of SURF features of the VL algorithm is set to 100 in the comparison experiments. The experimental results show that the VL algorithm provides good positioning results in feature-rich environments, but mismatches occur when people walk through, which makes the positioning error large and uncontrollable, so the algorithm has certain limitations. This method, in contrast, has clear advantages in reliability and stability, and can still keep the error within 1 m and provide accurate positioning results under interference from people.
Quantitatively, Table 1 lists the evaluation results obtained by the two positioning algorithms.
Table 1 Evaluation results of the two positioning algorithms
Every evaluation index of this method is better than that of the VL algorithm. This method is not only more accurate than the VL algorithm but also about twice as fast in positioning time. It keeps the RMSE and the mean error within 0.65 m and 0.42 m respectively, which fully meets the requirements of indoor positioning.
In practical applications, the relevant parameters of this method need to be configured. Fig. 4 shows that the positioning error gradually decreases as the number of hidden nodes increases, and that once the number of hidden nodes grows beyond a certain point the positioning error no longer decreases significantly. It also shows that the sigmoid activation function gives better positioning results than the sine and radbas functions.
Regarding the choice of the initial matching step of the online adaptive positioning phase, Table 2 shows the positioning results obtained with different initial matching step values in that phase.
Table 2 Positioning results of different initial matching steps in the online adaptive positioning phase
In summary, the optimal parameters of this experiment are as follows: the number of hidden nodes is 150, the activation function is the sigmoid function, and the initial matching step of the online adaptive phase is set to 5. The experiments show that, with these parameter settings, this method achieves very good real-time performance, stability and positioning accuracy.
Bibliography
[1] W. Xue, W. Qiu, X. Hua, and K. Yu, "Improved Wi-Fi RSSI Measurement for Indoor Localization," IEEE Sensors J., vol. 17, no. 7, pp. 2224-2230, Apr. 2017.
[2] Y. Chen, D. Lymberopoulos, J. Liu, and B. Priyantha, "Indoor Localization Using FM Signals," IEEE Trans. Mobile Comput., vol. 12, no. 8, pp. 1502-1517, Aug. 2013.
[3] V. M. Sineglazov, "Visual Navigation System Adjustment," Actual Problems of Unmanned Aerial Vehicles Developments (APUAVD), Ukraine, Oct. 2017, pp. 7-12.
[4] C. Piciarelli, "Visual Indoor Localization in Known Environments," IEEE Signal Process. Lett., vol. 23, no. 10, pp. 1330-1334, 2016.
[5] S. Knauth and A. Koukofikis, "Smartphone positioning in large environments by sensor data fusion, particle filter and FCWC," in Proc. Int. Conf. Indoor Positioning Indoor Navig., Oct. 2016, pp. 1-5.
[6] S. Papaioannou, H. Wen, A. Markham, and N. Trigoni, "Fusion of radio and camera sensor data for accurate indoor positioning," in Proc. IEEE MASS, Oct. 2014, pp. 109-117.
Those skilled in the art will appreciate that the drawings are only schematic diagrams of a preferred embodiment, and that the serial numbers of the above embodiments of the present invention are for description only and do not indicate the relative merits of the embodiments.
The above are only preferred embodiments of the present invention and are not intended to limit the invention; any modification, equivalent replacement, improvement and the like made within the spirit and principles of the present invention shall fall within the protection scope of the present invention.

Claims (6)

1. An indoor positioning method based on OS-ELM fusion of visual and inertial information, characterized in that the method comprises the following steps:
1) preprocessing the collected inertial and visual sensor data to generate training feature vectors, and modeling the training data consisting of the training feature vectors and the target outputs;
2) inputting the training data into an OS-ELM model to initialize the output weight vector, with the training feature vectors as input vectors and the corresponding target displacements as training output vectors;
3) updating the output weight vector of the OS-ELM model with new training data through online sequential learning, and obtaining the final weight vector through iteration as the optimal weight vector;
4) modeling the test data to generate test feature vectors, and obtaining the corresponding test output vectors from the test data through the optimal weight vector;
5) introducing a turn judgment to optimize the test output vectors, so as to reduce the positioning error caused by blurred images captured at turns; and computing the results after the turn judgment to obtain the final positioning result.
2. The indoor positioning method based on OS-ELM fusion of visual and inertial information according to claim 1, characterized in that modeling the training data consisting of the training feature vectors and the target outputs specifically comprises:
representing the input data of each frame to be positioned in the model by M vectors of several dimensions, computing the difference between the true two-dimensional coordinates of image I_i and image I_(i+m), and composing the target output vector of each frame from it.
3. The indoor positioning method based on OS-ELM fusion of visual and inertial information according to claim 1, characterized in that the optimal weight vector is specifically:
where H is the hidden-layer output matrix, W is the target vector, β is the output weight vector, and T denotes the transpose.
4. The indoor positioning method based on OS-ELM fusion of visual and inertial information according to claim 3, characterized in that obtaining the corresponding test output vectors from the test data through the optimal weight vector specifically comprises:
where G(input_i) is the input vector used for testing, i is the picture index, and L is the number of pictures.
5. The indoor positioning method based on OS-ELM fusion of visual and inertial information according to claim 1, characterized in that introducing the turn judgment to optimize the test output vectors specifically comprises:
when the following two conditions are met, the current frame is considered to be in a turning state, and the displacement between pictures I_i and I_(i+m) is set to zero;
1) when m = 1, the number of matched pairs N_0 is still less than N_a;
2) when the angle difference between two consecutive frames is greater than the threshold:
angle(I_(i+1)) - angle(I_i) > threshold
where angle(I_i) is the angle of the i-th frame and the threshold is set to 0.001.
6. The indoor positioning method based on OS-ELM fusion of visual and inertial information according to claim 1, characterized in that computing the results after the turn judgment to obtain the final positioning result specifically comprises:
computing the moving distance ΔS_i between each frame and the next,
and then integrating ΔS_i to obtain the final positioning result;
where S_i is the displacement between pictures I_i and I_(i+m), m is the interval between pictures I_i and I_(i+m), and i is the picture index.
CN201910415446.3A 2019-05-17 2019-05-17 Indoor positioning method based on OS-ELM fusion of visual and inertial information Pending CN110186458A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910415446.3A CN110186458A (en) 2019-05-17 2019-05-17 Indoor positioning method based on OS-ELM fusion of visual and inertial information


Publications (1)

Publication Number Publication Date
CN110186458A (en) 2019-08-30

Family

ID=67716759

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910415446.3A Pending CN110186458A (en) Indoor positioning method based on OS-ELM fusion of visual and inertial information

Country Status (1)

Country Link
CN (1) CN110186458A (en)



Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106482738A (en) * 2016-11-09 2017-03-08 Indoor fingerprint localization algorithm based on an online incremental extreme learning machine
CN108204812A (en) * 2016-12-16 2018-06-26 Unmanned aerial vehicle speed estimation method
CN108021947A (en) * 2017-12-25 2018-05-11 Vision-based hierarchical extreme learning machine target recognition method
CN108716917A (en) * 2018-04-16 2018-10-30 Indoor positioning method fusing inertial and visual information based on ELM
CN108984785A (en) * 2018-07-27 2018-12-11 Method and device for updating a fingerprint database based on historical data and increments
CN109327797A (en) * 2018-10-15 2019-02-12 Mobile robot indoor positioning system based on WiFi network signals
CN109752725A (en) * 2019-01-14 2019-05-14 Low-speed service robot, positioning and navigation method and positioning and navigation system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Mingyang Zhang et al., "Pedestrian Dead-Reckoning Indoor Localization Based on OS-ELM," IEEE Access *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113111873A (en) * 2021-03-25 2021-07-13 贵州电网有限责任公司 PCA-based weighted fusion mobile robot positioning method
CN113591015A (en) * 2021-07-30 2021-11-02 北京小狗吸尘器集团股份有限公司 Time delay calculation method and device, storage medium and electronic equipment
CN116735146A (en) * 2023-08-11 2023-09-12 中国空气动力研究与发展中心低速空气动力研究所 Wind tunnel experiment method and system for establishing aerodynamic model
CN116735146B (en) * 2023-08-11 2023-10-13 中国空气动力研究与发展中心低速空气动力研究所 Wind tunnel experiment method and system for establishing aerodynamic model
CN117765084A (en) * 2024-02-21 2024-03-26 电子科技大学 Visual positioning method for iterative solution based on dynamic branch prediction
CN117765084B (en) * 2024-02-21 2024-05-03 电子科技大学 Visual positioning method for iterative solution based on dynamic branch prediction

Similar Documents

Publication Publication Date Title
CN110186458A (en) Indoor positioning method based on OS-ELM fusion of visual and inertial information
CN105424030B (en) Fusion navigation device and method based on wireless fingerprint and MEMS sensor
Pei et al. Optimal heading estimation based multidimensional particle filter for pedestrian indoor positioning
Alonso et al. Accurate global localization using visual odometry and digital maps on urban environments
CN110047142A (en) Unmanned aerial vehicle three-dimensional map construction method and device, computer equipment and storage medium
CN107421546B (en) Passive combined positioning method based on the magnetic signature of the space environment
Zhao et al. Learning-based bias correction for time difference of arrival ultra-wideband localization of resource-constrained mobile robots
CN105556742B (en) Method, device and system for acquiring antenna engineering parameters
CN105043380A (en) Indoor navigation method based on a micro electro mechanical system, WiFi (Wireless Fidelity) positioning and magnetic field matching
CN107396322A (en) Indoor positioning method based on path matching and an encoder-decoder recurrent neural network
US20100315290A1 (en) Globally-convergent geo-location algorithm
CN110187375A (en) Method and device for improving positioning accuracy based on SLAM positioning results
CN109283562A (en) Three-dimensional vehicle localization method and device for the Internet of Vehicles
CN105447574B (en) Auxiliary truncated particle filtering method and device, and target tracking method and device
CN106646366A (en) Visible light positioning method and system based on particle filter algorithm and intelligent equipment
CN106767791A (en) Inertial/visual combined navigation method using a CKF based on particle swarm optimization
KR102075844B1 (en) Localization system merging results of multi-modal sensor based positioning and method thereof
CN109997150A (en) System and method for classifying road features
Michaelsen et al. Stochastic reasoning for structural pattern recognition: An example from image-based UAV navigation
CN108716917A (en) Indoor positioning method fusing inertial and visual information based on ELM
CN106323272B (en) Method and electronic device for obtaining track initiation tracks
CN107148553A (en) Method and system for improving Inertial Measurement Unit sensor signal
CN108303095B (en) Robust volume target cooperative localization method suitable for non-Gaussian filtering
CN108939488A (en) Sailing boat auxiliary training device based on augmented reality and training path planning method
Li et al. Research on the UWB/IMU fusion positioning of mobile vehicle based on motion constraints

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20190830