CN107423547A - Incremental localization algorithm based on a semi-supervised extreme learning machine - Google Patents

Incremental localization algorithm based on a semi-supervised extreme learning machine

Info

Publication number
CN107423547A
CN107423547A (application CN201710255375.6A)
Authority
CN
China
Prior art keywords
matrix
hidden layer
training data
model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201710255375.6A
Other languages
Chinese (zh)
Inventor
卢先领 (Lu Xianling)
朱顺涛 (Zhu Shuntao)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jiangnan University
Original Assignee
Jiangnan University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jiangnan University filed Critical Jiangnan University
Priority to CN201710255375.6A priority Critical patent/CN107423547A/en
Publication of CN107423547A publication Critical patent/CN107423547A/en
Pending legal-status Critical Current


Classifications

    • G: PHYSICS
    • G16: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16Z: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS, NOT OTHERWISE PROVIDED FOR
    • G16Z99/00: Subject matter not provided for in other main groups of this subclass

Landscapes

  • Image Analysis (AREA)

Abstract

The present invention provides an incremental localization algorithm based on a semi-supervised extreme learning machine (Incremental Semi-supervised Extreme Learning Machine, IS-ELM). The invention mainly comprises: building a nonlinear location-estimation model on a semi-supervised extreme learning machine, which reduces the cost of collecting labeled training data in the offline phase; then applying a block-matrix algorithm so that incremental training data can dynamically adjust the model parameters, improving the real-time performance of the localization model and its adaptability to dynamic environments; and finally assigning a suitable penalty weight to newly added training data, so that the model has a timeliness mechanism.

Description

Incremental localization algorithm based on a semi-supervised extreme learning machine
Technical field
The invention belongs to the field of wireless sensor network technology and relates to an incremental localization algorithm based on a semi-supervised extreme learning machine.
Background technology
In recent years, indoor positioning technologies based on wireless signals have received increasing attention in both academia and industry. This is because location information is one of the basic elements of mobile crowd sensing and Internet-of-Things applications, and it has a wide range of indoor application scenarios, such as personnel navigation in airports/railway stations/shopping malls, tracking of special valuables, location-based service push, and security/intrusion detection and prevention. However, indoor environments are complex, and radio signal propagation is easily affected by obstacles such as moving people, furniture, and walls. At the same time, indoor positioning applications place very high demands on positioning accuracy. A large amount of research work has therefore been devoted to indoor positioning technology.
At present, indoor positioning algorithms fall broadly into three classes: strongest-base-station methods, geometric methods, and location-fingerprinting methods. Among these, location fingerprinting is the focus of current indoor positioning research, because, compared with the strongest-base-station and geometric methods, fingerprinting requires no special equipment on either the base-station side or the mobile-terminal side and achieves higher positioning accuracy (average localization error of 1-3 meters). In particular, with the rapid spread of smartphones and WLAN infrastructure, positioning based on WiFi signal-strength fingerprints has become the main direction of research and application.
Many scholars at home and abroad have explored indoor fingerprint localization. Zhou et al. studied an indoor fingerprinting algorithm based on weighted K nearest neighbors (Weighted K Nearest Neighbor, WKNN); although it achieves high positioning accuracy, different choices of K have a considerable influence on the final accuracy, and no specific criterion for selecting K is given. Figuera et al. proposed an interference-resistant support vector machine (Support Vector Machine, SVM) localization algorithm that effectively improves positioning accuracy, but its training and testing are slow, so positioning has poor real-time performance. Dwiyasa et al. proposed a fingerprint localization algorithm based on the extreme learning machine (Extreme Learning Machine, ELM); using the extremely fast learning speed of ELM, it greatly reduces the offline learning time and, relying on its compact network structure, effectively overcomes the influence of environmental change and RSS time variation on positioning accuracy. However, it uses a supervised learning paradigm, which imposes a high cost for collecting labeled training data in the offline phase and makes the algorithm less practical. Li et al. proposed a semi-supervised extreme learning machine based on co-training, which reduces training cost, but its repeated training cycles slow down learning and accumulate error. Liu et al. proposed a semi-supervised extreme learning machine based on manifold regularization (Semi-Supervised Extreme Learning Machine, SELM) for indoor location estimation, which maintains good positioning accuracy even when labeled data are sparse; its drawback is that the model is too fixed, so positioning accuracy drops rapidly as the environment changes. Yang et al. applied incremental learning to real-time indoor positioning, performing continuous online learning on new samples in each feedback cycle, but collecting a large amount of newly labeled training data online consumes enormous manpower and resources.
Summary of the invention
The purpose of the invention is to solve the problems of poor real-time performance and insufficient adaptability to dynamic environments in conventional indoor fingerprint localization algorithms, by proposing an incremental localization algorithm based on a semi-supervised extreme learning machine (Incremental Semi-Supervised Extreme Learning Machine, IS-ELM). The technical solution adopted by the present invention is:
An incremental localization algorithm based on a semi-supervised extreme learning machine, comprising the following steps:
Step 1: obtain labeled training data and unlabeled training data from the actual environment, which together form the initialization training dataset for semi-supervised learning;
Step 2: set the system parameters of the localization model, including the activation function g(x), the number of hidden-layer nodes L, the regularization parameter μ, and the penalty weight ω;
Step 3: randomly assign the input weight matrix W = [w_1, w_2, ..., w_L]^T and the hidden-node biases b = [b_1, b_2, ..., b_L]^T;
Step 4: compute the initialization hidden-layer output matrix H_0, the initialization graph Laplacian matrix L̃_0, and the initialization hidden-layer output weight matrix β^(0); add a regularization term to the hidden-layer output weight matrix β to obtain the new hidden-layer output weight matrix;
Step 5: when new training data arrive in the online phase, compute the new graph Laplacian matrix L̃_{k+1} and the iterated model parameter β^(k+1);
Step 6: assign the penalty weight ω to the original model parameters, set the maximum number of iterations, and update the final output model parameters by Newton-type iteration to obtain the final model parameter β^(out);
Step 7: input test data into the localization model to perform location estimation.
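The offline part of the seven steps above (steps 1-4) can be sketched compactly; the following is a hypothetical NumPy sketch, not the patented implementation — the similarity graph, the transition matrix J, and the sigmoid activation are all simplifying assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def hidden_output(X, W, b):
    # H[i, j] = g(w_j . x_i + b_j) with a sigmoid activation g (assumed)
    return 1.0 / (1.0 + np.exp(-(X @ W.T + b)))

# Step 1: labeled (X_l, T_l) and unlabeled X_u training data (synthetic stand-ins)
X_l = rng.normal(size=(20, 5));  T_l = rng.normal(size=(20, 2))
X_u = rng.normal(size=(30, 5))
X0 = np.vstack([X_l, X_u])

# Step 2: system parameters (hidden nodes, regularization)
L_nodes, mu, lam = 10, 0.1, 0.01

# Step 3: random input weights and hidden-node biases
W = rng.normal(size=(L_nodes, 5));  b = rng.normal(size=L_nodes)

# Step 4: H_0, a toy graph Laplacian L = P - Q, and the closed-form beta^(0)
H0 = hidden_output(X0, W, b)
Q = np.exp(-np.sum((X0[:, None] - X0[None]) ** 2, axis=2))   # dense similarity
Lap = np.diag(Q.sum(axis=1)) - Q                             # Laplacian P - Q
J = np.zeros((50, 50)); J[:20, :20] = np.eye(20)             # label selector
T = np.vstack([T_l, np.zeros((30, 2))])
K0 = mu * np.eye(L_nodes) + H0.T @ (J + lam * Lap) @ H0
beta = np.linalg.solve(K0, H0.T @ J @ T)

print(beta.shape)  # (10, 2): one output weight vector per hidden node
```

Steps 5-7 (incremental update, penalty weight, estimation) build on this `beta` online.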
Further, in step 4, a regularization term is added to the original localization model, and the construction of the initialized IS-ELM localization model can then be described as the following second-order optimization problem;
For the initialization training dataset, the initialization input feature matrix is X_0 = [x_1, x_2, ..., x_N]^T, the initialization target output matrix is the coordinate matrix of the labeled initialization training data, T_0 = [t_1, t_2, ..., t_N]^T, the number of initialization training samples is N, and the initialization hidden-layer output matrix is H_0,
where x is the fingerprint vector of a labeled training sample; the initialization hidden-layer output weight matrix is then computed as
β^(0) = H_0^{-1} T_0  (2)
Based on the graph-regularization framework, a graph is built from the training data using nearest-neighbor similarity; the vertices of the graph correspond to the labeled and unlabeled training data, and the edges represent the similarity between examples. The initialization graph Laplacian matrix L̃_0 is then computed,
where P_0 is the initialization diagonal matrix based on graph manifold regularization and Q_0 is the initialization sample similarity matrix based on graph manifold regularization;
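As an illustration, a graph Laplacian of the kind described (diagonal degree matrix P minus similarity matrix Q, built from nearest-neighbor similarity) can be constructed as follows; the Gaussian kernel and the value of k are assumptions, since the patent does not fix them:

```python
import numpy as np

def graph_laplacian(X, k=3, sigma=1.0):
    """L = P - Q: P is the diagonal degree matrix, Q the kNN similarity matrix."""
    n = len(X)
    d2 = np.sum((X[:, None] - X[None]) ** 2, axis=2)   # pairwise squared distances
    Q = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(d2[i])[1:k + 1]              # k nearest neighbors (skip self)
        Q[i, nbrs] = np.exp(-d2[i, nbrs] / (2 * sigma ** 2))
    Q = np.maximum(Q, Q.T)                             # symmetrize the graph
    P = np.diag(Q.sum(axis=1))                         # degree matrix
    return P - Q

X = np.random.default_rng(1).normal(size=(8, 2))
Lap = graph_laplacian(X)
print(np.allclose(Lap.sum(axis=1), 0))  # rows of a Laplacian sum to zero -> True
```

The zero row sums are the standard sanity check: P's diagonal is exactly the row sums of Q.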
The regularization term μ||β||² is added to the original initialization localization model, and the construction of the initialized IS-ELM localization model can then be described as the following second-order optimization problem
ε(β) = min ||JHβ - T||² + λ(Hβ)^T L Hβ + μ||β||²  (4)
where the term ||JHβ - T||² represents the empirical risk, the term λ(Hβ)^T L Hβ represents the model complexity, and the term μ||β||² represents the structural risk; μ and λ are regularization parameters, L is the graph Laplacian matrix, J is the transition matrix from all training data to the labeled training data, H is the hidden-layer output matrix, ε(β) is the optimization objective with respect to β, and β is the hidden-layer output weight matrix; when λ = 0, the unlabeled training data are not considered;
The hidden-layer output weight matrix is solved using the method of Lagrange multipliers:
∂ε/∂β = [μI + H^T(J + λL)H]β - H^T J T = 0  (5)
β = [μI + H^T(J + λL)H]^{-1} H^T J T  (6)
where I is the identity matrix, H^T is the transpose of the hidden-layer output matrix, and T is the target output matrix, i.e., the coordinate matrix of the labeled training data;
The output weight matrix for the initialization dataset is written as
β^(0) = K_0^{-1} H_0^T J_0 T_0  (7)
with K_0 = μI + H_0^T (J_0 + λ L̃_0) H_0  (8)
In formula (7), β^(0) is the initialization hidden-layer output weight matrix, H_0^T is the transpose of the initialization hidden-layer output matrix, T_0 is the initialization target output matrix, and J_0 is the transition matrix from all initialization training data to the labeled training data; in formula (8), L̃_0 is the graph Laplacian matrix for the initialization training dataset.
Further, in step 5, the model parameters are incrementally adjusted using a block-matrix algorithm:
When new training data arrive in the online phase, the graph Laplacian matrix for the enlarged training dataset is assembled from L̃_0, the graph Laplacian matrix of the initialization training dataset, and L̃_1, the graph Laplacian matrix of the newly added training dataset; the output weight matrices of the existing localization model and of the localization model after incremental learning are denoted β and β*, respectively, related by β* = β + Δβ, where Δβ denotes the change in the model caused by the incremental data,
where H_{k+1} is the hidden-layer output matrix after the (k+1)-th incremental learning, H_{k+1}^T is the transpose of the hidden-layer output matrix after the (k+1)-th incremental learning, J_{k+1} is the transition matrix of the (k+1)-th incremental data, and L̃_{k+1} is the graph Laplacian matrix of the (k+1)-th incremental dataset; β^(k+1) is the hidden-layer output weight matrix after the (k+1)-th incremental learning, β^(k) is the hidden-layer output weight matrix after the k-th incremental learning, and T_{k+1} is the coordinate matrix of the labeled training data in the (k+1)-th incremental data.
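One plausible reading of this block-matrix increment is a recursive accumulation: the Gram-type matrix K and the right-hand side are summed block by block, so each new data block refreshes β without retraining on the full dataset. The exact update of the patent's formula (10) is not reproduced here; this sketch is an assumed recursive variant:

```python
import numpy as np

class IncrementalSolver:
    """Accumulate K = mu*I + sum_k H_k^T (J_k + lam*L_k) H_k and
    r = sum_k H_k^T J_k T_k, then refresh beta^(k+1) = K^{-1} r per block."""
    def __init__(self, n_hidden, mu=0.1, lam=0.01):
        self.K = mu * np.eye(n_hidden)
        self.r = None
        self.lam = lam

    def add_block(self, H, J, Lap, T):
        self.K += H.T @ (J + self.lam * Lap) @ H
        rhs = H.T @ J @ T
        self.r = rhs if self.r is None else self.r + rhs
        return np.linalg.solve(self.K, self.r)  # updated beta^(k+1)

rng = np.random.default_rng(2)
solver = IncrementalSolver(n_hidden=6)
beta = None
for _ in range(3):                    # three incremental data blocks
    H = rng.normal(size=(10, 6))      # hidden-layer outputs for the block
    J = np.eye(10)                    # toy block: every sample labeled
    Lap = np.zeros((10, 10))          # toy block: no graph term
    T = rng.normal(size=(10, 2))      # labeled coordinates
    beta = solver.add_block(H, J, Lap, T)
print(beta.shape)  # (6, 2)
```

Ignoring the cross-block Laplacian terms, as the patent does, is what makes this per-block accumulation possible.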
Further, in step 6, after a suitable penalty weight is assigned to the incremental data, the hidden-layer output weight matrix becomes the form of formula (11);
If a one-shot weight correction were applied to formula (11), the localization model would not be stable enough. For this reason, the value of the IS-ELM model parameter is obtained by Newton-type iteration, i.e., formula (11) is computed iteratively; when |β^(k+1) - β^(k)| < ε (ε is an initial threshold), the iteration stops; otherwise β^(k) = β^(k+1) is set, and iteration continues until the iteration count exceeds the maximum number of iterations. The final output weight matrix of the IS-ELM model is β^(out) = β^(k+1), completing the training of the location estimation model.
Compared with the prior art, the present invention has the following innovations:
(1) Traditional semi-supervised fingerprint localization algorithms consider only the empirical error and the model complexity; the present invention introduces the structural error, avoiding overfitting.
(2) Traditional fingerprint localization algorithms use a batch-learning paradigm; the present invention uses an incremental-learning paradigm, improving the algorithm's adaptability to dynamic environments and the real-time performance of positioning.
(3) Traditional localization algorithms do not consider the timeliness of newly added data; the present invention assigns a suitable penalty weight to newly added data and solves the model parameters by Newton's method, so that the localization algorithm has a timeliness mechanism.
Brief description of the drawings
Fig. 1 is the flow chart of the IS-ELM algorithm of the present invention.
Fig. 2 shows the localization error of IS-ELM under different penalty weights.
Fig. 3 shows the convergence of IS-ELM as a function of the number of iterations.
Fig. 4 compares the positioning accuracy of different algorithms at the same error distance.
Fig. 5 compares the training time of different algorithms.
Fig. 6 shows the erroneous-positioning probability of the SELM algorithm.
Fig. 7 shows the erroneous-positioning probability of the IS-ELM algorithm.
Fig. 8 shows the test error of the two algorithms over different time periods.
Detailed description of the embodiments
The invention is further described below with reference to the specific drawings and embodiments.
The traditional SELM algorithm considers only the empirical risk and the model complexity, and the principle of empirical risk minimization easily produces overfitting. A regularization term is therefore added to the original localization model, so that a balance is struck among the empirical risk, the model complexity, and the structural risk, thereby improving the generalization ability of the localization model. The constructed localization model can be described as the following second-order optimization problem
ε(β) = min ||JHβ - T||² + λ(Hβ)^T L Hβ + μ||β||²  (4)
where the term ||JHβ - T||² represents the empirical risk, the term λ(Hβ)^T L Hβ represents the model complexity, and the term μ||β||² represents the structural risk; μ and λ are regularization parameters, L is the graph Laplacian matrix, J is the transition matrix from all training data to the labeled training data, H is the hidden-layer output matrix, ε(β) is the optimization objective with respect to β, and β is the hidden-layer output weight matrix. The hidden-layer output matrix is
H = [ g(w_1·x_1 + b_1) ... g(w_L·x_1 + b_L) ; ... ; g(w_1·x_N + b_1) ... g(w_L·x_N + b_L) ]  (N×L),
where x is the fingerprint vector of a labeled training sample, g(x) is the activation function, L is the number of hidden-layer nodes, w are the input weights, b are the thresholds of the hidden-layer nodes, and N is the number of training samples; the target output matrix is T = [t_1, t_2, ..., t_N]^T, where t is the coordinate vector of each group of training data; the hidden-layer output weight matrix is β = [β_1, β_2, ..., β_L]^T, where β_i is the output weight vector of the i-th hidden-layer node.
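The hidden-layer output matrix defined above maps directly to code; here is a minimal version with a sigmoid activation (the choice of sigmoid is an assumption — the patent only requires some activation g):

```python
import numpy as np

def hidden_layer_matrix(X, W, b):
    """H[i, j] = g(w_j . x_i + b_j), with g = sigmoid.
    X: (N, d) fingerprints, W: (L, d) input weights, b: (L,) thresholds."""
    return 1.0 / (1.0 + np.exp(-(X @ W.T + b)))

rng = np.random.default_rng(3)
X = rng.normal(size=(4, 3))        # N = 4 fingerprints, d = 3 access points
W = rng.normal(size=(5, 3))        # L = 5 hidden nodes
b = rng.normal(size=5)
H = hidden_layer_matrix(X, W, b)
print(H.shape)                     # (4, 5): one row per training sample
```

In ELM the pair (W, b) stays fixed after random assignment; only β is trained.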
The output weight matrix is solved using the method of Lagrange multipliers:
β = [μI + H^T(J + λL)H]^{-1} H^T J T  (6)
where I is the identity matrix, H^T is the transpose of the hidden-layer output matrix, and T is the target output matrix, i.e., the coordinate matrix of the labeled training data. The output weight matrix for the initialization dataset is written as
β^(0) = K_0^{-1} H_0^T J_0 T_0  (7)
where β^(0) is the initialization hidden-layer output weight matrix, H_0^T is the transpose of the initialization hidden-layer output matrix, T_0 is the initialization target output matrix, J_0 is the transition matrix from all initialization training data to the labeled training data, and L̃_0 is the graph Laplacian matrix for the initialization training dataset.
The method for solving the output weight matrix is redefined using a block-matrix algorithm, so that incremental learning can be carried out on the basis of semi-supervised learning; this avoids the memory shortage caused by batch processing and improves the adaptability of the location estimation model to dynamic environments.
Let the newly added incremental dataset comprise a labeled dataset (x, t) and an unlabeled dataset x', where x is the fingerprint vector of the labeled data, t is the coordinate vector of the labeled data, and x' is the fingerprint vector of the unlabeled data; N_0 and N_1 are the numbers of initialization training samples and first-increment samples, respectively. The output weight matrix after joint training is
given below, where β^(1) is the hidden-layer output weight matrix after the first incremental learning, J_1 is the transition matrix after the first incremental learning, H_1 is the hidden-layer output matrix after the first incremental learning, and T_1 is the coordinate matrix of the labeled data in the first increment; L̃_1 is the graph Laplacian matrix of the first incremental data, and the cross terms are the graph Laplacian operators between different data blocks. When the graph Laplacian matrix after the first incremental learning is computed, the graph Laplacian operators between different blocks are ignored, so the final graph Laplacian matrix is block-diagonal; this yields the iterative form:
The above formula can be rewritten; accordingly, β^(1) can be written in the form shown above. As can be seen from this equation, the influence of the incremental data on the SELM model is reflected by the correction term. Thus, when the (k+1)-th data block arrives, the general form of one step of incremental learning is derived, and the IS-ELM output weight matrix is written in the form of formula (10),
where H_{k+1} is the hidden-layer output matrix after the (k+1)-th incremental learning, H_{k+1}^T is its transpose, J_{k+1} is the transition matrix of the (k+1)-th incremental data, and L̃_{k+1} is the graph Laplacian matrix after the (k+1)-th incremental learning; β^(k+1) is the hidden-layer output weight matrix after the (k+1)-th incremental learning, β^(k) is the hidden-layer output weight matrix after the k-th incremental learning, and T_{k+1} is the coordinate matrix of the labeled data in the (k+1)-th incremental data.
In summary, the output weight matrices of the existing model and of the model after incremental learning are denoted β and β*, respectively, with β* = β + Δβ, where Δβ denotes the change caused by the incremental data.
The IS-ELM algorithm decomposes the solution of the output weight matrix on the basis of the SELM model, greatly reducing the amount of computation, but it does not yet take the timeliness of newly added data into account. In an actual positioning environment, because the environment changes dynamically, newly added data have a pronounced timeliness; it is therefore necessary to consider that data from longer ago contribute less and newer data contribute more, and this real-time update mechanism gives the currently collected data a higher contribution to the model.
The correction term in formula (10) can be understood as the error produced when the old model parameters are used to generate outputs for the newly added training data; a weight penalty is therefore applied to this term, so that the larger the error of this term, the more the old model parameters are corrected. Formula (10) is therefore rewritten as formula (11).
The newly introduced penalty weight ω acts as the degree of influence of the incremental training data on the original model, thereby relatively reducing the interference of outdated training data with the location estimation model.
The penalty weight ω is a constant whose value must be determined from specific location data and tests. First, a range [1, 10] of candidate penalty weights is selected; for each integer value in this range, the model is dynamically adjusted with all incremental data, location estimation is performed with the adjusted model, and the root-mean-square error between the predicted and actual positions is computed; the value ω_0 at which the root-mean-square error attains its minimum is selected as the penalty weight.
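The integer grid search just described can be sketched as follows; `fit_and_rmse` is a hypothetical stub standing in for "adjust the model with all increments under weight ω, then measure RMSE against known positions":

```python
import numpy as np

def rmse(pred, true):
    # root-mean-square error between predicted and actual 2-D positions
    return float(np.sqrt(np.mean(np.sum((pred - true) ** 2, axis=1))))

def fit_and_rmse(omega, X, T):
    # Stub: a real implementation would retrain IS-ELM with penalty weight
    # omega on all incremental blocks and evaluate on held-out positions.
    pred = X[:, :2] / omega          # placeholder "model"
    return rmse(pred, T)

rng = np.random.default_rng(4)
X = rng.normal(size=(10, 4));  T = rng.normal(size=(10, 2))
errors = {w: fit_and_rmse(w, X, T) for w in range(1, 11)}   # integer omega in [1, 10]
omega0 = min(errors, key=errors.get)                        # omega minimizing RMSE
print(1 <= omega0 <= 10)  # True
```

The search is cheap because ω takes only ten candidate values; the cost is dominated by the repeated incremental fits.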
If a one-shot weight correction were applied to formula (11), the localization model would not be stable enough. For this reason, the IS-ELM model parameter β is obtained by Newton-type iteration, i.e., formula (11) is computed iteratively. Let the maximum number of iterations be T. When |β^(k+1) - β^(k)| < ε (ε is an initial threshold), the iteration stops; otherwise β^(k) = β^(k+1) is set, and iteration continues until the iteration count exceeds the maximum number of iterations. The final output weight matrix of the IS-ELM model is β^(out) = β^(k+1), completing the training of the location estimation model.
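The stopping rule above (repeat the update, stop when |β^(k+1) - β^(k)| < ε or the iteration cap is hit) is generic; `update` below is a stand-in for one application of formula (11), and the toy update is only there to demonstrate the loop:

```python
import numpy as np

def iterate_to_convergence(beta0, update, eps=1e-6, max_iter=100):
    """Repeat beta <- update(beta) until the change is below eps
    or the iteration count exceeds max_iter."""
    beta = beta0
    for _ in range(max_iter):
        beta_next = update(beta)
        if np.max(np.abs(beta_next - beta)) < eps:
            return beta_next         # converged: |beta^(k+1) - beta^(k)| < eps
        beta = beta_next
    return beta                      # iteration cap reached

# Toy contraction with fixed point 2.0: beta <- 0.5*beta + 1
beta_out = iterate_to_convergence(np.array([0.0]), lambda b: 0.5 * b + 1.0)
print(np.round(beta_out[0], 4))  # 2.0
```

Because the toy update is a contraction, the change halves each step and the ε test fires well before the 100-iteration cap.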

Claims (4)

1. An incremental localization algorithm based on a semi-supervised extreme learning machine, characterized by comprising the following steps:
Step 1: obtain labeled training data and unlabeled training data from the actual environment, which together form the initialization training dataset for semi-supervised learning;
Step 2: set the system parameters of the localization model, including the activation function g(x), the number of hidden-layer nodes L, the regularization parameter μ, and the penalty weight ω;
Step 3: randomly assign the input weight matrix W = [w_1, w_2, ..., w_L]^T and the hidden-node biases b = [b_1, b_2, ..., b_L]^T;
Step 4: compute the initialization hidden-layer output matrix H_0, the initialization graph Laplacian matrix L̃_0, and the initialization hidden-layer output weight matrix β^(0); add a regularization term to the hidden-layer output weight matrix β to obtain the new hidden-layer output weight matrix;
Step 5: when new training data arrive in the online phase, compute the new graph Laplacian matrix L̃_{k+1} and the iterated model parameter β^(k+1);
Step 6: assign the penalty weight ω to the original model parameters, set the maximum number of iterations, and update the final output model parameters by Newton-type iteration to obtain the final model parameter β^(out);
Step 7: input test data into the localization model to perform location estimation.
2. The incremental localization algorithm based on a semi-supervised extreme learning machine according to claim 1, characterized in that
in step 4, a regularization term is added to the original localization model, and the construction of the initialized IS-ELM localization model can then be described as the following second-order optimization problem;
For the initialization training dataset, the initialization input feature matrix is X_0 = [x_1, x_2, ..., x_N]^T, the initialization target output matrix is the coordinate matrix of the labeled initialization training data, T_0 = [t_1, t_2, ..., t_N]^T, the number of initialization training samples is N, and the initialization hidden-layer output matrix is H_0,
where x is the fingerprint vector of a labeled training sample; the initialization hidden-layer output weight matrix is then computed as
β^(0) = H_0^{-1} T_0  (2)
Based on the graph-regularization framework, a graph is built from the training data using nearest-neighbor similarity; the vertices of the graph correspond to the labeled and unlabeled training data, and the edges represent the similarity between examples. The initialization graph Laplacian matrix L̃_0 is then computed,
where P_0 is the initialization diagonal matrix based on graph manifold regularization and Q_0 is the initialization sample similarity matrix based on graph manifold regularization;
The regularization term μ||β||² is added to the original initialization localization model, and the construction of the initialized IS-ELM localization model can be described as the following second-order optimization problem
ε(β) = min ||JHβ - T||² + λ(Hβ)^T L Hβ + μ||β||²  (4)
where the term ||JHβ - T||² represents the empirical risk, the term λ(Hβ)^T L Hβ represents the model complexity, and the term μ||β||² represents the structural risk; μ and λ are regularization parameters, L is the graph Laplacian matrix, J is the transition matrix from all training data to the labeled training data, H is the hidden-layer output matrix, ε(β) is the optimization objective with respect to β, and β is the hidden-layer output weight matrix; when λ = 0, the unlabeled training data are not considered;
The hidden-layer output weight matrix is solved using the method of Lagrange multipliers:
∂ε/∂β = [μI + H^T(J + λL)H]β - H^T J T = 0  (5)
β = [μI + H^T(J + λL)H]^{-1} H^T J T  (6)
where I is the identity matrix, H^T is the transpose of the hidden-layer output matrix, and T is the target output matrix, i.e., the coordinate matrix of the labeled training data;
The output weight matrix for the initialization dataset is written as
β^(0) = K_0^{-1} H_0^T J_0 T_0  (7)
with K_0 = μI + H_0^T (J_0 + λ L̃_0) H_0  (8)
In formula (7), β^(0) is the initialization hidden-layer output weight matrix, H_0^T is the transpose of the initialization hidden-layer output matrix, T_0 is the initialization target output matrix, and J_0 is the transition matrix from all initialization training data to the labeled training data; in formula (8), L̃_0 is the graph Laplacian matrix for the initialization training dataset.
3. The incremental localization algorithm based on a semi-supervised extreme learning machine according to claim 2, characterized in that
in step 5, the model parameters are incrementally adjusted using a block-matrix algorithm:
when new training data arrive in the online phase, the graph Laplacian matrix for the enlarged training dataset is assembled from L̃_0, the graph Laplacian matrix of the initialization training dataset, and L̃_1, the graph Laplacian matrix of the newly added training dataset; the output weight matrices of the existing localization model and of the localization model after incremental learning are denoted β and β*, respectively, related by β* = β + Δβ, where Δβ denotes the change in the model caused by the incremental data,
where H_{k+1} is the hidden-layer output matrix after the (k+1)-th incremental learning, H_{k+1}^T is the transpose of the hidden-layer output matrix after the (k+1)-th incremental learning, J_{k+1} is the transition matrix of the (k+1)-th incremental data, and L̃_{k+1} is the graph Laplacian matrix of the (k+1)-th incremental dataset; β^(k+1) is the hidden-layer output weight matrix after the (k+1)-th incremental learning, β^(k) is the hidden-layer output weight matrix after the k-th incremental learning, and T_{k+1} is the coordinate matrix of the labeled training data in the (k+1)-th incremental data.
4. The incremental localization algorithm based on a semi-supervised extreme learning machine according to claim 3, characterized in that
in step 6, after a suitable penalty weight is assigned to the incremental data, the hidden-layer output weight matrix becomes the form of formula (11);
The value of the IS-ELM localization model parameter is obtained by Newton-type iteration, i.e., formula (11) is computed iteratively; when |β^(k+1) - β^(k)| < ε, where ε is an initial threshold, the iteration stops; otherwise β^(k) = β^(k+1) is set, and iteration continues until the iteration count exceeds the maximum number of iterations; the final output weight matrix of the IS-ELM localization model is β^(out) = β^(k+1), completing the training of the location estimation model.
CN201710255375.6A 2017-04-19 2017-04-19 Incremental localization algorithm based on a semi-supervised extreme learning machine Pending CN107423547A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710255375.6A CN107423547A (en) 2017-04-19 2017-04-19 Incremental localization algorithm based on a semi-supervised extreme learning machine


Publications (1)

Publication Number Publication Date
CN107423547A true CN107423547A (en) 2017-12-01

Family

ID=60423340

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710255375.6A Pending CN107423547A (en) Incremental localization algorithm based on a semi-supervised extreme learning machine

Country Status (1)

Country Link
CN (1) CN107423547A (en)

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109581280A * 2018-09-30 2019-04-05 中科劲点(北京)科技有限公司 Adaptive online tuning method, system and device for terminals
CN109581282A * 2018-11-06 2019-04-05 宁波大学 Indoor positioning method based on Bayesian semi-supervised deep learning
CN109598283A * 2018-10-22 2019-04-09 中南大学 Aluminum electrolysis superheat-degree recognition method based on a semi-supervised extreme learning machine
CN109697453A * 2018-09-30 2019-04-30 中科劲点(北京)科技有限公司 Semi-supervised scene classification and recognition method, system and device based on multimodal fusion
CN109743103A * 2019-02-01 2019-05-10 福州大学 ELM-based fault repair method for FBG sensing network nodes
CN109858511A * 2018-11-30 2019-06-07 杭州电子科技大学 Safe semi-supervised extreme learning machine classification method based on collaborative representation
CN109934295A * 2019-03-18 2019-06-25 重庆邮电大学 Image classification and reconstruction method based on an extreme hidden-feature learning model
CN112861594A * 2020-07-17 2021-05-28 宁夏大学 Online handwritten digit recognition method based on an incremental semi-supervised kernel extreme learning machine
CN112926266A * 2021-03-02 2021-06-08 盐城工学院 Underground air-supply volume estimation method based on a regularized incremental random weight network
CN113470812A * 2021-06-18 2021-10-01 浙江大学 Cardiac transmembrane potential reconstruction method based on a graph convolutional neural network and an iterative shrinkage-thresholding algorithm

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101695152A (en) * 2009-10-12 2010-04-14 Institute of Computing Technology, Chinese Academy of Sciences Indoor positioning method and system
CN103106331A (en) * 2012-12-17 2013-05-15 Tsinghua University Intelligent photolithography linewidth prediction method based on dimensionality-reduction and sample-incremental extreme learning machine
CN103605984A (en) * 2013-11-14 2014-02-26 Xiamen University Indoor scene classification method based on hypergraph learning
CN103942749A (en) * 2014-02-24 2014-07-23 Xidian University Hyperspectral ground-object classification method based on a modified cluster assumption and semi-supervised extreme learning machine
CN104992184A (en) * 2015-07-02 2015-10-21 Southeast University Multiclass image classification method based on semi-supervised extreme learning machine
CN106154221A (en) * 2016-04-05 2016-11-23 Suzhou Vocational University Semi-supervised localization method based on WLAN
CN106482738A (en) * 2016-11-09 2017-03-08 Jiangnan University Indoor fingerprint localization algorithm based on online incremental extreme learning machine

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
LIU Junfa et al., "Incremental wireless localization method with an aging mechanism", Chinese Journal of Computers *
ZHAO Lijie et al., "Operating-condition recognition method for wastewater treatment based on manifold-regularized semi-supervised learning", CIESC Journal *

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109581280A (en) * 2018-09-30 2019-04-05 Zhongke Jindian (Beijing) Technology Co., Ltd. Adaptive online tuning method, system and device for terminals
CN109697453A (en) * 2018-09-30 2019-04-30 Zhongke Jindian (Beijing) Technology Co., Ltd. Semi-supervised scene classification and recognition method, system and device based on multimodal fusion
CN109598283A (en) * 2018-10-22 2019-04-09 Central South University Aluminum electrolysis superheat degree recognition method based on semi-supervised extreme learning machine
CN109581282A (en) * 2018-11-06 2019-04-05 Ningbo University Indoor positioning method based on Bayesian semi-supervised deep learning
CN109858511A (en) * 2018-11-30 2019-06-07 Hangzhou Dianzi University Safe semi-supervised extreme learning machine classification method based on collaborative representation
CN109743103A (en) * 2019-02-01 2019-05-10 Fuzhou University FBG sensor network node fault repair method based on ELM
CN109934295A (en) * 2019-03-18 2019-06-25 Chongqing University of Posts and Telecommunications Image classification and reconstruction method based on extreme latent feature learning model
CN109934295B (en) * 2019-03-18 2022-04-22 Chongqing University of Posts and Telecommunications Image classification and reconstruction method based on extreme latent feature learning model
CN112861594A (en) * 2020-07-17 2021-05-28 Ningxia University Online handwritten digit recognition method based on incremental semi-supervised kernel extreme learning machine
CN112861594B (en) * 2020-07-17 2023-07-28 Ningxia University Online handwritten digit recognition method based on incremental semi-supervised kernel extreme learning machine
CN112926266A (en) * 2021-03-02 2021-06-08 Yancheng Institute of Technology Underground air-supply volume estimation method based on regularized incremental random weight network
CN112926266B (en) * 2021-03-02 2023-10-13 Yancheng Institute of Technology Underground air-supply volume estimation method based on regularized incremental random weight network
CN113470812A (en) * 2021-06-18 2021-10-01 Zhejiang University Cardiac transmembrane potential reconstruction method based on graph convolutional neural network and iterative shrinkage-thresholding algorithm
CN113470812B (en) * 2021-06-18 2023-08-22 Zhejiang University Cardiac transmembrane potential reconstruction method based on graph convolutional neural network and iterative shrinkage-thresholding algorithm

Similar Documents

Publication Publication Date Title
CN107423547A (en) Incremental localization algorithm based on semi-supervised extreme learning machine
Chen et al. An intelligent robust networking mechanism for the Internet of Things
Pickem et al. The robotarium: A remotely accessible swarm robotics research testbed
Pan et al. Transfer learning for wifi-based indoor localization
CN108317670A (en) Refrigeration system energy-saving control method and system based on machine learning
Oda et al. Design and implementation of a simulation system based on deep Q-network for mobile actor node control in wireless sensor and actor networks
CN104134003B (en) Crop yield forecasting method driven jointly by knowledge and data
CN110275436A (en) RBF neural network adaptive control method for multiple single-arm manipulators
CN107010542B (en) Intelligent hoisting method for prefabricated buildings
CN112561199B (en) Weather parameter prediction model training method, weather parameter prediction method and device
Xue et al. Optimal chiller loading by improved sparrow search algorithm for saving energy consumption
CN105611555A (en) Energy-saving coverage control method for sensor networks based on virtual force orientation
Chen et al. Reinforcement learning path planning algorithm based on obstacle area expansion strategy
CN114818483B (en) Electromechanical disturbance positioning and propagation prediction method based on graph neural network
CN105068421A (en) Two-degree-of-freedom cooperative control method for multiple mobile robots
CN108537327A (en) Time-series BP neural network prediction method and device
Yang A novel short-term multi-input–multi-output prediction model of wind speed and wind power with LSSVM based on improved ant colony algorithm optimization
CN104616072A (en) Method for improving glutamic acid fermentation product concentration based on interval optimization
Dan et al. Knowledge worker scheduling optimization model based on bacterial foraging algorithm
Lv et al. Internet of Things and augmented reality in the age of 5G
CN109426901A (en) Long-term power consumption prediction method and device
Wei et al. Accurate visible light positioning technique using extreme learning machine and meta-heuristic algorithm
Zhang et al. Modeling and optimization of watering robot optimal path for ornamental plant care
Yuwono et al. The impact of communication and memory in state-based potential game-based distributed optimization
Yuan et al. A mutative-scale pseudo-parallel chaos optimization algorithm

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20171201