JP2005284128A - Quick learning method for self-organization network - Google Patents

Quick learning method for self-organization network Download PDF

Info

Publication number
JP2005284128A
Authority
JP
Japan
Prior art keywords
learning
speed
self
learning data
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
JP2004100361A
Other languages
Japanese (ja)
Inventor
Tsutomu Miyoshi
力 三好
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual filed Critical Individual
Priority to JP2004100361A priority Critical patent/JP2005284128A/en
Publication of JP2005284128A publication Critical patent/JP2005284128A/en
Pending legal-status Critical Current

Landscapes

  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

PROBLEM TO BE SOLVED: To increase the learning speed by presenting learning data to a self-organizing network appropriately.

SOLUTION: It frequently happens that the learning performed at the winning node of an earlier learning datum is canceled by a later learning datum. The range of values that learning datum n must satisfy so that the learning for n-1 is not canceled by the learning for n is derived, which speeds up learning. In addition, by imposing a further condition, the learning data can be presented in ascending or descending order, which also speeds up learning.

COPYRIGHT: (C)2006,JPO&NCIPI

Description

The present invention relates to a learning method for increasing the learning speed of self-organizing networks, a kind of artificial neural network capable of unsupervised learning.

Self-organizing networks and their learning methods are described in many references, such as "Introduction to Neural Computing" (ISBN 4-303-72640-0). An outline follows. A self-organizing network is a two-layer structure with as many input nodes as the data has dimensions and, in most cases, a large number of output nodes arranged as a two-dimensional map. The collection of output nodes is called the feature map. Because the output nodes stand in positional relationships to one another, specifying a neighborhood size in advance makes it possible to select an output node together with the output nodes inside its neighborhood. Every input node is connected to every output node, and each connection carries a connection weight. In other words, each output node holds data of the same dimensionality as the learning data in the form of a collection of connection weights. This collection of connection weights is called the connection weight vector.

In general, the connection weights are initialized with random values.

During learning, each dimension of a learning datum is presented to the corresponding input node, the distance between the learning datum and the connection weight vector of each output node is computed for all output nodes, and the single closest node is selected. The selected output node is called the winning node.

In general, the Euclidean distance is used.

For the winning node and the output nodes inside its neighborhood, the connection weight vectors are adjusted so that they move closer to the learning datum. How far the values are moved is controlled by a variable called the learning rate, a function whose value depends on the position within the neighborhood. Performing this for all the learning data in sequence completes one learning pass. The network can be self-organized by repeating such passes while gradually shrinking the neighborhood size and the learning rate.
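The procedure above maps directly onto a short routine. The following is a minimal sketch, assuming a NumPy implementation with a square neighborhood and, for brevity, a flat learning rate g inside the neighborhood (the patent notes that the learning rate is actually a function of position within the neighborhood); all names and array shapes are illustrative, not taken from the patent.

```python
import numpy as np

def train_one_pass(weights, data, g, radius):
    """One learning pass: for each learning datum, find the winning node
    and pull the weight vectors inside its neighborhood toward the datum.

    weights : (grid_h, grid_w, dim) connection weight vectors
    data    : (n_samples, dim) learning data, presented in row order
    g       : learning rate, 0 <= g <= 1
    radius  : neighborhood radius on the 2-D output map
    """
    grid_h, grid_w, _ = weights.shape
    rows, cols = np.indices((grid_h, grid_w))
    for x in data:
        # winning node = output node whose weight vector is closest (Euclidean)
        dist = np.linalg.norm(weights - x, axis=2)
        win = np.unravel_index(np.argmin(dist), dist.shape)
        # square neighborhood of the winner on the map
        inside = (np.abs(rows - win[0]) <= radius) & (np.abs(cols - win[1]) <= radius)
        # move the weight vectors toward the datum: w <- (1-g)*w + g*x
        weights[inside] += g * (x - weights[inside])
    return weights
```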

Next, the determination of when learning ends is described. Learning can be terminated either by 1) fixing the number of passes in advance, or 2) judging by the progress of learning, for example stopping when the largest distance between any learning datum and its winning node falls below a threshold. With method 1) the number of passes is constant, so learning ends after the same number of passes regardless of progress, and the state of learning at termination differs from run to run. With method 2) the state of learning at termination is the same for every run, and the faster learning progresses, the sooner it ends, that is, the higher the learning speed.

The progress of learning can be judged by the distances between the learning data and their winning nodes.
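As a sketch of criterion 2) and of this progress measure, under the same NumPy assumptions as the sketch above, one might compute the worst winner distance as follows (all names are illustrative):

```python
import numpy as np

def max_winner_distance(weights, data):
    """Largest distance from any learning datum to its winning node's
    weight vector; smaller values mean learning has progressed further."""
    return max(np.linalg.norm(weights - x, axis=2).min() for x in data)

# criterion 2): keep training until the worst winner distance drops
# below a chosen threshold, e.g.
# while max_winner_distance(weights, data) > threshold:
#     weights = train_one_pass(weights, data, g, radius)
```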

Next, the use of the network after learning is described. At the end of learning, the connection weight vectors are fixed. When the network is used, selecting the output node whose connection weight vector is closest to the input datum reveals which of the learning data the input is closest to, so the input data can be clustered.
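A sketch of this use phase, under the same assumptions (the map position of the selected node serves as the cluster label):

```python
import numpy as np

def assign_cluster(weights, x):
    """With the trained weights frozen, return the map position of the
    output node whose weight vector is nearest to the input datum."""
    dist = np.linalg.norm(weights - x, axis=2)
    return np.unravel_index(np.argmin(dist), dist.shape)
```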

Most conventional approaches to speeding up the learning of self-organizing networks focus on optimizing parameters such as the neighborhood size and the learning rate; little attention has been paid to improving the learning algorithm itself or to optimizing the initial values of the connection weights and the way the learning data are presented.

Patent Document 1, cited below, adjusts the input order of the learning data for the purpose of speeding up learning. The difference between Patent Document 1 and the present invention is clarified in the disclosure of the invention.

Patent Document 2, cited below, adjusts part of the initial values of the connection weights, but it does so to control the formation of the feature map, not to speed up learning.
Patent Document 1: Japanese Patent Application No. 2003-86533
Patent Document 2: Japanese Patent Application No. 2003-114379

Speeding up learning is a challenge common to all methods that learn automatically. The object of this invention is to address this challenge and to increase the learning speed by presenting the learning data to the self-organizing network optimally.

Consider how the connection weights change while a self-organizing network learns. The change in a connection weight of an output node is expressed by the recurrence of Equation 101 in Fig. 1. When the neighborhood is large enough that every output node lies inside the neighborhood of every learning datum, the range of n in Equation 101 is the whole set of learning data for every output node. When the neighborhood is small and only some output nodes lie inside it, taking the range of n to be the learning data that affect a given output node lets Equation 101 describe that node's weight change in the same way. Transforming the recurrence of Equation 101 yields Equation 103, which expresses the weight change in terms of the initial value and the learning data. Equation 103 shows that when g = 0 the connection weight never changes, and when g = 1 the connection weight always equals the last learning datum. While stepping through the learning data within a pass, the neighborhood size and the value of g are held fixed; they are decreased as the number of passes grows.
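The figures themselves are not reproduced in this text. From the description (a recurrence whose weight is unchanged at g = 0 and equals the last datum at g = 1), Equations 101 and 103 plausibly take the standard SOM form below; this is a reconstruction, not the patent's own notation.

```latex
% Presumed form of Eq. 101 (recurrence) and Eq. 103 (closed form),
% reconstructed from the surrounding description.
w_n = (1-g)\,w_{n-1} + g\,x_n
\qquad \text{(101, presumed)}

w_n = (1-g)^{n}\,w_0 \;+\; g \sum_{k=1}^{n} (1-g)^{\,n-k}\, x_k
\qquad \text{(103, presumed)}
```

Setting g = 0 gives w_n = w_0 (no change) and g = 1 gives w_n = x_n (the last datum), matching the behavior the text ascribes to Equation 103.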

As learning proceeds, it frequently happens that the winning node of an earlier learning datum falls inside the neighborhood of a later learning datum, so that the later learning cancels the earlier learning.

Patent Document 1 points out that when the learning data satisfy the condition shown in Equation 202 of Fig. 2, the learning for n-1 is not canceled by the learning for n and the state of Equation 203 results, and that the second term on the right-hand side of Equation 201 shows that the greater the distance between the learning data concerned, the greater the degree of cancellation, and the smaller the distance, the smaller that degree. This becomes clearer in Equation 204, obtained by transforming Equation 201.

Patent Document 1 states that the condition under which learning is not canceled is that of Equation 203; more precisely, however, it is that of Equation 301 in Fig. 3, which includes Equation 203. The condition of Equation 202 therefore calls for a more precise discussion.

Also, as Patent Document 1 points out, the degree to which learning is canceled can be inferred from the second term on the right-hand side of Equation 201 via the distance between the learning data, but this point too calls for a more precise discussion.

The condition under which Equation 301 of Fig. 3 holds is obtained by transforming the equation. Transforming Equation 301 gives Equation 303. When the quantity inside the absolute value on the right-hand side of Equation 303 is positive or zero, the transformation shown in Fig. 4 yields Equation 404; when it is negative, the transformation shown in Fig. 5 yields Equation 504. The condition under which Equation 301 holds is therefore Equation 601 of Fig. 6. Equation 202 is contained in Equation 601. Fig. 6 shows that if the learning datum n lies within the range of Equation 601, the learning for n-1 is not canceled by the learning for n.
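Again the figures are not reproduced here. Taking Equation 301 to be the scalar non-cancellation condition |w_n - x_{n-1}| <= |w_{n-1} - x_{n-1}| and substituting the presumed Equation 101 above, the admissible range for x_n works out to an interval, which is plausibly the content of Equation 601; the derivation below is a reconstruction under these assumptions, not the patent's own formulas.

```latex
% Let d = w_{n-1} - x_{n-1}. Substituting w_n = (1-g)\,w_{n-1} + g\,x_n into
% |w_n - x_{n-1}| \le |d| and solving for x_n gives the two cases
% (presumably Figs. 4 and 5), whose union would be Eq. 601:
d \ge 0:\quad x_{n-1} - \tfrac{2-g}{g}\,d \;\le\; x_n \;\le\; w_{n-1}

d < 0:\quad  w_{n-1} \;\le\; x_n \;\le\; x_{n-1} + \tfrac{2-g}{g}\,|d|
```

In both cases the interval has w_{n-1} as one endpoint and contains x_{n-1}, which is consistent with the later remark (Equation 801) that the learning datum n-1 lies inside the range that the learning datum n must satisfy.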

Claim 1 is a fast learning method that determines the input order of the learning data so that Equation 601 is always satisfied.

Because Equation 601 holds for every n, a speed-up can also be expected when only part of the input order of the learning data satisfies it. Claim 11 is a fast learning method that determines part of the input order of the learning data so as to satisfy Equation 601.

Equation 103 shows that the terms for the early learning data are multiplied by the learning rate g and by (1 - g) many times over, so at the end of one learning pass the later learning data influence the learning result more strongly than the earlier ones. Therefore, when only part of the input order of the learning data satisfies Equation 601, a larger speed-up can be expected when it is the rear part that satisfies it. Claim 12 is a fast learning method that determines the rear part of the input order of the learning data so as to satisfy Equation 601.

The relationship between the initial values of the connection weights of the self-organizing network and the learning data is likewise clarified by transforming the equations. When the quantity inside the absolute value on the right-hand side of Equation 303 is positive or zero, Equations 101 and 401 yield Equation 704; when it is negative, Equations 101 and 501 yield Equation 707. Equations 704 and 707 show that, for the learning of n-1 not to be canceled by the learning of n, the initial value of a connection weight must be either at or above the maximum of the learning data or at or below their minimum.

Claim 2 is a fast learning method that determines all the initial values of the connection weights so as to satisfy Equation 704 or Equation 707. Claim 5 is a fast learning method that determines them all so as to satisfy Equation 704, and Claim 6 is one that determines them all so as to satisfy Equation 707.
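A minimal sketch of such an initialization, assuming the per-dimension maxima and minima of the learning data are the relevant bounds (a per-dimension reading that matches claim 10); the function and parameter names are illustrative:

```python
import numpy as np

def init_weights(data, grid_h, grid_w, high=True):
    """Initialize every connection weight at the boundary of the learning
    data's range: the per-dimension maximum (claim 5, Eq. 704 style) or
    the per-dimension minimum (claim 6, Eq. 707 style).
    """
    bound = data.max(axis=0) if high else data.min(axis=0)
    # every output node starts at the chosen bound in every dimension
    return np.tile(bound.astype(float), (grid_h, grid_w, 1))
```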

Because a connection weight is attached to each output node and each dimension, a speed-up can be expected even when only some of the initial values satisfy the condition. Claim 13 is a fast learning method that determines part of the initial values of the connection weights so as to satisfy Equation 704 or Equation 707.

Determining an input order of the learning data that satisfies Equation 601 requires a large amount of computation, so we consider a way of determining the input order relatively simply, at the cost of a stricter condition. Equation 801 shows that the learning datum n-1 lies inside the range, given by Equation 601, that the learning datum n must satisfy.

We therefore considered adding to Equation 601 the further condition that a fixed order relation hold between the learning data n-1 and n. As Equation 803 shows, the condition of Equation 301 is also satisfied when the learning data are arranged in ascending or descending order.

Claims 3 and 4 are fast learning methods that determine the input order of the learning data so that Equation 803 is always satisfied.
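A sketch of the ordering in claims 3 and 4, with the caveat that real data rarely admit an order that is ascending in every dimension simultaneously, so a single reference dimension is used here as an illustrative simplification:

```python
import numpy as np

def order_ascending(data, dim=0):
    """Claim 3: present the learning data in ascending order of one
    dimension's value (reverse the result for claim 4, descending)."""
    return data[np.argsort(data[:, dim])]

# usage: ordered    = order_ascending(data)        # ascending  (claim 3)
#        descending = order_ascending(data)[::-1]  # descending (claim 4)
```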

The present invention is a fast learning method for self-organizing networks that achieves its speed-up on the basis of the methods described above.

According to this invention, the learning speed can be increased without modifying the learning algorithm of the self-organizing network. Since it can be used together with speed-ups obtained by improving the learning algorithm, an even greater effect can be expected.

Processing only part of the learning data reduces the amount of computation, and a speed-up is still obtained in that case.

A speed-up is obtained by the simple procedure of putting the learning data into ascending or descending order.

A speed-up is obtained by the simple procedure of adjusting the initial values of the connection weights.

A self-organizing network program was implemented on a computer and experiments were carried out to demonstrate the effect of this invention. Conceptual diagrams of the self-organizing network used in the experiments are shown in Figs. 11 and 12.

In the experiments, learning was performed on a self-organizing network with 100 output nodes arranged in a 10 × 10 two-dimensional map, five input nodes (the same as the dimensionality of the data), and all connection weights initialized with random numbers. The learning data were synthesized with random numbers: five center points were placed in a five-dimensional space, and ten points normally distributed around each center were generated, 50 points in total. Two parameter settings were tested: A) initial neighborhood size 7 × 7 with initial learning rate 0.7, and B) initial neighborhood size 5 × 5 with initial learning rate 0.5. Three input orders were tested: 1) random order; 2) five data (10% of the total) arranged so that mutually close ones come later and placed after the remaining data in random order; 3) all data arranged, using the computed distances between all pairs, so that mutually close ones come later. Each combination was run ten times and the average number of learning passes was measured. Termination was declared when the largest distance between a learning datum and its winning node fell below a threshold.
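The text does not give the cluster spread or the exact bookkeeping behind orderings 2) and 3), so the following data-generation sketch fills those in with assumed values (spread 0.05, tail sorted so that points nearer the tail's mean come later) purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# 50 learning data: 5 clusters of 10 normally distributed points in 5-D
centers = rng.uniform(0.0, 1.0, size=(5, 5))
data = np.vstack([c + 0.05 * rng.standard_normal((10, 5)) for c in centers])

# ordering 1): random
order1 = rng.permutation(data)

# ordering 2): the last 5 points (10% of the data) re-sorted so that
# mutually close ones come later; the rest stay in random order
rest, tail = order1[:-5], order1[-5:]
tail = tail[np.argsort(np.linalg.norm(tail - tail.mean(axis=0), axis=1))[::-1]]
order2 = np.vstack([rest, tail])
```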

In the experiments, A1 took 1027 passes on average, A2 944, A3 951, B1 603, B2 528, and B3 571. These results show that this invention shortened the learning time by approximately 6% to 14%.

Fig. 1 Equation showing the change in the connection weights.
Fig. 2 Equation showing the closeness between an existing learning result and a learning datum.
Fig. 3 Equation showing the closeness between a new learning result and a learning datum.
Fig. 4 Transformation of the equation showing the new closeness.
Fig. 5 Transformation of the equation showing the new closeness.
Fig. 6 Result of the equation showing the new closeness.
Fig. 7 Equation showing the relationship between the learning data and the initial values of the connection weights.
Fig. 8 Equation showing the relationship when the learning data are put in ascending or descending order.
Fig. 11 Conceptual diagram of the relationship between the output nodes and the input nodes of the self-organizing network used in the experiments.
Fig. 12 Conceptual diagram of the 10 × 10 two-dimensional output map, the winning node, and the 5 × 5 neighborhood used in the experiments.

Claims (13)

1. A fast learning method for a self-organizing network that speeds up learning by adjusting the input order of the learning data on the basis of the values of the learning data.
2. A fast learning method for a self-organizing network that speeds up learning by adjusting the initial values of the connection weights on the basis of the learning data.
3. The fast learning method for a self-organizing network according to claim 1, wherein the speed-up is obtained by adjusting the input order of the learning data so that the values of each dimension of the learning data are in ascending order.
4. The fast learning method for a self-organizing network according to claim 1, wherein the speed-up is obtained by adjusting the input order of the learning data so that the values of each dimension of the learning data are in descending order.
5. The fast learning method for a self-organizing network according to claim 2, wherein the speed-up is obtained by setting the initial values of the connection weights at or above the maximum value of the learning data.
6. The fast learning method for a self-organizing network according to claim 2, wherein the speed-up is obtained by setting the initial values of the connection weights at or below the minimum value of the learning data.
7. A fast learning method for a self-organizing network having the features of claims 1 and 2.
8. The fast learning method for a self-organizing network according to claim 7, having the features of claims 3 and 5.
9. The fast learning method for a self-organizing network according to claim 7, having the features of claims 4 and 6.
10. A fast learning method for a self-organizing network having, for each dimension, the features of claim 1, 3, 4, 7, 8, or 9.
11. A fast learning method for a self-organizing network in which part of the learning data has the features of claim 1, 3, 4, 7, 8, 9, or 10.
12. The fast learning method for a self-organizing network according to claim 11, in which the rear part of the learning data has the features of claim 1, 3, 4, 7, 8, 9, or 10.
13. A fast learning method for a self-organizing network in which part of the initial values of the connection weights has the features of claim 2, 5, 6, 7, 8, 9, 10, 11, or 12.
JP2004100361A 2004-03-30 2004-03-30 Quick learning method for self-organization network Pending JP2005284128A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2004100361A JP2005284128A (en) 2004-03-30 2004-03-30 Quick learning method for self-organization network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP2004100361A JP2005284128A (en) 2004-03-30 2004-03-30 Quick learning method for self-organization network

Publications (1)

Publication Number Publication Date
JP2005284128A true JP2005284128A (en) 2005-10-13

Family

ID=35182545

Family Applications (1)

Application Number Title Priority Date Filing Date
JP2004100361A Pending JP2005284128A (en) 2004-03-30 2004-03-30 Quick learning method for self-organization network

Country Status (1)

Country Link
JP (1) JP2005284128A (en)

Similar Documents

Publication Publication Date Title
CN107844835B (en) Multi-objective optimization improved genetic algorithm based on dynamic weight M-TOPSIS multi-attribute decision
CN108985732B (en) Consensus and account book data organization method and system based on block-free DAG technology
CN108637020B (en) Self-adaptive variation PSO-BP neural network strip steel convexity prediction method
CN106447024A (en) Particle swarm improved algorithm based on chaotic backward learning
CN108846472A (en) A kind of optimization method of Adaptive Genetic Particle Swarm Mixed Algorithm
CN109413710B (en) Clustering method and device of wireless sensor network based on genetic algorithm optimization
CN109919313A (en) A kind of method and distribution training system of gradient transmission
CN104035438A (en) Self-adaptive multi-target robot obstacle avoidance algorithm based on population diversity
CN109931943B (en) Unmanned ship global path planning method and electronic equipment
CN109800849A (en) Dynamic cuckoo searching algorithm
WO2017124930A1 (en) Method and device for feature data processing
CN112465844A (en) Multi-class loss function for image semantic segmentation and design method thereof
CN101034482A (en) Method for automatically generating complex components three-dimensional self-adapting finite element grid
CN113422695A (en) Optimization method for improving robustness of topological structure of Internet of things
CN107578101B (en) Data stream load prediction method
CN104283736B (en) A kind of network communication five-tuple Fast Match Algorithm based on improvement automatic state machine
Masrom et al. Hybridization of particle swarm optimization with adaptive genetic algorithm operators
CN112232011B (en) Wide-frequency-band electromagnetic response self-adaptive determination method and system of integrated circuit
JP2005284128A (en) Quick learning method for self-organization network
CN108133240A (en) A kind of multi-tag sorting technique and system based on fireworks algorithm
CN112800384A (en) GTSP solving algorithm based on self-adaptive large-field search
CN112395822A (en) Time delay driven non-Manhattan structure Steiner minimum tree construction method
CN114818203A (en) Reducer design method based on SWA algorithm
CN108415774B (en) Software and hardware partitioning method based on improved firework algorithm
CN109635913A (en) Q learning algorithm Soccer System emulation mode based on adaptive greediness