TW201636905A - Neural network and method of neural network training - Google Patents


Info

Publication number: TW201636905A
Application number: TW105101628A
Authority: TW (Taiwan)
Prior art keywords: correction, input, weight, sum, network
Other languages: Chinese (zh)
Other versions: TWI655587B (en)
Inventor: Dmitri Pescianschi (德米崔 佩先奇)
Original assignee: Progress, Inc. (前進公司)
Priority claimed from PCT/US2015/019236 (WO2015134900A1)
Application filed by Progress, Inc. (前進公司)
Publication of TW201636905A
Application granted
Publication of TWI655587B


Abstract

A neural network includes a plurality of inputs for receiving input signals, and synapses connected to the inputs and having corrective weights. The network additionally includes distributors. Each distributor is connected to one of the inputs for receiving the respective input signal and selects one or more corrective weights in correlation with the input value. The network also includes neurons. Each neuron has an output connected with at least one of the inputs via one synapse and generates a neuron sum by summing corrective weights selected from each synapse connected to the respective neuron. Furthermore, the network includes a weight correction calculator that receives a desired output signal, determines a deviation of the neuron sum from the desired output signal value, and modifies respective corrective weights using the determined deviation. Adding up the modified corrective weights to determine the neuron sum minimizes the subject deviation for training the neural network.

Description

Neural network and method of neural network training

[Cross-Reference to Related Applications]

This application claims the benefit of U.S. Provisional Application Ser. No. 61/949,210, filed March 6, 2014; U.S. Provisional Application Ser. No. 62/106,389, filed January 22, 2015; PCT Application No. PCT/US2015/19236, filed March 6, 2015; U.S. Provisional Application Ser. No. 62/173,163, filed June 9, 2015; and U.S. Utility Application Ser. No. 14/862,337, filed September 23, 2015, the entire contents of which are incorporated herein by reference.

The present disclosure relates to an artificial neural network and a method of training an artificial neural network.

In machine learning, the term "neural network" generally refers to software and/or computer architecture, i.e., the overall design or structure of a computer system or a microprocessor, including the hardware and software required to run it. Artificial neural networks are a family of statistical learning algorithms inspired by biological neural networks, also known as the central nervous systems of animals, in particular the brain. Artificial neural networks are primarily used to estimate or approximate generally unknown functions that may depend on a large number of inputs. Such neural networks have been used for a wide variety of tasks that are difficult to solve using ordinary rule-based programming, including computer vision and speech recognition.

Artificial neural networks are generally presented as systems of "neurons" that can compute values from inputs and, owing to their adaptive nature, are capable of machine learning and pattern recognition. Each neuron is often connected to several inputs through synapses having synaptic weights.

Neural networks are not programmed the way typical software and hardware are, but are trained. Such training is typically achieved through analysis of a sufficient number of representative examples and by statistical or algorithmic selection of the synaptic weights, such that a given set of input images corresponds to a given set of output images. A common criticism of classical neural networks is that considerable time and other resources are frequently required for their training.

Various artificial neural networks are described in the following U.S. patents: 4,979,124; 5,479,575; 5,493,688; 5,566,273; 5,682,503; 5,870,729; 7,577,631; and 7,814,038.

Such a network includes a plurality of network inputs, such that each input is configured to receive an input signal having an input value. The neural network also includes a plurality of synapses, wherein each synapse is connected to one of the plurality of inputs and includes a plurality of corrective weights, wherein each corrective weight is defined by a weight value. The neural network additionally includes a set of distributors. Each distributor is operatively connected to one of the plurality of inputs for receiving the respective input signal and is configured to select one or more corrective weights from the plurality of corrective weights in correlation with the input value. The neural network also includes a set of neurons. Each neuron has at least one output and is connected to at least one of the plurality of inputs via one of the plurality of synapses, and is configured to add up the weight values of the corrective weights selected from each synapse connected to the respective neuron and thereby generate a neuron sum. Furthermore, the neural network includes a weight correction calculator configured to receive a desired output signal having a value, determine a deviation of the neuron sum from the desired output signal value, and modify the respective corrective weight values using the determined deviation. Adding up the modified corrective weight values to determine the neuron sum minimizes the deviation of the neuron sum from the desired output signal value, thereby providing training of the neural network. The neural network can be realized and implemented as software and/or hardware.
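The following is a minimal software sketch of the structure just described, under stated assumptions: the class and method names, the uniform interval scheme, and the single selected weight per synapse are illustrative choices, not the disclosure's reference implementation.

```python
import random

class PNet:
    """Minimal sketch of the described structure (illustrative only)."""

    def __init__(self, n_inputs, n_neurons, n_intervals, lo=0.0, hi=100.0):
        self.lo, self.hi = lo, hi
        self.n_inputs, self.n_neurons, self.n_intervals = n_inputs, n_neurons, n_intervals
        # One synapse per (input, neuron) pair; each synapse carries a
        # plurality of corrective weights, one per interval "d".
        self.weights = [[[random.random() for _ in range(n_intervals)]
                         for _ in range(n_neurons)]
                        for _ in range(n_inputs)]

    def interval(self, value):
        """Distributor: map an input value onto its interval index "d"."""
        value = min(max(value, self.lo), self.hi)
        d = int((value - self.lo) / (self.hi - self.lo) * self.n_intervals)
        return min(d, self.n_intervals - 1)

    def neuron_sums(self, input_values):
        """Each neuron adds up the corrective weights its distributors selected."""
        ds = [self.interval(v) for v in input_values]
        return [sum(self.weights[i][n][ds[i]] for i in range(self.n_inputs))
                for n in range(self.n_neurons)]
```

A weight correction calculator would then compare each returned sum with the corresponding desired output value and adjust the selected weights, as detailed below.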

The determination of the deviation of the neuron sum from the desired output signal may include dividing the desired output signal value by the neuron sum, thereby generating a deviation coefficient. Furthermore, the modification of the respective corrective weights may include multiplying each corrective weight used to generate the neuron sum by the deviation coefficient.

The deviation of the neuron sum from the desired output signal may be a mathematical difference therebetween. In such a case, the generation of the respective modified corrective weights may include apportioning the mathematical difference among the corrective weights used to generate the neuron sum. Such apportionment of the mathematical difference among the corrective weights is intended to converge each neuron sum on the desired signal value.

The apportionment of the mathematical difference may also include dividing the determined difference equally among the corrective weights used to generate the neuron sum.
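As a worked illustration of the equal-division variant (the symbols below are notational assumptions rather than the disclosure's own: $\Sigma_n$ is the neuron sum, $O_n$ the desired output value, and $k$ the number of corrective weights contributing to the sum):

$$W \leftarrow W + \frac{O_n - \Sigma_n}{k}$$

When each of the $k$ selected weights enters the sum directly, the corrected sum becomes $\Sigma_n + k \cdot (O_n - \Sigma_n)/k = O_n$, i.e., the deviation is removed in a single step.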

The distributor may additionally be configured to assign a plurality of impact coefficients to the plurality of corrective weights, such that each impact coefficient is assigned to one of the plurality of corrective weights in a predetermined proportion to generate the neuron sum.

Each respective plurality of impact coefficients may be defined by an impact distribution function. The plurality of input values may be received into a value range divided into intervals according to an interval distribution function, such that each input value is received within a respective interval and each corrective weight corresponds to one of the intervals. In addition, each distributor may use the respectively received input value to select the respective interval. Moreover, each distributor may assign the respective plurality of impact coefficients to the corrective weight corresponding to the selected respective interval and to at least one corrective weight corresponding to an interval adjacent to the selected respective interval.
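A sketch of one way a distributor could realize this: the selected interval receives the largest impact coefficient, its neighbours receive smaller ones, and the coefficients are renormalized to sum to one at the edges of the range. The 0.5/0.25 split is an arbitrary illustrative proportion, not a value taken from this disclosure.

```python
def impact_coefficients(value, n_intervals=10, lo=0.0, hi=100.0):
    """Return {interval index: impact coefficient} for one input value."""
    span = (hi - lo) / n_intervals
    d = min(int((min(max(value, lo), hi) - lo) / span), n_intervals - 1)
    raw = {d: 0.5}                        # selected interval dominates
    if d - 1 >= 0:
        raw[d - 1] = 0.25                 # adjacent interval below
    if d + 1 < n_intervals:
        raw[d + 1] = 0.25                 # adjacent interval above
    total = sum(raw.values())             # renormalize at the range edges
    return {k: v / total for k, v in raw.items()}

print(impact_coefficients(35.0))          # {3: 0.5, 2: 0.25, 4: 0.25}
```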

Each neuron may be configured to add up a product of the corrective weight and the assigned impact coefficient for all the synapses connected thereto.

The predetermined proportion of the impact coefficients may be defined according to a statistical distribution, such as by using a Gaussian function.

The weight correction calculator may be configured to apply a portion of the determined difference to each corrective weight used to generate the neuron sum, according to the proportion established by the respective impact coefficient.

Each corrective weight may additionally be defined by a set of indexes. The indexes may include: an input index configured to identify the corrective weight corresponding to the input; an interval index configured to specify the selected interval for the respective corrective weight; and a neuron index configured to specify the corrective weight corresponding to the neuron.

Each corrective weight may further be defined by an access index configured to tally the number of times the respective corrective weight is accessed by the input signal during training of the neural network.

A method of training such a neural network is also disclosed.

The above features and advantages, and other features and advantages of the present disclosure, will be readily apparent from the following detailed description of the embodiments and best modes for carrying out the described disclosure when taken in connection with the accompanying drawings and appended claims.

10‧‧‧neural network
12‧‧‧input device
14‧‧‧synapse
16‧‧‧synaptic weight
18‧‧‧neuron
20‧‧‧adder
22‧‧‧activation function device
24‧‧‧neuron output
26‧‧‧weight correction calculator
28‧‧‧training pair
28-1‧‧‧input image
28-2‧‧‧desired output image
100‧‧‧progressive neural network (p-net)
102‧‧‧input
104‧‧‧input signal
106‧‧‧input image
108‧‧‧synaptic weight
110‧‧‧weight correction block
112‧‧‧corrective weight
114‧‧‧distributor
116‧‧‧neuron
117‧‧‧output
118‧‧‧synapse
119‧‧‧neuron unit
120‧‧‧neuron sum
122‧‧‧weight correction calculator
124‧‧‧desired output signal
126‧‧‧output image
128‧‧‧deviation
134‧‧‧impact coefficient
136‧‧‧impact distribution function
138‧‧‧value range
140‧‧‧interval distribution function
200‧‧‧method
202, 204, 206, 208, 210, 212, 214‧‧‧blocks

FIG. 1 is an illustration of a prior art, classical artificial neural network.
FIG. 2 is an illustration of a "progressive neural network" (p-net) having a plurality of synapses, a set of distributors, and a plurality of corrective weights associated with each synapse.
FIG. 3A is an illustration of a portion of the p-net shown in FIG. 2, having a plurality of synapses and a synaptic weight positioned upstream of each distributor.
FIG. 3B is an illustration of a portion of the p-net shown in FIG. 2, having a plurality of synapses and a set of synaptic weights positioned downstream of the respective plurality of corrective weights.
FIG. 3C is an illustration of a portion of the p-net shown in FIG. 2, having a plurality of synapses, a synaptic weight positioned upstream of each distributor, and a set of synaptic weights positioned downstream of the respective plurality of corrective weights.
FIG. 4A is an illustration of a portion of the p-net shown in FIG. 2, having a single distributor for all synapses of a given input and a synaptic weight positioned upstream of each distributor.
FIG. 4B is an illustration of a portion of the p-net shown in FIG. 2, having a single distributor for all synapses of a given input and a set of synaptic weights positioned downstream of the respective plurality of corrective weights.
FIG. 4C is an illustration of a portion of the p-net shown in FIG. 2, having a single distributor for all synapses of a given input, with a synaptic weight positioned upstream of each distributor and a set of synaptic weights positioned downstream of the respective plurality of corrective weights.
FIG. 5 is an illustration of division of the input signal value range in the p-net shown in FIG. 2 into individual intervals.
FIG. 6A is an illustration of one embodiment of a distribution of impact coefficient values for the corrective weights in the p-net shown in FIG. 2.
FIG. 6B is an illustration of another embodiment of a distribution of impact coefficient values for the corrective weights in the p-net shown in FIG. 2.
FIG. 6C is an illustration of yet another embodiment of a distribution of impact coefficient values for the corrective weights in the p-net shown in FIG. 2.
FIG. 7 is an illustration of an input image for the p-net shown in FIG. 2, along with one corresponding table representing the image in the form of digital codes and another corresponding table representing the same image as a set of respective intervals.
FIG. 8 is an illustration of an embodiment of the p-net shown in FIG. 2 trained for recognition of two distinct images, wherein the p-net is configured to recognize a picture that includes some features of each image.
FIG. 9 is an illustration of an embodiment of the p-net shown in FIG. 2 with an example of the distribution of synaptic weights around a "central" neuron.
FIG. 10 is an illustration of an embodiment of the p-net shown in FIG. 2 depicting a uniform distribution of the training deviation among the corrective weights.
FIG. 11 is an illustration of an embodiment of the p-net shown in FIG. 2 employing modification of the corrective weights during p-net training.
FIG. 12 is an illustration of an embodiment of the p-net shown in FIG. 2, wherein a basic algorithm generates a primary set of output neuron sums, and wherein the generated set is used to produce several "winner" sums whose values are retained or increased while the contribution of the remaining sums is nullified.
FIG. 13 is an illustration of an embodiment of the p-net shown in FIG. 2 recognizing a complex image containing elements of multiple images.
FIG. 14 is an illustration of a model for object-oriented programming of the p-net shown in FIG. 2 using Unified Modeling Language (UML).
FIG. 15 is an illustration of a general formation sequence of the p-net shown in FIG. 2.
FIG. 16 is an illustration of representative analysis and preparation of data for formation of the p-net shown in FIG. 2.
FIG. 17 is an illustration of representative creation of inputs that permits interaction between the p-net shown in FIG. 2 and the input data during training and p-net application.
FIG. 18 is an illustration of representative creation of neuron units for the p-net shown in FIG. 2.
FIG. 19 is an illustration of representative creation of each synapse connected with a neuron unit.
FIG. 20 is an illustration of training the p-net shown in FIG. 2.
FIG. 21 is an illustration of neuron unit training in the p-net shown in FIG. 2.
FIG. 22 is an illustration of extension of neuron sums during training of the p-net shown in FIG. 2.
FIG. 23 is a flow diagram of a method for training the neural network shown in FIGS. 2-22.

As shown in FIG. 1, a classical artificial neural network 10 typically includes input devices 12, synapses 14 with synaptic weights 16, neurons 18 (each of which includes an adder 20 and an activation function device 22), neuron outputs 24, and a weight correction calculator 26. Each neuron 18 is connected to two or more input devices 12 through synapses 14. The values of the synaptic weights 16 are commonly represented using electrical resistance, conductivity, voltage, electric charge, magnetic properties, or other parameters.

Supervised training of the classical neural network 10 is generally based on the application of a set of training pairs 28. Each training pair 28 commonly consists of an input image 28-1 and a desired output image 28-2, also known as a supervisory signal. Training of the classical neural network 10 typically proceeds as follows. An input image in the form of a set of input signals (I1-Im) enters the input devices 12 and is transferred to the synaptic weights 16 having initial weights (W1). The values of the input signals are modified by the weights, typically by multiplying or dividing the value of each signal (I1-Im) by the respective weight. From the synaptic weights 16, the modified input signals are each transferred to the respective neuron 18. Each neuron 18 receives a set of signals from a group of synapses 14 related to the subject neuron 18. The adder 20 included in the neuron 18 sums up all the input signals modified by the weights and received by the subject neuron. The activation function device 22 receives the respective resultant neuron sum and modifies the sum according to a mathematical function, thus forming the respective output image as a set of neuron output signals (ΣF1...ΣFn).
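For contrast with the p-net described later, here is a minimal sketch of this classical forward pass (one layer only, with a logistic sigmoid standing in for the activation function device 22; the weights and inputs are illustrative assumptions):

```python
import math

def classical_forward(inputs, synaptic_weights):
    """inputs: m signals; synaptic_weights: one row of m weights per neuron."""
    outputs = []
    for row in synaptic_weights:
        s = sum(i * w for i, w in zip(inputs, row))    # adder 20
        outputs.append(1.0 / (1.0 + math.exp(-s)))     # activation device 22
    return outputs

print(classical_forward([0.2, 0.7], [[0.5, -0.3], [1.0, 0.8]]))
```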

The resulting neuron output image, defined by the neuron output signals (ΣF1...ΣFn), is compared by the weight correction calculator 26 with a predetermined desired output image (O1-On). Based on the determined difference between the obtained neuron output image ΣFn and the desired output image On, correction signals for changing the synaptic weights 16 are formed using a pre-programmed algorithm. After corrections are made to all the synaptic weights 16, the set of input signals (I1-Im) is reintroduced to the neural network 10 and new corrections are made. The above cycle is repeated until the difference between the obtained neuron output image ΣFn and the desired output image On is determined to be smaller than some predetermined error. One cycle of network training with all the individual images is typically identified as a "training epoch". Generally, with each training epoch, the magnitude of the error is reduced. However, depending on the number of individual input signals (I1-Im), as well as on the number of inputs and outputs, training of the classical neural network 10 may require a considerable number of training epochs, which in some cases may be as great as hundreds of thousands.

Various classical neural networks exist, including the Hopfield network, the Restricted Boltzmann Machine, the radial basis function network, and the recurrent neural network. The specific tasks of classification and clustering require a specific type of neural network, the Self-Organizing Map, which uses only input images as network input training information, while the desired output image corresponding to a given input image is formed directly during the training process based on a single winning neuron having an output signal with the maximum value.

As noted above, one of the main concerns with existing classical neural networks, such as the neural network 10, is that their successful training requires a considerable duration of time. Additional concerns with classical networks include the large consumption of computing resources, which in turn drives the need for powerful computers. Further concerns are the inability to increase the size of the network without full retraining of the network, and a predisposition to such phenomena as "network paralysis" and "freezing at a local minimum", which make it impossible to predict whether a particular neural network will be capable of being trained with a given set of images in a given sequence. There may also be limitations related to the specific order of the images introduced during training, where changing the order in which the training images are introduced may lead to network freezes, as well as to an inability to perform additional training of an already trained network.

Referring to the remaining figures, wherein like reference numerals refer to like components, FIG. 2 shows a schematic view of a progressive neural network, hereinafter referred to as a "progressive network" or "p-net" 100. The p-net 100 includes a plurality or set of inputs 102 of the p-net. Each input 102 is configured to receive an input signal 104, wherein the input signals are represented as I1, I2...Im in FIG. 2. Each input signal I1, I2...Im represents a value of some characteristic of an input image 106, for example, magnitude, frequency, phase, signal polarization angle, or association with different parts of the input image 106. Each input signal 104 has an input value, wherein together the plurality of input signals 104 generally describes the input image 106.

Each input value may be within a value range that lies between -∞ and +∞ and may be set in digital and/or analog form. The range of the input values may depend on a set of training images. In the simplest case, the range of the input values could be the difference between the smallest and largest values of the input signals for all training images. For practical reasons, the range of the input values may be limited by eliminating input values that are deemed too high. For example, such limiting of the range of the input values may be accomplished via known statistical methods for variance reduction, such as importance sampling. Another example of limiting the range of the input values may be the designation of all signals below a predetermined minimum level to a specific minimum value, and the designation of all signals exceeding a predetermined maximum level to a specific maximum value.
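The last-mentioned limiting rule could be expressed as a simple clamp (a trivial sketch; the bounds 0 and 100 are illustrative):

```python
def clamp_signal(value, lo=0.0, hi=100.0):
    """Pin signals below lo to lo and signals above hi to hi."""
    return min(max(value, lo), hi)

print([clamp_signal(v) for v in [-5.0, 42.0, 250.0]])   # [0.0, 42.0, 100.0]
```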

The p-net 100 also includes a plurality or set of synapses 118. Each synapse 118 is connected to one of the plurality of inputs 102, includes a plurality of corrective weights 112, and may also include a synaptic weight 108, as shown in FIG. 2. Each corrective weight 112 is defined by a respective weight value. The p-net 100 also includes a set of distributors 114. Each distributor 114 is operatively connected to one of the plurality of inputs 102 for receiving the respective input signal 104. Additionally, each distributor 114 is configured to select one or more corrective weights from the plurality of corrective weights 112 in correlation with the input value.

The p-net 100 additionally includes a set of neurons 116. Each neuron 116 has at least one output 117 and is connected to at least one of the plurality of inputs 102 via one synapse 118. Each neuron 116 is configured to add up or sum the weight values of the corrective weights 112 selected from each synapse 118 connected to the respective neuron 116, and thereby generate a neuron sum 120, otherwise designated Σn. A separate distributor 114 may be used for each synapse 118 of a given input 102, as shown in FIGS. 3A, 3B, and 3C, or a single distributor may be used for all such synapses 118, as shown in FIGS. 4A, 4B, and 4C. During formation or setup of the p-net 100, all corrective weights 112 are assigned initial values, which may change during the training process of the p-net. The initial values of the corrective weights 112 may be assigned as in the classical neural network 10; for example, the weights may be selected randomly, computed with the help of a predetermined mathematical function, selected from a predetermined template, and the like.

The p-net 100 also includes a weight correction calculator 122. The weight correction calculator 122 is configured to receive a desired, i.e., predetermined, output signal 124 having a signal value and representing a portion of an output image 126. The weight correction calculator 122 is also configured to determine a deviation 128 of the neuron sum 120 from the value of the desired output signal 124, also known as the training error, and to modify the respective corrective weight values using the determined deviation 128. Thereafter, summing the modified corrective weight values to determine the neuron sum 120 minimizes the deviation of the subject neuron sum from the value of the desired output signal 124 and, as a result, is effective for training the p-net 100.

In an analogy to the classical network 10 discussed with respect to FIG. 1, the deviation 128 may also be described as the training error between the determined neuron sum 120 and the value of the desired output signal 124. In comparison with the classical neural network 10 discussed with respect to FIG. 1, in the p-net 100 the input values of the input signal 104 change only in the course of general network setup and are not changed during training of the p-net. Instead of changing the input values, training of the p-net 100 is provided by changing the values of the corrective weights 112. Additionally, although each neuron 116 includes a summing function, wherein the neuron adds up the corrective weight values, the neuron 116 does not require, and is in fact characterized by the absence of, an activation function, such as that provided by the activation function device 22 in the classical neural network 10.

In the classical neural network 10, weight correction during training is accomplished by changing the synaptic weights 16, while in the p-net 100 the corresponding weight correction is provided by changing the corrective weight values 112, as shown in FIG. 2. The respective corrective weights 112 may be included in weight correction blocks 110 positioned on all or some of the synapses 118. In neural network computer emulations, each synaptic weight and corrective weight may be represented by a digital device, such as a memory cell, and/or by an analog device. In neural network software emulations, the values of the corrective weights 112 may be provided via an appropriately programmed algorithm, while in hardware emulations known methods of memory control may be used.

In the p-net 100, the deviation 128 of the neuron sum 120 from the desired output signal 124 may be represented as a mathematically computed difference therebetween. Additionally, the generation of the respective modified corrective weights 112 may include apportionment of the computed difference to each corrective weight used to generate the neuron sum 120. In such an embodiment, the generation of the respective modified corrective weights 112 will permit the neuron sum 120 to be converged on the desired output signal value within a small number of epochs, in some cases requiring only a single epoch, in order to rapidly train the p-net 100. In a particular case, the apportionment of the mathematical difference among the corrective weights 112 used to generate the neuron sum 120 may include dividing the determined difference equally among the corrective weights used to generate the respective neuron sum 120.

In a different embodiment, the determination of the deviation 128 of the neuron sum 120 from the desired output signal value may include dividing the desired output signal value by the neuron sum, thereby generating a deviation coefficient. In that particular case, the modification of the respective corrective weights 112 includes multiplying each corrective weight used to generate the neuron sum 120 by the deviation coefficient. Each distributor 114 may additionally be configured to assign a plurality of impact coefficients 134 to the plurality of corrective weights 112. In the present embodiment, each impact coefficient 134 may be assigned to one of the plurality of corrective weights 112 in some predetermined proportion to generate the respective neuron sum 120. In order to correspond with each respective corrective weight 112, each impact coefficient 134 may be assigned a "Ci,d,n" designation, as shown in the figures.
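In this multiplicative variant the correction is exact in one step; with the assumed notation from before ($\Sigma_n$ the neuron sum, $O_n$ the desired output value):

$$\beta = \frac{O_n}{\Sigma_n}, \qquad W \leftarrow \beta\,W \quad\Longrightarrow\quad \Sigma_n' = \beta\,\Sigma_n = O_n,$$

since the neuron sum is linear in the corrective weights, so scaling every contributing weight by $\beta$ scales the sum by the same factor.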

Each of the plurality of impact coefficients 134 corresponding to a specific synapse 118 is defined by a respective impact distribution function 136. The impact distribution function 136 may be the same for all impact coefficients 134, or only for the plurality of impact coefficients 134 corresponding to a particular synapse 118. Each of the plurality of input values may be received into a value range 138 divided into intervals or subdivisions "d" according to an interval distribution function 140, such that each input value is received within a respective interval "d" and each corrective weight corresponds to one of the intervals. Each distributor 114 may use the respectively received input value to select the respective interval "d" and assign the respective plurality of impact coefficients 134 to the corrective weight 112 corresponding to the selected respective interval "d" and to at least one corrective weight corresponding to an interval adjacent to the selected respective interval, such as Wi,d+1,n or Wi,d-1,n. In another non-limiting example, the predetermined proportion of the impact coefficients 134 may be defined according to a statistical distribution.

Generating the neuron sum 120 may include initially assigning the respective impact coefficients 134 to each corrective weight 112 according to the input value 102 and then multiplying the subject impact coefficients by the values of the respectively employed corrective weights 112. Then, via each neuron 116, the individual products of the corrective weight 112 and the assigned impact coefficient 134 for all the synapses 118 connected thereto are summed.

The weight correction calculator 122 may be configured to apply the respective impact coefficients 134 to generate the respective modified corrective weights 112. Specifically, the weight correction calculator 122 may apply a portion of the computed mathematical difference between the neuron sum 120 and the desired output signal 124 to each corrective weight 112 used to generate the neuron sum 120, according to the proportion established by the respective impact coefficient 134. Additionally, the mathematical difference divided among the corrective weights 112 used to generate the neuron sum 120 may be further divided by the respective impact coefficient 134. Subsequently, the result of the division by the respective impact coefficient 134 may be added to the corrective weight 112 in order to converge the neuron sum 120 on the desired output signal value.
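One self-consistent reading of these two steps (the notation and the normalization are our assumptions, not verbatim from the disclosure): each weight's share of the difference $\Delta_n = O_n - \Sigma_n$ is taken in proportion to its impact coefficient and then divided by that same coefficient before being added, giving

$$W_{i,d,n} \;\leftarrow\; W_{i,d,n} + \frac{\Delta_n\,C_{i,d,n}}{\sum_j C_j}\cdot\frac{1}{C_{i,d,n}} \;=\; W_{i,d,n} + \frac{\Delta_n}{\sum_j C_j},$$

so that the corrected sum $\sum_j C_j\,(W_j + \Delta_n/\sum_k C_k) = \Sigma_n + \Delta_n = O_n$ lands exactly on the desired output value.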

Typically, the formation of the p-net 100 will take place before training of the p-net begins. However, in a different embodiment, if during training the p-net 100 receives an input signal 104 for which no initial corrective weights exist, appropriate corrective weights 112 may be generated. In such a case, the specific distributor 114 will determine the appropriate interval "d" for the specific input signal 104, and a group of corrective weights 112 with initial values will be generated for the given input 102, the given interval "d", and all the respective neurons 116. Additionally, a corresponding impact coefficient 134 may be assigned to each newly generated corrective weight 112.

Each corrective weight 112 may be defined by a set of indexes configured to identify a position of each respective corrective weight on the p-net 100. The set of indexes may specifically include: an input index "i" configured to identify the corrective weight 112 corresponding to the specific input 102; an interval index "d" configured to specify the above-described selected interval for the respective corrective weight; and a neuron index "n" configured to specify the corrective weight 112 corresponding to the specific neuron 116 with the designation "Wi,d,n". Thus, each corrective weight 112 corresponding to a specific input 102 is assigned the specific index "i" in the subscript to denote the subject position. Likewise, each corrective weight "W" corresponding to a specific neuron 116 and a respective synapse 118 is assigned the specific indexes "n" and "d" in the subscript to denote the position of the corrective weight on the p-net 100. The set of indexes may also include an access index "a" configured to tally the number of times the respective corrective weight 112 is accessed by the input signal 104 during training of the p-net 100. In other words, each time a specific interval "d" and the respective corrective weight 112 is selected for training from the plurality of corrective weights in correlation with the input value, the access index "a" is incremented to count the input signal. The access index "a" may be used to further specify or define the current status of each corrective weight by adopting the designation "Wi,d,n,a". Each of the indexes "i", "d", "n", and "a" may be a numerical value in the range of 0 to +∞.

Various possibilities for dividing the range of the input signals 104 into intervals d0, d1...dm are shown in FIG. 5. The specific interval distribution may be uniform or linear; for example, it may be accomplished by specifying all intervals "d" to have the same size. All input signals 104 with respective input signal values below a predetermined lowest level may be considered to have zero values, while all input signals 104 with respective input signal values above a predetermined highest level may be assigned to that highest level, as also shown in FIG. 5. The specific interval distribution may also be non-uniform or non-linear, such as symmetrical, asymmetrical, or unlimited. A non-linear distribution of the intervals "d" may be useful when the range of the input signals 104 is considered impractically large, and a certain part of the range may include the input signals considered to be most critical, such as at the beginning, in the middle, or at the end of the range. The specific interval distribution may also be described by a random function. All the preceding examples are of a non-limiting nature, as other variants of the interval distribution are also possible.

The number of intervals "d" within the selected range of the input signals 104 may be increased in order to optimize the p-net 100. Such optimization of the p-net 100 may be desirable, for example, with an increase in complexity of the training input images 106. For example, a greater number of intervals may be needed for multi-color images as compared with monochrome images, and a greater number of intervals may be needed for complex ornaments than for simple graphics. An increased number of intervals may be needed for precise recognition of images with complex color gradients, as compared with images described by contours, as well as for a larger overall number of training images. A reduction in the number of intervals "d" may also be needed in the case of a high magnitude of noise, high variance in the training images, or excessive consumption of computing resources.

Depending on the task or the type of information handled by the p-net 100, for example, visual or textual data, or data from sensors of various natures, a different number of intervals and different distribution types may be assigned. For each interval "d" of the input signal values, a corresponding corrective weight of the given synapse with the index "d" may be assigned. Thus, a certain interval "d" will include all corrective weights 112 having the index "i" related to the given input and the index "d" related to the given interval, together with all values of the index "n" from 0 to n. In the process of training the p-net 100, the distributor 114 defines each input signal value and thus relates the subject input signal 104 to the corresponding interval "d". For example, if there are 10 equal intervals "d" within the range of input signals from 0 to 100, an input signal having a value between 30 and 40 is related to interval 3, i.e., "d" = 3.
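The worked example at the end of this paragraph, expressed directly in code (a trivial sketch under the same assumptions of ten equal intervals over [0, 100)):

```python
def interval_index(value, n_intervals=10, lo=0.0, hi=100.0):
    """Ten equal intervals over [0, 100): values in [30, 40) fall in interval 3."""
    d = int((value - lo) / (hi - lo) * n_intervals)
    return min(max(d, 0), n_intervals - 1)

assert interval_index(30.0) == 3
assert interval_index(39.9) == 3
```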

For all the corrective weights 112 of each synapse 118 connected with the given input 102, the distributor 114 may assign the value of the impact coefficient 134 according to the interval "d" related to the specific input signal. The distributor 114 may also assign the values of the impact coefficients 134 according to a predetermined distribution of the impact coefficient values (shown in FIG. 6), such as a sinusoid, a normal or logarithmic distribution curve, or a random distribution function. In many cases, the sum or integral of the impact coefficients 134, or Ci,d,n, for a specific input signal 102 related to each synapse 118 will have a value of 1 (one).

$$\sum_{\text{synapse}} C_{i,d,n} = 1 \quad \text{or} \quad \int_{\text{synapse}} C_{i,d,n} = 1 \qquad [1]$$

In the simplest case, the corrective weight 112 most closely corresponding to the input signal value may be assigned a value of 1 (one) for its impact coefficient 134 (Ci,d,n), while the corrective weights for the other intervals may receive a value of 0 (zero).

In comparison with the classical neural network 10, the p-net 100 is aimed at reducing the duration of training of the p-net and the use of other resources. Although some of the elements disclosed herein as part of the p-net 100 are named with labels or identifiers familiar to those versed in classical neural networks, the specific names are used for simplicity and may be employed differently from their counterparts in classical neural networks. For example, the synaptic weights 16 controlling the magnitudes of the input signals (I1-Im) are established during the general setup process of the classical neural network 10 and are changed during training of the classical network. On the other hand, training of the p-net 100 is accomplished by changing the corrective weights 112, while the synaptic weights 108 remain unchanged during training. Additionally, as discussed above, each of the neurons 116 includes a summing or adding component, but does not include an activation function device 22 of the kind typical for the classical neural network 10.

Generally, the p-net 100 is trained by training each neuron unit 119, each of which includes a respective neuron 116 and all the synapses 118 connected thereto, together with their corrective weights 112. Accordingly, training of the p-net 100 includes changing the corrective weights 112 contributing to the respective neurons 116. Changes to the corrective weights 112 take place based on a group training algorithm included in a method 200 disclosed in detail below. In the disclosed algorithm, the training error, i.e., the deviation 128, is determined for each neuron, on the basis of which correction values are determined and assigned to each of the weights 112 used in determining the sum obtained by each respective neuron 116. The introduction of such correction values during training is intended to reduce the deviation 128 for the subject neuron 116 to zero. During training with additional images, new errors related to the images utilized earlier may arise again. To eliminate such additional errors, after completion of a training epoch, the errors for all the training images of the entire p-net 100 may be computed, and if those errors are greater than predetermined values, one or more additional training epochs may be carried out until the errors become smaller than a target or predetermined value.
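The epoch-level logic described here might be sketched as follows; correct_fn and error_fn stand in for the per-image weight correction and the per-image error computation, and both they and the thresholds are illustrative placeholders rather than the disclosure's definitions:

```python
def train(images, targets, correct_fn, error_fn,
          target_error=0.01, max_epochs=100):
    """Repeat training epochs until the network-wide error drops below target."""
    for epoch in range(max_epochs):
        for image, target in zip(images, targets):
            correct_fn(image, target)              # per-image weight correction
        total_error = sum(error_fn(img, t)         # recheck all training images
                          for img, t in zip(images, targets))
        if total_error < target_error:
            return epoch + 1                       # epochs actually needed
    return max_epochs
```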

FIG. 23 depicts the method 200 of training the p-net 100 described above with respect to FIGS. 2-22. The method 200 commences in block 202, where the method includes receiving, via the input 102, the input signal 104 having the input value. Following block 202, the method advances to block 204. In block 204, the method includes communicating the input signal 104 to the distributor 114 operatively connected to the input 102. In either block 202 or block 204, the method may include defining each corrective weight 112 by the set of indexes. As described above with respect to the structure of the p-net 100, the set of indexes may include the input index "i" configured to identify the corrective weight 112 corresponding to the input 102. The set of indexes may also include the interval index "d" configured to specify the selected interval for the respective corrective weight 112, and the neuron index "n" configured to specify the corrective weight 112 corresponding to the specific neuron 116 as "Wi,d,n". The set of indexes may additionally include the access index "a" configured to tally the number of times the respective corrective weight 112 is accessed by the input signal 104 during training of the p-net 100. Accordingly, the current status of each corrective weight may adopt the designation "Wi,d,n,a".

After block 204, the method proceeds to block 206, where the method includes selecting, via the distributor 114, one or more corrective weights 112 from the plurality of corrective weights positioned on the synapse 118 connected to the subject input 102, in correlation with the input value. As described above, each corrective weight 112 is defined by its respective weight value. In block 206, the method may additionally include assigning, via the distributor 114, the plurality of impact coefficients 134 to the plurality of corrective weights 112. In block 206, the method may also include assigning each impact coefficient 134 to one of the plurality of corrective weights 112 in a predetermined proportion to generate the neuron sum 120. Additionally, in block 206, the method may include adding up, via the neuron 116, a product of the corrective weight 112 and the assigned impact coefficient 134 for all the synapses 118 connected thereto. Furthermore, in block 206, the method may include applying, via the weight correction calculator 122, a portion of the determined difference to each corrective weight 112 used to generate the neuron sum 120, according to the proportion established by the respective impact coefficient 134.

As described above with respect to the structure of the p-net 100, the plurality of impact coefficients 134 may be defined by the impact distribution function 136. In such a case, the method may additionally include receiving the input value into the value range 138 divided into intervals "d" according to the interval distribution function 140, such that the input value is received within a respective interval and each corrective weight 112 corresponds to one of the intervals. Furthermore, the method may include using, via the distributor 114, the received input value to select the respective interval "d" and assigning the plurality of impact coefficients 134 to the corrective weight 112 corresponding to the selected respective interval "d" and to at least one corrective weight corresponding to an interval adjacent to the selected respective interval "d". As described above with respect to the structure of the p-net 100, the corrective weights 112 corresponding to an interval adjacent to the selected respective interval "d" may be identified, for example, as Wi,d+1,n or Wi,d-1,n.

Following block 206, the method proceeds to block 208. At block 208, the method includes summing the weight values of the corrective weights 112 selected for a particular neuron 116 connected with the inputs 102 via the synapses 118, to generate the neuron sum 120. As described above with respect to the structure of the p-network 100, each neuron 116 includes at least one output 117. After block 208, the method continues to block 210, where the method includes receiving, via the weight correction calculator 122, a desired output signal 124 having a signal value. Following block 210, the method proceeds to block 212, where the method includes determining, via the weight correction calculator 122, a deviation 128 of the neuron sum 120 from the value of the desired output signal 124.

As disclosed above in the description of the p-network 100, determining the deviation 128 of the neuron sum 120 from the desired output signal value may include determining the mathematical difference therebetween. Furthermore, modifying the respective corrective weights 112 may include apportioning the mathematical difference among the corrective weights used to generate the neuron sum 120. Alternatively, apportioning the mathematical difference may include dividing the determined difference equally among the corrective weights 112 used to generate the neuron sum 120. In yet another embodiment, determining the deviation 128 may instead include dividing the value of the desired output signal 124 by the neuron sum 120, thereby generating a deviation coefficient. In that case, modifying the respective corrective weights 112 includes multiplying each corrective weight 112 used to generate the neuron sum 120 by the generated deviation coefficient.

After block 212, the method continues to block 214. At block 214, the method includes modifying, via the weight correction calculator 122, the respective corrective weight values using the determined deviation 128. The modified corrective weight values may subsequently be added or summed to determine a new neuron sum 120. The summed modified corrective weight values then serve to minimize the deviation of the neuron sum 120 from the value of the desired output signal 124, thereby training the p-network 100. Following block 214, the method 200 may include returning to block 202 to perform additional training epochs until the deviation of the neuron sum 120 from the value of the desired output signal 124 is sufficiently minimized. In other words, additional training epochs may be performed to converge the neuron sum 120 on the desired output signal 124 to within a predetermined deviation or error value, such that the p-network 100 may be considered trained and ready for operation with new images.
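The overall flow of blocks 202-214, repeated over training epochs until the deviation is sufficiently minimized, may be illustrated by the following sketch; forward, update_weights, and the tolerance and max_epochs values are hypothetical placeholders for the operations described above:

```python
def train(pnet, images, desired_outputs, tolerance=1e-3, max_epochs=100):
    """Repeat training epochs until every neuron sum converges on its desired output."""
    for epoch in range(max_epochs):
        worst = 0.0
        for image, desired in zip(images, desired_outputs):
            sums = pnet.forward(image)               # blocks 202-208: select and sum weights
            deviations = [o - s for o, s in zip(desired, sums)]  # block 212
            pnet.update_weights(image, deviations)   # block 214: spread deviation over weights
            worst = max(worst, max(abs(dv) for dv in deviations))
        if worst < tolerance:                        # deviation sufficiently minimized
            return epoch
    return max_epochs
```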

Generally, the input images 106 must be prepared for training of the p-network 100. Preparation of the p-network 100 for training generally begins with the formation of a set of training images that includes the input images 106 and, in most cases, the desired output images 126 corresponding thereto. The input images 106 (shown in Figure 2), defined by the input signals I1, I2…Im, are selected for training of the p-network 100 according to the tasks the p-network is assigned to handle, for example recognition of human images or other objects, recognition of certain activities, clustering or data classification, analysis of statistical data, pattern recognition, forecasting, or control of certain processes. Accordingly, the input images 106 may be presented in any format suitable for introduction into a computer, for example in the formats jpeg, gif, or pptx, or in the form of tables, charts, diagrams and graphics, various document formats, or a set of symbols.

Preparation for training of the p-network 100 may also include conversion of the selected input images 106 for their unification, to facilitate processing of the subject images by the p-network 100, for example converting all images to a format having the same number of signals or, in the case of pictures, the same number of pixels. Color images may be presented, for example, as a combination of three primary colors. Image conversion may also include modification of characteristics, for example shifting an image in space, changing its visual characteristics, such as resolution, brightness, contrast, color, viewpoint, perspective, focal length and focal point, and adding symbols, numbering, or annotations.

After the number of intervals is selected, a particular input image may be converted into an input image in interval format, that is, the real signal values may be recorded as the numbers of the intervals to which the subject signals belong. This procedure may be carried out in each training epoch for the given image. However, the image may also be formed once as a set of interval numbers. For example, in Figure 7 the initial image is presented as a picture; in the table "Image in digital format" the same image is presented in the form of digital codes; and in the table "Image in interval format" the image is presented as a set of interval numbers, with a separate interval assigned for each 10 values of digital code.
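The conversion from digital to interval format, with one interval per 10 values of digital code as in the Figure 7 example, may be illustrated as follows; the assumption of 8-bit digital codes is made only for the example:

```python
def to_interval_format(digital_image, values_per_interval=10):
    """Record each signal as the number of the interval its digital code falls into."""
    return [code // values_per_interval for code in digital_image]

# An image given as 8-bit digital codes (0..255) becomes a set of interval numbers.
digital = [0, 37, 121, 255]
print(to_interval_format(digital))  # [0, 3, 12, 25]
```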

The above-described structure of the p-network 100 and the training algorithm or method 200 allow continuous or iterative training of the p-network, so it is not necessary to form a complete set of training input images 106 at the start of the training process. A relatively small starting set of training images may be formed, and that starting set may be expanded as necessary. The input images 106 may be divided into distinct categories, for example a set of pictures of one person, a set of photographs of cats, or a set of photographs of automobiles, such that each category corresponds to a single output image, such as the person's name or a specific label. The desired output image 126 represents a field or table of digital or analog values, where each point corresponds to a specific numerical value from -∞ to +∞. Each point of the desired output image 126 may correspond to the output of one of the neurons of the p-network 100. The desired output image 126 may be encoded with digital or analog codes of images, tables, text, formulas, sets of symbols, such as barcodes, or sounds.

In the simplest case, each input image 106 may correspond to an output image encoding the subject input image. One of the points of this output image may be assigned the maximum possible value, for example 100%, while all other points may be assigned the minimum possible value, for example zero. In such a case, following training, probabilistic recognition of various images in the form of a percentage of their similarity to the training images will be enabled. Figure 8 shows an example of how the p-network 100, trained for recognition of two images, a square and a circle, may recognize a picture containing some features of each, where each figure is expressed as a percentage and the sum does not have to equal 100%. Such a pattern recognition process, defining the percentage of similarity between the different images used for training, may be used to classify specific images.

To improve accuracy and exclude errors, coding may be achieved using a set of several neuron outputs rather than one output (see below). In the simplest case, the output images may be prepared before training. However, it is also possible to have the output images formed by the p-network 100 during training.

Within the p-network 100 there is also the possibility of inverting the input and output images. In other words, the input image 106 may take the form of a field or table of digital or analog values, where each point corresponds to one input of the p-network, while the output image may be presented in any format suitable for introduction into a computer, for example in the formats jpeg, gif, or pptx, or in the form of tables, charts, diagrams and graphics, various document formats, or a set of symbols. The resulting p-network 100 may be well suited for archiving systems, as well as for associative searching of images, musical expressions, equations, or data sets.

Following preparation of the input images 106, typically the p-network 100 must be formed and/or the parameters of an existing p-network must be set for handling the given task. Formation of the p-network 100 may include designating the following:
˙the dimensions of the p-network 100, as defined by the numbers of inputs and outputs;
˙the synaptic weights 108 for all inputs;
˙the number of corrective weights 112;
˙the distribution of the corrective weight influence coefficients (Ci,d,n) for different values of the input signals 104; and
˙the desired accuracy of training.
The number of inputs is determined based on the sizes of the input images 106. For example, a number of pixels may be used for pictures, while the selected number of outputs may depend on the size of the desired output images 126. In some cases, the selected number of outputs may depend on the number of categories of training images.

The values of individual synaptic weights 108 may be in the range of -∞ to +∞. Values of the synaptic weights 108 less than 0 (zero) may denote signal amplification, which can be used to intensify the influence of signals from specific inputs or from specific images, for example for more effective recognition of human faces in photographs containing a large number of different individuals or objects. On the other hand, values of the synaptic weights 108 greater than 0 (zero) may be used to denote signal attenuation, which can be used to reduce the number of required calculations and increase the operational speed of the p-network 100. Generally, the greater the value of a synaptic weight, the more the signal transmitted to the corresponding neuron is attenuated. If all the synaptic weights 108 corresponding to all the inputs are equal and all the neurons are identically connected with all the inputs, the neural network becomes universal and will be most effective for general tasks, such as when very little is known in advance about the nature of the images. However, such a structure generally increases the number of calculations required during training and operation.

Figure 9 shows an embodiment of the p-network 100 in which the relationship between an input and the respective neurons is reduced in accordance with a normal statistical distribution. An uneven distribution of the synaptic weights 108 may result in the entire input signal being passed to a target or "central" neuron for the given input, the subject synaptic weight thus being assigned a value of zero. Furthermore, the uneven distribution of synaptic weights may result in other neurons receiving reduced input signal values, for example following a normal, log-normal, sinusoidal, or other distribution. The values of the synaptic weights 108 for the neurons 116 receiving reduced input signal values may increase with their distance from the "central" neuron. In such a case, the number of calculations may be reduced and operation of the p-network may be accelerated. Such networks, being a combination of the known fully connected and non-fully connected neural networks, may be very effective for analysis of images with strong internal patterns, for example human faces or consecutive frames of a motion picture.
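One possible rendering of such a distance-dependent distribution of synaptic weights is sketched below, assuming a Gaussian shape whose width and scale are chosen only for illustration; a weight of zero passes the full signal to the "central" neuron, while larger values denote growing attenuation:

```python
import math

def synaptic_weight(neuron_index, central_index, sigma=2.0, scale=10.0):
    """Zero at the 'central' neuron (full signal), growing with distance (attenuation)."""
    dist = abs(neuron_index - central_index)
    return scale * (1.0 - math.exp(-dist * dist / (2.0 * sigma * sigma)))

# Synaptic weights of one input to neurons 0..6, with neuron 3 as the target.
print([round(synaptic_weight(n, 3), 2) for n in range(7)])
# [6.75, 3.93, 1.18, 0.0, 1.18, 3.93, 6.75]
```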

Figure 10 shows an embodiment of the p-network 100 that is effective for recognition of local patterns. To improve recognition of general patterns, 10-20% of strong connections, in which the values of the synaptic weights 108 are small or zero, may be distributed throughout the p-network 100 in a deterministic manner, such as in the form of a grid, or randomly. The actual formation of a p-network 100 intended to handle a particular task is performed using, for example, a program written in an object-oriented programming language that generates the main elements of the p-network, such as synapses, synaptic weights, distributors, corrective weights, and neurons, as software objects. Such a program may assign the relationships between the noted objects and the algorithms specifying their actions. In particular, the synapses and corrective weights may be formed at the beginning of the formation of the p-network 100, along with setting of their initial values. The p-network 100 may be formed completely before the start of its training and, if necessary, modified or supplemented at a later stage, for example when the information capacity of the network becomes exhausted or if a fatal error occurs. Completion of the p-network 100 is also possible while training continues.

If the p-network 100 is formed in advance, the number of selected corrective weights on a particular synapse may be equal to the number of intervals within the range of input signals. In addition, corrective weights may be generated after the formation of the p-network 100, as signals in response to the appearance of individual intervals. Similarly to the classical neural network 10, selection of the parameters and settings of the p-network 100 is provided through a series of targeted experiments. Such experiments may include: (1) formation of a p-network with identical synaptic weights 108 at all inputs, and (2) evaluation of the input signal values for the selected images and an initial selection of the number of intervals. For example, for recognition of binary (monochrome) images, having only 2 intervals may be sufficient; for qualitative recognition of 8-bit images, up to 256 intervals may be used; approximation of complex statistical dependencies may require many or even hundreds of intervals; and for large databases, the number of intervals may be in the thousands.

In the process of training the p-network 100, the input signal values may be rounded as they are distributed among the specific intervals. Thus, accuracy of the input signals greater than the range width divided by the number of intervals may not be required. For example, if the input value range is set at 100 units and the number of intervals is 10, accuracy better than ±5 will not be needed. Such experiments may also include: (3) selection of a uniform distribution of intervals over the entire range of input signal values, where the simplest distribution of the corrective weight influence coefficients Ci,d,n may be to set the coefficient equal to 1 for the corrective weight corresponding to the interval of the particular input signal, while the corrective weight influence for all remaining corrective weights may be set to 0 (zero). Such experiments may additionally include: (4) training the p-network 100 with one, several, or all of the prepared training images to a predetermined accuracy.

The training time of the p-network 100 for a predetermined accuracy may be established experimentally. If the accuracy and training time of the p-network 100 are satisfactory, the selected settings may either be maintained or changed while the search for a more effective variant continues. If the required accuracy is not achieved, the influence of specific modifications may be evaluated for optimization purposes, and such modifications may be carried out one at a time or in groups. Such evaluation of modifications may include: changing, i.e., increasing or decreasing, the number of intervals; changing the type of distribution of the corrective weight influence coefficients (Ci,d,n); testing variants with non-uniform distributions of intervals, such as using normal, power, logarithmic, or log-normal distributions; and changing the values of the synaptic weights 108, for example their transition to a non-uniform distribution.

If the training time required for an accurate result is deemed excessive, training with an increased number of intervals may be evaluated for its effect on training time. If the training time is thereby reduced, the increase in the number of intervals may be repeated until the desired training time is obtained without losing the required accuracy. If the training time grows with an increased number of intervals rather than being reduced, additional training may be performed with a reduced number of intervals. If the reduced number of intervals results in a reduced training time, the number of intervals may be further reduced until the desired training time is obtained.

The settings of the p-network 100 may be established via training experiments with a predetermined training time and training accuracy. The parameters may be improved via experimental changes similar to those described above. Actual implementations of various p-networks have shown that the procedure of setting selection is generally straightforward and not time-consuming.

As part of the method 200 shown in Figure 23, actual training of the p-network 100 begins by feeding the input image signals I1, I2…In to the network inputs 102, whence the signals are transmitted to the synapses 118, pass through the synaptic weights 108, and enter the distributor (or a group of distributors) 114. Based on the input signal value, the distributor 114 sets the number of the interval "d" to which the given input signal 104 corresponds, and assigns the corrective weight influence coefficients Ci,d,n for all the corrective weights 112 of the weight correction blocks 110 of all the synapses 118 connected with the respective input 102. For example, if the interval "d" is set to 3 for the first input, then C1,3,n is set to 1 for all weights W1,3,n, while for all other weights, with i≠1 and d≠3, Ci,d,n may be set to 0 (zero).

For each neuron 116, identified as "n" in the relationships below, the neuron output sums Σ1, Σ2…Σn are formed by multiplying each corrective weight 112, identified as Wi,d,n in the relationships below, of all the synapses 118 contributing to the particular neuron by the corresponding corrective weight influence coefficient Ci,d,n, and by adding all the resulting values:

Σn = Σi,d,n Wi,d,n × Ci,d,n [2]

The multiplication of Wi,d,n × Ci,d,n may be performed by various devices, for example by the distributors 114, by devices storing the weights, or directly by the neurons 116. The sums are transferred via the neuron outputs 117 to the weight correction calculator 122. The desired output signals O1, O2…On describing the desired output image 126 are also fed to the calculator 122.
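A direct reading of equation [2] may be sketched as follows; keeping the weights and coefficients in dictionaries keyed by the pair (i, d) for a single neuron "n" is an implementation assumption made only for the example:

```python
def neuron_sum(weights, coefficients):
    """Equation [2]: sum of W[i,d,n] * C[i,d,n] over all contributing (i, d) pairs."""
    return sum(w * coefficients.get(key, 0.0) for key, w in weights.items())

W = {(1, 0): 0.25, (1, 1): 0.75, (2, 3): 0.5}   # W[i,d] for one neuron n
C = {(1, 1): 1.0, (2, 3): 1.0}                  # only the selected intervals contribute
print(neuron_sum(W, C))  # 0.75*1.0 + 0.5*1.0 = 1.25
```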

As discussed above, the weight correction calculator 122 is a computing device for calculating the modified values of the corrective weights by comparing the neuron output sums Σ1, Σ2…Σn with the desired output signals O1, O2…On. Figure 11 shows a set of corrective weights Wi,d,1 contributing to the neuron output sum Σ1; each is multiplied by the corresponding corrective weight influence coefficient Ci,d,1, and these products are then added to form the neuron output sum Σ1:

Σ1 = W1,0,1 × C1,0,1 + W1,1,1 × C1,1,1 + W1,2,1 × C1,2,1 + … [3]

As training begins, i.e., during the first epoch, the corrective weights Wi,d,1 do not correspond to the input image 106 used for training, and therefore the neuron output sum Σ1 is not equal to the corresponding desired output image 126. Based on the initial corrective weights Wi,d,1, the weight correction system calculates the correction value Δ1, which is used to change all the corrective weights (Wi,d,1) contributing to the neuron output sum Σ1. The p-network 100 permits various options or variants for the formation and utilization of collective correction signals for all the corrective weights Wi,d,n contributing to a particular neuron 116.

The following are two exemplary, non-limiting variants of the formation and utilization of such collective correction signals. Variant 1 - formation and utilization of the correction signals based on the difference between the desired output signal and the obtained output sum:

˙calculation of the equal correction value Δn for all the corrective weights contributing to the neuron "n" according to the equation:

Δn = (On − Σn)/S [4]

where: On - the desired output signal corresponding to the neuron output sum Σn; S - the number of synapses connected to the neuron "n".

˙modification of all the corrective weights Wi,d,n contributing to the neuron "n" according to the equation:

Wi,d,n modified = Wi,d,n + Δn/Ci,d,n [5]

Variant 2 - formation and utilization of the correction signals based on the ratio of the desired output signal to the obtained output sum:

˙calculation of the equal correction value Δn for all the corrective weights contributing to the neuron "n" according to the equation:

Δn = Onn [6]

˙modification of all the corrective weights Wi,d,n contributing to the neuron "n" according to the equation:

Wi,d,n modified = Wi,d,n × Δn [7]

Under either available variant, the modification of the corrective weights Wi,d,n is intended to reduce the training error for each neuron 116 by converging its output sum Σn on the value of the desired output signal. In this manner, the training error for a given image may be reduced until it becomes equal or close to zero.
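The two variants may be illustrated by the following sketch, where lists stand in for the corrective weights contributing to one neuron and all influence coefficients are taken as 1.0 for brevity; both simplifications are assumptions made only for the example:

```python
def correct_variant_1(weights, desired, sigma_n):
    """Equations [4]-[5]: equal additive correction Δn = (On - Σn)/S."""
    delta = (desired - sigma_n) / len(weights)   # S = number of contributing synapses
    return [w + delta for w in weights]          # C[i,d,n] assumed equal to 1.0

def correct_variant_2(weights, desired, sigma_n):
    """Equations [6]-[7]: multiplicative correction Δn = On/Σn."""
    delta = desired / sigma_n
    return [w * delta for w in weights]

weights = [0.2, 0.3, 0.5]                    # Σn = 1.0 with unit coefficients
print(correct_variant_1(weights, 2.0, 1.0))  # each weight grows by 1/3
print(correct_variant_2(weights, 2.0, 1.0))  # [0.4, 0.6, 1.0]
```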

An example of the modification of the corrective weights Wi,d,n during training is shown in Figure 11. The values of the corrective weights Wi,d,n are set before the start of training in the form of a random weight distribution, with the weight values set to 0 ± 10% of the corrective weight range, and they reach their final distribution after training. The described calculation of collective signals is carried out for all the neurons 116 within the p-network 100. The described training procedure for one training image may be repeated for other training images. Such a procedure may lead to the appearance of training errors for some of the previously trained images, since some corrective weights Wi,d,n may participate in several images. Thus, training with another image may partially disrupt the distribution of the corrective weights Wi,d,n formed for the previous images. However, owing to the fact that each synapse 118 includes a set of corrective weights Wi,d,n, training with new images, while possibly increasing the training errors, does not erase the images for which the p-network 100 was previously trained. Moreover, the more synapses 118 contribute to each neuron 116 and the greater the number of corrective weights Wi,d,n on each synapse, the less the training for a specific image affects the training for other images.

Each training epoch generally ends with substantial convergence of the total and/or local training errors for all the training images. The errors may be evaluated using known statistical methods, such as, for example, the Mean Squared Error (MSE), the Mean Absolute Error (MAE), or the Standard Error Mean (SAE). If the total error or some of the local errors are too high, additional training epochs may be performed until the errors are reduced below a predetermined error value. The image recognition process described earlier, defining the percentage of similarity between the different images used for training (shown in Figure 8), is itself a process of classification of images along previously defined categories.

For clustering, i.e., dividing images into natural classes or groups not specified in advance, the basic training algorithm of the method 200 may be modified in the manner of Self-Organizing Maps (SOM). The desired output image 126 corresponding to a given input image may be formed directly in the process of training the p-network 100, based on a set of winning neurons having the maximum values of the output neuron sums 120. Figure 22 shows how use of the basic algorithm of the method 200 may generate a primary set of output neuron sums, whereupon the set is further converged such that several of the larger sums retain their values, or increase, while all other sums are considered equal to zero. This transformed set of output neuron sums may be accepted as the desired output image 126.
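The retention of only the several largest output sums may be sketched as follows; keeping exactly the three largest sums, and the handling of ties, are illustrative choices rather than values fixed by the method:

```python
def winners(output_sums, k=3):
    """Keep the k largest neuron output sums; treat all others as zero."""
    threshold = sorted(output_sums, reverse=True)[k - 1]
    return [s if s >= threshold else 0.0 for s in output_sums]

sums = [0.1, 0.9, 0.4, 0.7, 0.2, 0.8]
print(winners(sums))  # [0.0, 0.9, 0.0, 0.7, 0.0, 0.8]
```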

Formed as described above, the set of desired output images 126 includes clusters or groups. As such, the set of desired output images 126 allows clustering of linearly inseparable images, which distinguishes the p-network from the classical network 10. Figure 13 shows how the described approach may help cluster a compound hypothetical image, "cat-automobile", where different features of the image are assigned to different clusters, cats and automobiles. A set of desired output images 126 created as described may be used, for example, for creating different classifications, for statistical analyses, and for image selection based on criteria formed as a result of the clustering. In addition, the desired output images 126 produced by the p-network 100 may be used as input images for another or an additional p-network, which may likewise be formed in the manner described for the subject p-network 100. So formed, the desired output images 126 may serve a subsequent layer of a multi-layer p-network.

Training of the classical neural network 10 is generally provided via a supervised training method based on preliminarily prepared pairs of an input image and a desired output image. The same general method is also used for training of the p-network 100; however, the increased training speed of the p-network 100 also allows for training by an external trainer. The task of the external trainer may be performed, for example, by a person or by a computer program. Acting as an external trainer, the person may be involved in performing a physical task or operating in a game environment. The p-network 100 receives input signals in the form of data regarding a specific situation and changes thereto. Signals reflecting the actions of the trainer may be introduced as the desired output images 126, allowing the p-network 100 to be trained according to the basic algorithm. In this manner, modeling of various processes may be generated by the p-network 100 in real time.

For example, the p-network 100 may be trained to drive a vehicle by receiving information regarding the road conditions and the actions of the driver. By modeling a large number of varied critical situations, the same p-network 100 may be trained by many different drivers and accumulate more driving skills than would typically be possible for any single driver. The p-network 100 is capable of evaluating a specific road situation in 0.1 seconds or faster and of accumulating substantial "driving experience", which may enhance traffic safety in a variety of situations. The p-network 100 may also be trained to work with a computer, for example with a chess-playing machine. The ability of the p-network 100 to easily shift from the training mode to the recognition mode, and vice versa, allows for realization of a "learn from mistakes" mode, in which the p-network 100 is trained by an external trainer. In such a case, the partially trained p-network 100 may generate its own actions, for example to control a technological process. The trainer may monitor the actions of the p-network 100 and correct those actions when necessary. Thus, additional training of the p-network 100 may be provided.

The information capacity of the p-network 100 is very large, but is not unlimited. With the set dimensions of the p-network 100, such as the numbers of inputs, outputs, and intervals, and with an increase in the number of images for which the p-network is trained, after a certain number of images the number and magnitude of training errors may also increase. When such an increase in error generation is detected, the number and/or magnitude of the errors may be reduced by increasing the size of the p-network 100, since the p-network permits increasing, between training epochs, the number of neurons 116 and/or the number of signal intervals "d" across the p-network or in its components. Expansion of the p-network 100 may be provided by adding new neurons 116, adding new inputs 102 and synapses 118, changing the distributions of the corrective weight influence coefficients Ci,d,n, and dividing existing intervals "d".

In most cases, the p-network 100 will be trained to ensure that it can recognize images, patterns, and correlations inherent to an image or to a set of images. In the simplest case, the recognition process repeats the initial steps of the training process according to the basic algorithm disclosed as part of the method 200. In particular:
˙direct recognition begins with formatting of the image according to the same rules used to format the images for training;
˙the image is sent to the inputs of the trained p-network 100, the distributors assign the corrective weights Wi,d,n corresponding to the input signal values as set during training, and the neurons generate the respective neuron sums, as shown in Figure 8;
˙if the resulting output sum representing the output image 126 fully complies with one of the images for which the p-network 100 was trained, there is exact recognition of the object; and
˙if the output image 126 partially complies with several of the images for which the p-network 100 was trained, the result shows the matching rates with the different images as percentages. Figure 13 demonstrates that, during recognition of a compound image made based on a combination of the images of a cat and a vehicle, the output image 126 represents the given combination of images and indicates the percentage contribution of each initial image to the combination.

For example, if several pictures of a specific person were used for training, the recognized image may correspond 90% to the first picture, 60% to the second picture, and 35% to the third picture. It is possible that the recognized image corresponds with some probability to pictures of other people or even of animals, which means there is some resemblance between the pictures. However, the probability of such resemblance is likely to be lower. Based on these probabilities, the reliability of recognition may be determined, for example, based on Bayes' theorem.

With the p-network 100 it is also possible to implement multi-stage recognition, combining the advantages of algorithmic and neural network recognition methods. Such multi-stage recognition may include:
˙initial recognition of an image by a pre-trained network using not all, but only 1%-10% of the inputs, herein called "basic inputs". That portion of the inputs may be distributed within the p-network 100 uniformly, randomly, or by any other distribution function, for example for recognition of a person in a photograph that includes a plurality of other objects;
˙selection of the objects, or parts of objects, that provide the most useful information for further detailed recognition. Such selection may be provided according to the structure of specific objects pre-set in memory, as in algorithmic methods, or according to a gradient of color, brightness, and/or depth of the image. For example, in recognition of a portrait, the following recognition zones may be selected: the eyes, the corners of the mouth, the shape of the nose, and certain special features such as tattoos; license plate numbers or house numbers may also be selected and recognized in a similar manner; and
˙if necessary, detailed recognition of the selected images.

Formation of a computer emulation of the p-network 100 and its training may be provided, based on the above description, through the use of any programming language. For example, object-oriented programming may be used, wherein the synaptic weights 108, the corrective weights 112, the distributors 114, and the neurons 116 represent programming objects or classes of objects, relations are established between the object classes via links or messages, and the algorithms of interaction are set between the objects and between the object classes.

Formation and training of the software emulation of the p-network 100 may include the following:

1. Preparation for formation and training of the p-network 100, in particular:
˙conversion of a set of training input images into digital form according to a given task;
˙analysis of the resulting digital images, including selection of the parameters of the input signals to be used for training, for example frequencies, magnitudes, phases, or coordinates; and
˙setting of a range for the training signals, of a number of intervals within the subject range, and of the distribution of the corrective weight influence coefficients Ci,d,n.

2. Formation of the software emulation of the p-network, including:

˙formation of a set of inputs for the p-network 100. For example, the number of inputs may be equal to the number of signals in a training input image;

˙formation of a set of neurons, where each neuron represents an adding device;

˙formation of a set of synapses having synaptic weights, where each synapse is connected to one p-network input and one neuron;

˙formation of a weight correction block in each synapse, where the weight correction blocks include distributors and corrective weights, and where each corrective weight has the following characteristics:
○corrective weight input index (i);
○corrective weight neuron index (n);
○corrective weight interval index (d); and
○corrective weight initial value (Wi,d,n).

˙assignment of a correlation between the intervals and the corrective weights.

3. Training of each neuron with an input image, including:

˙assignment of the corrective weight influence coefficients Ci,d,n, including:
○determining the interval corresponding to the input signal of the training input image received by each input; and
○setting the magnitudes of the corrective weight influence coefficients Ci,d,n for all the corrective weights of all the synapses.

˙calculation of the neuron output sum (Σn) for each neuron "n" by multiplying the corrective weight values Wi,d,n of all the synapses contributing to the neuron by the corresponding corrective weight influence coefficients Ci,d,n, and then adding the products:

Σn = Σi,d,n Wi,d,n × Ci,d,n

˙calculation of the deviation or training error (Tn) by subtracting the neuron output sum Σn from the corresponding desired output signal On:

Tn = On − Σn

˙calculation of the equal correction value (Δn) for all the corrective weights contributing to the neuron "n" by dividing the training error by the number "S" of synapses connected to the neuron "n":

Δn = Tn/S

˙modification of all the corrective weights Wi,d,n contributing to the respective neuron by dividing the correction value Δn by the corresponding corrective weight influence coefficients Ci,d,n and then adding the result to each corrective weight:

Wi,d,n modified = Wi,d,n + Δn/Ci,d,n.

Another method of calculating the equal correction value (Δn) and modifying the corrective weights Wi,d,n for all the corrective weights contributing to the neuron "n" may include the following:

˙dividing the signal On of the desired output image by the neuron output sum Σn:

Δn = Onn

˙modifying the corrective weights Wi,d,n contributing to the neuron by multiplying those corrective weights by the correction value Δn:

Wi,d,n modified = Wi,d,n × Δn

4. Training of the p-network 100 using all the training images, including:
˙repetition of the above process for all the selected training images included in one training epoch; and
˙determination of one or more errors for the particular training epoch, comparison of those errors with a predetermined acceptable error level, and repetition of the training epochs until the training errors become less than the predetermined acceptable error level.

A practical example of the software emulation of the p-network 100 using object-oriented programming is described below and shown in Figures 14-21.

Formation of a "NeuronUnit" object class may include formation of:
˙a set of objects of the "Synapse" class;
˙a variable representing the neuron 116, in which the summation is performed during training; and
˙a variable representing the calculator 122, in which the value of the desired neuron sum 120 is stored and the calculation of the correction value Δn is performed during the training process.

The "NeuronUnit" class provides training of the p-network 100 and may include:
˙formation of the neuron sums 120;
˙setting of the desired sums;
˙calculation of the correction values Δn; and
˙addition of the calculated correction values Δn to the corrective weights Wi,d,n.

Formation of the "Synapse" object class may include:
˙a set of corrective weights Wi,d,n; and
˙a pointer indicating the input 102 to which the synapse 118 is connected.

The "Synapse" class may perform the following functions:
˙initialization of the corrective weights Wi,d,n;
˙multiplication of the weights Wi,d,n by the coefficients Ci,d,n; and
˙correction of the weights Wi,d,n.

Formation of the "InputSignal" object class may include:
˙a set of indexes of the synapses 118 connected to a given input 102;
˙a variable that holds the value of the input signal 104;
˙the values of the minimum and maximum possible input signals;
˙the number of intervals "d"; and
˙the interval length.

The "InputSignal" class may provide the following functions:
˙formation of the structure of the p-network 100, including:
○addition and removal of links between an input 102 and the synapses 118; and
○setting the number of intervals "d" for the synapses 118 of a specific input 102.

˙setting the parameters of the minimum and maximum input signals 104;
˙contribution to the operation of the p-network 100:
○setting an input signal 104; and
○setting the corrective weight influence coefficients Ci,d,n.

Formation of the "PNet" object class includes a set of the object classes:
˙NeuronUnit; and
˙InputSignal.

The "PNet" class provides the following functions:
˙setting the number of objects of the "InputSignal" class;
˙setting the number of objects of the "NeuronUnit" class; and
˙group requests of the functions of the "NeuronUnit" and "InputSignal" objects.
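A minimal Python skeleton of the four classes described above, assuming uniform intervals and the simplest coefficient distribution, i.e., a coefficient of 1 for the selected interval and 0 elsewhere, may look as follows; all method bodies are illustrative sketches rather than a definitive implementation:

```python
import random

class Synapse:
    """Holds the corrective weights W[i,d,n] of one input-to-neuron connection."""
    def __init__(self, input_index, num_intervals):
        self.input_index = input_index
        self.weights = [random.uniform(-0.1, 0.1) for _ in range(num_intervals)]

class NeuronUnit:
    """One neuron with its synapses, its sum, and its desired-sum calculator."""
    def __init__(self, synapses):
        self.synapses = synapses
        self.desired_sum = 0.0

    def output_sum(self, intervals):
        # Coefficient 1 for the selected interval of each input, 0 elsewhere.
        return sum(s.weights[intervals[s.input_index]] for s in self.synapses)

    def correct(self, intervals):
        delta = (self.desired_sum - self.output_sum(intervals)) / len(self.synapses)
        for s in self.synapses:
            s.weights[intervals[s.input_index]] += delta

class InputSignal:
    """Maps a raw signal value onto an interval number 'd'."""
    def __init__(self, lo, hi, num_intervals):
        self.lo, self.hi, self.num_intervals = lo, hi, num_intervals

    def interval(self, value):
        width = (self.hi - self.lo) / self.num_intervals
        d = int((value - self.lo) // width)
        return min(max(d, 0), self.num_intervals - 1)

class PNet:
    """Fully connected p-network: every input feeds every neuron via one synapse."""
    def __init__(self, num_inputs, num_neurons, num_intervals, lo=0.0, hi=255.0):
        self.inputs = [InputSignal(lo, hi, num_intervals) for _ in range(num_inputs)]
        self.neurons = [NeuronUnit([Synapse(i, num_intervals) for i in range(num_inputs)])
                        for _ in range(num_neurons)]

    def train_image(self, image, desired):
        intervals = [inp.interval(v) for inp, v in zip(self.inputs, image)]
        for neuron, target in zip(self.neurons, desired):
            neuron.desired_sum = target
            neuron.correct(intervals)
```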

During the training process, cycles may be formed in which:
˙a neuron output sum equal to zero is formed before the start of the cycle;
˙all the synapses contributing to the given "NeuronUnit" are reviewed. For each synapse 118:
○based on the input signal 104, the distributor forms a set of corrective weight influence coefficients Ci,d,n;
○all the weights Wi,d,n of the synapse 118 are reviewed, and for each weight:
■the value of the weight Wi,d,n is multiplied by the corresponding corrective weight influence coefficient Ci,d,n;
■the result of the multiplication is added to the forming neuron output sum;
˙the correction value Δn is calculated;
˙the correction value Δn is divided by the corrective weight influence coefficients Ci,d,n, i.e., Δn/Ci,d,n; and
˙all the synapses 118 contributing to the given "NeuronUnit" are reviewed. For each synapse 118, all the weights Wi,d,n of the subject synapse are reviewed, and for each weight its value is modified by the corresponding correction value Δn.
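The above cycle may be written out explicitly as follows; representing W and C as nested lists weights[s][d] and coeffs[s][d] over the synapses "s" and intervals "d" of one "NeuronUnit" is a representational assumption made only for the example:

```python
def training_cycle(weights, coeffs, desired_sum):
    """One training cycle for a single NeuronUnit, following the steps above."""
    total = 0.0                                   # sum starts at zero
    for w_row, c_row in zip(weights, coeffs):     # review every contributing synapse
        for w, c in zip(w_row, c_row):
            total += w * c                        # W * C added to the forming sum
    delta = (desired_sum - total) / len(weights)  # correction value Δn
    for w_row, c_row in zip(weights, coeffs):     # second pass: modify the weights
        for d, c in enumerate(c_row):
            if c != 0.0:
                w_row[d] += delta / c             # Δn / C applied to each used weight
    return total, delta

W = [[0.1, 0.5], [0.25, 0.3]]   # two synapses, two intervals each
C = [[0.0, 1.0], [1.0, 0.0]]    # distributor selected interval 1 and interval 0
print(training_cycle(W, C, desired_sum=1.0))  # (0.75, 0.125); W is updated in place
```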

The aforementioned possibility of additional training of the p-network 100 permits combining training with image recognition, allowing the training process to be accelerated and its accuracy to be improved. When the p-network 100 is trained with a set of sequentially changing images, such as consecutive frames of a motion picture that differ only slightly from one another, additional training may include:
˙training with the first image;
˙recognition of the next image and determination of the percentage of similarity between the new image and the images with which the network was initially trained. If the recognition error is less than a predetermined value, no additional training is needed; and
˙if the recognition error exceeds the predetermined value, additional training is provided.

Training of the p-network 100 by the basic training algorithm described above is effective for solving image recognition problems, but does not exclude loss or corruption of data due to overlapping images. Therefore, use of the p-network 100 for memory purposes, while possible, may not be completely reliable. The present embodiment describes training of the p-network 100 that provides protection against loss or corruption of information. An additional restriction may be introduced into the basic network training algorithm, requiring that each corrective weight Wi,d,n may be trained only once. After the first training cycle, the value of the weight Wi,d,n remains fixed or constant. This may be achieved by entering, for each corrective weight, the additional access index "a" described above, representing the number of accesses to the subject corrective weight Wi,d,n during the training process.

As described above, each corrective weight may thus be denoted Wi,d,n,a, where "a" is the number of accesses to the subject weight during the training process. In the simplest case, a=0 for unmodified, i.e., non-fixed, weights, while a=1 for weights that have already been modified or fixed by the described basic algorithm. Moreover, while applying the basic algorithm, corrective weights Wi,d,n,a with the fixed value a=1 may be excluded from the weights to which corrections are made. In such a case, the equations [5], [6], and [7] may be transformed accordingly, so that the corrections are applied only to the corrective weights that have not yet been fixed.

The above restriction may be applied in part to the correction of previously trained corrective weights Wi,d,n,a, but only to those weights that form the most important images. For example, within training on a set of portraits of a single person, one specific image may be declared primary and assigned priority. After training on that priority image, all the corrective weights Wi,d,n,a changed in the training process may be fixed, i.e., have the index set to a=1, thereby designating the weight as Wi,d,n,1, while the other images of the same person remain changeable. Such priority may include other images, for example those used as encryption keys and/or containing critical numerical data.

Changes to the corrective weights Wi,d,n,a may also be not entirely forbidden, but rather limited as the index "a" grows. That is, each successive use of a weight Wi,d,n,a may reduce its ability to change. The more often a particular corrective weight Wi,d,n,a is used, the smaller the weight change with each access, so that during training on subsequent images the stored images change less and suffer less corruption. For example, if a=0, any change of the weight Wi,d,n,a is possible; when a=1, the possible change may be reduced to ±50% of the weight value; and with a=2, the possible change may be reduced to ±25% of the weight value.

After a predetermined number of accesses has been reached, as indicated by the index "a" (for example, at a = 5), further changes to the weight W_i,n,d,a may be prohibited. This approach combines high intelligence and information security within a single p-network 100. Using the network error-calculation mechanism, the permissible error level can be set so that information is stored with losses confined to a predetermined accuracy range, the range being specified for the particular task. In other words, for a p-network 100 operating on visual images, the error may be set at a level that cannot be perceived by the naked eye, which provides a significant "factor" increase in storage capacity. The above enables highly efficient storage of visual information, for example movies.
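A minimal sketch of this damping schedule, assuming the example values given above (the permitted change halves on each access, and the weight freezes at a = 5); everything else is illustrative:

MAX_ACCESSES = 5  # example cutoff from the text: at a >= 5 the weight is frozen

def apply_correction(weight, a, proposed_delta):
    """Damped correction: a=0 allows any change; each later access halves the
    permitted change (±50% of the weight value at a=1, ±25% at a=2, ...)."""
    if a >= MAX_ACCESSES:
        return weight, a                     # frozen: further change prohibited
    if a > 0:
        limit = (0.5 ** a) * abs(weight)     # shrinking band around the value
        proposed_delta = max(-limit, min(limit, proposed_delta))
    return weight + proposed_delta, a + 1    # each access increments the index

The halving schedule reproduces the ±50%/±25% example from the text; any monotonically decreasing schedule would serve the same purpose.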

The ability to selectively erase computer memory can be useful for the continued high-level operation of the p-network 100. Such selective erasure can be performed by removing certain images without losing or corrupting the rest of the stored information. The erasure may proceed as follows:
˙ identification of all corrective weights W_i,n,d,a that participate in forming the image, for example by presenting the image to the network or by compiling the list of corrective weights used for each image;
˙ reduction of the index "a" for the respective corrective weights W_i,n,d,a; and
˙ when the index "a" drops to zero, replacement of the corrective weight W_i,n,d,a with zero, or with a random value near the middle of the range of possible values for that weight.
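A minimal sketch of this erasure procedure; the value range, the interpretation of "near the middle", and all names are illustrative assumptions:

import random

def erase_image(weights, access, image_weight_ids, value_range=(-10.0, 10.0)):
    """Selective erasure of one stored image: lower the access index of every
    corrective weight that formed the image; once an index reaches zero,
    overwrite the weight with zero or a random value near mid-range."""
    lo, hi = value_range
    for i in image_weight_ids:
        access[i] = max(0, access[i] - 1)
        if access[i] == 0:
            mid = (lo + hi) / 2.0
            span = (hi - lo) * 0.1  # "near the middle": within ±10% of the range
            weights[i] = random.choice([0.0, random.uniform(mid - span, mid + span)])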

An appropriate level and schedule of reduction for the index "a" may be selected experimentally so as to identify strong patterns hidden within a sequence of images. For example, for every 100 images introduced into the p-network 100 during training, the index "a" may be decremented by one until it reaches zero, while the value of "a" may increase as new images are introduced. The competition between increases and decreases of "a" can lead to a situation in which random variations are gradually removed from memory, while corrective weights W_i,n,d,a that have been used and confirmed many times are retained. When the p-network 100 is trained on a large number of images with similar attributes, such as the same object or a similar environment, the frequently used corrective weights W_i,n,d,a continually reconfirm their values, and the information in those regions becomes very stable, while random noise gradually disappears. In other words, a p-network 100 with a gradually decreasing index "a" can act as an effective noise filter.
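A minimal sketch of the competing increment/decrement schedule, assuming a network object exposing a train_on() helper that returns the ids of the weights it accessed and an access list of indices "a"; the 100-image period is the example given above:

DECAY_PERIOD = 100  # example from the text: one decrement per 100 images

def train_with_decay(network, images):
    """Reinforcement (index grows on use) competes with decay (index shrinks
    on a fixed schedule); rarely reconfirmed weights eventually fall back to
    a = 0 and become erasable noise, while repeatedly confirmed ones persist."""
    for count, image in enumerate(images, start=1):
        used = network.train_on(image)       # ids of weights accessed this cycle
        for i in used:
            network.access[i] += 1           # confirmation strengthens a weight
        if count % DECAY_PERIOD == 0:
            network.access = [max(0, a - 1) for a in network.access]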

The embodiment just described, training the p-network 100 without loss of information, allows a p-network memory of high capacity and reliability to be built. Such a memory could be used as high-capacity, high-speed computer memory, providing higher speed than "cache memory" without increasing computer cost and complexity the way typical "cache memory" systems do. According to published data, when a movie is recorded with a neural network, the memory can generally be compressed tens or hundreds of times without significant loss of recording quality. In other words, a neural network can operate as an extremely effective archiving program. Combining this capability of neural networks with the high-speed training ability of the p-network 100 may allow the creation of high-speed data-transmission systems, memory with high storage capacity, and high-speed multimedia encoding/decoding programs, i.e., codecs.

Because data in the p-network 100 are stored as a set of corrective weights W_i,n,d,a, which is a kind of coded record, decoding or unauthorized access to the p-network by existing methods, without an equivalent network and key, is unlikely. The p-network 100 can therefore provide a considerable degree of data protection. In addition, unlike conventional computer memory, damage to individual storage elements of the p-network 100 has a negligible harmful effect, since the other elements largely compensate for the lost function. In image recognition, the inherent patterns of the images used are effectively not distorted by damage to one or a few elements. The above can significantly improve computer reliability and allow the use of some memory blocks that would normally be considered defective. Furthermore, this type of memory is less susceptible to hacking because there are no permanent addresses for critical bytes in the p-network 100, leaving it largely unaffected by the attacks that various computer viruses mount on such systems.

The image-recognition process described above, based on determining the percentage of similarity between the images used for training, can also be applied as a process of classifying images into previously defined categories, as described above. For clustering, i.e., the division of images into natural classes or groups that are not defined in advance, the basic training process can be modified. This embodiment can include:

˙ preparation of a set of input images for training, without preparing output images;

˙ formation and training of the network through the formation of neuron output sums, as done under the basic algorithm;

˙ selection of the resulting output image at the output with the largest output sum, i.e., the winner output, or of a group of winner outputs, which may be organized similarly to a Kohonen network;

˙ creation of a desired output image in which the winner output, or the group of winner outputs, receives the maximum value. At the same time:
○ the number of selected winner outputs may be predetermined, for example in the range of 1 to 10, or the winner outputs may be selected by the rule "not less than N% of the maximum neuron sum", where "N" may, for example, lie within 90-100%; and
○ all other outputs may be set equal to zero.

˙ training according to the basic algorithm using the created desired output image (Figure 13); and

˙ repetition of the entire procedure for other images, each forming its own desired output image for a different winner or group of winners (a minimal sketch of this procedure follows the list).
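A minimal sketch of the winner-selection step of this clustering procedure, assuming two helpers: forward(), returning the NumPy vector of neuron output sums, and train(), performing one cycle of the basic training algorithm; the winner count is an illustrative choice from the 1-10 range above:

import numpy as np

def cluster_step(input_image, forward, train, n_winners=3):
    """One clustering pass: pick the outputs with the largest sums as the
    winner group, build a desired output image in which the winners receive
    the maximum value and all other outputs are zero, then train on it."""
    sums = forward(input_image)
    winners = np.argsort(sums)[-n_winners:]   # outputs with the largest sums
    target = np.zeros_like(sums)
    target[winners] = sums.max()              # winners receive the maximum value
    train(input_image, target)                # one cycle of the basic algorithm
    return winners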

The set of desired output images formed in this manner can be used to describe the clusters or groups into which the plurality of input images naturally separates. That set of desired output images can be used to produce various classifications, such as the selection of images according to established criteria for statistical analysis. The above can also be used with the previously described inversion of input and output images. In other words, a desired output image can serve as the input image for another (i.e., additional) network, and the output of that additional network can be an image presented in any form suitable for computer input.

In the p-network 100, after a single cycle of training by the above algorithm, the desired output images may be produced with only a small variation in the output sums, which may slow the training process and may also reduce its accuracy. To improve the training of the p-network 100, the initial variation of the points may be artificially increased or stretched so that the variation in point magnitudes covers the entire range of possible output values, for example -50 to +50, as shown in Figure 21. Such stretching of the initial variation of the points may be linear or nonlinear.
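A minimal sketch of a linear stretch of the output sums onto the full range of possible output values; the -50 to +50 range is the example given above, and all names are illustrative:

def stretch(values, lo=-50.0, hi=50.0):
    """Linearly rescale a narrow band of output sums onto the full range
    of possible output values, so small differences become distinguishable."""
    vmin, vmax = min(values), max(values)
    if vmax == vmin:
        return [0.0] * len(values)  # degenerate case: no variation to stretch
    scale = (hi - lo) / (vmax - vmin)
    return [lo + (v - vmin) * scale for v in values]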

A situation may develop in which the maximum value at some output is an outlier or an error, for example a manifestation of noise. Such a case shows itself as a single maximum surrounded by many small signals. When winner outputs are selected, small signal values can be ignored, and the largest signal surrounded by other large signals is chosen as the winner. Known statistical techniques for variance reduction, such as importance sampling, may be used for this purpose. This approach allows noise to be removed while the essentially useful patterns are preserved. The creation of winner groups enables the clustering of linearly inseparable images, i.e., images that relate to more than one cluster, as shown in Figure 13. The above can provide a major improvement in accuracy and reduce the number of clustering errors.
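A minimal sketch of outlier-resistant winner selection; a local-mean score is used here as a simple stand-in for the variance-reduction techniques mentioned above (it is not the importance-sampling method itself), and the neighborhood radius is an illustrative assumption:

import numpy as np

def robust_winner(sums, radius=2):
    """Pick as winner the output whose neighborhood is also strong, so an
    isolated spike (likely noise) loses to a peak supported by other large
    signals nearby."""
    n = len(sums)
    scores = np.empty(n)
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        scores[i] = np.mean(sums[lo:hi])  # support from surrounding outputs
    return int(np.argmax(scores))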

In the process of training the p-network 100, the typical errors subject to correction are:

Error correction may also be carried out by means of the algorithm described above, with the help of an external trainer.

The hardware implementation of the p-network 100 may be provided in digital and/or analog microchips. A representative p-network 100 microchip can be used for the storage and processing of information. The p-network 100 microchip may be based on various variable resistors, field-effect transistors, memristors, capacitors, switching elements, voltage generators, nonlinear photocells, and the like. Variable resistors may serve as the synaptic weights 108 and/or the corrective weights 112. A plurality of such resistors may be connected in parallel, in series, or in series-parallel. In the case of a parallel connection of individual resistors, the signal may be encoded by current values, which lends itself to automatic analog summation of currents. To obtain positive or negative signals, two sets of resistors (excitatory and inhibitory) may be provided at each synapse. In such a hardware structure, the inhibitory signal is subtracted from the excitatory signal.
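A minimal sketch of the paired excitatory/inhibitory resistor banks, modeling parallel resistors by their summed conductances (I = V·G); all names and values are illustrative assumptions:

def synapse_current(voltage, excitatory_conductances, inhibitory_conductances):
    """Analog summation with paired resistor banks: parallel conductances add,
    and the inhibitory branch is subtracted from the excitatory branch to
    produce a signed signal encoded as a current."""
    g_exc = sum(excitatory_conductances)  # parallel conductances sum directly
    g_inh = sum(inhibitory_conductances)
    return voltage * (g_exc - g_inh)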

Each corrective weight 112 may be implemented as a memristor-like device (a memristor). As understood by those skilled in the art, a memristor is a variable resistor whose resistance is controlled by a current in the circuit, or by a potential or charge. Suitable memristor functionality can be achieved with an actual memristor device, as well as with software or a physical emulation of one. During operation of the p-network 100 at low voltage potentials, the memristor operates like a simple resistor. During the training mode, the resistance of the memristor may be changed, for example by a strong voltage pulse. The direction of the change in the memristor's value (an increase or decrease in resistance) may depend on the polarity of the voltage, while the magnitude of the change may depend on the magnitude of the voltage pulse.
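A toy model of a memristor-like corrective weight, assuming an arbitrary switching threshold and rate; real device parameters would differ:

class MemristorWeight:
    """Below the threshold the device reads like a fixed resistor; a strong
    pulse shifts its resistance, with the sign set by the pulse polarity and
    the step size by the pulse magnitude."""
    def __init__(self, resistance=1000.0, threshold=1.0, rate=50.0):
        self.resistance = resistance
        self.threshold = threshold  # pulses below this leave the device unchanged
        self.rate = rate            # ohms of change per volt above threshold

    def read_current(self, voltage):
        return voltage / self.resistance  # ordinary resistor at low voltage

    def pulse(self, voltage):
        if abs(voltage) > self.threshold:  # training pulse
            delta = self.rate * (abs(voltage) - self.threshold)
            self.resistance += delta if voltage > 0 else -delta
            self.resistance = max(self.resistance, 1.0)  # keep it physical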

The detailed description and the drawings support and describe the disclosure, while the scope of the disclosure is defined solely by the claims. Although some of the best modes and other embodiments for carrying out the claimed disclosure have been described in detail, various alternative designs and embodiments exist for practicing the disclosure defined in the appended claims. Furthermore, the features of the embodiments shown in the drawings or of the various embodiments mentioned in the present description are not necessarily to be understood as embodiments independent of each other. Rather, each of the features described in one of the examples of an embodiment may be combined with one or more other desired features from other embodiments, resulting in other embodiments that are not described in words or with reference to the drawings. Accordingly, such other embodiments fall within the framework of the scope of the appended claims.

100‧‧‧progressive neural network (p-network)
102‧‧‧input
104‧‧‧input signal
106‧‧‧input image
110‧‧‧weight correction block
112‧‧‧corrective weight
114‧‧‧distributor
116‧‧‧neuron
117‧‧‧output
118‧‧‧synapse
119‧‧‧neuron unit
120‧‧‧neuron sum
122‧‧‧weight correction calculator
124‧‧‧desired output signal
126‧‧‧output image
128‧‧‧deviation

Claims (16)

1. A neural network, comprising: a plurality of inputs of the neural network, each input configured to receive an input signal having an input value; a plurality of synapses, wherein each synapse is connected to one of the plurality of inputs and includes a plurality of corrective weights, each corrective weight defined by a weight value; a set of distributors, wherein each distributor is operatively connected to one of the plurality of inputs for receiving the respective input signal and is configured to select one or more corrective weights from the plurality of corrective weights in correlation with the input value; a set of neurons, wherein each neuron has at least one output and is connected with at least one of the plurality of inputs via one of the plurality of synapses, and wherein each neuron is configured to add up the weight values of the corrective weights selected from each synapse connected to the respective neuron and thereby generate a neuron sum; and a weight correction calculator configured to receive a desired output signal having a value, determine a deviation of the neuron sum from the desired output signal value, and modify respective corrective weight values using the determined deviation, such that adding up the modified corrective weight values to determine the neuron sum minimizes the deviation of the neuron sum from the desired output signal value, thereby training the neural network.
2. The neural network of claim 1, wherein: the determination of the deviation of the neuron sum from the desired output signal includes dividing the desired output signal value by the neuron sum, thereby generating a deviation coefficient; and the modification of the respective corrective weight values includes multiplying each corrective weight used to generate the neuron sum by the deviation coefficient.
3. The neural network of claim 1, wherein the deviation of the neuron sum from the desired output signal is a mathematical difference therebetween, and wherein the generation of the respective modified corrective weights includes apportioning the mathematical difference to each corrective weight used to generate the neuron sum.
4. The neural network of claim 3, wherein the apportionment of the mathematical difference includes dividing the determined difference equally among the corrective weights used to generate the neuron sum.
5. The neural network of claim 3, wherein: each distributor is additionally configured to assign a plurality of impact coefficients to the respective plurality of corrective weights, such that each impact coefficient is assigned to one of the plurality of corrective weights in a predetermined proportion to generate the respective neuron sum; each neuron is configured to add up a product of the corrective weight and the assigned impact coefficient for all the synapses connected thereto; and the weight correction calculator is configured to apply a portion of the determined difference to each corrective weight used to generate the neuron sum according to the proportion established by the respective impact coefficient.
6. The neural network of claim 5, wherein: each respective plurality of impact coefficients is defined by an impact distribution function; the plurality of input values is received into a value range divided into intervals according to an interval distribution function, such that each input value is received within a respective interval and each corrective weight corresponds to one of the intervals; and each distributor uses the respective received input value to select the respective interval and assigns the respective plurality of impact coefficients to the corrective weight corresponding to the selected respective interval and to at least one corrective weight corresponding to an interval adjacent to the selected respective interval.
7. The neural network of claim 6, wherein each corrective weight is additionally defined by a set of indexes including: an input index configured to identify the corrective weight corresponding to the input; an interval index configured to specify the selected interval for the respective corrective weight; and a neuron index configured to specify the corrective weight corresponding to the neuron.
8. The neural network of claim 7, wherein each corrective weight is further defined by an access index configured to tally the number of times the respective corrective weight is accessed by the input signal during training of the neural network.
9. A method of training a neural network, comprising: receiving, via an input to the neural network, an input signal having an input value; communicating the input signal to a distributor operatively connected to the input; selecting, via the distributor, one or more corrective weights from a plurality of corrective weights in correlation with the input value, wherein each corrective weight is defined by a weight value and is positioned on a synapse connected to the input; adding up, via a neuron connected with the input through the synapse and having at least one output, the weight values of the selected corrective weights to generate a neuron sum; receiving, via a weight correction calculator, a desired output signal having a value; determining, via the weight correction calculator, a deviation of the neuron sum from the desired output signal value; and modifying, via the weight correction calculator, respective corrective weight values using the determined deviation, such that adding up the modified corrective weight values to determine the neuron sum minimizes the deviation of the neuron sum from the desired output signal value, thereby training the neural network.
10. The method of claim 9, wherein: determining the deviation of the neuron sum from the desired output signal value includes dividing the desired output signal value by the neuron sum, thereby generating a deviation coefficient; and modifying the respective corrective weights includes multiplying each corrective weight used to generate the neuron sum by the deviation coefficient.
11. The method of claim 9, wherein determining the deviation of the neuron sum from the desired output signal value includes determining a mathematical difference therebetween, and wherein modifying the respective corrective weights includes apportioning the mathematical difference to each corrective weight used to generate the neuron sum.
12. The method of claim 11, wherein the apportioning of the mathematical difference includes dividing the determined difference equally among the corrective weights used to generate the neuron sum.
13. The method of claim 9, further comprising: assigning, via the distributor, a plurality of impact coefficients to the plurality of corrective weights, including assigning each impact coefficient to one of the plurality of corrective weights in a predetermined proportion to generate the neuron sum; adding up, via the neuron, a product of the corrective weight and the assigned impact coefficient for all the synapses connected thereto; and applying, via the weight correction calculator, a portion of the determined difference to each corrective weight used to generate the neuron sum according to the proportion established by the respective impact coefficient.
14. The method of claim 13, wherein the plurality of impact coefficients is defined by an impact distribution function, the method further comprising: receiving the input value into a value range divided into intervals according to an interval distribution function, such that the input value is received within a respective interval and each corrective weight corresponds to one of the intervals; and using, via the distributor, the received input value to select the respective interval and assigning the plurality of impact coefficients to the corrective weight corresponding to the selected respective interval and to at least one corrective weight corresponding to an interval adjacent to the selected respective interval.
15. The method of claim 14, further comprising: additionally defining each corrective weight by a set of indexes including: an input index configured to identify the corrective weight corresponding to the input; an interval index configured to specify the selected interval for the respective corrective weight; and a neuron index configured to specify the corrective weight corresponding to the neuron.
16. The method of claim 15, further comprising: further defining each corrective weight by an access index configured to tally the number of times the respective corrective weight is accessed by the input signal during training of the neural network.
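As an informal illustration of the mechanism recited in claims 1, 9, and 12, a minimal single-neuron sketch (not part of the claims; the interval scheme, the equal apportionment, and all names are illustrative assumptions):

import numpy as np

class PNetSketch:
    """One neuron with one synapse per input: a distributor maps each input
    value to an interval and selects one corrective weight per synapse; the
    neuron sums the selected weights; the weight correction calculator spreads
    the deviation from the desired output equally over the selected weights."""
    def __init__(self, n_inputs, n_intervals, lo=0.0, hi=1.0):
        self.edges = np.linspace(lo, hi, n_intervals + 1)
        self.weights = np.zeros((n_inputs, n_intervals))

    def _select(self, x):
        # Distributor: interval index for each input value.
        idx = np.searchsorted(self.edges, np.asarray(x)) - 1
        return np.clip(idx, 0, self.weights.shape[1] - 1)

    def train(self, x, desired):
        idx = self._select(x)
        rows = np.arange(len(idx))
        neuron_sum = self.weights[rows, idx].sum()
        share = (desired - neuron_sum) / len(idx)  # equal apportionment
        self.weights[rows, idx] += share

    def recall(self, x):
        idx = self._select(x)
        return self.weights[np.arange(len(idx)), idx].sum()

In this sketch, a single train(x, desired) call drives a subsequent recall(x) for the same input exactly onto the desired value, since the deviation is fully apportioned among the selected weights.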
TW105101628A 2015-01-22 2016-01-20 Neural network and method of neural network training TWI655587B (en)

Applications Claiming Priority (8)

Application Number Priority Date Filing Date Title
US201562106389P 2015-01-22 2015-01-22
US62/106,389 2015-01-22
PCT/US15/19236 2015-03-06
PCT/US2015/019236 WO2015134900A1 (en) 2014-03-06 2015-03-06 Neural network and method of neural network training
US201562173163P 2015-06-09 2015-06-09
US62/173,163 2015-06-09
US14/862,337 US9390373B2 (en) 2014-03-06 2015-09-23 Neural network and method of neural network training
US14/862,337 2015-09-23

Publications (2)

Publication Number Publication Date
TW201636905A true TW201636905A (en) 2016-10-16
TWI655587B TWI655587B (en) 2019-04-01

Family

ID=57847669

Family Applications (1)

Application Number Title Priority Date Filing Date
TW105101628A TWI655587B (en) 2015-01-22 2016-01-20 Neural network and method of neural network training

Country Status (1)

Country Link
TW (1) TWI655587B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113743535B (en) * 2019-05-21 2024-05-24 北京市商汤科技开发有限公司 Neural network training method and device and image processing method and device
CN111525921B (en) * 2020-05-15 2023-09-08 矽力杰半导体技术(杭州)有限公司 System and method for signal conversion in neural networks

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5299285A (en) * 1992-01-31 1994-03-29 The United States Of America As Represented By The Administrator, National Aeronautics And Space Administration Neural network with dynamically adaptable neurons
TWI226012B (en) * 2003-12-17 2005-01-01 Wintek Corp Neural network correcting method for touch panel
EP2122542B1 (en) * 2006-12-08 2017-11-01 Medhat Moussa Architecture, system and method for artificial neural network implementation
TW201203137A (en) * 2010-07-09 2012-01-16 Univ Nat Taipei Technology Data correction method for remote terminal unit

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI596366B (en) * 2016-10-24 2017-08-21 財團法人工業技術研究院 Positioning method and image capturing device thereof
TWI753039B (en) * 2017-02-22 2022-01-21 香港商阿里巴巴集團服務有限公司 Image recognition method and device
TWI742312B (en) * 2017-10-02 2021-10-11 宏達國際電子股份有限公司 Machine learning system, machine learning method and non-transitory computer readable medium for operating the same
TWI662511B (en) * 2017-10-03 2019-06-11 財團法人資訊工業策進會 Hierarchical image classification method and system
TWI709852B (en) * 2017-11-17 2020-11-11 美商艾維泰公司 System and method for anomaly detection via a multi-prediction-model architecture
US11416733B2 (en) 2018-11-19 2022-08-16 Google Llc Multi-task recurrent neural networks
US20210279528A1 (en) * 2020-03-03 2021-09-09 Assa Abloy Ab Systems and methods for fine tuning image classification neural networks
US11763551B2 (en) * 2020-03-03 2023-09-19 Assa Abloy Ab Systems and methods for fine tuning image classification neural networks
US11829476B2 (en) 2020-05-27 2023-11-28 Hon Hai Precision Industry Co., Ltd. Computing device and model parameters security protection method

Also Published As

Publication number Publication date
TWI655587B (en) 2019-04-01
