CN101625735A - FPGA implementation method based on LS-SVM classification and regression learning recurrent neural network - Google Patents
- Publication number: CN101625735A
- Application number: CN200910023583A
- Authority
- CN
- China
- Prior art keywords: alpha, neural network, sub, sigma, neuron
- Prior art date
- Legal status (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed): Pending
Landscapes: Complex Calculations (AREA)
Abstract
The invention discloses an FPGA implementation method based on LS-SVM classification and regression learning recurrent neural networks. The method is implemented according to the following steps: constructing the topology of the LS-SVM classification or regression learning recurrent neural network according to the sample size; choosing a suitable kernel function and selecting and calculating its parameters; discretizing the obtained dynamic equations and determining the step size; selecting the number of bits of the two's-complement binary coding used in the experiment; constructing a basic component library comprising arithmetic units, storage units and a control unit; constructing the neuron units of the LS-SVM classification or regression learning recurrent neural network; treating the constructed neuron units as basic elements; and calling the corresponding number of neurons to build the whole network. The FPGA implementation method not only speeds up LS-SVM training, but also overcomes the poor flexibility of analog circuits and can better cope with changes in the application environment.
Description
Technical field
The invention belongs to the field of pattern recognition technology. It relates to an FPGA implementation method based on an LS-SVM classification learning recurrent neural network, and also to an FPGA implementation method based on an LS-SVM regression learning recurrent neural network.
Background technology
Support Vector Machines (SVM) are widely used as classification and regression tools thanks to their good generalization ability, extremely low classification and approximation error, mathematical tractability, and concise geometric interpretation. Current research on support vector machines concentrates mainly on theory and optimization algorithms; by comparison, applied research and algorithm implementations are relatively scarce, with only rather limited experimental reports so far. Moreover, most of these algorithms are implemented in general-purpose computer software and are not suited to hardware realization, which clearly and greatly limits the practical application of SVM.
The SVM hardware implementation methods proposed so far are all simple applications of analog circuits. Analog circuits themselves are inflexible: once the topology is determined it is hard to change, and the circuit occupies a large area and consumes much power. Yet in application an SVM often faces an environment that changes in real time, and many occasions also impose requirements on the physical conditions of the implementation circuit itself, so analog-circuit methods for realizing SVM learning are restricted in practice. FPGA technology has developed rapidly in recent years; in particular, an FPGA can build a topology quickly, consumes little power, adapts well, and is simple to design, which provides a new line of thought for the hardware realization of SVM.
Training a standard support vector machine amounts to solving a quadratic programming problem. The least squares support vector machine (LS-SVM) adopts equality constraints and converts the original standard SVM problem into solving a system of linear equations, which simplifies the computational complexity; the algorithm is easy to realize and converges quickly. Research on the FPGA realization of LS-SVM is therefore significant both for the practical application and for the theoretical study of SVM.
Summary of the invention
The objective of the invention is to provide an FPGA implementation method based on an LS-SVM classification learning recurrent neural network, and also an FPGA implementation method based on an LS-SVM regression learning recurrent neural network, which accelerate the training of the recurrent neural network and improve the flexibility of the circuit.
The technical scheme of the present invention is an FPGA implementation method based on an LS-SVM classification learning recurrent neural network, implemented according to the following steps:
Step 1: construct, according to the sample size, the topology of the LS-SVM classification learning recurrent neural network: the terms -α_1 q_{i1}, ..., -α_N q_{iN}, 1, -γ^{-1} α_i and -b y_i are fed into the summation unit Σ; the output of Σ feeds an integrator ∫, the output of the integrator ∫ is α_i, and α_i is fed back through the weights -q_{ij} into each corresponding Σ, forming a recurrent neural network;
Step 2: a given classification training set (z_i, y_i), i = 1, 2, ..., N, is a group of samples to be classified, where every z_i ∈ R^n has a label y_i ∈ {+1, -1}; the classification decision surface is described as y(z) = sign(W^T φ(z) + b), where W is the weight vector, b is a threshold, and φ(·) denotes the nonlinear mapping of the training samples from the input space to the feature space; the classification learning of the LS-SVM solves a constrained optimization problem, and to solve this problem the Lagrange function is introduced, where the α_i are the Lagrange multipliers; taking partial derivatives of the Lagrange function with respect to each of its variables via the KKT conditions gives the optimality conditions of the problem, where q_ij = y_i y_j K_ij and K_ij = φ(z_i)^T φ(z_j) is defined as the kernel function; this yields the dynamic equations of the LS-SVM classification learning neural network model:

dα_i/dt = 1 - y_i b - Σ_j q_ij α_j - γ^{-1} α_i,     (6)
db/dt = Σ_j y_j α_j,     (7)

where the α_i are the Lagrange multipliers, b is a threshold, and (z_i, y_i), i = 1, 2, ..., N is the group of samples to be classified;
Step 3: combining the result of step 2, discretize the dynamic equations (6), (7) and determine the step size ΔT; discretizing equations (6), (7) gives the discrete dynamic equations (8), (9):

α_i(t+ΔT) = α_i(t) + ΔT [1 - y_i b(t) - Σ_j q_ij α_j(t) - γ^{-1} α_i(t)],     (8)
b(t+ΔT) = b(t) + ΔT Σ_j y_j α_j(t),     (9)

where the time interval ΔT in equations (8), (9) is the sampling step size;
Step 4: determine the number of bits of the two's-complement binary coding, comprising the number of integer bits and the number of fractional bits; 32-bit two's-complement intermediate results are converted back to the 16-bit two's-complement coding;
Step 5: construct the basic component library according to step 4, comprising arithmetic units, storage units and a control unit, where the arithmetic units comprise the multiply-accumulate unit MAC, multiplier unit Mul, subtractor unit Sub and accumulator unit AC; the storage units comprise ROM and RAM; MAC, Mul, Sub, AC and ROM are realized by calling IP cores in ISE 9.1;
Step 6: use the component library obtained in step 5 to construct the neuron units, building the LS-SVM classification learning neuron module:
A1. When data storage is complete, data are read simultaneously from ROM and RAM and input to the MAC unit, where on each trigger edge α_j(t) and q_ij correspond to each other; while the MAC unit computes, 1 - b(t) y_i is calculated: first b(t) y_i is computed by Mul(1), a mapped unit of Mul, and then 1 - b(t) y_i by Sub(1), a mapped unit of Sub;
A2. When the MAC operation finishes, its result Σ_j q_ij α_j(t) and 1 - b(t) y_i are input to Sub(2), a mapped unit of Sub; in parallel with Sub(2) lies Mul(2), which completes the computation of γ^{-1} α_i(t) in the same time period;
A3. The results of Mul(2) and Sub(2) are input to Sub(3), which completes the computation of 1 - b(t) y_i - Σ_j q_ij α_j(t) - γ^{-1} α_i(t);
A4. The result of Sub(3) and the configured ΔT are sent to Mul(3), giving the operation result α_j(ΔT); α_j(ΔT) is then sent to the AC module to be accumulated, finally giving α_j(t+ΔT); the term ΔT [1 - b(t) y_i - Σ_j q_ij α_j(t) - γ^{-1} α_j(t)] in (8) is regarded as the increment from α_j(t) to α_j(t+ΔT); when it is 0, i.e. when the input of the AC module is 0, the output of the AC module no longer changes, the output of the neuron is stable, and the output result is α_j;
Step 7: regard each neuron unit as a basic element; Neuro1~Neuroi denote i neurons and Neuro_b denotes the threshold neuron; the i neurons and the Neuro_b threshold neuron are called and connected in parallel to the select module according to the interconnection rule for the i+1 neurons, constituting the neural network with SVM learning function; meanwhile each neuron is governed by a clock control signal, and after the computation of each neuron finishes within one period it produces a valid control signal; when the neural network control unit receives valid control signals from all neurons, it produces an overall valid control signal that lets all neurons enter the computation of the next period; when the recursive operation of the network becomes stable, the converged parameters α, b are obtained.
Another technical scheme of the present invention is an FPGA implementation method based on an LS-SVM regression learning recurrent neural network, implemented according to the following steps:
Step 1: construct, according to the sample size, the topology of the LS-SVM regression learning recurrent neural network, as in Fig. 8;
Step 2: given a training set (z_i, y_i), i = 1, ..., N, where z_i ∈ R^n is the i-th input datum and y_i ∈ R is the i-th output datum, the regression function, analogous to the classification problem, is y(z) = W^T φ(z) + b, where W is the weight vector, b is an offset, and φ(·) denotes the nonlinear mapping from the input space to the feature space; the regression problem of the LS-SVM solves a corresponding optimization problem; building the Lagrange function in the same way, with the α_i the Lagrange multipliers, the KKT conditions and a derivation like that of the classification problem give the conditions that the optimum must satisfy, yielding the dynamic equations of the LS-SVM regression learning neural network model:

dα_i/dt = y_i - b - Σ_j Ω_ij α_j - γ^{-1} α_i,     (15)
db/dt = Σ_j α_j,     (16)

where Ω_ij = K(z_i, z_j), the α_i are the Lagrange multipliers, b is an offset, and (z_i, y_i), i = 1, 2, ..., N is the training sample set;
Step 3: combining the result of step 2, discretize the dynamic equations (15), (16) and determine the step size ΔT; this gives the discrete dynamic equations (17), (18):

α_i(t+ΔT) = α_i(t) + ΔT [y_i - b(t) - Σ_j Ω_ij α_j(t) - γ^{-1} α_i(t)],     (17)
b(t+ΔT) = b(t) + ΔT Σ_j α_j(t),     (18)

where the time interval ΔT in equations (17), (18) is the sampling step size;
Step 4: determine the number of bits of the two's-complement binary coding, comprising the number of integer bits and the number of fractional bits; 32-bit two's-complement intermediate results are converted back to the 16-bit two's-complement coding;
Step 5: construct the basic component library according to step 4, comprising arithmetic units, storage units and a control unit, where the arithmetic units comprise the multiply-accumulate unit MAC, multiplier unit Mul, subtractor unit Sub and accumulator unit AC; the storage units comprise ROM and RAM; MAC, Mul, Sub, AC and ROM are realized by calling IP cores in ISE 9.1;
Step 6: use the basic component library obtained in step 5 to construct the LS-SVM regression learning recurrent neuron unit:
B1. When data storage is complete, data are read simultaneously from ROM and RAM and input to the MAC unit to complete the computation of Σ_j Ω_ij α_j(t); the computation of y_i - b(t) is completed at the same time;
B2. When the MAC operation finishes, its result Σ_j Ω_ij α_j(t) and y_i - b(t) are input to Sub(2), a mapped unit of Sub; in parallel with Sub(2) lies Mul(1), which completes the computation of γ^{-1} α_i(t) in the same time period;
B3. The results of Mul(1) and Sub(2) are input to Sub(3), which completes the computation of y_i - b(t) - Σ_j Ω_ij α_j(t) - γ^{-1} α_i(t);
B4. The result of Sub(3) and the configured ΔT are sent to Mul(2), giving the operation result α_j(ΔT); α_j(ΔT) is then sent to the AC module to be accumulated, finally giving α_j(t+ΔT); the term ΔT [y_i - b(t) - Σ_j Ω_ij α_j(t) - γ^{-1} α_j(t)] in (17) is regarded as the increment from α_j(t) to α_j(t+ΔT); when it is 0, i.e. when the input of the AC module is 0, the output of the AC module no longer changes, the output of the neuron is stable, and the output result is α_j;
Step 7: regard each neuron unit as a basic element; Neuro1~Neuroi denote i neurons and Neuro_b denotes the threshold neuron; the i neurons and the Neuro_b threshold neuron are called and connected in parallel to the select module according to the interconnection rule for the i+1 neurons, constituting the neural network with SVM learning function; meanwhile each neuron is governed by a clock control signal, and after the computation of each neuron finishes within one period it produces a valid control signal; when the neural network control unit receives valid control signals from all neurons, it produces an overall valid control signal that lets all neurons enter the computation of the next period; when the recursive operation of the network becomes stable, the converged parameters α, b are obtained.
The FPGA implementation method of the present invention accomplishes the classification and regression LS-SVM learning recurrent neural networks well; it not only accelerates the training of the recurrent neural network but also improves the flexibility of the circuit, making the application of LS-SVM more widespread.
Description of drawings
Fig. 1 is the topology diagram of the LS-SVM classification learning neural network of the present invention;
Fig. 2 is the implementation flow chart of the LS-SVM classification learning neural network of the present invention;
Fig. 3 is a schematic diagram of the 16-bit two's-complement binary coding of the present invention;
Fig. 4 is the flow diagram of coded signals between arithmetic units of the present invention;
Fig. 5 is a schematic diagram of the processing of coded data during training in the present invention;
Fig. 6 is the FPGA implementation flow chart of a neuron of the LS-SVM classification learning neural network of the present invention;
Fig. 7 is the structural diagram of the LS-SVM learning neural network of the present invention;
Fig. 8 is the topology diagram of the LS-SVM regression learning neural network of the present invention;
Fig. 9 is the FPGA implementation flow chart of a neuron of the LS-SVM regression learning neural network of the present invention;
Fig. 10 is the convergence waveform of α, b obtained with Simulink for the LSSVCLN of embodiment 1 of the invention;
Fig. 11 is the linearly inseparable decision surface obtained in embodiment 1 from the resulting α, b of Fig. 10, where 'o' denotes positive-class samples and '*' denotes negative-class samples;
Fig. 12 is the convergence waveform of α, b obtained with the FPGA realization of the LSSVCLN of embodiment 1 of the invention;
Fig. 13 is the linearly inseparable decision surface obtained in embodiment 1 from the resulting α, b of Fig. 12, where 'o' denotes positive-class samples and '*' denotes negative-class samples;
Fig. 14 is the convergence waveform of α, b obtained by Simulink simulation of the LSSVRLN in embodiment 2 of the invention;
Fig. 15 is the regression result curve over 12 points obtained in embodiment 2 from the α, b of Fig. 14;
Fig. 16 is the regression result curve over 12 points obtained in embodiment 2 from the α, b of Fig. 17;
Fig. 17 is the convergence waveform of α, b obtained with the FPGA realization of the LSSVRLN in embodiment 2 of the invention.
Embodiment
The present invention is described in detail below in conjunction with the drawings and specific embodiments.
Constructing the LS-SVM classification learning recurrent neural network:
A given classification training set (z_i, y_i), i = 1, 2, ..., N, is a group of samples to be classified, where every z_i ∈ R^n has a label y_i ∈ {+1, -1}. The classification decision surface can be described as

y(z) = sign(W^T φ(z) + b),

where W is the weight vector, b is a threshold, and φ(·) is used to represent the nonlinear mapping of the training samples from the input space to the feature space. The classification learning of the LS-SVM solves the constrained optimization problem

min_{W,b,e} (1/2) W^T W + (γ/2) Σ_{i=1}^{N} e_i²   s.t.  y_i (W^T φ(z_i) + b) = 1 - e_i,  i = 1, ..., N.

To solve this problem, the Lagrange function can be introduced:

L(W, b, e, α) = (1/2) W^T W + (γ/2) Σ_i e_i² - Σ_i α_i [y_i (W^T φ(z_i) + b) - 1 + e_i],

where the α_i are the Lagrange multipliers. Taking partial derivatives of the Lagrange function with respect to each of its variables via the KKT conditions gives the optimality conditions of the problem:

Σ_j q_ij α_j + y_i b + γ^{-1} α_i = 1,   Σ_j y_j α_j = 0,

where q_ij = y_i y_j K_ij and K_ij = φ(z_i)^T φ(z_j) is defined as the kernel function. If the kernel function satisfies the Mercer condition and the symmetric matrix Q_c = [q_ij] is positive definite, then this problem is a convex optimization problem, i.e. it has only a global solution.
The neural network model is described by the following dynamic equations:

dα_i/dt = 1 - y_i b - Σ_j q_ij α_j - γ^{-1} α_i,     (6)
db/dt = Σ_j y_j α_j.     (7)
Fig. 1 is the topology diagram of the LS-SVM classification learning neural network: the terms -α_1 q_{i1}, ..., -α_N q_{iN}, 1, -γ^{-1} α_i and -b y_i are fed into the summation unit Σ; the output of Σ feeds an integrator ∫, the output of the integrator ∫ is α_i, and α_i is fed back through the weights -q_{ij} into each corresponding Σ, forming a recurrent neural network.
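The weights that this topology presupposes can be sketched in software. The patent does not fix a particular kernel at this point, so the RBF kernel and the parameter sigma below are illustrative assumptions, and `rbf_kernel` / `build_q` are hypothetical helper names:

```python
import math

def rbf_kernel(za, zb, sigma=1.0):
    # K(z_a, z_b) = exp(-||z_a - z_b||^2 / (2*sigma^2)); the RBF kernel
    # satisfies the Mercer condition mentioned in the text
    d2 = sum((a - b) ** 2 for a, b in zip(za, zb))
    return math.exp(-d2 / (2.0 * sigma ** 2))

def build_q(z, y, sigma=1.0):
    # q_ij = y_i * y_j * K(z_i, z_j); computed once before training and,
    # in the hardware realization, stored in the ROM unit
    n = len(z)
    return [[y[i] * y[j] * rbf_kernel(z[i], z[j], sigma) for j in range(n)]
            for i in range(n)]
```

Because Q = diag(y) K diag(y) and K is positive definite for distinct samples, the resulting Q_c = [q_ij] is symmetric positive definite, matching the convexity condition stated above.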
Because an FPGA cannot directly realize continuous dynamic equations, equations (6), (7) are discretized to obtain the discrete dynamic equations (8), (9):

α_i(t+ΔT) = α_i(t) + ΔT [1 - y_i b(t) - Σ_j q_ij α_j(t) - γ^{-1} α_i(t)],     (8)
b(t+ΔT) = b(t) + ΔT Σ_j y_j α_j(t),     (9)

where ΔT in formulas (8), (9) is the sampling time interval.
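The displayed equations (6)-(9) are figures that did not survive in this text, so the following one-pass-per-period software sketch is reconstructed from the Fig. 1 topology and the four-stage neuron description; the exact signs and the form of the b update are assumptions, and `lssvm_classify_train` is a hypothetical name:

```python
def lssvm_classify_train(Q, y, gamma, dT, iters):
    """Iterate the discrete dynamics: each loop pass is one network period."""
    N = len(y)
    alpha = [0.0] * N
    b = 0.0
    for _ in range(iters):
        # per-neuron update: alpha_i += dT*(1 - y_i*b - sum_j q_ij*alpha_j - alpha_i/gamma)
        new_alpha = [alpha[i] + dT * (1.0 - y[i] * b
                                      - sum(Q[i][j] * alpha[j] for j in range(N))
                                      - alpha[i] / gamma)
                     for i in range(N)]
        # threshold-neuron update driven by the KKT condition sum_j y_j*alpha_j = 0
        b += dT * sum(y[j] * alpha[j] for j in range(N))
        alpha = new_alpha
    return alpha, b
```

In hardware, iteration stops when the accumulated increments reach zero; the sketch simply runs a fixed number of periods.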
The present invention now describes how equations (8), (9) are realized with an FPGA. The programmability of the FPGA itself enhances the flexibility of the circuit design, so the main means adopted in the present invention is the VHDL programming language commonly used for FPGAs; Fig. 2 further illustrates how VHDL is used in the LS-SVM classification learning neural network. The bottom boxes in Fig. 2 represent the constructed basic component library, mainly comprising the multiply-accumulate unit (MAC), multiplier unit (Mul), subtractor unit (Sub), accumulator unit (AC) and the storage units ROM and RAM. As for the means of implementing the basic components, some are realized by VHDL programming and others by third-party IP cores. The IP cores involved in the present invention are soft IP cores and contain no concrete physical information. The boxes of the second layer represent neurons, where neuro1-neuroi denote i neurons; they are all realized by writing .vhd files that call the elements of the bottom-layer basic component library. The topmost box represents the whole SVM neural network, which is likewise realized by writing a .vhd file that calls the neuron files of the second layer.
The components in the library created by the method of the invention all use two's-complement binary coding for their input and output signals. The corresponding basic components perform fixed-point arithmetic, in which the position of the radix point is decided by the designer; no separate unit stores the radix-point position in the actual hardware realization. For the designer it is a virtual existence: the designer only needs to remember its position, and it has no influence on the actual computation process. The code length range of a signal can be obtained from the designer's experience or by software simulation.
Another important problem of the method of the invention is the handling of intermediate data. The proposed networks are all recursive; for this class of network, once the input is given, the training process is a continuous iterative process in which human intervention is difficult. The computation of a neuron within one period involves arithmetic units such as multipliers and multiply-accumulators, and this class of unit makes the representation of the output signal inconsistent with that of the input signal. Fig. 3 shows a 16-bit two's-complement binary-coded number, where S denotes the sign bit, F the integer bits and I the fractional bits. As in Fig. 4, take the combination of a multiplier and a subtractor as an example. The input signals a and b of the multiplier (Mul) are 16-bit two's-complement binary codes, so its output signal C is a 32-bit two's-complement binary code. C is then to serve as an input signal of the subtractor (Sub), but the input code length of the subtractor is set to 16-bit two's-complement coding, so the 32-bit two's-complement code C must first be converted to a 16-bit two's-complement code C through a data-processing step. As shown in Fig. 5, the 32-bit code C is processed by discarding the parts shown in the figure. If the remaining integer part can represent all integer bits, discarding the upper integer bits does not affect the result; and if the signal has enough fractional bits, discarding the fractional part shown in Fig. 5 has a very small, negligible influence on the result. After the processing shown in Fig. 5, a two's-complement binary code of the same 16-bit form as in Fig. 3 is obtained.
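The coding and truncation just described can be sketched as follows, assuming the 16-bit format of Fig. 3 with 1 sign bit, 4 integer bits and 11 fractional bits (the split used in embodiment step 4); the function names are hypothetical:

```python
FRAC = 11  # 11 fractional bits; 4 integer bits and 1 sign bit remain (Fig. 3 format)

def to_fix16(x):
    # encode a real number as a 16-bit two's-complement fixed-point word
    v = int(round(x * (1 << FRAC)))
    assert -(1 << 15) <= v < (1 << 15), "value outside the 16-bit range"
    return v & 0xFFFF

def from_fix16(v):
    # decode a 16-bit two's-complement word back to a real number
    if v >= 1 << 15:
        v -= 1 << 16
    return v / (1 << FRAC)

def fix_mul(a, b):
    # 16x16 multiplication yields a 32-bit product with 22 fractional bits;
    # as in Fig. 5, the low 11 fractional bits and the upper integer bits
    # are discarded to return to the 16-bit format
    sa = a - (1 << 16) if a >= 1 << 15 else a
    sb = b - (1 << 16) if b >= 1 << 15 else b
    prod = sa * sb            # signed 32-bit product, 22 fractional bits
    trunc = prod >> FRAC      # arithmetic shift drops the low fractional bits
    return trunc & 0xFFFF
```

Python's `>>` on a negative integer is an arithmetic (sign-preserving) shift, which mirrors discarding the low fractional bits of a two's-complement word.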
After the basic component library is established, in Fig. 6 the leftmost box represents the basic component library, comprising the multiply-accumulate unit (MAC), multiplier unit (Mul), subtractor unit (Sub), accumulator unit (AC), the storage units ROM and RAM, and a counter (COUNT). In the middle diagram, q_ij are the weights of the network; Mul(1), Mul(2) and Mul(3) are mapped instances of the element Mul in the basic component library, and Sub(1), Sub(2) and Sub(3) are mapped instances of the element Sub. y_i is the class label and γ is the penalty factor. In the rightmost diagram, α_1(t), α_2(t), ..., α_i(t) denote the Lagrange multipliers and b denotes the threshold of the network. For the recursive LS-SVM classification learning neural network, the weights q_ij of the network are determined by the values of the training samples. Unlike an ordinary neural network, the weights of this network are not updated in real time: for a specific problem the values of the training samples are fixed, i.e. the LS-SVM classification learning neural network weights q_ij do not change during training. Therefore the weights q_ij are computed from the training samples before network training and stored in the ROM unit.
The LS-SVM classification learning neuron module is constructed according to the flow of Fig. 6. After the above preparation is done, the network enters the training process. Since the network is recursive, only the operation of the j-th neuron within one period is described here. When the previous period ends, a group of training parameters α_1(t), α_2(t), ..., α_i(t) and b(t) is produced. At the beginning of the present period this group of parameter values is stored in the RAM unit, which is done in consideration of saving resources, simplifying the structure, and the pipelining employed. Although the LS-SVM classification learning neural network itself is highly parallel, this method does not affect the overall parallelism. When subsequent computations need these parameter values, they are read back from the RAM. In Fig. 6 the direction of the arrows represents the trigger order of the modules; the FPGA implementation process of a whole LS-SVM classification learning neuron can be divided into the following four stages:
1. When data storage is complete, data are read simultaneously from ROM and RAM and input to the MAC unit, where on each trigger edge α_j(t) and q_ij correspond to each other. While the MAC unit computes, 1 - b(t) y_i is calculated: first b(t) y_i is computed by Mul(1), a mapped unit of Mul, and then 1 - b(t) y_i by Sub(1), a mapped unit of Sub; such parallel computation by two or several modules saves network training time;
2. When the MAC operation finishes, its result Σ_j q_ij α_j(t) and 1 - b(t) y_i are input to Sub(2), a mapped unit of Sub; as can be seen from Fig. 6, Mul(2) lies in parallel with Sub(2) and completes the computation of γ^{-1} α_i(t) in the same time period;
3. The results of Mul(2) and Sub(2) are input to Sub(3), which completes the computation of 1 - b(t) y_i - Σ_j q_ij α_j(t) - γ^{-1} α_i(t);
4. The result of Sub(3) and the configured ΔT are sent to Mul(3), giving the operation result α_j(ΔT); α_j(ΔT) is then sent to the AC module to be accumulated, finally giving α_j(t+ΔT). The term ΔT [1 - b(t) y_i - Σ_j q_ij α_j(t) - γ^{-1} α_j(t)] in (8) can be regarded as the increment from α_j(t) to α_j(t+ΔT); when it is 0, i.e. when the input of the AC module is 0, the output of the AC module no longer changes, the output of the neuron is stable, and the output result is α_j.
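The four stages above can be mirrored stage by stage in software. This is a behavioural sketch of one neuron period, not the VHDL itself, and `neuron_update` is a hypothetical name:

```python
def neuron_update(i, alpha, b, Q, y, gamma, dT):
    """One period of neuron i, mirroring the four trigger stages of Fig. 6."""
    # Stage 1: MAC accumulates sum_j q_ij*alpha_j(t) while, in parallel,
    # Mul(1) forms b(t)*y_i and Sub(1) forms 1 - b(t)*y_i
    mac = sum(Q[i][j] * alpha[j] for j in range(len(alpha)))
    sub1 = 1.0 - b * y[i]
    # Stage 2: Sub(2) = (1 - b*y_i) - MAC; Mul(2) = gamma^-1 * alpha_i in parallel
    sub2 = sub1 - mac
    mul2 = alpha[i] / gamma
    # Stage 3: Sub(3) combines the two branches
    sub3 = sub2 - mul2
    # Stage 4: Mul(3) scales by dT and the AC module accumulates the increment
    return alpha[i] + dT * sub3
```

When the stage-4 increment is zero, the accumulator output no longer changes, which is the stability criterion described in the text.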
When realized in VHDL, the neuron (neuro) is treated as a solid component. A neuro.vhd file is constructed that calls elements of the component library through component declarations; in this way, when part of the network needs to be changed, a new network can be formed merely by changing or replacing elementary units in the component library, giving better extensibility.
The flow of Fig. 6 thus completes the construction of one neuron function module. After the Neuro module is designed according to the structure in Fig. 2, it is equivalent to adding a Neuro unit to the original component library. As shown in Fig. 7, Neuro1~Neuroi denote i neurons and Neuro_b denotes the threshold neuron; the i neurons and the Neuro_b threshold neuron are connected in parallel to the select module according to the interconnection rule for the i+1 neurons, constituting the neural network with SVM learning function. As can be seen from Fig. 7, this recurrent network has strong parallelism and regularity. To exploit these characteristics of the network well, in the present invention each neuron is governed by a clock control signal; after the computation of each neuron finishes within one period, it produces a valid control signal, and when the neural network control unit receives valid control signals from all neurons, it produces an overall valid control signal that lets all neurons enter the computation of the next period. The neuron modules are likewise called through component declarations, a method that makes it convenient to increase or decrease the number of neurons in the network as the network scale varies.
Constructing the LS-SVM regression learning recurrent neural network:
Given a training set (z_i, y_i), i = 1, ..., N, where z_i ∈ R^n is the i-th input datum and y_i ∈ R is the i-th output datum, the regression function, analogous to the classification problem, is

y(z) = W^T φ(z) + b,

where W is the weight vector and b is an offset, and φ(·) denotes the nonlinear mapping from the input space to the feature space. The regression problem of the LS-SVM solves the optimization problem

min_{W,b,e} (1/2) W^T W + (γ/2) Σ_{i=1}^{N} e_i²   s.t.  y_i = W^T φ(z_i) + b + e_i,  i = 1, ..., N.

The Lagrange function is built in the same way:

L(W, b, e, α) = (1/2) W^T W + (γ/2) Σ_i e_i² - Σ_i α_i [W^T φ(z_i) + b + e_i - y_i],

where the α_i are the Lagrange multipliers. Through the KKT conditions and a derivation similar to that of the classification problem, the conditions that the optimum must satisfy are obtained:

Σ_j Ω_ij α_j + b + γ^{-1} α_i = y_i,     (13)
Σ_j α_j = 0,     (14)

where Ω_ij = K(z_i, z_j). The proposed recurrent network is described by the following dynamic equations:

dα_i/dt = y_i - b - Σ_j Ω_ij α_j - γ^{-1} α_i,     (15)
db/dt = Σ_j α_j.     (16)
Similarly, it can be seen that the system described by the dynamic equations (15), (16) satisfies the KKT conditions (13), (14) of the original problem.
Fig. 8 is the topology diagram of the LS-SVM regression learning recurrent neural network: the terms -α_1 Ω_{i1}, ..., -α_N Ω_{iN}, y_i, -γ^{-1} α_i and -b are fed into the summation unit Σ; the output of Σ feeds an integrator ∫, the output of the integrator ∫ is α_i, and α_i is fed back through the weights -Ω_{ij} into each corresponding Σ.
Because an FPGA cannot directly realize continuous dynamic equations, equations (15), (16) are discretized to obtain the discrete dynamic equations (17), (18):

α_i(t+ΔT) = α_i(t) + ΔT [y_i - b(t) - Σ_j Ω_ij α_j(t) - γ^{-1} α_i(t)],     (17)
b(t+ΔT) = b(t) + ΔT Σ_j α_j(t).     (18)
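A software sketch of the discretized regression iteration, reconstructed from the Fig. 8 topology on the assumption that the b update integrates Σ_j α_j (the KKT condition (14)); `lssvm_regress_train` is a hypothetical name:

```python
def lssvm_regress_train(Omega, y, gamma, dT, iters):
    """Iterate the discrete dynamics (17), (18); Omega_ij = K(z_i, z_j)."""
    N = len(y)
    alpha = [0.0] * N
    b = 0.0
    for _ in range(iters):
        # per-neuron update: alpha_i += dT*(y_i - b - sum_j Omega_ij*alpha_j - alpha_i/gamma)
        new_alpha = [alpha[i] + dT * (y[i] - b
                                      - sum(Omega[i][j] * alpha[j] for j in range(N))
                                      - alpha[i] / gamma)
                     for i in range(N)]
        # offset-neuron update driven by the KKT condition sum_j alpha_j = 0
        b += dT * sum(alpha)
        alpha = new_alpha
    return alpha, b
```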
The method of the invention likewise describes how equations (17), (18) are realized with an FPGA.
After the basic component library is established, the neuron module of the LS-SVM regression learning recurrent neural network is constructed according to the flow of Fig. 9. The leftmost box in Fig. 9 represents the basic component library, comprising the multiply-accumulate unit (MAC), multiplier unit (Mul), subtractor unit (Sub), accumulator unit (AC), the storage units ROM and RAM, and a counter (COUNT). In the middle diagram, Ω_ij are the weights of the network; Mul(1) and Mul(2) are mapped instances of the element Mul in the basic component library, and Sub(1), Sub(2) and Sub(3) are mapped instances of the element Sub. y_i is the target output and γ is the penalty factor. In the rightmost diagram, α_1(t), α_2(t), ..., α_i(t) denote the Lagrange multipliers and b denotes the threshold of the network. For the recursive LS-SVM regression learning neural network, the weights Ω_ij of the network are determined by the values of the training samples. Unlike an ordinary neural network, the weights of this network are not updated in real time: for a specific problem the values of the training samples are fixed, i.e. the LS-SVM regression learning neural network weights Ω_ij do not change during training. Therefore the weights Ω_ij are computed from the training samples before network training and stored in the ROM unit.
After the preparation is done, the network enters the training process. Since the network is recursive, only the operation of the j-th neuron within one period is described here. When the previous period ends, a group of training parameters α_1(t), α_2(t), ..., α_i(t) and b(t) is produced. At the beginning of the present period this group of parameter values is stored in the RAM unit, in consideration of saving resources, simplifying the structure, and the pipelining employed. Although the LS-SVM regression learning neural network itself is highly parallel, this method does not affect the overall parallelism. When subsequent computations need these parameter values, they are read back from the RAM. As shown in Fig. 9, the direction of the arrows represents the trigger order of the modules; the FPGA implementation process of a whole LS-SVM regression learning neuron can be divided into the following four stages:
(1), finish when data storage, reading of data simultaneously from ROM and RAM was input in the MAC unit and finished this moment
Computing, finish y simultaneously
iThe calculating of-b (t);
(2), wait for that the MAC unitary operation finishes, with the result
And y
i-b (t) is input to the map unit Sub (2) of Sub.As can see from Figure 9, the position parallel with Sub (2) also has Mul (1), finishes γ in this time period simultaneously
-1α
i(t) calculating;
(3), the result of Mul (1) and Sub (2) is input among the Sub (3), finish
Calculating;
(4) The result of Sub(3) and the preconfigured ΔT are then fed into Mul(2), giving the increment α_j(ΔT); α_j(ΔT) is sent to the AC module and accumulated, finally yielding α_j(t+ΔT). In the discrete dynamic equation this term is the increment from α_j(t) to α_j(t+ΔT); when it is 0, i.e. when the input of the AC module is 0 and the output of the AC module no longer changes, the neuron output has stabilized, and the output result is α_j.
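The four stages above amount to one explicit-Euler step of the discrete regression dynamics. A minimal Python sketch of one neuron period (an illustration with hypothetical variable names, not the patent's VHDL; the weight matrix Ω, γ^(−1) and ΔT are taken as given) is:

```python
import numpy as np

def neuron_period(j, alpha, b, y, Omega, gamma_inv, dT):
    """One period of the j-th LS-SVM regression-learning neuron.

    Mirrors the four pipeline stages: MAC; Sub(2) with Mul(1) in parallel;
    Sub(3); Mul(2) followed by the AC accumulator.
    """
    mac = Omega[j] @ alpha          # stage 1: MAC computes sum_i Omega_ji * alpha_i(t)
    sub2 = (y[j] - b) - mac         # stage 2: Sub(2) on y_j - b(t) and the MAC result
    mul1 = gamma_inv * alpha[j]     # stage 2 (parallel): Mul(1) computes gamma^-1 * alpha_j(t)
    sub3 = sub2 - mul1              # stage 3: Sub(3)
    increment = dT * sub3           # stage 4: Mul(2) gives alpha_j(dT)
    return alpha[j] + increment     # stage 4: AC accumulates alpha_j(t + dT)
```

At the network's equilibrium the increment vanishes, i.e. y_j − b − Σ_i Ω_ji α_i − γ^(−1) α_j = 0 together with Σ_i α_i = 0, which is exactly the KKT linear system of the regression problem; plugging a direct linear-system solution into `neuron_period` leaves α_j unchanged.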
Similar to the LS-SVM classification neural network, the neuron design is completed according to the flow of Fig. 9, and the FPGA implementation of the whole network is then completed according to the structures of Fig. 2 and Fig. 7.
Therefore, the FPGA implementation method of the least-squares support vector machine based on the above LS-SVM classification or recurrence learning recurrent neural network of the present invention is implemented according to the following steps:
Step 1: construct the topological structure of the LS-SVM classification or recurrence learning recurrent neural network according to the number of samples, as shown in Fig. 1 or Fig. 8;
Step 2: select a suitable kernel function, select its parameters, and calculate the weights;
Step 3: discretize the dynamic equations (6), (7) or (15), (16), and determine the step length;
Step 4: select the number of bits of the two's-complement binary coding used in the experiment, including the number of integer bits and the number of fractional bits;
Step 5: construct the basic component library, comprising arithmetic units, storage units and a control unit;
Step 6: build the LS-SVM classification or recurrence learning neuron element according to the flow of Fig. 6 or Fig. 9;
Step 7: regard the neuron built in Step 6 as a basic element, and call the corresponding number of neurons to build the whole network according to Fig. 7.
z_1 = (4.7, 1.4), z_2 = (4.5, 1.5), z_3 = (4.9, 1.5), z_4 = (6, 2.5), z_5 = (5.1, 1.9), z_6 = (5.9, 2.1), z_7 = (5.6, 1.8). The corresponding classes are (+1, +1, +1, −1, −1, −1, −1). These 7 points are taken from the Iris data set, extracting the two features petal length and petal width from the Iris-versicolor and Iris-virginica classes.
Step 1: construct the topological structure of the LS-SVM classification learning recurrent neural network according to the 7 samples;
Step 3: discretize the dynamic equations (6), (7), taking the step length ΔT = 2^(−3) s;
Step 4: the two's-complement binary coding used in this experiment is 16 bits, comprising 1 sign bit, 4 integer bits and 11 fractional bits;
Step 5: construct the multiply-accumulate unit (MAC), multiplier unit (Mul), subtractor unit (Sub), accumulator unit (AC) and the storage units ROM and RAM, where MAC, Mul, Sub, AC and ROM are realized by calling IP cores in ISE 9.1, and RAM is realized by writing a .vhd file;
Step 6: following the flow of Fig. 6, first store the q_ij calculated from the samples in ROM, then design the connections and the trigger sequence between the arithmetic units according to the computation order;
Step 7: regard the constructed neuron as a basic element; call the corresponding number of neurons to constitute the whole network shown in Fig. 7, then let the network run recursively; when the network stabilizes, the convergence parameters α, b are obtained.
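Steps 1–3 of this embodiment can be cross-checked numerically. The sketch below is a plain NumPy model, not the FPGA design; it assumes the Gaussian kernel K(z, z′) = exp(−‖z − z′‖²/(2σ²)) with σ = 1 and γ^(−1) = 1, builds the weights q_ij = y_i y_j K(z_i, z_j) for the 7 sample points, and solves the equilibrium (KKT) linear system that the recurrent network converges to:

```python
import numpy as np

# the 7 sample points (petal length, petal width) and their classes from this embodiment
Z = np.array([[4.7, 1.4], [4.5, 1.5], [4.9, 1.5], [6.0, 2.5],
              [5.1, 1.9], [5.9, 2.1], [5.6, 1.8]])
y = np.array([1.0, 1.0, 1.0, -1.0, -1.0, -1.0, -1.0])
sigma, gamma_inv = 1.0, 1.0

# Gaussian Gram matrix K_ij and weights q_ij = y_i * y_j * K_ij
d2 = ((Z[:, None, :] - Z[None, :, :]) ** 2).sum(axis=-1)
K = np.exp(-d2 / (2.0 * sigma ** 2))
Q = np.outer(y, y) * K

# equilibrium of the classification dynamics:
#   sum_i y_i alpha_i = 0   and   y_i b + sum_j q_ij alpha_j + gamma_inv * alpha_i = 1
N = len(y)
A = np.zeros((N + 1, N + 1))
A[0, 1:] = y
A[1:, 0] = y
A[1:, 1:] = Q + gamma_inv * np.eye(N)
sol = np.linalg.solve(A, np.concatenate(([0.0], np.ones(N))))
b, alpha = sol[0], sol[1:]
print("alpha =", np.round(alpha, 4), "b =", round(float(b), 4))
```

A direct solve gives the equilibrium that the recurrent network approaches; small differences from the reported Simulink and FPGA values can arise from the exact kernel normalization assumed here, the finite step length, and the 16-bit fixed-point coding.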
Figure 10 shows the convergence curve of each parameter obtained in this example by the Simulink simulation experiment with the LS-SVM classification learning neural network, where:
α = (0.3404, 0.3681, 0.5758, 0.2756, 0.6158, 0.1499, 0.2430), b = −0.2249.
The symbols in Figure 10 are: the thin solid line, circle line, plus-sign line, asterisk line, left-triangle line, up-triangle line, right-triangle line and down-triangle line represent the curves of α_1, α_2, α_3, α_4, α_5, α_6, α_7 and b, respectively.
The curve in Figure 11 is the classification hyperplane obtained from the above parameters α, b; its expression is:
In Figure 11, 'o' represents the first-class samples and '*' represents the second-class samples.
Figure 12 shows the value of each parameter obtained in this example by the FPGA implementation of the LS-SVM classification learning neural network, where:
α = (0.3292, 0.4302, 0.9177, 0.3563, 0.9709, 0.0977, 0.2520), b = −0.2463.
The curve in Figure 13 is the classification hyperplane obtained from the above parameters α, b; its expression is:
The times required by the two implementation methods based on the LS-SVM classification learning neural network are also given for this example: the Simulink simulation experiment took 2.432 × 10^(−3) s, while the FPGA implementation took 2.365 × 10^(−4) s. Judging from the classification effect of the resulting hyperplane, both implementation methods complete the linearly non-separable classification of this example well, but the FPGA implementation requires less time.
Table 1: function values of the 12 points in Embodiment 2
Step 1: construct the topological structure of the LS-SVM recurrence learning recurrent neural network according to the 12 samples;
Step 2: adopt the Gaussian kernel function with γ^(−1) = 1, σ = 1, and calculate the weights Ω_ij;
Step 3: discretize the dynamic equations (15), (16), taking the step length ΔT = 2^(−3) s;
Step 4: the two's-complement binary coding used in this experiment is 16 bits, comprising 1 sign bit, 4 integer bits and 11 fractional bits;
Step 5: construct the multiply-accumulate unit (MAC), multiplier unit (Mul), subtractor unit (Sub), accumulator unit (AC) and the storage units ROM and RAM, where MAC, Mul, Sub, AC and ROM are realized by calling IP cores in ISE 9.1, and RAM is realized by writing a .vhd file;
Step 6: following the flow of Fig. 9, first store the Ω_ij calculated from the samples in ROM, then design the connections and the trigger sequence between the arithmetic units according to the computation order;
Step 7: regard the constructed neuron as a basic element; call the corresponding number of neurons to constitute the whole network shown in Fig. 7, then let the network run recursively; when the network stabilizes, the parameters α, b are obtained.
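The 16-bit format of Step 4 (1 sign bit, 4 integer bits, 11 fractional bits) is a Q4.11 fixed-point code. A small Python sketch of quantizing to and recovering from this two's-complement format (an illustration of the coding only, with hypothetical function names; the patent implements it in hardware) is:

```python
FRAC_BITS = 11   # fractional bits of the Q4.11 format
WORD_BITS = 16   # 1 sign bit + 4 integer bits + 11 fractional bits

def to_q4_11(x: float) -> int:
    """Encode x as a 16-bit two's-complement Q4.11 word (saturating)."""
    raw = round(x * (1 << FRAC_BITS))
    lo, hi = -(1 << (WORD_BITS - 1)), (1 << (WORD_BITS - 1)) - 1
    raw = max(lo, min(hi, raw))     # saturate to the representable 16-bit range
    return raw & 0xFFFF             # two's-complement bit pattern

def from_q4_11(word: int) -> float:
    """Decode a 16-bit two's-complement Q4.11 word back to a float."""
    if word & 0x8000:               # sign bit set: undo two's complement
        word -= 1 << WORD_BITS
    return word / (1 << FRAC_BITS)
```

The quantization step of this format is 2^(−11) ≈ 4.9 × 10^(−4), which is one source of the small differences between the Simulink and FPGA parameter values reported in the embodiments.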
Figure 14 shows the convergence curve of each parameter obtained in this example by the Simulink simulation experiment with the LS-SVM recurrence learning neural network, where:
α = (−24.9382, −14.0225, −16.0573, −15.3244, −13.7700, −11.3375, −7.6692, −2.6501, 4.7093, 13.9112, 16.5170, 70.7750), b = 48.2587.
The symbols in Figure 14 are: the solid line, pentagram, plus sign, circle, asterisk, dot, cross, up-triangle, down-triangle, left-triangle, right-triangle, diamond and square symbols represent the curves of α_1, α_2, α_3, α_4, α_5, α_6, α_7, α_8, α_9, α_10, α_11, α_12 and b, respectively.
The curve in Figure 15 is the function regression result drawn with the above parameter values; its regression function expression is:
'o' represents the points to be regressed.
Figure 17 shows the convergence curve of each parameter obtained in this example by the FPGA implementation of the LS-SVM recurrence learning neural network, where:
α = (−25.9467, −15.0320, −17.0666, −16.3331, −14.7794, −13.3862, −8.6783, −3.6570, 4.6998, 13.9033, 16.5094, 70.7668), b = 48.2789.
The curve in Figure 16 is the function regression result obtained with the LS-SVM learning neural network; its expression is:
The times required by the two implementation methods based on the LS-SVM recurrence learning neural network are also given for this example: the Simulink simulation experiment took 3.012 × 10^(−3) s, while the FPGA implementation took 3.574 × 10^(−4) s. As can be seen from Figures 15 and 16, although the 12 sample points show some deviation, they are generally distributed on the curve obtained with the LS-SVM recurrence learning network (LSSVRLN). This shows that both methods based on the LS-SVM recurrence learning neural network realize the regression well, but the FPGA implementation requires less time.
In summary, the FPGA implementation method of the LS-SVM classification and recurrence learning recurrent neural networks of the present invention not only accelerates the training of the support vector machine, but also solves the problem of insufficient flexibility in analog-circuit implementations, making the range of application of support vector machines wider.
Claims (2)
1. An FPGA implementation method based on an LS-SVM classification learning recurrent neural network, characterized in that the method is implemented according to the following steps:
Step 1: construct the topological structure of the LS-SVM classification learning recurrent neural network according to the number of samples: the terms −α_1 q_i1, …, −α_N q_iN, 1, −γ^(−1) α_i and −b y_i are fed into the summer Σ; the output of Σ is connected to the integrator ∫, whose output is α_i; α_i is in turn fed back through the weights −q_ij into each corresponding Σ, forming a recurrent neural network;
Given a classification training set (z_i, y_i), i = 1, 2, …, N, a group of samples to be classified, where every z_i ∈ R^n has a corresponding class label y_i ∈ {+1, −1}, the classification decision surface is described as
where W is the weight vector, b is a threshold, and φ(·) denotes the nonlinear mapping of the training samples from the input space to the feature space; the classification learning of the LS-SVM then solves the following constrained optimization problem:
To solve this problem, the Lagrange function is introduced:
where α_i are the Lagrange multipliers; using the KKT conditions and taking the partial derivative of the Lagrange function with respect to each variable yields the optimality conditions of this problem:
where q_ij = y_i y_j K_ij, and K_ij is defined as the kernel function; this gives the dynamic equations of the LS-SVM classification learning neural network model:
where α_i are the Lagrange multipliers, b is the threshold, and (z_i, y_i), i = 1, 2, …, N, is the group of samples to be classified;
Step 2: select the Gaussian kernel function with parameters γ^(−1) = 1, σ = 1, and calculate the weights q_ij;
Step 3: combining the result of Step 2, discretize the dynamic equations (6), (7) and determine the step length ΔT; discretizing equations (6), (7) yields the discrete dynamic equations (8), (9); the time interval ΔT in equations (8), (9) is the sampling step length;
Step 4: determine the number of bits of the two's-complement binary coding: convert the 32-bit two's-complement binary code C into a 16-bit two's-complement binary code, including the number of integer bits and the number of fractional bits;
Step 5: construct the basic component library according to Step 4, comprising arithmetic units, storage units and a control unit, where the arithmetic units comprise the multiply-accumulate unit MAC, multiplier unit Mul, subtractor unit Sub and accumulator unit AC; the storage units comprise ROM and RAM; and the control unit calls IP cores in ISE 9.1 to realize MAC, Mul, Sub, AC and ROM;
Step 6: use the component library obtained in Step 5 to build the neuron elements, constructing the LS-SVM classification learning neuron module as follows:
A1: when data storage finishes, data are read simultaneously from ROM and RAM and input to the MAC unit, where on each trigger edge α_j(t) and q_ij correspond to each other; while the MAC unit computes, 1 − b(t)y_i is calculated: first b(t)y_i is computed by the Mul mapping unit Mul(1), then 1 − b(t)y_i by the Sub mapping unit Sub(1);
A2: when the MAC operation finishes, its result Σ_j q_ij α_j(t) and 1 − b(t)y_i are input to the Sub mapping unit Sub(2); in parallel with Sub(2), Mul(2) completes the calculation of γ^(−1) α_i(t) in the same time slot;
A3: the results of Mul(2) and Sub(2) are input to Sub(3), which completes the calculation of 1 − b(t)y_i − Σ_j q_ij α_j(t) − γ^(−1) α_i(t);
A4: the result of Sub(3) and the preconfigured ΔT are then fed into Mul(3), giving the increment α_j(ΔT); α_j(ΔT) is sent to the AC module and accumulated, finally yielding α_j(t+ΔT); in equation (9) this term is regarded as the increment from α_j(t) to α_j(t+ΔT); when it is 0, i.e. when the input of the AC module is 0 and the output of the AC module no longer changes, the neuron output has stabilized, and the output result is α_j;
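Stages A1–A4 amount to one explicit-Euler step of the discrete classification dynamics (9). A minimal Python sketch (an illustration with hypothetical names, not the patent's VHDL; q_ij, γ^(−1) and ΔT are taken as given) is:

```python
import numpy as np

def class_neuron_period(j, alpha, b, y, Q, gamma_inv, dT):
    """One period of the j-th LS-SVM classification-learning neuron (stages A1-A4)."""
    mac = Q[j] @ alpha              # A1: MAC computes sum_i q_ji * alpha_i(t)
    one_minus_by = 1.0 - b * y[j]   # A1 (parallel): Mul(1) then Sub(1)
    sub2 = one_minus_by - mac       # A2: Sub(2)
    mul2 = gamma_inv * alpha[j]     # A2 (parallel): Mul(2)
    increment = dT * (sub2 - mul2)  # A3: Sub(3); A4: Mul(3) gives alpha_j(dT)
    return alpha[j] + increment     # A4: AC accumulates alpha_j(t + dT)
```

At the network's stable point the increment vanishes, i.e. 1 − b y_j − Σ_i q_ji α_i − γ^(−1) α_j = 0 together with Σ_i y_i α_i = 0, which is the KKT system of the classification problem; a direct solution of that system is a fixed point of this update.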
Step 7: regard each neuron element as a basic element; Neuro1–Neuroi denote the i neurons and Neuro_b denotes the threshold neuron; the i neurons and the threshold neuron Neuro_b are called and connected in parallel to the select module, and by the connection rule of these i+1 neurons the neural network with SVM learning function is constituted; at the same time each neuron is controlled by a clock control signal, and when the calculation within one period of a neuron finishes it produces a valid control signal; when the neural network control unit has received valid control signals from all neurons, it produces an overall valid control signal that makes all neurons enter the next period of computation; when the recursive operation of the network becomes stable, the convergence parameters α, b are obtained.
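The all-neurons-ready handshake of Step 7 is a simple AND-reduction barrier. A behavioural Python sketch (hypothetical names, for illustration only; the patent implements this in FPGA control logic) is:

```python
def control_unit(done_flags):
    """Overall 'go' signal: asserted only when every neuron reports done."""
    return all(done_flags)

def run_period(neurons, state):
    """One network period: each neuron computes, raises done, then all advance."""
    done = [False] * len(neurons)
    next_state = list(state)
    for j, neuron in enumerate(neurons):
        next_state[j] = neuron(state)   # per-neuron computation for this period
        done[j] = True                  # neuron raises its valid control signal
    assert control_unit(done)           # control unit issues the overall go signal
    return next_state                   # all neurons enter the next period together
```

Because every neuron reads the previous period's state and writes only its own entry, the barrier makes the update synchronous, matching the stored-then-read-back RAM scheme described for the pipeline.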
2. An FPGA implementation method based on an LS-SVM recurrence learning recurrent neural network, characterized in that the method is implemented according to the following steps:
Step 1: construct the topological structure of the LS-SVM recurrence learning recurrent neural network according to the number of samples: the terms −α_1 Ω_i1, …, −α_N Ω_iN, y_i, −γ^(−1) α_i and −b are fed into the summer Σ; the output of Σ is connected to the integrator ∫, whose output is α_i; α_i is in turn fed back through the weights −Ω_ij into each corresponding Σ;
Given a training set (z_i, y_i), i = 1, …, N, where z_i ∈ R^n is the i-th input datum and y_i ∈ R is the i-th output datum, the regression function, analogous to the classification problem, is:
where W is the weight vector, b is an offset, and φ(·) denotes the nonlinear mapping from the input space to the feature space; the regression problem of the LS-SVM then solves the following optimization problem:
The Lagrange function is constructed in the same way:
where α_i are the Lagrange multipliers; as in the derivation for the classification problem, the KKT conditions give the conditions that the optimum must satisfy:
from which the dynamic equations of the LS-SVM recurrence learning neural network model are obtained:
where α_i are the Lagrange multipliers, b is the threshold, and (z_i, y_i), i = 1, 2, …, N, is the group of training samples;
Step 3: combining the result of Step 2, discretize the dynamic equations (15), (16) and determine the step length ΔT; the time interval ΔT in the resulting equations (17), (18) is the sampling step length;
Step 4: determine the number of bits of the two's-complement binary coding: convert the 32-bit two's-complement binary code C into a 16-bit two's-complement binary code, including the number of integer bits and the number of fractional bits;
Step 5: construct the basic component library according to Step 4, comprising arithmetic units, storage units and a control unit, where the arithmetic units comprise the multiply-accumulate unit MAC, multiplier unit Mul, subtractor unit Sub and accumulator unit AC; the storage units comprise ROM and RAM; and the control unit calls IP cores in ISE 9.1 to realize MAC, Mul, Sub, AC and ROM;
Step 6: use the component library obtained in Step 5 to build the LS-SVM recurrence learning neuron element as follows:
B1: when data storage finishes, data are read simultaneously from ROM and RAM and input to the MAC unit, which computes Σ_j Ω_ij α_j(t); at the same time the calculation of y_i − b(t) is completed;
B2: when the MAC operation finishes, its result Σ_j Ω_ij α_j(t) and y_i − b(t) are input to the Sub mapping unit Sub(2); in parallel with Sub(2), Mul(1) completes the calculation of γ^(−1) α_i(t) in the same time slot;
B3: the results of Mul(1) and Sub(2) are input to Sub(3), which completes the calculation of y_i − b(t) − Σ_j Ω_ij α_j(t) − γ^(−1) α_i(t);
B4: the result of Sub(3) and the preconfigured ΔT are then fed into Mul(2), giving the increment α_j(ΔT); α_j(ΔT) is sent to the AC module and accumulated, finally yielding α_j(t+ΔT); in equation (18) this term is regarded as the increment from α_j(t) to α_j(t+ΔT); when it is 0, i.e. when the input of the AC module is 0 and the output of the AC module no longer changes, the neuron output has stabilized, and the output result is α_j;
Step 7: regard each neuron element as a basic element; Neuro1–Neuroi denote the i neurons and Neuro_b denotes the threshold neuron; the i neurons and the threshold neuron Neuro_b are called and connected in parallel to the select module, and by the connection rule of these i+1 neurons the neural network with SVM learning function is constituted; at the same time each neuron is controlled by a clock control signal, and when the calculation within one period of a neuron finishes it produces a valid control signal; when the neural network control unit has received valid control signals from all neurons, it produces an overall valid control signal that makes all neurons enter the next period of computation; when the recursive operation of the network becomes stable, the convergence parameters α, b are obtained.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN200910023583A CN101625735A (en) | 2009-08-13 | 2009-08-13 | FPGA implementation method based on LS-SVM classification and recurrence learning recurrence neural network |
Publications (1)
Publication Number | Publication Date |
---|---|
CN101625735A true CN101625735A (en) | 2010-01-13 |
Family
ID=41521581
Cited By (23)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101833691A (en) * | 2010-03-30 | 2010-09-15 | 西安理工大学 | Realizing method of least square support vector machine serial structure based on EPGA (Filed Programmable Gate Array) |
CN102135951A (en) * | 2011-03-07 | 2011-07-27 | 哈尔滨工业大学 | FPGA (Field Programmable Gate Array) implementation method based on LS-SVM (Least Squares-Support Vector Machine) algorithm restructured at runtime |
CN102135951B (en) * | 2011-03-07 | 2013-09-11 | 哈尔滨工业大学 | FPGA (Field Programmable Gate Array) implementation method based on LS-SVM (Least Squares-Support Vector Machine) algorithm restructured at runtime |
CN104598917A (en) * | 2014-12-08 | 2015-05-06 | 上海大学 | Support vector machine classifier IP (internet protocol) core |
CN104598917B (en) * | 2014-12-08 | 2018-04-27 | 上海大学 | A kind of support vector machine classifier IP kernel |
CN104680236B (en) * | 2015-02-13 | 2017-08-01 | 西安交通大学 | The FPGA implementation method of kernel function extreme learning machine grader |
CN104680236A (en) * | 2015-02-13 | 2015-06-03 | 西安交通大学 | FPGA implementation method of kernel function extreme learning machine classifier |
CN104915195A (en) * | 2015-05-20 | 2015-09-16 | 清华大学 | Method for achieving neural network calculation based on field-programmable gate array |
CN104915195B (en) * | 2015-05-20 | 2017-11-28 | 清华大学 | A kind of method that neural computing is realized based on field programmable gate array |
CN107924358A (en) * | 2015-07-31 | 2018-04-17 | 阿姆Ip有限公司 | Probability processor monitoring |
CN106650923B (en) * | 2015-10-08 | 2019-04-09 | 上海兆芯集成电路有限公司 | Neural network unit with neural memory and neural processing unit and sequencer |
CN106650923A (en) * | 2015-10-08 | 2017-05-10 | 上海兆芯集成电路有限公司 | Neural network elements with neural memory and neural processing unit array and sequencer |
CN108885712A (en) * | 2015-11-12 | 2018-11-23 | 渊慧科技有限公司 | Neurolinguistic programming |
CN108885712B (en) * | 2015-11-12 | 2022-06-10 | 渊慧科技有限公司 | Neural programming |
US11803746B2 (en) | 2015-11-12 | 2023-10-31 | Deepmind Technologies Limited | Neural programming |
CN105719000B (en) * | 2016-01-21 | 2018-02-16 | 广西师范大学 | A kind of neuron hardware unit and the method with this unit simulation impulsive neural networks |
CN105719000A (en) * | 2016-01-21 | 2016-06-29 | 广西师范大学 | Neuron hardware structure and method of simulating pulse neural network by adopting neuron hardware structure |
CN107301453A (en) * | 2016-04-15 | 2017-10-27 | 北京中科寒武纪科技有限公司 | The artificial neural network forward operation apparatus and method for supporting discrete data to represent |
WO2017177446A1 (en) * | 2016-04-15 | 2017-10-19 | 北京中科寒武纪科技有限公司 | Discrete data representation-supporting apparatus and method for back-training of artificial neural network |
WO2017185248A1 (en) * | 2016-04-27 | 2017-11-02 | 北京中科寒武纪科技有限公司 | Apparatus and method for performing auto-learning operation of artificial neural network |
CN113485762A (en) * | 2020-09-19 | 2021-10-08 | 广东高云半导体科技股份有限公司 | Method and apparatus for offloading computational tasks with configurable devices to improve system performance |
CN113239651A (en) * | 2021-07-12 | 2021-08-10 | 苏州贝克微电子有限公司 | Artificial intelligence implementation method and system for circuit design |
CN113239651B (en) * | 2021-07-12 | 2021-09-17 | 苏州贝克微电子有限公司 | Artificial intelligence implementation method and system for circuit design |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C12 | Rejection of a patent application after its publication | ||
RJ01 | Rejection of invention patent application after publication |
Open date: 20100113 |