CN108133239A - Method and apparatus for processing classifier samples - Google Patents

Method and apparatus for processing classifier samples

Info

Publication number
CN108133239A
Authority
CN
China
Prior art keywords
multiplier
value
output value
feature vector
exp
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810045342.3A
Other languages
Chinese (zh)
Inventor
夏昌盛
黎明
张韵东
李国新
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Zhong Xing Wei Ai Chip Technology Co Ltd
Original Assignee
Beijing Zhong Xing Wei Ai Chip Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Zhong Xing Wei Ai Chip Technology Co Ltd filed Critical Beijing Zhong Xing Wei Ai Chip Technology Co Ltd
Priority to CN201810045342.3A priority Critical patent/CN108133239A/en
Publication of CN108133239A publication Critical patent/CN108133239A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2411Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Complex Calculations (AREA)

Abstract

An embodiment of the present invention provides a method and apparatus for processing classifier samples. The method includes: receiving an i-th index value, where the i-th index value is obtained by quantizing an i-th input value, and the i-th input value is obtained by processing, based on a radial basis function (RBF), the i-th element of the N-dimensional original feature vector of a sample to be classified; looking up, according to the i-th index value, an i-th output value in an index table stored in a memory, where the i-th output value is an approximation of exp(i-th input value); and performing multiplication on the i-th output value with a multiplier until, when i equals N, the j-th element of the K-dimensional new feature vector of the sample to be classified is output. In the embodiments of the present invention, the RBF is realized by table lookup and multiplication, without expanding the exp( ) function into, for example, a Taylor polynomial, which not only reduces the amount of computation but also greatly reduces the computation error.

Description

Method and apparatus for processing classifier samples
Technical field
The present invention relates to the technical field of data processing, and in particular to a method and apparatus for processing classifier samples.
Background technology
The Support Vector Machine (SVM) is a highly effective classification algorithm. It is mainly trained on small and medium-sized training datasets and achieves good results on two-class (binary) classification problems.
A basic linear SVM only supports linearly separable classification tasks. To handle complex nonlinear classification, a dimension-raising function such as the radial basis function (RBF) must be introduced to map the sample's feature vector to a higher-dimensional space, after which the SVM is used for classification.
However, current implementations of the RBF dimension-raising step are unsatisfactory.
Summary of the invention
Embodiments of the present invention provide a method and apparatus for processing classifier samples that can realize the RBF simply and efficiently.
In a first aspect, a method for processing classifier samples is provided, including: receiving an i-th index value, where the i-th index value is obtained by quantizing an i-th input value, the i-th input value is obtained by processing, based on a radial basis function (RBF), the i-th element of the N-dimensional original feature vector of a sample to be classified, the RBF is represented as exp(1st input value) * exp(2nd input value) * ... * exp(N-th input value), N and i are positive integers, and i is less than or equal to N; looking up, according to the i-th index value, an i-th output value in an index table stored in a memory, where the i-th output value is an approximation of exp(i-th input value) and the index table is used to represent approximations of the exponential function exp( ); and performing multiplication on the i-th output value with a multiplier until, when i equals N, the j-th element of the K-dimensional new feature vector of the sample to be classified is output, where the j-th element is the product of the N output values corresponding to the N elements of the original feature vector, K and j are positive integers, K is greater than N, and j is less than or equal to K.
In a possible implementation, the index table is used to represent the correspondence between z and an approximation of the exponential function exp(z), where z is a y-bit value, y is preset based on the magnitude range of the N input values, the N input values are each obtained by processing the N-dimensional original feature vector based on the RBF, and y is a positive integer.
In a possible implementation, performing multiplication on the i-th output value with the multiplier includes: when i is 1, performing, with the multiplier, a current multiplication on the i-th output value with a multiplicand of 1, and using the result of the current multiplication as the multiplicand of the next multiplication; when 1 < i < N, performing a current multiplication on the i-th output value with the multiplier, where the multiplicand is the product of the 1st to (i-1)-th output values, and using the result of the current multiplication as the multiplicand of the next multiplication; and when i is N, performing a current multiplication on the i-th output value with the multiplier, where the multiplicand is the product of the 1st to (i-1)-th output values, and outputting the result of the current multiplication as the j-th element of the new feature vector.
In a possible implementation, using the result of the current multiplication as the multiplicand of the next multiplication includes: feeding the result of the current multiplication back into the multiplier through a delay unit, to serve as the multiplicand of the next multiplication.
In a possible implementation, the i-th input value is obtained according to the following equation:
i-th input value = -γ · (x_i - R_ji)^2,
where γ is a preset value, x_i denotes the i-th element of the original feature vector, and R_ji denotes the i-th element of a specified point R_j pre-selected in the feature space corresponding to the N-dimensional original feature vector.
In a second aspect, an apparatus for processing classifier samples is provided, including: a memory storing an index table, the index table being used to represent approximations of the exponential function exp( ); and a multiplier connected to the memory;
where the memory is configured to: receive an i-th index value, where the i-th index value is obtained by quantizing an i-th input value, the i-th input value is obtained by processing, based on a radial basis function (RBF), the i-th element of the N-dimensional original feature vector of a sample to be classified, the RBF is represented as exp(1st input value) * exp(2nd input value) * ... * exp(N-th input value), N and i are positive integers, and i is less than or equal to N; look up, according to the i-th index value, an i-th output value in the index table, where the i-th output value is an approximation of exp(i-th input value); and output the i-th output value to the multiplier;
and the multiplier is configured to: receive the i-th output value from the memory; and perform multiplication on the i-th output value until, when i equals N, the j-th element of the K-dimensional new feature vector of the sample to be classified is output, where the j-th element is the product of the N output values corresponding to the N elements of the original feature vector, K and j are positive integers, K is greater than N, and j is less than or equal to K.
In a possible implementation, the index table is used to represent the correspondence between z and an approximation of the exponential function exp(z), where z is a y-bit value, y is preset based on the magnitude range of the N input values, the N input values are each obtained by processing the N-dimensional original feature vector based on the RBF, and y is a positive integer.
In a possible implementation, the multiplier is specifically configured to: when i is 1, perform a current multiplication on the i-th output value with a multiplicand of 1 and use the result of the current multiplication as the multiplicand of the next multiplication; when 1 < i < N, perform a current multiplication on the i-th output value, where the multiplicand is the product of the 1st to (i-1)-th output values, and use the result of the current multiplication as the multiplicand of the next multiplication; and when i is N, perform a current multiplication on the i-th output value, where the multiplicand is the product of the 1st to (i-1)-th output values, and output the result of the current multiplication as the j-th element of the new feature vector.
In a possible implementation, the apparatus further includes a delay unit. The multiplier has a first input terminal, a second input terminal, a first output terminal, and a second output terminal; the first input terminal is connected to the memory; the first output terminal is connected to the delay unit, and the output terminal of the delay unit is connected to the second input terminal.
The delay unit is configured to receive the result of the current multiplication from the multiplier and feed it into the second input terminal; the multiplier is configured to use the result of the current multiplication received at the second input terminal as the multiplicand of the next multiplication, and to output the j-th element of the new feature vector from the second output terminal.
In a possible implementation, the i-th input value is obtained according to the following equation:
i-th input value = -γ · (x_i - R_ji)^2,
where γ is a preset value, x_i denotes the i-th element of the original feature vector, and R_ji denotes the i-th element of a specified point R_j pre-selected in the feature space corresponding to the N-dimensional original feature vector.
In the embodiments of the present invention, the RBF is realized by table lookup and multiplication, without expanding the exp( ) function into, for example, a Taylor polynomial, which not only reduces the amount of computation but also greatly reduces the computation error.
Description of the drawings
To describe the technical solutions in the embodiments of the present invention more clearly, the accompanying drawings required for describing the embodiments are briefly introduced below. Apparently, the accompanying drawings in the following description show only some embodiments of the present invention, and a person of ordinary skill in the art may derive other drawings from these drawings without creative effort, wherein:
Fig. 1 shows the relationship between the table-lookup computation error of exp( ) and the input of exp( ).
Fig. 2 is a schematic flowchart of a method for processing classifier samples according to an embodiment of the present invention.
Fig. 3 is a schematic block diagram of an apparatus for processing classifier samples according to an embodiment of the present invention.
Specific embodiments
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings in the embodiments of the present invention. Apparently, the described embodiments are only some rather than all of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
In order to realize complex nonlinear classification, the RBF can be used to raise the dimension of the original feature vector of the sample to be classified, and SVM classification is then performed.
A typical linear SVM classifier uses the prediction formula:
w^T · x + b    (1)
where x denotes the N-dimensional original feature vector (x_1, x_2, x_3, ..., x_N) of the sample to be classified, w is the weight matrix, and b is the offset. As can be seen from equation (1), the two-class target can be predicted from the sign of w^T · x + b, i.e., whether the target y belongs to class 0 or class 1. In addition, N is a positive integer.
After the dimension is raised by the RBF, the prediction formula of the SVM classifier becomes w^T · Φ(x) + b, where Φ(x) can represent the new feature vector obtained by mapping the original feature vector of the sample to be classified through the RBF. Its dimension is assumed to be K, where K is a positive integer and K is greater than N.
Specifically,
Φ(x)_j = exp(-γ · ((x_1 - R_j1)^2 + (x_2 - R_j2)^2 + ... + (x_N - R_jN)^2))    (2)
Here, R denotes a specified point in the original feature space corresponding to the original feature vector of the sample to be classified. The specified points can serve as reference points, and their number can vary according to the actual situation. For example, K specified points can be pre-selected in the original feature space, where the j-th specified point can be denoted R_j, j is a positive integer less than or equal to K, and R_ji denotes the i-th element of R_j.
In addition, γ is a parameter used in the exponential Gaussian operation and can be any preset value. For example, the value of γ can be preset based on experience.
Assume A_i = -γ · (x_i - R_ji)^2    (3).
Then the above equation (2) can be expressed as:
Φ(x)_j = exp(A_1 + A_2 + A_3 + ... + A_N)    (4)
From equation (2) it can be seen that each element of Φ(x) is the value of a Gaussian of the distance between the original feature vector and one of the R points. Φ(x)_j is the j-th element of the K-dimensional feature vector Φ(x); for ease of description, it is referred to herein as the j-th element of the K-dimensional new feature vector of the sample to be classified.
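For reference, the mapping in equations (2)-(4) can be written down directly in a few lines of NumPy. This is only a sketch of the mathematical ground truth that the table-lookup hardware described below approximates, not the patented implementation; the function and variable names are illustrative assumptions.

```python
import numpy as np

def rbf_lift(x: np.ndarray, R: np.ndarray, gamma: float) -> np.ndarray:
    """Equations (2)-(4): x is the N-dimensional original feature vector,
    R is a K x N array whose j-th row is the pre-selected point R_j,
    and the result is the K-dimensional new feature vector Phi(x)."""
    A = -gamma * (x[None, :] - R) ** 2   # A[j, i] = -gamma * (x_i - R_ji)^2
    return np.exp(A.sum(axis=1))         # Phi(x)_j = exp(A_1 + A_2 + ... + A_N)

x = np.array([0.2, 0.5, 0.8])            # N = 3
R = np.random.rand(5, 3)                 # K = 5 reference points
phi = rbf_lift(x, R, gamma=1.0)          # 5-dimensional lifted feature vector
```

The SVM prediction is then made on the lifted vector, i.e., from the sign of w^T · Φ(x) + b.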
From the above equations it can be seen that the key to realizing the RBF is computing the exp( ) function. Traditionally, the exp( ) function is computed by mathematically decomposing the function, for example by expanding it into a Taylor polynomial, and then obtaining the result through iteration. However, this approach involves a large amount of computation and is inefficient.
There is also a way of realizing the exp( ) function in hardware. For example, for equation (4), exp(A_1 + A_2 + A_3 + ... + A_N) can be obtained by a direct table lookup. In this case, (A_1 + A_2 + A_3 + ... + A_N) is first computed; assume the result is B. B is then quantized to obtain a quantized value, denoted B'. Then, with B' as the index value, an approximation of exp( ) is looked up from an index table in a memory, so as to obtain the final result of exp(A_1 + A_2 + A_3 + ... + A_N).
However, this approach is likely to cause a relatively large error. For example, assume each A_i ranges from -1 to 0; then (A_1 + A_2 + A_3 + ... + A_N) ranges from -N to 0. For such a value range, assume the corresponding quantization range can reach 2^p, where p is a positive integer.
It should be appreciated that the quantization order p determines the accuracy of the computation. It has been proved mathematically that the maximum relative error of the exp( ) approximation obtained by table lookup is inversely proportional to the quantization order and proportional to the quantization range. Thus, since the range of (A_1 + A_2 + A_3 + ... + A_N) is relatively large, the resulting table-lookup computation error is also relatively large.
Fig. 1 shows the relationship between the table-lookup error of exp( ) and the input of exp( ). In Fig. 1, the horizontal axis represents the multiple of the quantization range, and the vertical axis represents the multiple of the relative error of the table-lookup computation.
It can be seen from Fig. 1 that as the quantization range increases from 1x to 10x, the table-lookup computation error increases to 100x of its original value. Assume each A_i ranges from -1 to 0 and N is 10; then (A_1 + A_2 + A_3 + ... + A_N) ranges from -10 to 0. Assume the relative error of computing exp(A_i) by table lookup is d (i.e., the absolute error is exp(A_i) * d); then the relative error of exp(A_1 + A_2 + A_3 + ... + A_N) can reach 100d, and the absolute error is exp(A_1 + A_2 + A_3 + ... + A_N) * 100d.
Moreover, the larger the dimension N of the feature vector, the larger the factor by which the error of exp(A_1 + A_2 + A_3 + ... + A_N) increases. Thus, for the same quantization order p, the error increases dramatically.
To address this, in the embodiments of the present invention, equation (4) can be modified as follows:
Φ(x)_j = exp(A_1 + A_2 + A_3 + ... + A_N)
       = exp(A_1) * exp(A_2) * exp(A_3) * ... * exp(A_N)    (5)
To reduce the table-lookup computation error, A_1, A_2, A_3, ..., A_N can be processed one by one to obtain exp(A_1), exp(A_2), exp(A_3), ..., exp(A_N), respectively. The multiplier is then cycled N times to obtain exp(A_1) * exp(A_2) * exp(A_3) * ... * exp(A_N).
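A minimal Python sketch of this per-term scheme is shown below: each exp(A_i) is read from a small table indexed by the quantized A_i, and a single running product plays the role of the multiplier cycled N times. The table construction, the rounding scheme, and all names are assumptions for illustration, not the patent's wording.

```python
import math

def rbf_feature_via_lookup(A, y_bits: int = 8) -> float:
    """Equation (5): exp(A_1) * exp(A_2) * ... * exp(A_N) computed with one
    table lookup per input value and N passes through a single multiplier.
    Assumes every A_i lies in [a_min, 0] with at least one A_i < 0."""
    a_min = min(A)
    entries = 1 << y_bits
    # index table stored "in memory": entry z approximates exp of the z-th level
    table = [math.exp(a_min + z * (-a_min) / (entries - 1)) for z in range(entries)]

    product = 1.0                                        # multiplicand is 1 on the first cycle
    for a_i in A:                                        # i = 1 .. N
        index_value = round((a_i - a_min) / -a_min * (entries - 1))
        output_value = table[index_value]                # approximation of exp(A_i)
        product *= output_value                          # one multiplier cycle
    return product                                       # j-th element of the new feature vector
```

Compared with a single lookup on the sum A_1 + ... + A_N, the table here only has to cover the range of one A_i, so the same number of entries gives a much finer quantization step.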
In this case, still assume the relative error of each exp(A_i) obtained by table lookup is d. Then the computed value of exp(A_1) * exp(A_2) * exp(A_3) * ... * exp(A_N), denoted D, is
D = (exp(A_1)*(1+d)) * (exp(A_2)*(1+d)) * (exp(A_3)*(1+d)) * ... * (exp(A_N)*(1+d)).
Expanding this expression, each first-order term in d has coefficient exp(A_1) * exp(A_2) * exp(A_3) * ... * exp(A_N), and there are N such terms. Because d is itself a small error, the second- and higher-order terms in d are comparatively small and can be ignored. Therefore, the main component of the absolute error is exp(A_1) * exp(A_2) * exp(A_3) * ... * exp(A_N) * N * d, and the relative error is about N * d.
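A one-line numeric check of this first-order argument, with illustrative values of d and N (not taken from the patent):

```python
d, N = 1e-3, 10                 # assumed per-lookup relative error and feature dimension
print((1 + d) ** N - 1)         # 0.01004... : exact worst-case relative growth of the product
print(N * d)                    # 0.01       : the first-order estimate N*d used above
```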
It can be seen that, compared with looking up exp(A_1 + A_2 + A_3 + ... + A_N) directly from (A_1 + A_2 + A_3 + ... + A_N), the error of this approach of the embodiments of the present invention is much smaller. For example, when N is 10, the relative error of computing exp(A_1 + A_2 + A_3 + ... + A_N) by a direct table lookup on (A_1 + A_2 + A_3 + ... + A_N) is 100d, whereas the error of the approach of the embodiments of the present invention is only 10d.
It can be seen that the embodiments of the present invention can effectively reduce the error of the RBF, so that the dimension of the N-dimensional original feature vector of the sample to be classified can be raised more accurately.
In addition, this approach can be realized with a simple hardware structure, which saves implementation cost.
The above technical solution is described below with reference to specific embodiments.
Fig. 2 is a schematic flowchart of a method for processing classifier samples according to an embodiment of the present invention.
As shown in Fig. 2, in step 201, an i-th index value can be received.
Specifically, the i-th index value can be obtained by quantizing an i-th input value. The i-th input value can be obtained by processing, based on the RBF, the i-th element of the N-dimensional original feature vector of the sample to be classified. The RBF can be represented as exp(1st input value) * exp(2nd input value) * ... * exp(N-th input value).
For example, the i-th input value can be expressed as A_i, where A_i can be determined according to equation (3). Correspondingly, the RBF can be represented by equation (5).
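As a rough sketch of this front-end step (which, as noted later, can run on a general-purpose computing device outside the table-lookup datapath), computing A_i from equation (3) and its y-bit index value could look like the following; the helper names and the rounding scheme are assumptions for illustration only.

```python
def input_value(x_i: float, r_ji: float, gamma: float) -> float:
    """Equation (3): A_i = -gamma * (x_i - R_ji)^2."""
    return -gamma * (x_i - r_ji) ** 2

def index_value(a_i: float, a_min: float, y_bits: int) -> int:
    """Quantize A_i (assumed to lie in [a_min, 0]) to a y-bit index value."""
    levels = (1 << y_bits) - 1
    idx = round((a_i - a_min) / (0.0 - a_min) * levels)
    return max(0, min(levels, idx))

a_3 = input_value(x_i=0.8, r_ji=0.2, gamma=0.5)        # A_3 = -0.18
idx_3 = index_value(a_3, a_min=-1.0, y_bits=8)         # 8-bit index into the exp() table
```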
In step 202, an i-th output value can be looked up, according to the i-th index value, in the index table stored in the memory.
Specifically, the i-th output value can be an approximation of exp(i-th input value), and the index table stored in the memory can be used to represent approximations of the exponential function exp( ).
In step 203, multiplication can be performed on the i-th output value with a multiplier until, when i equals N, the j-th element of the K-dimensional new feature vector of the sample to be classified is output.
From equation (5) above, it can be seen that the j-th element of the new feature vector can be the product of the N output values corresponding to the N elements of the original feature vector.
In the embodiments of the present invention, according to the i-th index value corresponding to the i-th input value obtained with the RBF, the i-th output value representing the approximation of exp(i-th input value) is looked up in the index table, and multiplication is then performed on the i-th output value with the multiplier. By repeating the above process N times, the product of the N output values corresponding to the N-dimensional original feature vector, i.e., the j-th element of the K-dimensional new feature vector, is obtained when i equals N. Thus, in the embodiments of the present invention, the RBF is realized by table lookup and multiplication, without expanding the exp( ) function into, for example, a Taylor polynomial, which not only reduces the amount of computation but also greatly reduces the computation error.
In one embodiment, the above index table can be used to represent the correspondence between z and an approximation of the exponential function exp(z), where z is a y-bit value and y is a positive integer. The value of y can be preset based on the magnitude range of the N input values. It should be understood that the N input values can each be obtained by processing the N-dimensional original feature vector based on the RBF. For example, as described above, the i-th input value can be expressed as A_i, so the N input values can be expressed as A_1, A_2, A_3, ..., A_N and can be obtained according to equation (3).
Correspondingly, the above i-th index value can be the y-bit value obtained by quantizing the i-th input value.
It can be seen that, in this embodiment, the quantization range can be determined based on the magnitude range of the N input values. In this way, the range of the index table stored in the memory can be determined reasonably, which avoids both wasting storage resources because the range of the index table is too large and failing to provide the corresponding output value because the range of the index table is too small.
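Purely as an illustration of this sizing rule (the numbers and helper names are assumptions, not values from the patent), the lower bound of the input values and the corresponding 2^y-entry table could be derived like this:

```python
import math

def table_range(gamma: float, x_lo: float, x_hi: float) -> float:
    """Lower bound a_min of the input values A_i = -gamma * (x_i - R_ji)^2,
    assuming features and reference components both lie in [x_lo, x_hi]."""
    return -gamma * (x_hi - x_lo) ** 2

def build_index_table(a_min: float, y_bits: int) -> list:
    """Index table kept in memory: entry z (a y-bit value) holds an
    approximation of exp() at the z-th quantization level of [a_min, 0]."""
    entries = 1 << y_bits
    step = -a_min / (entries - 1)
    return [math.exp(a_min + z * step) for z in range(entries)]

a_min = table_range(gamma=0.5, x_lo=0.0, x_hi=1.0)     # every A_i falls in [-0.5, 0]
table = build_index_table(a_min, y_bits=8)             # 256 entries, no wasted range
```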
In one embodiment, in step 203, when i is 1, the multiplier performs a current multiplication on the i-th output value with a multiplicand of 1, and the result of the current multiplication is used as the multiplicand of the next multiplication.
When 1 < i < N, the multiplier performs a current multiplication on the i-th output value, where the multiplicand is the product of the 1st to (i-1)-th output values, and the result of the current multiplication is used as the multiplicand of the next multiplication.
When i is N, the multiplier performs a current multiplication on the i-th output value, where the multiplicand is the product of the 1st to (i-1)-th output values, and the result of the current multiplication is output as the j-th element of the new feature vector.
In this embodiment, the multiplier can be cycled N times to multiply the N output values one by one to obtain the j-th element of the new feature vector. In this way, the hardware unit can be reused to the greatest extent, and the hardware implementation is simple and efficient, saving implementation cost.
In one embodiment, in step 203, a delay unit can be used to feed the result of the current multiplication back into the multiplier as the multiplicand of the next multiplication. Thus, in this embodiment, the delay unit assists in realizing the successive multiplication of the N output values, which is simple and efficient in hardware implementation.
Fig. 3 is a schematic block diagram of an apparatus for processing classifier samples according to an embodiment of the present invention.
As shown in Fig. 3, the apparatus 300 can include a memory 310 and a multiplier 320. The multiplier 320 can be connected to the memory 310; for example, an output terminal of the memory 310 can be connected to an input terminal of the multiplier 320.
The memory 310 can store an index table, and the index table can represent approximations of the exponential function exp( ).
The memory 310 can receive an i-th index value, where the i-th index value can be obtained by quantizing an i-th input value, and the i-th input value is obtained by processing, based on the RBF, the i-th element of the N-dimensional original feature vector of the sample to be classified. The RBF can be represented as exp(1st input value) * exp(2nd input value) * ... * exp(N-th input value), N and i are positive integers, and i is less than or equal to N.
The memory 310 can look up an i-th output value in the index table according to the i-th index value and then output the i-th output value to the multiplier 320, where the i-th output value is an approximation of exp(i-th input value).
The multiplier 320 can receive the i-th output value from the memory 310.
The multiplier 320 can perform multiplication on the i-th output value until, when i equals N, the j-th element of the K-dimensional new feature vector of the sample to be classified is output, where the j-th element is the product of the N output values corresponding to the N elements of the original feature vector, K and j are positive integers, K is greater than N, and j is less than or equal to K.
Thus, in the embodiments of the present invention, the RBF is realized by table lookup and multiplication, without expanding the exp( ) function into, for example, a Taylor polynomial. This not only reduces the amount of computation but also greatly reduces the computation error. In addition, the embodiments of the present invention can realize the RBF efficiently with a simple memory-and-multiplier architecture, thereby saving implementation cost.
In one embodiment, the index table can be used to represent the correspondence between z and an approximation of the exponential function exp(z), where z is a y-bit value and y is a positive integer. The value of y is preset based on the magnitude range of the N input values, and the N input values are each obtained by processing the above N-dimensional original feature vector based on the RBF. Correspondingly, the i-th index value can be the y-bit value obtained by quantizing the i-th input value.
In one embodiment, when i is 1, the multiplier 320 can perform a current multiplication on the i-th output value with a multiplicand of 1 and use the result of the current multiplication as the multiplicand of the next multiplication. When 1 < i < N, the multiplier 320 can perform a current multiplication on the i-th output value, where the multiplicand is the product of the 1st to (i-1)-th output values, and use the result of the current multiplication as the multiplicand of the next multiplication. When i is N, the multiplier 320 can perform a current multiplication on the i-th output value, where the multiplicand is the product of the 1st to (i-1)-th output values, and output the result of the current multiplication as the j-th element of the new feature vector.
In this embodiment, the multiplier can cycle N times to obtain the result of exp(1st input value) * exp(2nd input value) * ... * exp(N-th input value). In this way, the hardware can be reused to the greatest extent, which reduces hardware implementation cost. In addition, as described above, this approach can greatly reduce the computation error.
In one embodiment, the apparatus 300 can further include a delay unit 330. As shown in Fig. 3, the multiplier 320 has a first input terminal, a second input terminal, a first output terminal, and a second output terminal.
The first input terminal of the multiplier 320 can be connected to the memory 310. The first output terminal of the multiplier 320 can be connected to the delay unit 330, and the output terminal of the delay unit 330 can be connected to the second input terminal of the multiplier 320.
The delay unit 330 can receive the result of the current multiplication from the first output terminal of the multiplier 320 and feed it into the second input terminal of the multiplier 320. The multiplier 320 can use the result of the current multiplication received at the second input terminal as the multiplicand of the next multiplication, and can output the resulting j-th element of the new feature vector from the second output terminal.
In this embodiment, the N-cycle multiplication of the multiplier is realized through the delay unit, which is simple and efficient to implement and can reduce hardware implementation cost.
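Purely as an illustration of the structure in Fig. 3 (the class names and the clocking model are assumptions, not the patent's), the memory / multiplier / delay-unit datapath can be modelled as follows: the delay register holds the previous product and supplies the multiplicand, so that the result of the N-th cycle is the j-th element of the new feature vector.

```python
class Memory:
    """Memory 310: stores the index table and returns the i-th output value."""
    def __init__(self, table):
        self.table = table

    def lookup(self, index_value: int) -> float:
        return self.table[index_value]          # approximation of exp(i-th input value)


class MultiplierWithDelay:
    """Multiplier 320 plus delay unit 330: the first input comes from the memory,
    the second input is the previous result fed back through the delay unit."""
    def __init__(self):
        self.delay_reg = 1.0                    # multiplicand is 1 before the first cycle

    def cycle(self, output_value: float) -> float:
        result = output_value * self.delay_reg  # current multiplication
        self.delay_reg = result                 # delay unit feeds the result back
        return result


def compute_new_feature(index_values, memory: Memory) -> float:
    """Drive N cycles; the result of the N-th cycle is the j-th new feature element."""
    mul = MultiplierWithDelay()
    result = 1.0
    for idx in index_values:                    # i = 1 .. N
        result = mul.cycle(memory.lookup(idx))
    return result                               # taken from the second output when i == N
```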
It should be understood that the above i-th input value and i-th index value can be obtained by a general-purpose computing device or by a dedicated hardware circuit; the embodiments of the present invention impose no limitation on this.
It should also be understood that, for convenience and brevity of description, the specific functions and operations of the modules of the apparatus 300 can refer to the corresponding processes in the foregoing method embodiment and are not described in detail here.
A person of ordinary skill in the art may realize that the steps of each example described with reference to the embodiments disclosed herein can be implemented by electronic hardware or by a combination of computer software and electronic hardware. Whether these functions are performed in hardware or in software depends on the specific application and design constraints of the technical solution. A person skilled in the art may use different methods to implement the described functions for each specific application, but such implementation should not be considered to be beyond the scope of the present invention.
It may be clearly understood by a person skilled in the art that, for convenience and brevity of description, the specific working processes of the method and apparatus described above can refer to the corresponding processes in the foregoing method embodiment and are not described in detail here.
In the several embodiments provided in this application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the apparatus embodiment described above is merely illustrative; the division of the units is merely a logical function division, and there may be other divisions in actual implementation. For example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed. Furthermore, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, apparatuses, or units, and may be electrical, mechanical, or in other forms.
The units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network elements. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
If the functions are implemented in the form of software functional units and sold or used as an independent product, they may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention essentially, or the part contributing to the prior art, or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or some of the steps of the methods described in the embodiments of the present invention. The aforementioned storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
The above descriptions are merely specific embodiments of the present invention, but the protection scope of the present invention is not limited thereto. Any person skilled in the art can readily think of changes or replacements within the technical scope disclosed by the present invention, and such changes or replacements shall all fall within the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (10)

  1. A method for processing classifier samples, characterized by comprising:
    receiving an i-th index value, wherein the i-th index value is obtained by quantizing an i-th input value, the i-th input value is obtained by processing, based on a radial basis function (RBF), the i-th element of the N-dimensional original feature vector of a sample to be classified, the RBF is represented as exp(1st input value) * exp(2nd input value) * ... * exp(N-th input value), N and i are positive integers, and i is less than or equal to N;
    looking up, according to the i-th index value, an i-th output value in an index table stored in a memory, wherein the i-th output value is an approximation of exp(i-th input value), and the index table is used to represent approximations of the exponential function exp( );
    performing multiplication on the i-th output value with a multiplier until, when i equals N, the j-th element of the K-dimensional new feature vector of the sample to be classified is output, wherein the j-th element is the product of the N output values corresponding to the N elements of the original feature vector, K and j are positive integers, K is greater than N, and j is less than or equal to K.
  2. The method according to claim 1, characterized in that
    the index table is used to represent the correspondence between z and an approximation of the exponential function exp(z), z is a y-bit value, y is preset based on the magnitude range of the N input values, the N input values are each obtained by processing the N-dimensional original feature vector based on the RBF, and y is a positive integer.
  3. The method according to claim 1 or 2, characterized in that performing multiplication on the i-th output value with the multiplier comprises:
    when i is 1, performing, with the multiplier, a current multiplication on the i-th output value with a multiplicand of 1, and using the result of the current multiplication as the multiplicand of the next multiplication;
    when 1 < i < N, performing a current multiplication on the i-th output value with the multiplier, wherein the multiplicand is the product of the 1st to (i-1)-th output values, and using the result of the current multiplication as the multiplicand of the next multiplication;
    when i is N, performing a current multiplication on the i-th output value with the multiplier, wherein the multiplicand is the product of the 1st to (i-1)-th output values, and outputting the result of the current multiplication as the j-th element of the new feature vector.
  4. The method according to claim 3, characterized in that using the result of the current multiplication as the multiplicand of the next multiplication comprises:
    feeding the result of the current multiplication back into the multiplier through a delay unit, to serve as the multiplicand of the next multiplication.
  5. The method according to claim 1 or 2, characterized in that the i-th input value is obtained according to the following equation:
    i-th input value = -γ · (x_i - R_ji)^2,
    wherein γ is a preset value, x_i denotes the i-th element of the original feature vector, and R_ji denotes the i-th element of a specified point R_j pre-selected in the feature space corresponding to the N-dimensional original feature vector.
  6. An apparatus for processing classifier samples, characterized by comprising:
    a memory storing an index table, the index table being used to represent approximations of the exponential function exp( ); and
    a multiplier connected to the memory;
    wherein the memory is configured to:
    receive an i-th index value, wherein the i-th index value is obtained by quantizing an i-th input value, the i-th input value is obtained by processing, based on a radial basis function (RBF), the i-th element of the N-dimensional original feature vector of a sample to be classified, the RBF is represented as exp(1st input value) * exp(2nd input value) * ... * exp(N-th input value), N and i are positive integers, and i is less than or equal to N;
    look up, according to the i-th index value, an i-th output value in the index table, wherein the i-th output value is an approximation of exp(i-th input value); and
    output the i-th output value to the multiplier;
    and the multiplier is configured to:
    receive the i-th output value from the memory; and
    perform multiplication on the i-th output value until, when i equals N, the j-th element of the K-dimensional new feature vector of the sample to be classified is output, wherein the j-th element is the product of the N output values corresponding to the N elements of the original feature vector, K and j are positive integers, K is greater than N, and j is less than or equal to K.
  7. The apparatus according to claim 6, characterized in that the index table is used to represent the correspondence between z and an approximation of the exponential function exp(z), z is a y-bit value, y is preset based on the magnitude range of the N input values, the N input values are each obtained by processing the N-dimensional original feature vector based on the RBF, and y is a positive integer.
  8. The apparatus according to claim 6 or 7, characterized in that the multiplier is specifically configured to:
    when i is 1, perform a current multiplication on the i-th output value with a multiplicand of 1 and use the result of the current multiplication as the multiplicand of the next multiplication;
    when 1 < i < N, perform a current multiplication on the i-th output value, wherein the multiplicand is the product of the 1st to (i-1)-th output values, and use the result of the current multiplication as the multiplicand of the next multiplication;
    when i is N, perform a current multiplication on the i-th output value, wherein the multiplicand is the product of the 1st to (i-1)-th output values, and output the result of the current multiplication as the j-th element of the new feature vector.
  9. The apparatus according to claim 8, characterized in that the apparatus further comprises a delay unit;
    the multiplier has a first input terminal, a second input terminal, a first output terminal, and a second output terminal;
    the first input terminal is connected to the memory;
    the first output terminal is connected to the delay unit, and the output terminal of the delay unit is connected to the second input terminal;
    the delay unit is configured to receive the result of the current multiplication from the multiplier and feed the result of the current multiplication into the second input terminal;
    the multiplier is configured to use the result of the current multiplication received at the second input terminal as the multiplicand of the next multiplication and to output the j-th element of the new feature vector from the second output terminal.
  10. The apparatus according to claim 6 or 7, characterized in that the i-th input value is obtained according to the following equation:
    i-th input value = -γ · (x_i - R_ji)^2,
    wherein γ is a preset value, x_i denotes the i-th element of the original feature vector, and R_ji denotes the i-th element of a specified point R_j pre-selected in the feature space corresponding to the N-dimensional original feature vector.
CN201810045342.3A 2018-01-17 2018-01-17 Method and apparatus for processing classifier samples Pending CN108133239A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810045342.3A CN108133239A (en) 2018-01-17 2018-01-17 Method and apparatus for processing classifier samples

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810045342.3A CN108133239A (en) 2018-01-17 2018-01-17 Method and apparatus for processing classifier samples

Publications (1)

Publication Number Publication Date
CN108133239A true CN108133239A (en) 2018-06-08

Family

ID=62399996

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810045342.3A Pending CN108133239A (en) 2018-01-17 2018-01-17 For the method and apparatus for the treatment of classification device sample

Country Status (1)

Country Link
CN (1) CN108133239A (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102279415A (en) * 2011-07-08 2011-12-14 北京吉星吉达科技有限公司 Method for calculating Fourier integral one-way wave depth migration based on graphics processor
CN103944533A (en) * 2014-04-04 2014-07-23 江苏卓胜微电子有限公司 Slotted filter
CN105892991A (en) * 2015-02-18 2016-08-24 恩智浦有限公司 Modular multiplication using look-up tables


Similar Documents

Publication Publication Date Title
Cai et al. Once-for-all: Train one network and specialize it for efficient deployment
Su et al. Redundancy-reduced mobilenet acceleration on reconfigurable logic for imagenet classification
CN109635916A (en) The hardware realization of deep neural network with variable output data format
US20170169326A1 (en) Systems and methods for a multi-core optimized recurrent neural network
KR20190051755A (en) Method and apparatus for learning low-precision neural network
US20170061279A1 (en) Updating an artificial neural network using flexible fixed point representation
CN109308520B (en) FPGA circuit and method for realizing softmax function calculation
US11314842B1 (en) Hardware implementation of mathematical functions
CN106980900A (en) A kind of characteristic processing method and equipment
CN107967132A (en) A kind of adder and multiplier for neural network processor
CN111240746A (en) Floating point data inverse quantization and quantization method and equipment
CN109325530A (en) Compression method based on the depth convolutional neural networks on a small quantity without label data
Wu et al. Efficient dynamic fixed-point quantization of CNN inference accelerators for edge devices
Elangovan et al. Ax-BxP: Approximate blocked computation for precision-reconfigurable deep neural network acceleration
CN108133239A (en) Method and apparatus for processing classifier samples
Wang et al. Rdo-q: Extremely fine-grained channel-wise quantization via rate-distortion optimization
CN111814978A (en) Method, apparatus and medium for calculating training computation of neural network model
Tatsumi et al. Mixing low-precision formats in multiply-accumulate units for DNN training
US10271051B2 (en) Method of coding a real signal into a quantized signal
CN107220025A (en) The method for handling the device and processing multiply-add operation of multiply-add operation
CN114372539B (en) Machine learning framework-based classification method and related equipment
Wróbel et al. Convolutional neural network compression for natural language processing
CN108897524A (en) Division function processing circuit, method, chip and system
CN113918882A (en) Data processing acceleration method of dynamic sparse attention mechanism capable of being realized by hardware
EP4024198A1 (en) Information processing device, information processing system, and information processing method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
Application publication date: 20180608