CN110197219B - Hardware implementation method of Bayes classifier supporting data classification - Google Patents

Info

Publication number
CN110197219B
CN110197219B
Authority
CN
China
Prior art keywords
calculation module
probability
address
signal
module
Prior art date
Legal status
Active
Application number
CN201910442712.1A
Other languages
Chinese (zh)
Other versions
CN110197219A (en)
Inventor
魏继增
薛臻
郭炜
Current Assignee
Tianjin University
Original Assignee
Tianjin University
Priority date
Filing date
Publication date
Application filed by Tianjin University
Priority to CN201910442712.1A
Publication of CN110197219A
Application granted
Publication of CN110197219B
Legal status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/21: Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214: Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06F18/24: Classification techniques
    • G06F18/241: Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415: Classification techniques based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • G06F18/24155: Bayesian classification
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Probability & Statistics with Applications (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

A naive Bayes classifier with an AXI interface is designed and connected to a CPU over the AXI bus through that interface. The index calculation module of the naive Bayes classifier sequentially generates the index of each category and each attribute and sends it to the address calculation module. The address calculation module calculates the address for accessing the probability fast table from the feature vector to be tested, supplied by the top module, and the data received from the index calculation module. The probability calculation module calculates the posterior probabilities through the Bayes formula, using the data fetched from the probability fast table and the category and attribute indexes passed on by the address calculation module, and sends the category with the largest posterior probability to the top module as the classification result. The top module coordinates the orderly operation of the index calculation module, the address calculation module, the probability calculation module and the probability fast table. The invention is applicable to all discretized data sets.

Description

Hardware implementation method of Bayes classifier supporting data classification
Technical Field
The invention relates to Bayesian classifiers, and in particular to a hardware implementation method of a Bayesian classifier supporting data classification.
Background
In recent years, as artificial intelligence has attracted more and more attention, machine learning has developed rapidly. Machine learning is the core of artificial intelligence and a multi-disciplinary subject involving probability theory, statistics, approximation theory, convex analysis, algorithmic complexity theory and other fields. It studies how a computer can simulate or implement human learning behavior to acquire new knowledge or skills and reorganize existing knowledge structures so as to continuously improve its own performance.
As machine learning algorithms mature, classification algorithms, which are among the most important algorithms in machine learning, are continuously being researched and refined; commonly used methods include Bayes, decision trees, support vector machines, k-nearest neighbors, logistic regression, neural networks, deep learning and the like. Among the many classification algorithms, the Bayesian method has been widely applied to the classification of texts, images and other data because of its simplicity and efficiency, and holds an important position. The Bayesian classification method is a statistical classification method based on Bayes' theorem: it calculates the probability that the tuple to be tested belongs to each class and selects the class with the highest probability as the classification result. Bayesian classification methods can be divided into naive Bayes, semi-naive Bayes classification, and the like.
The naive Bayes classifier is a probabilistic classifier that applies Bayes' theorem under the assumption that the attributes of the feature vector are mutually independent, and it is the simplest of the Bayesian classifiers. Naive Bayesian classification has been studied extensively since the 1950s; it was introduced into text information retrieval in the 1960s and remains a popular classification algorithm to this day, widely applied to all kinds of classification work. It is worth mentioning that although the naive Bayes classifier rests on a very simple idea and very strong assumptions, it still achieves quite good classification results in many complex real-world situations, and it is also highly scalable. Moreover, naive Bayesian classification only needs to estimate the necessary parameters from a small amount of training data; under the variable-independence assumption, only a method for estimating each variable separately is needed, and the full covariance matrix does not have to be determined. In one sentence: naive Bayesian classification has a solid mathematical foundation and stable classification efficiency, requires few estimated parameters, is relatively insensitive to missing data, and the algorithm itself is simple.
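The estimate-and-select procedure described above can be sketched in a few lines of Python. This is an illustrative model only: the function names, the Laplace smoothing, and the toy data in the usage note below are assumptions, not taken from the patent.

```python
from collections import Counter

def train_naive_bayes(samples, labels, n_values, n_classes):
    """Estimate priors P(y) and conditionals P(x_i | y) by counting.

    `samples` are vectors of discrete attribute values (integers from 0),
    matching the discretized data sets the classifier targets.
    """
    n_dims = len(samples[0])
    class_count = Counter(labels)
    prior = [class_count[c] / len(labels) for c in range(n_classes)]
    # cond[i][v][c] approximates P(x_i = v | y = c); Laplace smoothing
    # (an assumption, the patent specifies no smoothing scheme) avoids
    # zero probabilities for unseen attribute values.
    cond = [[[1.0] * n_classes for _ in range(n_values)] for _ in range(n_dims)]
    for x, y in zip(samples, labels):
        for i, v in enumerate(x):
            cond[i][v][y] += 1.0
    for i in range(n_dims):
        for v in range(n_values):
            for c in range(n_classes):
                cond[i][v][c] /= class_count[c] + n_values
    return prior, cond

def classify(x, prior, cond):
    """Pick the class maximizing P(y_c) * prod_i P(x_i | y_c)."""
    scores = []
    for c in range(len(prior)):
        p = prior[c]
        for i, v in enumerate(x):
            p *= cond[i][v][c]
        scores.append(p)
    return max(range(len(scores)), key=scores.__getitem__)
```

For example, trained on the four two-attribute samples (0,0), (0,1), (1,0), (1,1) with labels [0, 0, 1, 1], the sketch assigns (0, 0) to class 0 and (1, 1) to class 1.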
An FPGA (Field Programmable Gate Array) is a semi-custom circuit whose function is described in a Hardware Description Language (HDL), mainly Verilog or VHDL; the configuration loaded onto an FPGA can be erased at any time, so the device can be reused repeatedly. FPGAs overcome both the inflexibility of fully custom circuits and the limited gate count of earlier programmable devices. With relatively rich internal resources, support for hardware programming, low development cost, low risk and reusability, the FPGA is an excellent choice for scientific research experiments.
At present, although the various classification methods of machine learning have been applied in many fields of computing, most research still focuses on the algorithms themselves, and comparatively little work provides dedicated hardware structures for them. From this point of view, customizing hardware architectures suited to classification algorithms is a necessary trend. Until now, most naive Bayes classifiers have run on general-purpose processors, so their efficiency is not high enough, which is inconvenient for big-data processing such as cloud computing. Among the many machine learning classification algorithms, the naive Bayes classification algorithm has a solid mathematical foundation, a simple algorithm and high classification efficiency, so it is very suitable for hardware implementation.
Disclosure of Invention
The technical problem to be solved by the invention is to provide a hardware implementation method of a naive Bayes classifier supporting data classification that improves the operating efficiency of the naive Bayes classifier.
The technical scheme adopted by the invention is as follows: a hardware implementation method of a Bayesian classifier supporting data classification comprises designing a naive Bayes classifier with an AXI interface and connecting it to a CPU over the AXI bus through that interface. The naive Bayes classifier comprises a top module, an index calculation module, an address calculation module, a probability calculation module and a probability fast table. The index calculation module sequentially generates the index of each category and each attribute and sends it to the address calculation module. The address calculation module calculates the address for accessing the probability fast table from the feature vector to be tested, supplied by the top module, and the data received from the index calculation module. The probability calculation module calculates the posterior probabilities through the Bayes formula, using the data fetched from the probability fast table and the category and attribute indexes passed on by the address calculation module, and sends the category with the largest posterior probability to the top module as the classification result. The top module coordinates the orderly operation of the index calculation module, the address calculation module, the probability calculation module and the probability fast table.
The format of the feature vector to be tested is as follows: the attribute value of each dimension of the feature vector is an integer starting from 0, and the category is an integer starting from 0.
The AXI interface is encapsulated in the naive Bayes classifier and comprises: a clock, a reset signal, and the AXI-Lite interface signals.
When the naive Bayes classifier starts to work, the top module sends a 1-bit start signal to each of the index calculation module, the address calculation module, the probability calculation module and the probability fast table and starts them.
The inputs of the index calculation module are the clock signal and the reset signal received from the top module; the outputs are the category index signal and the attribute index signal sent to the address calculation module.
The input signals of the address calculation module comprise the reset signal and the feature vector to be tested received from the top module, and the category index signal and attribute index signal received from the index calculation module; the output signals comprise the probability fast table enable signal and probability fast table address signal sent to the probability fast table, and the category index signal and attribute index signal sent to the probability calculation module. The address calculation formula of the address calculation module is as follows:
probability fast table address = feature vector dimension × number of categories × attribute value + number of categories × attribute value.
The input signals of the probability calculation module are: the clock signal, the reset signal from the top module, the probability fast table data from the probability fast table, and the category index signal and attribute index signal from the address calculation module; the output signals comprise a result valid signal and a classification result signal sent to the top module.
The probability calculation module uses logarithms to convert multiplication into addition and subtraction; base 2 is chosen for the logarithm. After the logarithmic transformation, the Bayesian principle is used to evaluate every class and select the class with the largest posterior probability. The logarithmic processing formula is as follows:
$$\hat{y} = \arg\max_{c \in \{0,1,\dots,C-1\}} \log_2\left( P(y_c) \prod_{i=0}^{n-1} P(x_i \mid y_c) \right)$$

Expanding gives:

$$\hat{y} = \arg\max_{c \in \{0,1,\dots,C-1\}} \left( \log_2 P(y_c) + \sum_{i=0}^{n-1} \log_2 P(x_i \mid y_c) \right)$$

where $x = (x_0, x_1, \dots, x_{n-1})$ is the $n$-dimensional vector to be tested and $y = \{\, y_c \mid c = 0, 1, \dots, C-1 \,\}$ is the set of possible class labels of $x$.
The probability fast table is stored from low addresses to high addresses in the order of attribute values from small to large; entries with the same attribute value are stored from low addresses to high addresses in the order of category labels from small to large.
The data stored in the probability fast table are fixed-point unsigned numbers. In the fixed-point conversion, the number of integer bits depends on the extreme values of the probabilities, and the number of fraction bits is chosen as a trade-off between classification precision and storage space: the more fraction bits are kept, the higher the classification accuracy.
The hardware implementation method of a Bayes classifier supporting data classification disclosed by the invention implements the naive Bayes classification algorithm in hardware and encapsulates it as an IP with an AXI interface. Unlike execution on a general-purpose processor, the dedicated hardware structure improves classification efficiency and facilitates big-data processing such as cloud computing. Moreover, the method is applicable to all discretized data sets; only the floating-point-to-fixed-point scheme and the training results need to be replaced.
Drawings
FIG. 1 is a block diagram of a hardware implementation of a Bayesian classifier for supporting data classification in accordance with the present invention;
FIG. 2 is a block diagram of a naive Bayes classifier of the present invention;
FIG. 3 is a schematic diagram of a probability fast table according to the present invention.
Reference numerals in the figures:
1: naive Bayes classifier; 11: top module; 12: index calculation module; 13: address calculation module; 14: probability calculation module; 15: probability fast table; 2: AXI interface; 3: AXI bus; 4: CPU.
Detailed Description
The hardware implementation method of the Bayesian classifier supporting data classification according to the invention is described in detail below with reference to embodiments and the accompanying drawings.
As shown in fig. 1, the hardware implementation method of the bayesian classifier supporting data classification of the present invention includes designing a naive bayesian classifier 1 having an AXI interface 2, and sending the naive bayesian classifier 1 to a CPU4 through the AXI interface 2 via an AXI bus 3.
The AXI interface 2 is encapsulated in the naive Bayes classifier 1 and comprises a clock, a reset signal and the AXI-Lite interface signals, so the classifier can conveniently be mounted in a system. The AXI interface is as follows:
table 1 bayes classifier top level module interface
As shown in fig. 2, the naive Bayes classifier 1 comprises a top module 11, an index calculation module 12, an address calculation module 13, a probability calculation module 14 and a probability fast table 15. The index calculation module 12 sequentially generates the index of each category and each attribute and sends it to the address calculation module 13. The address calculation module 13 calculates the address for accessing the probability fast table 15 from the feature vector to be tested, supplied by the top module 11, and the data received from the index calculation module 12; the format of the feature vector to be tested is: the attribute value of each dimension of the feature vector is an integer starting from 0, and the category is an integer starting from 0. The probability calculation module 14 calculates the posterior probabilities through the Bayes formula, using the data fetched from the probability fast table 15 and the category and attribute indexes passed on by the address calculation module 13, and sends the category with the largest posterior probability to the top module 11 as the classification result. The top module 11 coordinates the orderly operation of the index calculation module 12, the address calculation module 13, the probability calculation module 14 and the probability fast table 15.
When the naive Bayes classifier 1 starts to work, the top module 11 sends a 1-bit start signal to each of the index calculation module 12, the address calculation module 13, the probability calculation module 14 and the probability fast table 15 and starts them.
In the top module 11 there are four registers that interact with the AXI bus: a start register, a result register, a register for the feature vector to be tested, and a result-valid register. The start register indicates whether the IP may start to operate; the result register stores the classification result of the classifier; the feature-vector register stores the feature vector transferred over the AXI bus; and the result-valid register indicates whether the classification result stored in the result register is valid, i.e. whether classification is complete.
When the start register is set to 1, the naive Bayes classifier starts to classify the picture corresponding to the index in the picture index register. If the feature vector to be tested is wider than 32 bits, the classifier receives several 32-bit words of feature-vector data in sequence; once the whole vector to be tested has been received, it is passed to the address calculation module and all submodules are started to begin classification. After classification finishes, the top module writes the classification result into the result register, and the CPU can read it over the AXI bus in combination with the result-valid register.
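The transfer sequence described above can be modeled on the software side. The Python sketch below shows how a driver might pack a feature vector into successive 32-bit words before writing them to the register; the least-significant-bits-first packing order and the helper name are assumptions, since the patent only states that several 32-bit words are sent in sequence.

```python
def pack_feature_vector(attr_values, bits_per_attr):
    """Split a discrete feature vector into 32-bit words for transfer
    over the AXI-Lite register interface, packing attributes from the
    least significant bits upward (assumed order)."""
    words, word, filled = [], 0, 0
    for v in attr_values:
        if filled + bits_per_attr > 32:   # current word is full
            words.append(word)
            word, filled = 0, 0
        word |= v << filled
        filled += bits_per_attr
    words.append(word)
    return words
```

With 8-bit attribute values, the vector [1, 2, 3] packs into the single word 0x030201 under this assumed ordering; with 16-bit values it spills into a second word.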
The inputs of the index calculation module 12 are the clock signal and the reset signal received from the top module 11; the outputs are the category index signal and the attribute index signal sent to the address calculation module 13, as shown in Table 2.
table 2 index calculation module interface
From the mathematical principle of naive Bayes classification it follows that, for each classification, a probability value must be computed once for every candidate category. The index calculation module therefore generates the indexes of each category and each attribute in sequence, to be supplied to the address calculation module for calculating the address of the probability fast table access and to indicate progress to the probability calculation module.
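As a sketch, the sequence of (category, attribute) index pairs the module emits amounts to a nested scan; the exact nesting order below is an assumption, since the patent only requires that every attribute of every category be visited once per classification.

```python
def index_sequence(n_classes, n_dims):
    """Yield (category_index, attribute_index) pairs, visiting every
    attribute of every class exactly once per classification."""
    for c in range(n_classes):
        for i in range(n_dims):
            yield c, i
```

For 2 classes and 3 attributes this produces the six pairs (0,0), (0,1), (0,2), (1,0), (1,1), (1,2).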
The input signals of the address calculation module 13 comprise the reset signal and the feature vector to be tested received from the top module 11, and the category index signal and attribute index signal received from the index calculation module 12; the output signals comprise the probability fast table enable signal and probability fast table address signal sent to the probability fast table 15, and the category index signal and attribute index signal sent to the probability calculation module 14, as shown in Table 3.
Table 3 Address calculation module interface
From the category index, the attribute index and the attribute value of the feature vector to be tested, the address calculation module calculates the probability fast table address by the address formula and accesses the probability fast table, while also passing the category index and attribute index on for use by the probability calculation module. The address calculation formula of the address calculation module 13 is as follows:
probability fast table address = feature vector dimension × number of categories × attribute value + number of categories × attribute value.
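The printed formula is hard to parse after translation. One reading that is consistent with the storage order shown in fig. 3 (attribute values ascending, category labels ascending within one value, one block per attribute) is sketched below; the exact operand grouping is an assumption, not the patent's authoritative formula.

```python
def table_address(attr_index, attr_value, class_index, n_values, n_classes):
    """Address of the entry for P(x_attr_index = attr_value | y_class_index)
    in the probability fast table, under a hypothetical layout: one block
    per attribute, ordered by attribute value, then by class label."""
    return (attr_index * n_values * n_classes
            + attr_value * n_classes
            + class_index)
```

With 3 possible attribute values and 2 classes, this numbering assigns the 12 entries of a two-attribute table the consecutive addresses 0 through 11.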
The input signals of the probability calculation module 14 are: the clock signal, the reset signal from the top module 11, the probability fast table data from the probability fast table 15, and the category index signal and attribute index signal from the address calculation module 13; the outputs are a result valid signal and a classification result signal sent to the top module 11, as shown in Table 4.
Table 4 Probability calculation module interface
In the probability calculation module, the posterior probability is computed once for each class through the Bayes formula, using the data fetched from the probability fast table together with the category and attribute indexes; finally, the class that maximizes the posterior probability is taken as the classification result. Initially the result valid signal is set to 0; when classification completes, it is set to 1 and transmitted to the top module together with the classification result.
The probability calculation module 14 uses logarithms to convert multiplication into addition and subtraction; base 2 is chosen for the logarithm. After the logarithmic transformation, the Bayesian principle is used to evaluate every class and select the class with the largest posterior probability. The logarithmic processing formula is as follows:
$$\hat{y} = \arg\max_{c \in \{0,1,\dots,C-1\}} \log_2\left( P(y_c) \prod_{i=0}^{n-1} P(x_i \mid y_c) \right)$$

Expanding gives:

$$\hat{y} = \arg\max_{c \in \{0,1,\dots,C-1\}} \left( \log_2 P(y_c) + \sum_{i=0}^{n-1} \log_2 P(x_i \mid y_c) \right)$$

where $x = (x_0, x_1, \dots, x_{n-1})$ is the $n$-dimensional vector to be tested and $y = \{\, y_c \mid c = 0, 1, \dots, C-1 \,\}$ is the set of possible class labels of $x$.
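The expanded sum maps directly to the accumulate-and-compare loop the probability calculation module performs. The following Python model is our sketch of that loop, not the RTL; since the logarithm is monotonic, the argmax over log-posteriors equals the argmax over posteriors.

```python
import math

def classify_log2(x, log2_prior, log2_cond):
    """Accumulate log2 P(y_c) + sum_i log2 P(x_i | y_c) for every class c
    and return the class with the largest log-posterior."""
    best_class, best_score = 0, float("-inf")
    for c in range(len(log2_prior)):
        score = log2_prior[c]
        for i, v in enumerate(x):
            score += log2_cond[i][v][c]   # one table fetch per attribute
        if score > best_score:
            best_class, best_score = c, score
    return best_class
```

Because only additions and comparisons remain, no hardware multiplier is needed in the inner loop, which is the point of the base-2 log transform.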
As shown in fig. 3, the probability fast table 15 is stored from low addresses to high addresses in the order of attribute values from small to large; entries with the same attribute value are stored from low addresses to high addresses in the order of category labels from small to large.
The data stored in the probability fast table 15 are fixed-point unsigned numbers. In the fixed-point conversion, the number of integer bits depends on the extreme values of the probabilities, and the number of fraction bits is chosen as a trade-off between classification precision and storage space: the more fraction bits are kept, the higher the classification accuracy.
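The precision-versus-storage trade-off can be illustrated with a small quantizer. Since the base-2 log of a probability is never positive, an unsigned table plausibly stores the magnitude of the log-probability; that reading, the bit widths, and the saturation behavior below are our assumptions rather than the patent's fixed-point scheme.

```python
def to_fixed_point(value, int_bits, frac_bits):
    """Quantize a nonnegative value (e.g. |log2 p|) to an unsigned
    fixed-point code, saturating at the largest representable code."""
    code = round(value * (1 << frac_bits))
    max_code = (1 << (int_bits + frac_bits)) - 1
    return min(code, max_code)

def from_fixed_point(code, frac_bits):
    """Recover the real value a fixed-point code represents."""
    return code / (1 << frac_bits)
```

With 4 integer and 4 fraction bits, 1.5 encodes as code 24, and values above the representable range saturate at 255; adding fraction bits shrinks the round-trip quantization error, matching the stated accuracy trend.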

Claims (7)

1. A hardware implementation method of a Bayesian classifier supporting data classification, characterized by comprising designing a naive Bayes classifier (1) having an AXI interface (2), the naive Bayes classifier (1) being connected to the CPU (4) over the AXI bus (3) through the AXI interface (2); the naive Bayes classifier (1) comprises a top module (11), an index calculation module (12), an address calculation module (13), a probability calculation module (14) and a probability fast table (15); the index calculation module (12) is used for sequentially generating the index of each category and each attribute and sending it to the address calculation module (13); the address calculation module (13) calculates the address for accessing the probability fast table (15) according to the feature vector to be tested given by the top module (11) and the data received from the index calculation module (12); the probability calculation module (14) calculates the posterior probabilities through the Bayes formula, using the data fetched from the probability fast table (15) and the category and attribute indexes from the address calculation module (13), and sends the category with the largest posterior probability to the top module (11) as the classification result; the top module (11) is used for coordinating the orderly operation of the index calculation module (12), the address calculation module (13), the probability calculation module (14) and the probability fast table (15);
when the naive Bayes classifier (1) starts to work, the top module (11) sends a 1-bit start signal to each of the index calculation module (12), the address calculation module (13), the probability calculation module (14) and the probability fast table (15) and starts them;
the inputs of the index calculation module (12) are the clock signal and the reset signal received from the top module (11); the outputs are the category index signal and the attribute index signal sent to the address calculation module (13);
the input signals of the address calculation module (13) comprise the reset signal and the feature vector to be tested received from the top module (11), and the category index signal and attribute index signal received from the index calculation module (12); the output signals comprise the probability fast table enable signal and probability fast table address signal sent to the probability fast table (15), and the category index signal and attribute index signal sent to the probability calculation module (14); the address calculation formula of the address calculation module (13) is as follows:
probability fast table address = feature vector dimension × number of categories × attribute value + number of categories × attribute value.
2. The hardware implementation method of the Bayesian classifier supporting data classification as claimed in claim 1, wherein the format of the feature vector to be tested is: the attribute value of each dimension of the feature vector is an integer starting from 0, and the category is an integer starting from 0.
3. The hardware implementation method of the Bayesian classifier supporting data classification as claimed in claim 1, wherein the AXI interface (2) is encapsulated in the naive Bayes classifier (1) and comprises: a clock, a reset signal, and the AXI-Lite interface signals.
4. The hardware implementation method of the Bayesian classifier supporting data classification as claimed in claim 1, wherein the input signals of the probability calculation module (14) are: the clock signal, the reset signal from the top module (11), the probability fast table data from the probability fast table (15), and the category index signal and attribute index signal from the address calculation module (13); the outputs comprise a result valid signal and a classification result signal sent to the top module (11).
5. The hardware implementation method of the Bayesian classifier supporting data classification as claimed in claim 4, wherein the probability calculation module (14) uses logarithms to convert multiplication into addition and subtraction, with base 2 chosen for the logarithm; after the logarithmic transformation, all classes are evaluated using the Bayesian principle and the class with the largest posterior probability is selected, the logarithmic processing formula being as follows:
$$\hat{y} = \arg\max_{c \in \{0,1,\dots,C-1\}} \log_2\left( P(y_c) \prod_{i=0}^{n-1} P(x_i \mid y_c) \right)$$

Expanding gives:

$$\hat{y} = \arg\max_{c \in \{0,1,\dots,C-1\}} \left( \log_2 P(y_c) + \sum_{i=0}^{n-1} \log_2 P(x_i \mid y_c) \right)$$

where $x = (x_0, x_1, \dots, x_{n-1})$ is the $n$-dimensional vector to be tested and $y = \{\, y_c \mid c = 0, 1, \dots, C-1 \,\}$ is the set of possible class labels of $x$.
6. The hardware implementation method of the Bayesian classifier supporting data classification as claimed in claim 1, wherein the probability fast table (15) is stored from low addresses to high addresses in the order of attribute values from small to large, and entries with the same attribute value are stored from low addresses to high addresses in the order of category labels from small to large.
7. The hardware implementation method of the Bayesian classifier supporting data classification as claimed in claim 6, wherein the data stored in the probability fast table (15) are fixed-point unsigned numbers; in the fixed-point conversion the number of integer bits depends on the extreme values of the probabilities, the number of fraction bits is chosen considering both classification precision and storage space, and the more fraction bits are kept, the higher the classification accuracy.
CN201910442712.1A 2019-05-25 2019-05-25 Hardware implementation method of Bayes classifier supporting data classification Active CN110197219B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910442712.1A CN110197219B (en) 2019-05-25 2019-05-25 Hardware implementation method of Bayes classifier supporting data classification

Publications (2)

Publication Number Publication Date
CN110197219A CN110197219A (en) 2019-09-03
CN110197219B (en) 2023-04-18

Family

ID=67752979

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910442712.1A Active CN110197219B (en) 2019-05-25 2019-05-25 Hardware implementation method of Bayes classifier supporting data classification

Country Status (1)

Country Link
CN (1) CN110197219B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102663061A (en) * 2012-03-30 2012-09-12 UTStarcom Telecom Co., Ltd. Quick sorting and searching device for high-capacity lookup table and method for implementing quick sorting and searching device
CN108932135A (en) * 2018-06-29 2018-12-04 中国科学技术大学苏州研究院 The acceleration platform designing method of sorting algorithm based on FPGA

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102359265B1 (en) * 2015-09-18 2022-02-07 삼성전자주식회사 Processing apparatus and method for performing operation thereof

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102663061A (en) * 2012-03-30 2012-09-12 UTStarcom Telecom Co., Ltd. Quick sorting and searching device for high-capacity lookup table and method for implementing quick sorting and searching device
CN108932135A (en) * 2018-06-29 2018-12-04 中国科学技术大学苏州研究院 The acceleration platform designing method of sorting algorithm based on FPGA

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Hongying Meng et al. FPGA implementation of Naive Bayes classifier for visual object recognition. CVPR 2011 Workshops, 2011, full text. *
A lookup-table-based fast packet classification technique; Cheng Wenqing et al.; Application Research of Computers; 2004-10-28 (No. 10); full text *
Design and implementation of a Bayesian-network-based random test method for cache coherence verification; Ai Yangyang et al.; Computer Engineering and Science; 2017-08-15 (No. 08); full text *

Also Published As

Publication number Publication date
CN110197219A (en) 2019-09-03

Similar Documents

Publication Publication Date Title
Salamat et al. F5-hd: Fast flexible fpga-based framework for refreshing hyperdimensional computing
US11307864B2 (en) Data processing apparatus and method
Kim et al. A 201.4 GOPS 496 mW real-time multi-object recognition processor with bio-inspired neural perception engine
US11307865B2 (en) Data processing apparatus and method
Kyrkou et al. A parallel hardware architecture for real-time object detection with support vector machines
Whatmough et al. FixyNN: Efficient hardware for mobile computer vision via transfer learning
Afifi et al. FPGA implementations of SVM classifiers: A review
Alawad et al. Stochastic-based deep convolutional networks with reconfigurable logic fabric
Elleuch et al. A fuzzy ontology: based framework for reasoning in visual video content analysis and indexing
Dass et al. Vitality: Unifying low-rank and sparse approximation for vision transformer acceleration with a linear taylor attention
Zhang et al. FPGA implementation of quantized convolutional neural networks
Li et al. Dynamic dataflow scheduling and computation mapping techniques for efficient depthwise separable convolution acceleration
CN105320764A (en) 3D model retrieval method and 3D model retrieval apparatus based on slow increment features
CN116822651A (en) Large model parameter fine adjustment method, device, equipment and medium based on incremental learning
Frasser et al. Fully parallel stochastic computing hardware implementation of convolutional neural networks for edge computing applications
Alawad Scalable FPGA accelerator for deep convolutional neural networks with stochastic streaming
Abreu et al. A framework for designing power-efficient inference accelerators in tree-based learning applications
Guan et al. Recursive binary neural network training model for efficient usage of on-chip memory
Song et al. Accelerating kNN search in high dimensional datasets on FPGA by reducing external memory access
CN114757347A (en) Method and system for realizing low bit quantization neural network accelerator
Huang et al. Scalable object detection accelerators on FPGAs using custom design space exploration
CN110197219B (en) Hardware implementation method of Bayes classifier supporting data classification
Struharik et al. Hardware implementation of decision tree ensembles
CN116401552A (en) Classification model training method and related device
Huang et al. An Integer-Only and Group-Vector Systolic Accelerator for Efficiently Mapping Vision Transformer on Edge

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant