CN107193895A - A new method for extracting hidden knowledge from language cognition models - Google Patents

A new method for extracting hidden knowledge from language cognition models

Info

Publication number
CN107193895A
CN107193895A (application CN201710319420.XA)
Authority
CN
China
Prior art keywords
pattern
premise
feature space
conclusion
rule
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201710319420.XA
Other languages
Chinese (zh)
Inventor
杨娟 (Juan Yang)
白云 (Yun Bai)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sichuan Normal University
Original Assignee
Sichuan Normal University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sichuan Normal University
Priority to CN201710319420.XA priority Critical patent/CN107193895A/en
Publication of CN107193895A publication Critical patent/CN107193895A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/35Clustering; Classification
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/33Querying
    • G06F16/3331Query processing
    • G06F16/334Query execution
    • G06F16/3344Query execution using natural language analysis
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/10Interfaces, programming languages or software development kits, e.g. for simulating neural networks

Abstract

The present invention relates to the field of data analysis and discloses a new method for extracting hidden knowledge from language cognition models, comprising the following steps: (A) classifying the patterns of the conclusion part by unsupervised learning; (B) analyzing the common feature domain of the premise patterns; (C) identifying the premise-pattern feature space and the conclusion-part feature space; (D) building inference rules. The beneficial effects of the invention are: feature extraction is performed on the premise patterns and conclusion patterns of the language cognition model, making the features more accurate than conventional rules; by adding an interpreter, the feature space becomes interpretable, thereby solving the non-classification rule-extraction problem of language cognition models.

Description

A new method for extracting hidden knowledge from language cognition models
【Technical field】
The present invention relates to the field of data analysis, and more particularly to a new method for extracting the hidden knowledge of language cognition models.
【Background technology】
Apart from statistical models, the most common conventional data-analysis techniques are supervised machine-learning algorithms trained on labelled data sets, such as deep belief networks (DBN), artificial neural networks (ANN), support vector machines (SVM) and case-based learning methods. Although such black-box classification techniques achieve good classification results, it is difficult to obtain the hidden knowledge associated with their conclusions; merely obtaining a class label from a black box cannot solve practical knowledge-discovery applications such as medical diagnosis, credit-risk assessment and decision support systems. Transparent, comprehensible classifiers are therefore widely used for rule extraction from structured data, for example to build the domain-dependent knowledge representations of classical language-cognition computation models, i.e. grammar production models, realising such white-box models with transparent classifier techniques such as decision trees, rule-based systems and production systems. The techniques successfully applied to decision trees and rule-based systems are mainly evolutionary programming and genetic programming. However, because of the high dimensionality and continuity of the data in many problem domains, such white-box techniques face a contradiction between classification accuracy and rule interpretability: obtaining sufficient interpretability necessarily sacrifices classification accuracy. The predictive accuracy of these techniques is therefore well below that of black-box classification techniques.
A large number of black-box-based rule-extraction techniques are used for structured knowledge discovery, such as ANN-based and SVM-based rule extraction. In general, black-box-based rule extraction can improve interpretability while maintaining accuracy, and is the mainstream rule-extraction technology today. Black-box-based extraction can be divided into learning-based methods and decomposition methods. The first kind typically adds a machine-learning module with interpretability on top of the black-box technique, considering only its input-output relation. For example, Barakat et al. of Sohar University generate hand-labelled samples from an SVM classifier and use these samples to train a white-box machine-learning module (e.g. a decision tree). Rabuñal et al. of the University of A Coruña propose adding a symbolic-regression model on top of an ANN to simulate its behaviour; this method mimics the ANN's function by establishing a set of rules.
Decomposition-based rule extraction instead decomposes the classifier into modules and aims to extract rules at the level of the decomposed modules. For example, Núñez et al. of the Polytechnic University of Catalonia propose the SVM+prototypes rule-decomposition method, which first determines the prototype vectors of each class with a clustering algorithm and then uses these prototype vectors, together with the SVM's support vectors, to define ellipsoidal and hyper-rectangular regions of the input space. Similar work includes the hyper-rectangle rule extraction proposed by Zhang Ying et al. of Zhejiang University for extracting rules from a trained SVM; this method mainly uses a support-vector clustering algorithm to find the prototype vectors of each class, then uses these prototype vectors together with the support vectors to produce hyper-rectangular regions.
White-box techniques based on transparent, comprehensible classifiers are limited by the high dimensionality and continuity of the data, making it difficult to guarantee accuracy together with interpretability; black-box-based rule-extraction techniques can improve interpretability while guaranteeing accuracy, but can only solve classification problems. The prior art therefore cannot be applied to computation models in the field of language cognition, where the conclusion part is not a class label but a rule-based modification of feature positions.
【Summary of the invention】
In order to solve the problems in the prior art, the invention provides a new method for extracting hidden knowledge from language cognition models, solving the problem that the prior art cannot be applied to computation models in the field of language cognition to extract hidden content.
The invention provides a new method for extracting hidden knowledge from a language cognition model, comprising the following steps: (A) classifying the patterns of the conclusion part by unsupervised learning; (B) analyzing the common feature domain of the premise patterns; (C) identifying the premise-pattern feature space and the conclusion-part feature space; (D) building inference rules.
As a further improvement of the present invention: in step (A), the patterns constructed by different rules are classified without labels, and each layer of the neural network is pre-trained.
As a further improvement of the present invention: the premise patterns corresponding to the conclusion patterns in different classes are analyzed, and these classes are divided into subclasses.
As a further improvement of the present invention: in step (B), the common feature domain of the premise patterns of the subclasses formed by the classification is analyzed.
As a further improvement of the present invention: in step (C), the premise-pattern feature space and the conclusion-part feature space of the subclasses formed by the classification are identified.
As a further improvement of the present invention: in step (D), inference rules are built according to the similarity between a given premise pattern and the premise-pattern feature space of a subclass.
As a further improvement of the present invention: in step (D), rules are built as follows: (D1) the training data are substituted into the constructed auto-encoder/decoder, using its saved neural-network weights; (D2) optimization is performed to obtain the explanation that best approaches the feature space; (D3) a closest-approaching feature-space explanation for each class's premise patterns and a feature-space explanation for its conclusion patterns are obtained; (D4) rules are built in the form: if the feature space of a given premise pattern satisfies the space explanation of some premise class, then the corresponding feature positions of that premise pattern's conclusion pattern are constructed from the feature space of that class's conclusion patterns.
The beneficial effects of the invention are: feature extraction is performed on the premise patterns and conclusion patterns of the language cognition model, making the features more accurate than conventional rules; by adding an interpreter, the feature space becomes interpretable, thereby solving the non-classification rule-extraction problem of language cognition models.
【Brief description of the drawings】
Fig. 1 is a schematic diagram of the training-pattern classification principle based on the auto-encoder (decoder) of the present invention;
Fig. 2 is a coding schematic diagram of the English verbs and their past tenses in the language cognition model of one embodiment of the invention.
【Embodiment】
The present invention is further described below with reference to the accompanying drawings and embodiments.
A new method for extracting hidden knowledge from a language cognition model comprises the following steps: (A) classifying the patterns of the conclusion part by unsupervised learning; (B) analyzing the common feature domain of the premise patterns; (C) identifying the premise-pattern feature space and the conclusion-part feature space; (D) building inference rules.
In step (A), the patterns constructed by different rules are classified without labels, and each layer of the neural network is pre-trained.
The premise patterns corresponding to the conclusion patterns in different classes are analyzed, and these classes are divided into subclasses.
In step (B), the common feature domain of the premise patterns of the subclasses formed by the classification is analyzed.
In step (C), the premise-pattern feature space and the conclusion-part feature space of the subclasses formed by the classification are identified.
In step (D), inference rules are built according to the similarity between a given premise pattern and the premise-pattern feature space of a subclass.
In step (D), rules are built as follows: (D1) the training data are substituted into the constructed auto-encoder/decoder, using its saved neural-network weights; (D2) optimization is performed to obtain the explanation that best approaches the feature space; (D3) a closest-approaching feature-space explanation for each class's premise patterns and a feature-space explanation for its conclusion patterns are obtained; (D4) rules are built in the form: if the feature space of a given premise pattern satisfies the space explanation of some premise class, then the corresponding feature positions of that premise pattern's conclusion pattern are constructed from the feature space of that class's conclusion patterns.
In step (D1), the following recursive computation is used: Net_j = sigm(Net_{j-1} · w_{j-1,j}), where TrainData is the training data and forms the input Net_1, w_{12} is the weight from layer 1 to layer 2 of the pre-stored neural network, and w_{j-1,j} is the weight from layer j-1 to layer j; sigm(·) is the sigmoid function and Net_j is the sigmoid output of layer j, so this is a recursive function. In step (D2), optimization is performed by a genetic algorithm that searches for an explanation X minimizing the distance between the feature space and the output of the last interpreter layer Net_t obtained by propagating repmat(X, n, 1) through the network, where n is the number of batches in the training data TrainData, repmat is a function that replicates the matrix X over n rows, and the operator × denotes the Hadamard product; X is an explanation of the feature space.
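The recursion of step (D1) can be sketched as follows; the weight matrices and inputs here are illustrative stand-ins, since the publication does not reproduce its trained network:

```python
import math

def sigm(v):
    # element-wise logistic sigmoid
    return [1.0 / (1.0 + math.exp(-x)) for x in v]

def matvec(v, w):
    # v . w, where w[i][j] connects input unit i to output unit j
    return [sum(v[i] * w[i][j] for i in range(len(v))) for j in range(len(w[0]))]

def net(train_data, weights, j):
    """Net_j: sigmoid output of layer j. weights[0] is w_{12},
    weights[k] is w_{k+1,k+2} in the patent's notation; Net_1 is
    taken to be the input data itself."""
    if j == 1:
        return train_data
    return sigm(matvec(net(train_data, weights, j - 1), weights[j - 2]))
```

With a 2×2 identity weight matrix, `net(x, [identity], 2)` simply squashes the input through the sigmoid, so an all-zero input maps to 0.5 per unit.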
In step (A), the classified data form a non-classification-problem data set.
A language cognition model refers to a model of linguistic constructions, such as the past-tense construction of English verbs or the plural construction of nouns. The hidden-knowledge discovery for such problems can be described in general as:
Known: the necessary precursor knowledge (the basis for interpreting the coding and the mined hidden knowledge), the premise patterns obtained by observation, and the conclusion patterns obtained by observation, with each premise pattern corresponding to one conclusion pattern.
Hidden knowledge to be found: specific rules that can automatically generate the corresponding conclusion pattern for a given premise.
Result: when a new group of premise patterns is given, the corresponding conclusion patterns can be generated by the discovered rules.
In one embodiment, extracting the hidden knowledge (rules) of the language cognition model of this non-classification problem requires the following five steps:
(1) classify the patterns of the conclusion part by unsupervised learning;
(2) analyze the premise patterns corresponding to the conclusion patterns in these different classes, and divide them into subclasses;
(3) analyze the common feature domain of the premise patterns within each subclass;
(4) identify the shared premise-pattern feature space and conclusion-part feature space within each subclass;
(5) build inference rules of the form "if a given premise pattern has the highest premise-pattern feature-space similarity with some subclass, then its corresponding conclusion pattern should have that subclass's conclusion-pattern feature-space features".
Training-pattern classification:
Unlike a classification problem, the conclusion part of most language cognition models makes only specific modifications to the premise part, while the remaining features stay unchanged; examples include English verbs and their past tenses, and nouns and their plurals. The modifications to the conclusion part are typically regular (i.e. rule-governed). We therefore use an auto-encoder/decoder to classify, without labels, the patterns constructed by the different rules, and use restricted Boltzmann machines (RBMs) to pre-train each layer of the neural network, improving classification accuracy. The neural-network construction used for training-pattern classification is shown in Fig. 1.
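The classification network described above can be sketched with a toy auto-encoder; plain stochastic gradient descent on reconstruction error stands in for RBM contrastive-divergence pretraining, and the sizes and data are illustrative:

```python
import math
import random

random.seed(0)

def sigm(x):
    return 1.0 / (1.0 + math.exp(-x))

class TinyAutoencoder:
    """One-hidden-layer auto-encoder/decoder trained by plain SGD on
    reconstruction error. This is a stand-in for the patent's network:
    a real implementation would pre-train each layer with RBM
    contrastive divergence before fine-tuning."""

    def __init__(self, n_in, n_hid):
        self.w_enc = [[random.uniform(-1, 1) for _ in range(n_hid)]
                      for _ in range(n_in)]
        self.w_dec = [[random.uniform(-1, 1) for _ in range(n_in)]
                      for _ in range(n_hid)]

    def encode(self, x):
        return [sigm(sum(x[i] * self.w_enc[i][h] for i in range(len(x))))
                for h in range(len(self.w_enc[0]))]

    def decode(self, h):
        return [sigm(sum(h[k] * self.w_dec[k][i] for k in range(len(h))))
                for i in range(len(self.w_dec[0]))]

    def train(self, data, epochs=300, lr=0.5):
        for _ in range(epochs):
            for x in data:
                h = self.encode(x)
                y = self.decode(h)
                # deltas for sigmoid units under squared reconstruction error
                dy = [(y[i] - x[i]) * y[i] * (1 - y[i]) for i in range(len(x))]
                dh = [h[k] * (1 - h[k]) *
                      sum(dy[i] * self.w_dec[k][i] for i in range(len(x)))
                      for k in range(len(h))]
                for k in range(len(h)):
                    for i in range(len(x)):
                        self.w_dec[k][i] -= lr * dy[i] * h[k]
                for i in range(len(x)):
                    for k in range(len(h)):
                        self.w_enc[i][k] -= lr * dh[k] * x[i]

def feature_class(ae, x):
    """Class label of a pattern: the thresholded hidden-layer activation
    state, mirroring the patent's use of neuron activation states."""
    return tuple(int(v > 0.5) for v in ae.encode(x))
```

A full implementation of the patent's scheme would stack several such layers and pre-train each with an RBM before fine-tuning the whole auto-encoder/decoder.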
Training-data classification based on the auto-encoder (decoder):
Typical differences in the training data are found using the feature-recognition capability of the auto-encoder (decoder). Once the conclusion patterns of the given training data have been successfully learned, the weights generated by the neural network are retained for use as the conclusion part of the constructed rules. The conclusion patterns themselves, together with their corresponding premise patterns, are then classified into different classes according to the neuron activation states of the final neural-network layer for each conclusion pattern. Finally, a new auto-encoder (decoder) is built for the premise patterns of each class, and subclass division is carried out.
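The class assignment described above can be sketched as a simple grouping step; the `signature` callable here stands in for the thresholded final-layer neuron activation states of the trained network:

```python
from collections import defaultdict

def group_by_signature(pairs, signature):
    """Group (premise, conclusion) training pairs by the activation
    signature of the conclusion pattern, mirroring the classification
    of training data by final-layer neuron activation states."""
    classes = defaultdict(list)
    for premise, conclusion in pairs:
        classes[signature(conclusion)].append((premise, conclusion))
    return dict(classes)
```

Each resulting class would then get its own auto-encoder (decoder) for the subsequent subclass division.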
Finding the feature spaces of premise and conclusion patterns and building rules:
The feature space of the patterns in each class is identified and explained in order to build rules. Since the neuron activation states of the last layer of the neural network constitute the feature space, these feature spaces are explained by identifying the coded bits. Specifically:
1. The training data are substituted into the previously built auto-encoder (decoder), using the previously saved neural-network weights: Net_l = sigm(Net_{l-1} · w_{l-1,l}), where TrainData is the training data and forms the input, w_{12} is the weight from layer 1 to layer 2 of the pre-stored neural network, and w_{j-1,j} is the weight from layer j-1 to layer j; sigm(·) is the sigmoid function and Net_l is the sigmoid output of layer l, so this is a recursive function.
2. The following problem is optimized with a genetic algorithm: find an explanation X minimizing the distance between the feature space and the output of the last interpreter layer Net_t obtained by propagating repmat(X, n, 1) through the network, where n is the number of batches in the training data TrainData, repmat is a function that replicates the matrix X over n rows, and the operator × denotes the Hadamard product; X is an explanation of the feature space. The optimization result of this problem is the closest-approaching explanation of the feature space.
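Step (D2)'s search can be sketched with a toy genetic algorithm; the `forward` callable stands in for propagating repmat(X, n, 1) through the saved interpreter weights, and the population size, generation count and mutation rate are illustrative choices:

```python
import random

def genetic_search(n_bits, target, forward, pop_size=30, gens=60, seed=1):
    """Toy genetic algorithm for step (D2): evolve a 0/1 explanation X
    whose propagated output forward(X) best approaches the target
    feature space."""
    rng = random.Random(seed)

    def fitness(x):
        # negative squared distance to the target feature space
        return -sum((o - t) ** 2 for o, t in zip(forward(x), target))

    popn = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(gens):
        popn.sort(key=fitness, reverse=True)
        survivors = popn[: pop_size // 2]          # elitist selection
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, n_bits)         # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.3:                 # bit-flip mutation
                child[rng.randrange(n_bits)] ^= 1
            children.append(child)
        popn = survivors + children
    return max(popn, key=fitness)
```

With `forward` as the identity the best individual converges toward the target bit vector, which is the behaviour the interpreter's explanation X is meant to have.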
3. The TrainData of formula (1) is substituted with the premise patterns and conclusion patterns of the classified training data respectively, finally obtaining, for each class, a closest-approaching feature-space explanation X_premise for its premise patterns and a feature-space explanation X_conclusion for its conclusion patterns.
4. Build rules:
If the feature space of a given premise pattern satisfies the X_premise of some premise class,
Then construct the corresponding feature positions of that premise pattern's conclusion pattern from the feature space (X_conclusion) of that class's conclusion patterns.
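The If/Then construction above can be sketched as follows; representing an explanation as a vector whose unconstrained bits are `None` is an assumption made for illustration:

```python
def satisfies(features, x_premise):
    """True when every constrained bit of the class explanation X_premise
    matches the premise's feature bits (None marks an unconstrained bit)."""
    return all(c is None or f == c for f, c in zip(features, x_premise))

def build_conclusion(features, rules):
    """The If/Then construction: for the first premise class whose
    X_premise the given feature space satisfies, return the conclusion
    feature bits X_conclusion of that class; otherwise None."""
    for x_premise, x_conclusion in rules:
        if satisfies(features, x_premise):
            return x_conclusion
    return None
```

In the full method the returned X_conclusion fixes only the corresponding feature positions of the conclusion pattern; the remaining positions follow the premise pattern.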
In another embodiment, the hidden-knowledge extraction problem is described for the language cognition model of English verbs and their past tenses. This part explains the working principle of the invention with the English-verb cognition model. In the language cognition model of English verbs and their past tenses, the rule used by most verbs is to add -ed after the verb to form its past tense; beyond that, past-tense construction also has special cases, such as an arbitrary association between root and past-tense form (go-went), root and past tense unchanged (hit-hit), and changing the vowel part of the root to form the past tense (sing-sang, ring-rang). The hidden-knowledge extraction for this language cognition model can be described as:
Known: a group of verbs and their past tenses used for training.
Hidden knowledge to be found: the rules for the different constructions of verbs and their past tenses.
Result: when a new group of verbs is given, the corresponding past tenses can be generated by the discovered rules.
In this example, the data set consists of 508 three-letter verbs and their past tenses, including 410 regular verbs and their past tenses (i.e. verbs taking the -ed suffix), 68 irregular verbs with vowel changes and their past tenses, 10 arbitrarily associated verbs and their past tenses, and 20 unchanged verbs and their past tenses. The coding uses the 19-feature linguistic encoding mechanism of Plunkett & Marchman, i.e. each verb is given a 19-bit 0/1 binary coding; the encoding scheme is shown in Fig. 2.
The following rules exist in this embodiment:
(1) Rule 1:
If (x15 = 0 and x24 = 1 and x40 = 1)
Then (y2 = 1 and y3 = 0 and y7 = 1 and y20 = 1 and y22 = 1 and y23 = 1 and y24 = 1 and y40 = 1)
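Rule 1 can be written out directly; the 1-based bit indices follow the rule as printed, and carrying unnamed bits over from the premise follows the earlier observation that conclusion patterns modify only specific features (an assumption for the bits rule 1 does not name):

```python
def rule1(x):
    """Rule 1 of the embodiment. x is the 1-based binary feature vector
    of the premise (index 0 of the list is unused). Bits not named by
    the rule are assumed to be carried over unchanged from the premise."""
    if not (x[15] == 0 and x[24] == 1 and x[40] == 1):
        return None  # rule 1 does not fire for this premise
    y = list(x)
    for i in (2, 7, 20, 22, 23, 24, 40):
        y[i] = 1
    y[3] = 0
    return y
```

A premise whose bits violate the If-part (for example x15 = 1) simply falls through to the next rule in the rule base.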
(2) rule 2:
The hidden-content extraction method used by the present invention can quickly extract construction rules, further solving the non-classification rule-extraction problem of language cognition models.
In another embodiment, a wavelet neural network composed of an input layer, a hidden layer and an output layer, a three-layer topology that can approximate arbitrary nonlinear mappings with no connections between neurons within the same layer, is used to establish a forecast model as follows: learning samples are input; the outputs of the hidden and output layers, the error and the gradient vector are computed with the current network; the learning algorithm is evaluated, and learning stops when the function value set by the algorithm falls below a preset accuracy value, otherwise learning continues; simulation and prediction are then carried out with MATLAB tools to obtain the price prediction result. Through collecting and analyzing the feature-space identification, data mining is used to carry out data preparation, model building, data prediction and conclusion statement, so as to conduct a series of studies and achieve the aims of this research: the raw data are first pre-processed, and grey relational analysis is used to compute the relational degree of each influencing factor with respect to the price, thereby determining the key influencing factors; the wavelet-neural-network method then performs simulation and prediction through MATLAB tools to obtain the prediction result.
The above is a further detailed description of the present invention in combination with specific preferred embodiments, and the specific implementation of the invention is not to be regarded as confined to these descriptions. For ordinary technical personnel in the technical field of the invention, simple deductions or substitutions made without departing from the inventive concept should all be regarded as falling within the protection scope of the present invention.

Claims (7)

1. A new method for extracting hidden knowledge from a language cognition model, characterized by comprising the following steps: (A) classifying the patterns of the conclusion part by unsupervised learning; (B) analyzing the common feature domain of the premise patterns; (C) identifying the premise-pattern feature space and the conclusion-part feature space; (D) building inference rules.
2. The method for extracting hidden knowledge from a language cognition model according to claim 1, characterized in that: in step (A), the patterns constructed by different rules are classified without labels, and each layer of the neural network is pre-trained.
3. The method for extracting hidden knowledge from a language cognition model according to claim 2, characterized in that: the premise patterns corresponding to the conclusion patterns in different classes are analyzed, and these classes are divided into subclasses.
4. The method for extracting hidden knowledge from a language cognition model according to claim 1, characterized in that: in step (B), the common feature domain of the premise patterns of the subclasses formed by the classification is analyzed.
5. The method for extracting hidden knowledge from a language cognition model according to claim 1, characterized in that: in step (C), the premise-pattern feature space and the conclusion-part feature space of the subclasses formed by the classification are identified.
6. The method for extracting hidden knowledge from a language cognition model according to claim 1, characterized in that: in step (D), inference rules are built according to the similarity between a given premise pattern and the premise-pattern feature space of a subclass.
7. The method for extracting hidden knowledge from a language cognition model according to claim 1, characterized in that: in step (D), rules are built as follows: (D1) the training data are substituted into the constructed auto-encoder/decoder, using its saved neural-network weights; (D2) optimization is performed to obtain the explanation that best approaches the feature space; (D3) a closest-approaching feature-space explanation for each class's premise patterns and a feature-space explanation for its conclusion patterns are obtained; (D4) rules are built in the form: if the feature space of a given premise pattern satisfies the space explanation of some premise class, then the corresponding feature positions of that premise pattern's conclusion pattern are constructed from the feature space of that class's conclusion patterns.
CN201710319420.XA 2017-05-09 2017-05-09 A new method for extracting hidden knowledge from language cognition models Pending CN107193895A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710319420.XA CN107193895A (en) 2017-05-09 2017-05-09 A new method for extracting hidden knowledge from language cognition models

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710319420.XA CN107193895A (en) 2017-05-09 2017-05-09 A new method for extracting hidden knowledge from language cognition models

Publications (1)

Publication Number Publication Date
CN107193895A true CN107193895A (en) 2017-09-22

Family

ID=59872945

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710319420.XA Pending CN107193895A (en) 2017-05-09 2017-05-09 A new method for extracting hidden knowledge from language cognition models

Country Status (1)

Country Link
CN (1) CN107193895A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109635171A (en) * 2018-12-13 2019-04-16 Chengdu Sobey Digital Technology Co., Ltd. Fusion reasoning system and method for intelligent tags of news programs
CN109670184A (en) * 2018-12-26 2019-04-23 南京题麦壳斯信息科技有限公司 English article quality assessment method and system
CN110119446A (en) * 2018-02-05 2019-08-13 Accenture Global Solutions Limited Interpretable artificial intelligence

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101710393A (en) * 2009-11-25 2010-05-19 Beihang University Method for knowledge representation and reasoning mechanism of an expert system
CN106446022A (en) * 2016-08-29 2017-02-22 East China Normal University Natural-language knowledge mining method based on formal semantic reasoning and deep learning


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
JUAN YANG et al.: "A Hidden Knowledge Discovering Approach for Past Tense and Plural Problems to Language Cognition", 2016 12th International Conference on Semantics, Knowledge and Grids (SKG) *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110119446A (en) * 2018-02-05 2019-08-13 Accenture Global Solutions Limited Interpretable artificial intelligence
CN109635171A (en) * 2018-12-13 2019-04-16 Chengdu Sobey Digital Technology Co., Ltd. Fusion reasoning system and method for intelligent tags of news programs
CN109635171B (en) * 2018-12-13 2022-11-29 Chengdu Sobey Digital Technology Co., Ltd. Fusion reasoning system and method for intelligent tags of news programs
CN109670184A (en) * 2018-12-26 2019-04-23 南京题麦壳斯信息科技有限公司 English article quality assessment method and system
CN109670184B (en) * 2018-12-26 2023-07-04 南京题麦壳斯信息科技有限公司 English article quality assessment method and system


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20170922