CN110196909A - Text denoising method and device based on reinforcement learning - Google Patents
Text denoising method and device based on reinforcement learning
- Publication number
- CN110196909A CN110196909A CN201910400091.0A CN201910400091A CN110196909A CN 110196909 A CN110196909 A CN 110196909A CN 201910400091 A CN201910400091 A CN 201910400091A CN 110196909 A CN110196909 A CN 110196909A
- Authority
- CN
- China
- Prior art keywords
- text
- feature
- default
- reinforcement learning
- network
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/30—Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
- G06F16/35—Clustering; Classification
- G06F16/353—Clustering; Classification into predefined classes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/20—Natural language analysis
- G06F40/279—Recognition of textual entities
- G06F40/289—Phrasal analysis, e.g. finite state techniques or chunking
Abstract
This application discloses a text denoising method and device based on reinforcement learning. The method includes: training a first preset network model with text features; inputting text to be processed into the first preset network model and identifying the noise words in the text to be processed; and inputting the denoising result into a second preset classification model to obtain a text classification result. The application addresses the technical problem of inaccurate text-processing results. By removing the words that are noise for the task through reinforcement learning and training a conventional classification model on the denoised words, the classification accuracy is improved.
Description
Technical field
This application relates to the fields of text processing and reinforcement learning, and in particular to a text denoising method and device based on reinforcement learning.
Background technique
When performing intention recognition on text, natural language often contains content that is irrelevant to the intention, i.e., text noise.
The inventors have found that using the same stop-word list for different intentions easily leads to inaccurate text-processing results, which in turn affects intention prediction.
No effective solution has yet been proposed for the problem of inaccurate text-processing results in the related art.
Summary of the invention
The main purpose of this application is to provide a text denoising method and device based on reinforcement learning, to solve the problem of inaccurate text-processing results.
To achieve the above goal, according to one aspect of this application, a text denoising method based on reinforcement learning is provided.
The text denoising method based on reinforcement learning according to this application includes: training a first preset network model with text features; inputting text to be processed into the first preset network model and identifying the noise words in the text to be processed; and inputting the denoising result into a second preset classification model to obtain a text classification result.
Further, training the first preset network model with text features includes: inputting the request to be classified into a preset neural network model and extracting the word-vector features of the text; and taking the word-vector features as the input of a policy network model based on reinforcement learning, which outputs an execution action for each vector feature.
Further, after the execution actions are output for the vector features, the method further includes: retaining the words that satisfy the preset execution-action result and feeding them back into the policy network model based on reinforcement learning, to obtain the denoised request features to be classified; and taking the denoised request features to be classified as the input of a preset classification model, which outputs the classification result.
Further, after the denoising result is input into the second preset classification model to obtain the text classification result, the method further includes: judging whether the text classification result is correct; if it is correct, cumulatively increasing the feedback of the first preset network model based on reinforcement learning; and if it is incorrect, cumulatively decreasing the feedback of the first preset network model based on reinforcement learning.
Further, training the first preset network model with text features includes: inputting the text to be classified, used as the training set, into a reinforcement-learning feature-extraction network to obtain text features; taking the text features as the input of a policy network so that the policy network learns the corpus information of the text to be classified; and training the policy network with the text features.
To achieve the above goal, according to another aspect of this application, a text denoising device based on reinforcement learning is provided.
The text denoising device based on reinforcement learning according to this application includes: a training module for training a preset network model based on reinforcement learning with text features; a denoising module for inputting text to be processed into the preset network model and identifying the noise words in the text to be processed; and a classification module for inputting the denoising result into a preset classification model to obtain a text classification result.
Further, the training module includes: a feature-extraction unit for inputting the request to be classified into a preset neural network model and extracting the word-vector features of the text; and an action output unit for taking the word-vector features as the input of the policy network model based on reinforcement learning and outputting execution actions for the vector features.
Further, the device further includes a processing module for retaining the words that satisfy the preset execution-action result, feeding them back into the policy network model based on reinforcement learning to obtain the denoised request features to be classified, and taking the denoised request features to be classified as the input of the preset classification model to output the classification result.
Further, the device further includes a feedback module, which includes: a judging unit for judging whether the text classification result is correct; an increasing unit for cumulatively increasing the feedback of the first preset network model based on reinforcement learning when the result is correct; and a decreasing unit for cumulatively decreasing the feedback of the first preset network model based on reinforcement learning when the result is incorrect.
Further, the training module includes: a text-feature unit for inputting the text to be classified, used as the training set, into a reinforcement-learning feature-extraction network to obtain text features; a policy unit for taking the text features as the input of the policy network so that the policy network learns the corpus information of the text to be classified; and a training unit for training the policy network with the text features.
In the text denoising method and device based on reinforcement learning of the embodiments of this application, a preset network model based on reinforcement learning is trained with text features; text to be processed is input into the preset network model and the noise words in the text to be processed are identified; and the denoising result is input into a preset classification model to obtain a text classification result. This achieves the technical effect of removing the words that are noise for the task and improving the accuracy and recognition rate of the intention-recognition task, thereby solving the technical problem of inaccurate text-processing results.
Description of the drawings
The accompanying drawings, which form part of this application, are provided for further understanding of this application, so that its other features, objects, and advantages become more apparent. The illustrative drawings of this application and their descriptions are used to explain this application and do not constitute an improper limitation of it. In the drawings:
Fig. 1 is a schematic flowchart of the text denoising method based on reinforcement learning according to the first embodiment of this application;
Fig. 2 is a schematic flowchart of the text denoising method based on reinforcement learning according to the second embodiment of this application;
Fig. 3 is a schematic flowchart of the text denoising method based on reinforcement learning according to the third embodiment of this application;
Fig. 4 is a schematic flowchart of the text denoising method based on reinforcement learning according to the fourth embodiment of this application;
Fig. 5 is a schematic flowchart of the text denoising method based on reinforcement learning according to the fifth embodiment of this application;
Fig. 6 is a schematic structural diagram of the text denoising device based on reinforcement learning according to the first embodiment of this application;
Fig. 7 is a schematic structural diagram of the text denoising device based on reinforcement learning according to the second embodiment of this application;
Fig. 8 is a schematic structural diagram of the text denoising device based on reinforcement learning according to the third embodiment of this application;
Fig. 9 is a schematic structural diagram of the text denoising device based on reinforcement learning according to the fourth embodiment of this application;
Fig. 10 is a schematic structural diagram of the text denoising device based on reinforcement learning according to the fifth embodiment of this application.
Specific embodiments
In order to enable those skilled in the art to better understand the solution of this application, the technical solutions in the embodiments of this application are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, rather than all, of the embodiments of this application. Based on the embodiments in this application, all other embodiments obtained by those of ordinary skill in the art without creative work shall fall within the scope of protection of this application.
It should be noted that the terms "first", "second", etc. in the description, the claims, and the above drawings of this application are used to distinguish similar objects, not to describe a particular order or sequence. It should be understood that data used in this way are interchangeable where appropriate, so that the embodiments of this application described here can be implemented. In addition, the terms "comprising" and "having", and any variations of them, are intended to cover non-exclusive inclusion; for example, a process, method, system, product, or device containing a series of steps or units is not necessarily limited to the steps or units explicitly listed, and may include other steps or units that are not explicitly listed or that are inherent to the process, method, product, or device.
In this application, terms such as "upper", "lower", "left", "right", "front", "rear", "top", "bottom", "inner", "outer", "middle", "vertical", "horizontal", "transverse", and "longitudinal" indicate orientations or positional relationships based on the drawings. These terms are used primarily to better describe this application and its embodiments, and are not intended to limit the indicated devices, elements, or components to a particular orientation, or to require that they be constructed and operated in a particular orientation.
Moreover, some of these terms may also express meanings other than orientation or positional relationship; for example, the term "upper" may in some cases indicate a dependency or connection relationship. For those of ordinary skill in the art, the specific meanings of these terms in this application can be understood according to the specific situation.
In addition, the terms "mounted", "provided", "equipped with", "connected", "coupled", and "socketed" should be understood broadly. For example, a connection may be fixed, detachable, or integral; it may be mechanical or electrical; it may be direct, indirect through an intermediary, or an internal connection between two devices, elements, or components.
For those of ordinary skill in the art, the specific meanings of the above terms in this application can be understood according to the specific situation.
It should be noted that, in the absence of conflict, the embodiments of this application and the features in the embodiments can be combined with each other. This application is described in detail below with reference to the accompanying drawings and in conjunction with the embodiments.
The text denoising method based on reinforcement learning in this application mainly involves a policy network model and a classification network model. The words that are noise for the task are removed based on reinforcement learning, and a conventional classification model is trained on the denoised words, improving the classification accuracy.
As shown in Fig. 1, the method includes the following steps S102 to S106:
Step S102: training a first preset network model with text features.
The first preset network model based on reinforcement learning is obtained by training with the text features.
The text features can be extracted with a known model.
By passing the text features into the first preset network model based on reinforcement learning, processing actions on the text features can be output, and the first preset network model based on reinforcement learning judges which words are noise words.
Through the above process, the training of the preset network model based on reinforcement learning is completed.
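As a minimal sketch of the two-stage idea described above, the following Python fragment separates the two roles; the names `denoise`, `policy`, and `classify` are illustrative, not from the patent:

```python
def denoise(words, policy):
    # The policy judges each word: 1 = retain, 0 = noise (discard).
    return [w for w in words if policy(w) == 1]

def classify(denoised_words, classifier):
    # The second preset model consumes the denoised text.
    return classifier(denoised_words)
```

Here `policy` stands in for the trained first preset network model and `classifier` for the second preset classification model; either can be swapped for a real trained model without changing the pipeline shape.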
Step S104: inputting text to be processed into the first preset network model and identifying the noise words in the text to be processed.
The text to be processed is input into the first preset network model, which judges the noise words in the text to be processed.
The request features are used as the input of the first preset network model, so that the first preset network model can capture the corpus information in the user request.
The first preset network model serves as a policy network for judging which words in the text are noise.
Step S106: inputting the denoising result into a second preset classification model to obtain a text classification result.
The denoised text-feature result is input into the second preset classification model to obtain the text classification result.
The second preset classification model serves as a classifier that classifies the text.
It should be noted that different classifiers can be selected for the second preset classification model according to the actual use scenario.
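Since the second preset classification model is left pluggable, one hedged stand-in (a choice made here for illustration, not specified by the patent) is a nearest-centroid classifier over pooled text-feature vectors:

```python
def nearest_centroid(vec, centroids):
    # Toy second-stage classifier: assign the denoised text vector to the
    # class whose centroid is closest in squared Euclidean distance.
    best_label, best_dist = None, float("inf")
    for label, center in centroids.items():
        dist = sum((a - b) ** 2 for a, b in zip(vec, center))
        if dist < best_dist:
            best_label, best_dist = label, dist
    return best_label
```

Any conventional classifier with the same "features in, label out" interface could replace it, which is exactly the flexibility the paragraph above describes.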
It can be seen from the above description that this application achieves the following technical effects:
In the embodiments of this application, a preset network model based on reinforcement learning is trained with text features; text to be processed is input into the preset network model, and the noise words in the text to be processed are identified; and the denoising result is input into a preset classification model to obtain a text classification result. This removes the words that are noise for the task and improves the accuracy and recognition rate of the intention-recognition task, thereby solving the technical problem of inaccurate text-processing results.
According to an embodiment of this application, as preferred in this implementation, as shown in Fig. 2, training the first preset network model with text features includes:
Step S202: inputting the request to be classified into a preset neural network model and extracting the word-vector features of the text.
By inputting the text request to be classified into the preset neural network model, the word-vector features of the text can be extracted.
Preferably, the preset neural network model can be a BERT network, and the features of the sentences in the text are extracted through the BERT network.
Alternatively, the preset neural network model can be an LSTM network, and the features of the sentences in the text are extracted through the LSTM network.
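The patent assumes a BERT or LSTM encoder here. As a purely self-contained stand-in, not a real encoder, the following deterministic hash-based pseudo-embedding illustrates the word-vector interface the policy network consumes:

```python
import hashlib

def word_vectors(words, dim=8):
    # Toy stand-in for a BERT/LSTM encoder: each word gets a fixed
    # pseudo-embedding derived from its MD5 digest, scaled to [0, 1].
    vecs = []
    for w in words:
        digest = hashlib.md5(w.encode("utf-8")).digest()
        vecs.append([b / 255.0 for b in digest[:dim]])
    return vecs
```

In a real system this function would be replaced by the BERT or LSTM forward pass; only the "one vector per word" contract matters downstream.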
Step S204: taking the word-vector features as the input of the policy network model based on reinforcement learning, which outputs execution actions for the vector features.
The word-vector features are used as the input of the policy network model based on reinforcement learning, which outputs an execution action for each vector feature. Further training yields a reinforcement-learning policy network model that judges which words in a sentence are noise.
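A minimal sketch of such a policy network, assuming a single logistic retain/discard head over each word vector (the class name and initialization are illustrative, not from the patent):

```python
import math
import random

class PolicyNetwork:
    """Minimal fully connected policy: word vector -> retain/discard action."""

    def __init__(self, dim, seed=0):
        rng = random.Random(seed)
        self.w = [rng.uniform(-0.5, 0.5) for _ in range(dim)]
        self.b = 0.0

    def prob_retain(self, vec):
        # Sigmoid over one linear layer gives P(action = retain).
        z = sum(wi * xi for wi, xi in zip(self.w, vec)) + self.b
        return 1.0 / (1.0 + math.exp(-z))

    def action(self, vec):
        # Greedy decision: 1 = retain the word, 0 = discard it as noise.
        return 1 if self.prob_retain(vec) >= 0.5 else 0
```

The patent later notes the policy can be an ordinary fully connected network; a deeper multi-layer head would drop in here without changing the action interface.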
According to an embodiment of this application, as preferred in this implementation, as shown in Fig. 3, after the execution actions are output for the vector features, the method further includes:
Step S206: retaining the words that satisfy the preset execution-action result and feeding them back into the policy network model based on reinforcement learning, to obtain the denoised request features to be classified;
Step S208: taking the denoised request features to be classified as the input of a preset classification model, which outputs the classification result.
Specifically, the input of the policy network is the query feature extracted by the BERT or LSTM network, and the output is an action for each word in the query. There are two possible actions: retain or discard. Considering that the importance of each word differs across tasks, the strategy of removing noise words is realized by the reinforcement-learning model.
Preferably, the policy network can be an ordinary fully connected network.
Further, the words whose action is 1 (retain) are kept and fed into the BERT or LSTM network again to extract the denoised query text features.
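The retain step described above (keep the words whose action is 1, then re-encode the survivors) can be sketched as:

```python
def apply_actions(words, actions):
    # Keep the words whose action is 1 (retain); action 0 marks noise
    # words, which are dropped before the text is re-encoded.
    if len(words) != len(actions):
        raise ValueError("one action is required per word")
    return [w for w, a in zip(words, actions) if a == 1]
```

The filtered word list would then be passed back through the BERT or LSTM encoder to produce the denoised query features for the classifier.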
According to an embodiment of this application, as preferred in this implementation, as shown in Fig. 4, after the denoising result is input into the second preset classification model to obtain the text classification result, the method further includes:
Step S302: judging whether the text classification result is correct;
Step S304: if the text classification result is correct, cumulatively increasing the feedback of the first preset network model based on reinforcement learning;
Step S306: if the text classification result is incorrect, cumulatively decreasing the feedback of the first preset network model based on reinforcement learning.
Whether the text classification result is correct is judged. When it is correct, the feedback of the first preset network model based on reinforcement learning is cumulatively increased, i.e., the feedback is +1. When it is incorrect, the feedback of the first preset network model based on reinforcement learning is cumulatively decreased, i.e., the feedback is -1.
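The ±1 feedback rule above can be written as a one-line cumulative update (the function and argument names are illustrative):

```python
def update_feedback(total, predicted_label, gold_label):
    # Cumulatively increase (+1) when the downstream classifier is correct
    # on the denoised text, cumulatively decrease (-1) when it is not.
    return total + (1 if predicted_label == gold_label else -1)
```

The running total plays the role of the reward signal that drives the policy network's training.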
According to an embodiment of this application, as preferred in this implementation, as shown in Fig. 5, training the first preset network model with text features includes:
Step S402: inputting the text to be classified, used as the training set, into a reinforcement-learning feature-extraction network to obtain text features;
Step S404: taking the text features as the input of the policy network, so that the policy network learns the corpus information of the text to be classified;
Step S406: training the policy network with the text features.
Specifically, the user request to be classified is passed into a BERT or LSTM network model to extract the sentence features. The input of the policy network is the query feature extracted by the BERT or LSTM network model, and the output is an action, retain or discard, for each word in the user input text. The user input text features are used as the input of the policy network, which can capture the corpus information of the user request. The words whose action is 1 are retained and fed into the BERT or LSTM network model again to obtain the features of the denoised input text.
It should be noted that the steps shown in the flowcharts of the drawings can be executed in a computer system such as a set of computer-executable instructions, and although a logical order is shown in the flowcharts, in some cases the steps shown or described can be executed in an order different from that given here.
According to an embodiment of this application, a text denoising device based on reinforcement learning for implementing the above method is also provided. As shown in Fig. 6, the device includes: a training module 10 for training a preset network model based on reinforcement learning with text features; a denoising module 20 for inputting text to be processed into the preset network model and identifying the noise words in the text to be processed; and a classification module 30 for inputting the denoising result into a preset classification model to obtain a text classification result.
In the training module 10 of this embodiment, the first preset network model based on reinforcement learning is obtained by training with the text features.
The text features can be extracted with a known model.
By passing the text features into the first preset network model based on reinforcement learning, processing actions on the text features can be output, and the first preset network model based on reinforcement learning judges which words are noise words.
Through the above process, the training of the preset network model based on reinforcement learning is completed.
In the denoising module 20 of this embodiment, the text to be processed is input into the first preset network model, which judges the noise words in the text to be processed.
The request features are used as the input of the first preset network model, so that it can capture the corpus information in the user request.
The first preset network model serves as a policy network for judging which words in the text are noise.
In the classification module 30 of this embodiment, the denoised text-feature result is input into the second preset classification model to obtain the text classification result.
The second preset classification model serves as a classifier that classifies the text.
It should be noted that different classifiers can be selected for the second preset classification model according to the actual use scenario.
According to an embodiment of this application, as preferred in this implementation, as shown in Fig. 7, the training module includes: a feature-extraction unit 101 for inputting the request to be classified into a preset neural network model and extracting the word-vector features of the text; and an action output unit 102 for taking the word-vector features as the input of the policy network model based on reinforcement learning and outputting execution actions for the vector features.
In the feature-extraction unit 101 of this embodiment, the text request to be classified is input into the preset neural network model, and the word-vector features of the text can be extracted.
Preferably, the preset neural network model can be a BERT network, and the features of the sentences in the text are extracted through the BERT network.
Alternatively, the preset neural network model can be an LSTM network, and the features of the sentences in the text are extracted through the LSTM network.
In the action output unit 102 of this embodiment, the word-vector features are used as the input of the policy network model based on reinforcement learning, which outputs execution actions for the vector features. Further training yields a reinforcement-learning policy network model that judges which words in a sentence are noise.
According to an embodiment of this application, as preferred in this implementation, as shown in Fig. 8, the device further includes a processing module 40 for retaining the words that satisfy the preset execution-action result, feeding them back into the policy network model based on reinforcement learning to obtain the denoised request features to be classified, and taking the denoised request features to be classified as the input of the preset classification model to output the classification result.
In the processing module 40 of this embodiment, specifically, the input of the policy network is the query feature extracted by the BERT or LSTM network, and the output is an action for each word in the query. There are two possible actions: retain or discard. Considering that the importance of each word differs across tasks, the strategy of removing noise words is realized by the reinforcement-learning model.
Preferably, the policy network can be an ordinary fully connected network.
Further, the words whose action is 1 (retain) are kept and fed into the BERT or LSTM network again to extract the denoised query text features.
According to an embodiment of this application, as preferred in this implementation, as shown in Fig. 9, the device further includes a feedback module 50, which includes: a judging unit 501 for judging whether the text classification result is correct; an increasing unit 502 for cumulatively increasing the feedback of the first preset network model based on reinforcement learning when the result is correct; and a decreasing unit 503 for cumulatively decreasing the feedback of the first preset network model based on reinforcement learning when the result is incorrect.
In this embodiment, whether the text classification result is correct is judged. When it is correct, the feedback of the first preset network model based on reinforcement learning is cumulatively increased, i.e., the feedback is +1. When it is incorrect, the feedback is cumulatively decreased, i.e., the feedback is -1.
According to an embodiment of this application, as preferred in this implementation, as shown in Fig. 10, the training module includes: a text-feature unit 103 for inputting the text to be classified, used as the training set, into a reinforcement-learning feature-extraction network to obtain text features; a policy unit 104 for taking the text features as the input of the policy network so that the policy network learns the corpus information of the text to be classified; and a training unit 105 for training the policy network with the text features.
In this embodiment, specifically, the user request to be classified is passed into a BERT or LSTM network model to extract the sentence features. The input of the policy network is the query feature extracted by the BERT or LSTM network model, and the output is an action, retain or discard, for each word in the user input text. The user input text features are used as the input of the policy network, which can capture the corpus information of the user request. The words whose action is 1 are retained and fed into the BERT or LSTM network model again to obtain the features of the denoised input text.
Obviously, those skilled in the art should understand that the above modules or steps of this application can be implemented with a general-purpose computing device; they can be concentrated on a single computing device or distributed over a network formed by multiple computing devices; optionally, they can be implemented with program code executable by a computing device, so that they can be stored in a storage device and executed by the computing device, or they can be made into individual integrated-circuit modules, or multiple modules or steps among them can be made into a single integrated-circuit module. In this way, this application is not limited to any specific combination of hardware and software.
The above are only preferred embodiments of this application and are not intended to limit it; for those skilled in the art, various changes and variations of this application are possible. Any modification, equivalent replacement, improvement, etc. made within the spirit and principles of this application shall be included within the scope of protection of this application.
Claims (10)
1. A text denoising method based on reinforcement learning, characterized by comprising:
training a first preset network model with text features;
inputting text to be processed into the first preset network model and identifying the noise words in the text to be processed; and
inputting the denoising result into a second preset classification model to obtain a text classification result.
2. The text denoising method according to claim 1, characterized in that training the first preset network model with text features comprises:
inputting a request to be classified into a preset neural network model and extracting the word-vector features of the text; and
taking the word-vector features as the input of a policy network model based on reinforcement learning, and outputting execution actions for the vector features.
3. The text denoising method according to claim 2, characterized in that, after the execution actions are output for the vector features, the method further comprises:
retaining the words that satisfy the preset execution-action result and feeding them back into the policy network model based on reinforcement learning, to obtain the denoised request features to be classified; and
taking the denoised request features to be classified as the input of a preset classification model, and outputting the classification result.
4. The text denoising method according to claim 1, characterized in that, after the denoising result is input into the second preset classification model to obtain the text classification result, the method further comprises:
judging whether the text classification result is correct;
if the text classification result is correct, cumulatively increasing the feedback of the first preset network model based on reinforcement learning; and
if the text classification result is incorrect, cumulatively decreasing the feedback of the first preset network model based on reinforcement learning.
5. The text denoising method according to claim 1, wherein training the first preset network model with text features comprises:
inputting the text to be classified, serving as the training set, into a reinforcement-learning feature-extraction network to obtain text features;
taking the text features as the input of a policy network, so that the policy network acquires the corpus information in the text to be classified;
training the policy network with the text features.
6. A text denoising device based on reinforcement learning, characterized by comprising:
a training module, configured to train a preset network model based on reinforcement learning with text features;
a denoising module, configured to input text to be processed into the preset network model and identify the noise words in the text to be processed;
a classification module, configured to input the denoising result into a preset classification model to obtain a text classification result.
7. The text denoising device according to claim 6, wherein the training module comprises:
a feature extraction unit, configured to input a request to be classified into a preset neural network model and extract the word-vector features in the text;
an action output unit, configured to take the word-vector features as the input of a policy network model based on reinforcement learning and output actions to be executed on the vector features.
8. The text denoising device according to claim 6, further comprising a processing module, configured to retain the words that conform to the preset action-execution result and feed them back into the policy network model based on reinforcement learning to obtain the denoised features of the request to be classified, and to take the denoised features of the request to be classified as the input of a preset classification model and output the classification result.
9. The text denoising device according to claim 6, further comprising a feedback module, wherein the feedback module comprises:
a judging unit, configured to judge whether the text classification result is correct;
an increasing unit, configured to cumulatively increase the feedback of the first preset network model based on reinforcement learning when the text classification result is judged to be correct;
a decreasing unit, configured to cumulatively decrease the feedback of the first preset network model based on reinforcement learning when the text classification result is judged to be incorrect.
10. The text denoising device according to claim 6, wherein the training module comprises:
a text feature unit, configured to input the text to be classified, serving as the training set, into a reinforcement-learning feature-extraction network to obtain text features;
a policy unit, configured to take the text features as the input of a policy network, so that the policy network acquires the corpus information in the text to be classified;
a training unit, configured to train the policy network with the text features.
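Claims 1-5 describe a pipeline in which a reinforcement-learning policy decides, per word vector, whether to keep or drop each word, the surviving words are classified by a fixed downstream model, and the classification correctness is fed back as accumulated reward. The following is a minimal illustrative sketch of that loop, not the patented implementation: the vocabulary, word vectors, linear keep/drop policy, stand-in classifier, and hyperparameters are all assumptions, and a small length penalty is added to the reward (a common practical choice that the claims do not state).

```python
import math
import random

random.seed(0)

# Hypothetical toy word-vector table standing in for the preset
# feature-extraction network of claims 2 and 5.
EMBED = {
    "refund": [1.0, 0.2, 0.1], "broken": [0.9, 0.1, 0.2],
    "order":  [0.8, 0.3, 0.0],
    "please": [0.1, 0.9, 0.8], "umm":    [0.0, 1.0, 0.9],
    "hello":  [0.1, 0.8, 1.0],
}

def sigmoid(x):
    # Numerically safe logistic function.
    if x >= 0:
        return 1.0 / (1.0 + math.exp(-x))
    z = math.exp(x)
    return z / (1.0 + z)

class Policy:
    """First preset network model: per word vector, a keep(1)/drop(0) action."""
    def __init__(self, dim=3):
        self.w = [0.0] * dim  # linear policy weights

    def keep_prob(self, vec):
        return sigmoid(sum(wi * vi for wi, vi in zip(self.w, vec)))

    def act(self, words):
        # Output one execute-action per word vector (claim 2).
        return [1 if random.random() < self.keep_prob(EMBED[w]) else 0
                for w in words]

def classify(words):
    """Second preset classification model (kept fixed here): labels the
    request a 'complaint' if any content-bearing word survives denoising."""
    return "complaint" if any(w in ("refund", "broken", "order") for w in words) else "other"

def train(policy, samples, epochs=300, lr=0.1):
    for _ in range(epochs):
        for words, gold in samples:
            actions = policy.act(words)
            kept = [w for w, a in zip(words, actions) if a == 1]
            # Claim 4: increase the accumulated feedback when the downstream
            # classification is correct, decrease it when it is not.
            reward = 1.0 if classify(kept) == gold else -1.0
            reward -= 0.1 * len(kept)  # assumed length penalty (not in the claims)
            # REINFORCE update for the per-word Bernoulli keep/drop actions.
            for w, a in zip(words, actions):
                p = policy.keep_prob(EMBED[w])
                for i, v in enumerate(EMBED[w]):
                    policy.w[i] += lr * reward * (a - p) * v

samples = [(["please", "umm", "refund", "broken"], "complaint"),
           (["hello", "umm", "please"], "other")]
policy = Policy()
train(policy, samples)
# After training, content words should be kept more readily than filler words.
print(policy.keep_prob(EMBED["refund"]), policy.keep_prob(EMBED["umm"]))
```

Under these assumptions the reward signal alone is enough to separate content words from noise words, which is the effect claims 4 and 5 rely on: correct classifications reinforce the keep decisions that produced them, incorrect ones discourage them.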
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910400091.0A CN110196909B (en) | 2019-05-14 | 2019-05-14 | Text denoising method and device based on reinforcement learning |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910400091.0A CN110196909B (en) | 2019-05-14 | 2019-05-14 | Text denoising method and device based on reinforcement learning |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110196909A true CN110196909A (en) | 2019-09-03 |
CN110196909B CN110196909B (en) | 2022-05-31 |
Family
ID=67752806
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910400091.0A Active CN110196909B (en) | 2019-05-14 | 2019-05-14 | Text denoising method and device based on reinforcement learning |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110196909B (en) |
Citations (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104254122A (en) * | 2006-10-31 | 2014-12-31 | Qualcomm Incorporated | Inter-cell power control in presence of fractional frequency reuse |
US9092802B1 (en) * | 2011-08-15 | 2015-07-28 | Ramakrishna Akella | Statistical machine learning and business process models systems and methods |
US9189730B1 (en) * | 2012-09-20 | 2015-11-17 | Brain Corporation | Modulated stochasticity spiking neuron network controller apparatus and methods |
US20160005401A1 (en) * | 2011-11-21 | 2016-01-07 | Zero Labs, Inc. | Engine for human language comprehension of intent and command execution |
CN105786798A (en) * | 2016-02-25 | 2016-07-20 | Shanghai Jiao Tong University | Natural language intention understanding method in human-computer interaction |
US20170116332A1 (en) * | 2014-06-20 | 2017-04-27 | Nec Corporation | Method for classifying a new instance |
CN108304387A (en) * | 2018-03-09 | 2018-07-20 | Lenovo (Beijing) Co., Ltd. | Method, device, server cluster and storage medium for recognizing noise words in text |
CN109189925A (en) * | 2018-08-16 | 2019-01-11 | South China Normal University | Word vector model based on mutual information and text classification method based on CNN |
CN109299264A (en) * | 2018-10-12 | 2019-02-01 | Shenzhen Niudingfeng Technology Co., Ltd. | Text classification method, device, computer device and storage medium |
CN109359191A (en) * | 2018-09-18 | 2019-02-19 | Sun Yat-sen University | Sentence semantic encoding method based on reinforcement learning |
CN109710770A (en) * | 2019-01-31 | 2019-05-03 | Digital TV Technology Center of Beijing Peony Electronics Group Co., Ltd. | Text classification method and device based on transfer learning |
Non-Patent Citations (3)
Title |
---|
JUN FENG et al.: "Reinforcement Learning for Relation Classification From Noisy Data", The Thirty-Second AAAI Conference on Artificial Intelligence * |
JUN FENG et al.: "Relation mention extraction from noisy data with hierarchical reinforcement learning", https://arxiv.org/abs/1811.01237 * |
李小娟 (Li Xiaojuan): "Research on web page denoising methods based on classification technology", China Master's Theses Full-text Database, Information Science and Technology * |
Also Published As
Publication number | Publication date |
---|---|
CN110196909B (en) | 2022-05-31 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107798390B (en) | Training method and device for a machine learning model, and electronic device | |
CN109918560A (en) | Question answering method and device based on a search engine | |
DE102019218259A1 (en) | Ultrasonic attack detection using deep learning | |
CN108459999B (en) | Font design method, system, device and computer-readable storage medium | |
CN107301170A (en) | Method and apparatus for sentence segmentation based on artificial intelligence | |
CN109635080A (en) | Response strategy generation method and device | |
CN110489524A (en) | Intelligent review method and device for criminal case data | |
CN110059541A (en) | Method and device for detecting mobile phone use while driving | |
CN109902157A (en) | Training sample validity checking method and device | |
CN105956181A (en) | Searching method and apparatus | |
CN109697090A (en) | Method for controlling a terminal device, terminal device and storage medium | |
CN109859747A (en) | Voice interaction method, device and storage medium | |
CN106708807B (en) | Unsupervised word segmentation model training method and device | |
CN106021413A (en) | Topic-model-based self-expanding feature selection method and system | |
CN109597987A (en) | Text restoration method and device, and electronic device | |
CN110196909A (en) | Text denoising method and device based on reinforcement learning | |
CN112492606A (en) | Method and device for classifying and identifying spam messages, computer device and storage medium | |
CN110704611B (en) | Illegal text recognition method and device based on feature disentanglement | |
CN109359650A (en) | Object detection method and device, and embedded device | |
CN109508643A (en) | Image processing method and device for pornographic images | |
CN110020256A (en) | Method and system for identifying harmful videos based on user ID and trailer content | |
CN111539420B (en) | Panoramic image saliency prediction method and system based on attention-aware features | |
CN108959237A (en) | Text classification method, device, medium and device | |
CN109885687A (en) | Text sentiment analysis method and apparatus, electronic device and storage medium | |
CN110941963A (en) | Method and system for generating text attribute opinion summaries based on sentence sentiment attributes |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||