CN112434157B - Method and apparatus for multi-label classification of documents, electronic device and storage medium - Google Patents
Method and apparatus for multi-label classification of documents, electronic device and storage medium
- Publication number
- CN112434157B (application CN202011220204.8A)
- Authority
- CN
- China
- Prior art keywords
- document
- standard
- original
- classification
- classification model
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G06F16/35 — Information retrieval of unstructured textual data; Clustering; Classification
- G06F16/3335 — Information retrieval; Query processing; Query translation; Syntactic pre-processing, e.g. stopword elimination, stemming
- G06F40/126 — Handling natural language data; Text processing; Use of codes for handling textual entities; Character encoding
- G06F40/284 — Handling natural language data; Natural language analysis; Lexical analysis, e.g. tokenisation or collocates
- G06Q50/18 — ICT specially adapted for business processes of specific sectors; Legal services
Abstract
The invention relates to data processing technology, and discloses a multi-label document classification method comprising the following steps: preprocessing an original document set to obtain a standard document set; performing multi-label processing on the standard document set to obtain a document label set; dividing the standard document set according to a preset number of batches to obtain a plurality of document subsets; inputting the document subsets into a constructed original document multi-classification model for training; calculating an error value between the training value set obtained by training and the document label set; when the error value is greater than a preset error threshold, adjusting internal parameters of the document multi-classification model until the error value is less than or equal to the error threshold, thereby obtaining a standard document multi-classification model; and inputting documents to be classified into the standard document multi-classification model to obtain multiple classification results. The invention also relates to blockchain technology, and the original document set can be stored in a blockchain. The invention further discloses a document classification apparatus, an electronic device and a storage medium. The invention can improve the diversity of document classification.
Description
Technical Field
The present invention relates to the field of data processing technologies, and in particular, to a multi-label document classification method and apparatus, an electronic device, and a computer-readable storage medium.
Background
A judicial document records the process and outcome of a court's adjudication of a case. It is the carrier of the results of litigation activity and the only evidence by which the court determines and allocates the substantive rights and obligations of the parties, so it plays a very important role in case examination.
The big data era offers great convenience: if content such as the litigation request, defense, and dispute focus of a case is marked with corresponding labels and used as features for retrieving similar cases, documents of similar cases can be found more quickly, improving the efficiency of case handlers and shortening case-handling time.
At present, documents are classified with methods such as naive Bayes classification and support vector machine algorithms, but these methods classify poorly: they cannot effectively use the features in the documents, or use only a few of them, which both wastes features and leaves the document classification incomplete.
Disclosure of Invention
The invention provides a multi-label document classification method and apparatus, an electronic device, and a computer-readable storage medium, mainly aiming to solve the problem of incomplete document classification.
To achieve the above object, the present invention provides a multi-label document classification method, comprising:
obtaining an original document set, and preprocessing the original document set to obtain a standard document set;
Performing multi-label processing on the standard document set to obtain a document label set;
constructing an original document multi-classification model;
dividing the standard document set according to a preset number of batches to obtain a plurality of document subsets;
inputting a plurality of the document subsets into the original document multi-classification model for training to obtain a training value set;
calculating the difference value between the training value set and the document label set to obtain an error value;
when the error value is greater than a preset error threshold, adjusting internal parameters of the original document multi-classification model and returning to the step of dividing the standard document set according to the preset number of batches to obtain a plurality of document subsets, until the error value is less than or equal to the error threshold, thereby obtaining a standard document multi-classification model;
and acquiring a document to be classified, and inputting the document to be classified into the standard document multi-classification model to obtain multiple classification results.
Optionally, preprocessing the original document set to obtain a standard document set includes:
removing non-text parts from the original document set to obtain a first document set;
performing word segmentation on the first document set to obtain a second document set;
and removing stop words from the second document set to obtain the standard document set.
Optionally, constructing the original document multi-classification model includes:
constructing an original BERT model;
adding an attention mechanism into the original BERT model to obtain a primary BERT model;
And connecting the primary BERT model by using a pre-constructed full connection layer to obtain the original document multi-classification model.
Optionally, inputting the plurality of document subsets into the original document multi-classification model for training to obtain a training value set includes:
performing byte coding on the document subsets using a coding layer in the original document multi-classification model to obtain an original byte code set;
performing a padding and truncation operation on the original byte code set according to a preset length using a padding and truncation layer in the original document multi-classification model to obtain a standard byte code set;
and performing an embedding operation on the standard byte code set using an embedding layer in the original document multi-classification model to obtain a standard byte sequence set, and calculating a training value set corresponding to the standard byte sequence set.
Optionally, performing the padding and truncation operation on the original byte code set according to the preset length using the padding and truncation layer in the original document multi-classification model to obtain the standard byte code set includes:
when the length of a byte code in the original byte code set is greater than the preset length, truncating the middle of the byte code and retaining the head and tail information of the byte code to obtain a standard byte code;
and summarizing the standard byte codes to obtain the standard byte code set.
Optionally, performing the embedding operation on the standard byte code set using the embedding layer in the original document multi-classification model to obtain a standard byte sequence set includes:
embedding a preset code into the head of the standard byte code to obtain a first embedded byte code;
Embedding the tail part of the first embedded byte code by using the preset code to obtain a second embedded byte code;
summarizing the second embedded byte code subjected to the embedding operation to obtain a standard byte sequence set.
Optionally, calculating the difference between the training value set and the document label set to obtain an error value includes:
calculating the error value using the following error value calculation formula:

C = -(1/n) · Σ_x [ a·ln(y) + (1 − a)·ln(1 − y) ]

wherein C is the error value, n is the number of document labels in the document label set, x is the total number of training values in the training value set over which the sum runs, y represents a training value in the training value set, and a is the corresponding document label value.
In order to solve the above problems, the present invention further provides a multi-label document classification apparatus, the apparatus comprising:
a data processing module, used for obtaining an original document set and preprocessing the original document set to obtain a standard document set, and for performing multi-label processing on the standard document set to obtain a document label set;
The model construction module is used for constructing an original document multi-classification model;
a model training module, used for dividing the standard document set according to a preset number of batches to obtain a plurality of document subsets; inputting the plurality of document subsets into the original document multi-classification model for training to obtain a training value set; calculating the difference between the training value set and the document label set to obtain an error value; and, when the error value is greater than a preset error threshold, adjusting internal parameters of the original document multi-classification model and returning to the step of dividing the standard document set according to the preset number of batches to obtain a plurality of document subsets, until the error value is less than or equal to the error threshold, thereby obtaining a standard document multi-classification model;
The classification module is used for acquiring the documents to be classified, inputting the documents to be classified into the standard document multi-classification model, and obtaining various classification results.
In order to solve the above-mentioned problems, the present invention also provides an electronic apparatus including:
at least one processor; and
A memory communicatively coupled to the at least one processor; wherein,
The memory stores computer program instructions executable by the at least one processor to enable the at least one processor to implement the document multi-label classification method described above.
In order to solve the above-mentioned problems, the present invention also provides a computer readable storage medium storing a computer program which, when executed by a processor, implements the above-mentioned document multi-label classification method.
According to the embodiments of the invention, the document set is subjected to multi-label processing to obtain a document label set; multi-label processing classifies and labels the documents along different dimensions, so that more features are used and the documents are classified more comprehensively. The standard document set is divided into a plurality of document subsets, which improves the training efficiency of the subsequent model input. The plurality of document subsets are input into a pre-built original document multi-classification model for training, and the internal parameters of the original document multi-classification model are adjusted according to the training value set and the document label set to obtain a standard document multi-classification model. The document to be classified can then be classified along multiple dimensions using the standard document multi-classification model, yielding multiple classification results. Therefore, the multi-label document classification method, apparatus and computer-readable storage medium of the invention can improve the diversity of document classification.
Drawings
FIG. 1 is a schematic flow chart of a multi-label document classification method according to an embodiment of the present invention;
FIG. 2 is a detailed flowchart of the preprocessing step of the multi-label document classification method according to an embodiment of the present invention;
FIG. 3 is a detailed flowchart of the model construction step of the multi-label document classification method according to an embodiment of the present invention;
FIG. 4 is a detailed flowchart of the training step of the multi-label document classification method according to an embodiment of the present invention;
FIG. 5 is a schematic block diagram of a multi-label document classification apparatus according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of the internal structure of an electronic device for implementing the multi-label document classification method according to an embodiment of the present invention;
The objects, functional features and advantages of the present invention will be further described with reference to the accompanying drawings in conjunction with the embodiments.
Detailed Description
It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
The embodiments of the present application provide a multi-label document classification method. The execution subject of the method includes, but is not limited to, at least one of the electronic devices, such as a server or a terminal, that can be configured to execute the method provided by the embodiments of the present application. In other words, the multi-label document classification method may be performed by software or hardware installed in a terminal device or a server device, and the software may be a blockchain platform. The server includes, but is not limited to: a single server, a server cluster, a cloud server, a cloud server cluster, and the like.
Referring to fig. 1, a schematic flow chart of a multi-label document classification method according to an embodiment of the present invention is shown. In this embodiment, the multi-label document classification method includes:
S1, obtaining an original document set, and preprocessing the original document set to obtain a standard document set.
In the preferred embodiment of the invention, the original document set can be obtained by means of manual input, a web crawler program, or the like.
In an embodiment of the present invention, referring to fig. 2, preprocessing the original document set to obtain a standard document set includes:
S101, removing non-text parts from the original document set to obtain a first document set;
S102, performing word segmentation on the first document set to obtain a second document set;
S103, removing stop words from the second document set to obtain the standard document set.
In the embodiment of the present invention, for example, document A includes the following parts:
document name: the civil judgment on the contract dispute between Company A and Company B;
document body: content consistent with that of the document served on the parties;
document title: the heading "Civil Judgment of a certain People's Court";
other document content, including: format settings, case number, body text, etc.
The non-text parts include punctuation marks, garbled characters and the like. The embodiment of the invention removes the non-text parts, including punctuation marks and garbled characters, from the original document set to obtain the first document set.
Further, the embodiment of the invention performs word segmentation on the first document set to obtain the second document set. The word segmentation may use the publicly available jieba word segmentation tool.
For example, segmenting the document name "civil judgment on the contract dispute between Company A and Company B" may yield: [Company A], [Company B], [and], [contract], [dispute], [civil], [judgment].
In the embodiment of the invention, stop words can be removed in turn using a pre-constructed stop word list. If the stop word list includes function words such as [and], removing them from the segmented document name yields: [Company A], [Company B], [contract], [dispute], [civil], [judgment]. All the resulting word groups are collected to obtain the standard document set.
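By way of non-limiting illustration, the preprocessing of S1 (removing non-text parts, jieba word segmentation, stop word removal) may be sketched in Python as follows; the regular expression and the tiny stop word list are illustrative assumptions, not part of the patent:

```python
import re
import jieba  # publicly available Chinese word segmentation tool

# Illustrative stop word list; in practice a full list would be loaded from a file.
STOP_WORDS = {"的", "地", "和"}

def preprocess(document: str) -> list:
    # Remove non-text parts (punctuation, garbled characters),
    # keeping Chinese characters, letters and digits.
    cleaned = re.sub(r"[^\u4e00-\u9fa5A-Za-z0-9]", "", document)
    # Word segmentation with jieba.
    tokens = jieba.lcut(cleaned)
    # Remove stop words.
    return [t for t in tokens if t not in STOP_WORDS]

# e.g. preprocess("A公司和B公司合同纠纷民事判决书");
# the stop word 和 ("and") is filtered out when segmented as its own token.
```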
S2, performing multi-label processing on the standard document set to obtain a document label set.
In the embodiment of the invention, expert panel members label the standard document set with multiple categories, thereby obtaining the document label set.
In detail, the categories vary with the content of the standard document set; for example, for litigation documents, the categories cover multiple dimensions such as litigation request, defense, and dispute focus.
The labeled document set can be divided into a training set, a validation set, and a test set according to a preset ratio, wherein the training set is used to train the model, the test set is used to test the robustness of the model, and the validation set is used to verify the accuracy of the model during training.
Preferably, the preset ratio may be 6:2:2.
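A minimal sketch of the 6:2:2 split described above, assuming the labeled documents are held in a Python list; the function name and fixed seed are illustrative assumptions:

```python
import random

def split_dataset(samples, ratios=(0.6, 0.2, 0.2), seed=42):
    """Divide labeled documents into training/validation/test sets at 6:2:2."""
    samples = list(samples)
    random.Random(seed).shuffle(samples)
    n_train = int(len(samples) * ratios[0])
    n_val = int(len(samples) * ratios[1])
    train = samples[:n_train]
    validation = samples[n_train:n_train + n_val]
    test = samples[n_train + n_val:]
    return train, validation, test
```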
S3, constructing an original document multi-classification model.
Further, referring to fig. 3, constructing a multi-classification model of an original document includes:
S311, constructing an original BERT model according to the standard document set and a preset classification function;
In detail, the BERT (Bidirectional Encoder Representations from Transformers) model is a language representation model. The BERT model includes a data receiving layer and a classification layer; the size of the data receiving layer is determined according to the standard document set, and the classification layer can use the preset classification function.
Preferably, the preset classification function may be a softmax function.
S312, adding an attention mechanism into the original BERT model to obtain a primary BERT model;
In detail, the attention mechanism (Attention) is a data processing method in machine learning that is widely applied to many different types of machine learning tasks, such as natural language processing, image recognition and speech recognition.
S313, connecting the primary BERT model by using a pre-constructed full-connection layer to obtain the original document multi-classification model.
In a preferred embodiment of the present invention, the original document multi-classification model may be obtained after the full connection layer is connected to the primary BERT model.
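The construction of S311–S313 can be sketched as follows, assuming the PyTorch and HuggingFace transformers libraries and the bert-base-chinese checkpoint; the attention-pooling design is one plausible reading of "adding an attention mechanism", and the sigmoid output is an assumption suited to multi-label classification (the patent names softmax as one preset classification function):

```python
import torch
import torch.nn as nn
from transformers import BertModel

class DocumentMultiLabelClassifier(nn.Module):
    """Sketch: original BERT model + attention mechanism + fully connected layer."""

    def __init__(self, num_labels: int, pretrained: str = "bert-base-chinese"):
        super().__init__()
        self.bert = BertModel.from_pretrained(pretrained)   # original BERT model
        hidden = self.bert.config.hidden_size
        # Added attention mechanism: score each token and pool the token
        # representations into one document vector (mask handling omitted).
        self.attention = nn.Sequential(nn.Linear(hidden, 1), nn.Softmax(dim=1))
        # Pre-constructed fully connected layer producing one score per label.
        self.classifier = nn.Linear(hidden, num_labels)

    def forward(self, input_ids, attention_mask):
        states = self.bert(input_ids=input_ids,
                           attention_mask=attention_mask).last_hidden_state
        weights = self.attention(states)          # (batch, seq_len, 1)
        pooled = (weights * states).sum(dim=1)    # attention pooling
        return torch.sigmoid(self.classifier(pooled))
```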
S4, dividing the standard document set according to a preset number of batches to obtain a plurality of document subsets.
The number of batches in the invention can be set according to the actual application scenario.
For example, if a standard document set contains a total of 90,000 training samples, the 90,000 training samples may be divided into 100 batches, resulting in 100 document subsets, each document subset comprising 900 training samples.
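A sketch of this batch division, under the assumption that the standard document set is a Python list; samples beyond an exact multiple of the batch count are dropped for simplicity:

```python
def make_batches(dataset, num_batches=100):
    """Divide the standard document set into the preset number of batches,
    e.g. 90,000 training samples into 100 subsets of 900 samples each."""
    size = len(dataset) // num_batches  # samples per subset
    return [dataset[i * size:(i + 1) * size] for i in range(num_batches)]
```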
S5, inputting a plurality of document subsets into the original document multi-classification model for training to obtain a training value set.
In an embodiment of the present invention, referring to fig. 4, inputting the plurality of document subsets into the original document multi-classification model for training to obtain a training value set includes:
S501, performing byte coding on the document subset by using a coding layer in the original document multi-classification model to obtain an original byte coding set;
In the preferred embodiment of the invention, when the coding layer in the original document multi-classification model performs byte coding on the document subsets, a WordPiece approach, i.e., byte-pair encoding, is adopted; byte-pair encoding can effectively reduce the data volume of the document subsets and, to a certain extent, reduce the influence of similar documents on the overall model training.
S502, performing a padding and truncation operation on the original byte code set according to a preset length using a padding and truncation layer in the original document multi-classification model to obtain a standard byte code set.
In the embodiment of the present invention, performing the padding and truncation operation on the original byte code set according to the preset length using the padding and truncation layer in the original document multi-classification model to obtain the standard byte code set includes:
Judging whether the length of the byte codes in the original byte code set is larger than the preset length;
when the length of the byte codes in the original byte code set is larger than the preset length, cutting off the middle of the byte codes, and reserving the head and tail information of the byte codes to obtain standard byte codes;
When the length of the byte codes in the original byte code set is smaller than or equal to the preset length, the codes in the original byte code set are standard byte codes;
Summarizing the standard byte codes to obtain a standard byte code set.
For example, if the preset length is 256 bytes and a code is longer than 256 bytes, a truncation operation is adopted. When the padding and truncation layer in the original document multi-classification model performs the padding and truncation operation on the original byte code set, mid-sentence truncation is adopted, which retains the information at the head and the tail of the code.
The truncation operation includes four modes: head truncation, tail truncation, two-side truncation, and mid-sentence truncation.
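A sketch of the padding and truncation step with the 256-unit preset length; how many positions to keep at the head versus the tail is not specified in the patent, so the even split below is an assumption:

```python
def pad_or_truncate(token_ids, max_len=256, pad_id=0):
    """Truncate the middle of an over-long code, keeping head and tail
    information; pad shorter codes up to the preset length."""
    if len(token_ids) > max_len:
        head = max_len // 2
        tail = max_len - head
        return token_ids[:head] + token_ids[-tail:]   # keep both ends
    return token_ids + [pad_id] * (max_len - len(token_ids))
```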
S503, embedding the standard byte code set by utilizing an embedding layer in the original document multi-classification model to obtain a standard byte sequence set and calculating a training value set corresponding to the standard byte sequence set.
In the embodiment of the present invention, the embedding operation is performed on the standard byte code set by using an embedding layer in the original document multi-classification model to obtain a standard byte sequence set, including:
embedding a preset code into the head of the standard byte code to obtain a first embedded byte code;
Embedding the tail part of the first embedded byte code by using the preset code to obtain a second embedded byte code;
summarizing the second embedded byte code subjected to the embedding operation to obtain a standard byte sequence set.
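The patent only says "a preset code" is embedded at the head and the tail; the sketch below assumes BERT's conventional [CLS] and [SEP] token ids in those roles:

```python
CLS_ID, SEP_ID = 101, 102  # assumed preset codes ([CLS]/[SEP] ids in BERT vocabularies)

def embed_special_codes(standard_byte_codes):
    """Embed a preset code at the head of each standard byte code, then at
    its tail, and collect the results into the standard byte sequence set."""
    return [[CLS_ID] + codes + [SEP_ID] for codes in standard_byte_codes]
```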
Preferably, the training value set is a predicted label set. If document A, document B and document C in the standard document training set are each trained through the original document multi-classification model, and the predicted label of document A is the litigation request label, the predicted label of document B is the defense label, and the predicted label of document C is the dispute focus label, these predictions are integrated to obtain the training value set.
S6, calculating the difference value between the training value set and the document label set to obtain an error value.
In the embodiment of the invention, calculating the difference between the training value set and the document label set to obtain an error value includes:
calculating the error value using the following error value calculation formula:

C = -(1/n) · Σ_x [ a·ln(y) + (1 − a)·ln(1 − y) ]

wherein C is the error value, n is the number of document labels in the document label set, x is the total number of training values in the training value set over which the sum runs, y represents a training value in the training value set, and a is the corresponding document label value.
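A direct transcription of the formula above (assuming the training values y lie strictly between 0 and 1 so the logarithms are defined):

```python
import math

def error_value(train_values, label_values, n):
    """C = -(1/n) * sum over x of [ a*ln(y) + (1-a)*ln(1-y) ]."""
    total = 0.0
    for y, a in zip(train_values, label_values):
        total += a * math.log(y) + (1 - a) * math.log(1 - y)
    return -total / n

# Predictions close to the 0/1 labels give a small error value:
c = error_value([0.9, 0.1, 0.8], [1, 0, 1], n=3)  # roughly 0.14
```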
S7, judging whether the error value is greater than a preset error threshold; when the error value is greater than the preset error threshold, executing S8: adjusting internal parameters of the original document multi-classification model and returning to S4; when the error value is less than or equal to the error threshold, executing S9: obtaining the standard document multi-classification model.
In a preferred embodiment of the present invention, if the error value is greater than the error threshold, internal parameters of the multi-classification model of the original document are adjusted, where the internal parameters include training batch number, learning rate, iteration number, and the like. And if the error value is smaller than or equal to the error threshold value, parameters do not need to be adjusted, and a standard document multi-classification model is obtained.
For example, if the error value and the error threshold are 0.3 and 0.5, respectively, the trained standard document multi-classification model is obtained because the error value is smaller than the error threshold; if the error value and the error threshold are 0.3 and 0.2, respectively, execution returns to S4 because the error value is greater than the error threshold.
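The threshold-controlled loop of S4–S9 might look like the following sketch; the max_rounds safeguard is an added assumption to guarantee termination, each batch is assumed to be a (inputs, labels) pair, and loss_fn would be a multi-label loss such as binary cross-entropy over the sigmoid outputs:

```python
def train_until_threshold(model, subsets, label_batches, loss_fn, optimizer,
                          error_threshold=0.5, max_rounds=100):
    """Train over the document subsets, adjusting internal parameters,
    until the error value falls to the preset threshold."""
    error = None
    for _ in range(max_rounds):
        for inputs, labels in zip(subsets, label_batches):
            optimizer.zero_grad()
            error = loss_fn(model(*inputs), labels)
            error.backward()      # gradients used to adjust internal parameters
            optimizer.step()
        if error is not None and error.item() <= error_threshold:
            break                 # standard document multi-classification model obtained
    return model
```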
S10, acquiring a document to be classified, and inputting the document to be classified into the standard document multi-classification model to obtain various classification results.
For example, a document X to be classified, input by a user, is obtained and classified by the standard document multi-classification model; the classification results of document X are the litigation request label and the dispute focus label.
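An inference sketch matching this example: every label whose score clears a threshold is emitted, so one document can receive several classification results at once; the label names, threshold, and tokenizer are assumptions:

```python
import torch

LABEL_NAMES = ["litigation request", "defense", "dispute focus"]  # assumed order

def classify(model, tokenizer, document, threshold=0.5, max_len=256):
    """Return every label whose sigmoid score exceeds the threshold."""
    enc = tokenizer(document, truncation=True, max_length=max_len,
                    padding="max_length", return_tensors="pt")
    with torch.no_grad():
        scores = model(enc["input_ids"], enc["attention_mask"])[0]
    return [name for name, s in zip(LABEL_NAMES, scores) if s.item() > threshold]
```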
Fig. 5 is a schematic block diagram of the multi-label document classification apparatus according to the present invention.
The document multi-label classification apparatus 100 of the present invention may be installed in an electronic device. Depending on the functions implemented, the document multi-label classification apparatus 100 may include a data processing module 101, a model building module 102, a model training module 103, and a classification module 104. A module of the invention, which may also be referred to as a unit, refers to a series of computer program segments that are stored in the memory of the electronic device, can be executed by the processor of the electronic device, and perform a fixed function.
In the present embodiment, the functions concerning the respective modules/units are as follows:
The data processing module 101 is configured to obtain an original document set and preprocess the original document set to obtain a standard document set; and to perform multi-label processing on the standard document set to obtain a document label set;
the model construction module 102 is used for constructing an original document multi-classification model;
The model training module 103 is configured to divide the standard document set according to a preset number of batches to obtain a plurality of document subsets; input the plurality of document subsets into the original document multi-classification model for training to obtain a training value set; calculate the difference between the training value set and the document label set to obtain an error value; and, when the error value is greater than a preset error threshold, adjust internal parameters of the original document multi-classification model and return to the step of dividing the standard document set according to the preset number of batches to obtain a plurality of document subsets, until the error value is less than or equal to the error threshold, thereby obtaining a standard document multi-classification model;
The classification module 104 is configured to obtain a document to be classified, and input the document to be classified into the standard document multi-classification model to obtain multiple classification results.
In detail, the document multi-label classification apparatus 100 may be used to perform the multi-label document classification method described above with reference to fig. 1 to 4. When the method is executed, the modules in the document multi-label classification apparatus 100 specifically perform the following operations:
Step one, the data processing module 101 obtains an original document set, and preprocesses the original document set to obtain a standard document set.
In a preferred embodiment of the present invention, the data processing module 101 may obtain the original document set by means of manual input, a web crawler program, or the like.
In the embodiment of the present invention, the data processing module 101 preprocesses the original document set to obtain the standard document set, which includes:
removing non-text parts from the original document set to obtain a first document set;
performing word segmentation on the first document set to obtain a second document set;
and removing stop words from the second document set to obtain the standard document set.
In the embodiment of the present invention, for example, document A includes the following parts:
document name: the civil judgment on the contract dispute between Company A and Company B;
document body: content consistent with that of the document served on the parties;
document title: the heading "Civil Judgment of a certain People's Court";
other document content, including: format settings, case number, body text, etc.
The non-text parts include punctuation marks, garbled characters and the like. The embodiment of the invention removes the non-text parts, including punctuation marks and garbled characters, from the original document set to obtain the first document set.
Further, the embodiment of the invention performs word segmentation on the first document set to obtain the second document set. The word segmentation may use the publicly available jieba word segmentation tool.
For example, segmenting the document name "civil judgment on the contract dispute between Company A and Company B" may yield: [Company A], [Company B], [and], [contract], [dispute], [civil], [judgment].
In the embodiment of the invention, stop words can be removed in turn using a pre-constructed stop word list. If the stop word list includes function words such as [and], removing them from the segmented document name yields: [Company A], [Company B], [contract], [dispute], [civil], [judgment]. All the resulting word groups are collected to obtain the standard document set.
Step two, the data processing module 101 performs multi-label processing on the standard document set to obtain a document label set.
In the embodiment of the invention, expert panel members can label the standard document set with multiple categories; the data processing module 101 receives these labels on the standard document set, thereby obtaining the document label set.
In detail, the categories vary with the content of the standard document set; for example, for litigation documents, the categories cover multiple dimensions such as litigation request, defense, and dispute focus.
Step three, the model construction module 102 constructs the original document multi-classification model.
Further, the model construction module 102 constructs the original document multi-classification model as follows:
step A: constructing an original BERT model;
In detail, the BERT (Bidirectional Encoder Representations from Transformers) model is a language representation model.
Step B: adding an attention mechanism into the original BERT model to obtain a primary BERT model;
In detail, the attention mechanism (Attention) is a data processing method in machine learning that is widely applied to many different types of machine learning tasks, such as natural language processing, image recognition and speech recognition.
Step C: and connecting the primary BERT model by using a pre-constructed full connection layer to obtain the original document multi-classification model.
In a preferred embodiment of the present invention, the original document multi-classification model may be obtained after the full connection layer is connected to the primary BERT model.
Step four, the model training module 103 divides the standard document set according to a preset number of batches to obtain a plurality of document subsets.
The number of batches in the invention can be set according to the actual application scenario.
For example, if a standard document set contains a total of 90,000 training samples, the 90,000 training samples may be divided into 100 batches, resulting in 100 document subsets, each document subset comprising 900 training samples.
Step five, the model training module 103 inputs the plurality of document subsets into the original document multi-classification model for training to obtain a training value set.
In the embodiment of the present invention, the model training module 103 inputs a plurality of the document subsets into the original document multi-classification model for training by the following operations, so as to obtain a training value set:
step a: performing byte coding on the document subset by utilizing a coding layer in the original document multi-classification model to obtain an original byte coding set;
In the preferred embodiment of the invention, when the coding layer in the original document multi-classification model performs byte coding on the document subsets, a WordPiece approach, i.e., byte-pair encoding, is adopted; byte-pair encoding can effectively reduce the data volume of the document subsets and, to a certain extent, reduce the influence of similar documents on the overall model training.
Step b: performing a padding and truncation operation on the original byte code set according to a preset length using a padding and truncation layer in the original document multi-classification model to obtain a standard byte code set;
In the embodiment of the present invention, performing the padding and truncation operation on the original byte code set according to the preset length using the padding and truncation layer in the original document multi-classification model to obtain the standard byte code set includes:
step c: judging whether the length of the byte codes in the original byte code set is larger than the preset length;
Step d: when the length of the byte codes in the original byte code set is larger than the preset length, cutting off the middle of the byte codes, and reserving the head and tail information of the byte codes to obtain standard byte codes;
Step e: when the length of the byte codes in the original byte code set is smaller than or equal to the preset length, the codes in the original byte code set are standard byte codes;
Step f: summarizing the standard byte codes to obtain a standard byte code set.
For example, if the preset length is 256 bytes and a code is longer than 256 bytes, a truncation operation is adopted. When the padding and truncation layer in the original document multi-classification model performs the padding and truncation operation on the original byte code set, mid-sentence truncation is adopted, which retains the information at the head and the tail of the code.
The truncation operation includes four modes: head truncation, tail truncation, two-side truncation, and mid-sentence truncation.
And performing embedding operation on the standard byte code set by utilizing an embedding layer in the original document multi-classification model to obtain a standard byte sequence set and calculating a training value set corresponding to the standard byte sequence set.
In the embodiment of the present invention, the embedding operation is performed on the standard byte code set by using an embedding layer in the original document multi-classification model to obtain a standard byte sequence set, including: embedding a preset code into the head of the standard byte code to obtain a first embedded byte code; embedding the tail part of the first embedded byte code by using the preset code to obtain a second embedded byte code; summarizing the second embedded byte code subjected to the embedding operation to obtain a standard byte sequence set.
Preferably, the training value set is a predicted label set. If document A, document B and document C in the standard document training set are each trained through the original document multi-classification model, and the predicted label of document A is the litigation request label, the predicted label of document B is the defense label, and the predicted label of document C is the dispute focus label, these predictions are integrated to obtain the training value set.
Step six, the model training module 103 calculates the difference between the training value set and the document label set to obtain an error value.
In the embodiment of the present invention, the model training module 103 calculates the error value using the following error value calculation formula:

C = -(1/n) · Σ_x [ a·ln(y) + (1 − a)·ln(1 − y) ]

wherein C is the error value, n is the number of document labels in the document label set, x is the total number of training values in the training value set over which the sum runs, y represents a training value in the training value set, and a is the corresponding document label value.
Step seven, when the error value is greater than a preset error threshold, the model training module 103 adjusts internal parameters of the original document multi-classification model until the error value is less than or equal to the error threshold, obtaining the standard document multi-classification model.
In a preferred embodiment of the present invention, if the error value is greater than the error threshold, internal parameters of the multi-classification model of the original document are adjusted, where the internal parameters include training batch number, learning rate, iteration number, and the like. And if the error value is smaller than or equal to the error threshold value, parameters do not need to be adjusted, and a standard document multi-classification model is obtained.
For example, if the error value and the error threshold are 0.3 and 0.5, respectively, the trained standard document multi-classification model is obtained because the error value is smaller than the error threshold; if the error value and the error threshold are 0.3 and 0.2, respectively, execution returns to step four because the error value is greater than the error threshold.
Step eight, the classification module 104 acquires the document to be classified, and inputs the document to be classified into the standard document multi-classification model to obtain multiple classification results.
For example, a document X to be classified, input by a user, is obtained and classified by the standard document multi-classification model; the classification results of document X are the litigation request label and the dispute focus label.
According to the embodiments of the invention, the document set is subjected to multi-label processing to obtain a document label set; multi-label processing classifies and labels the documents along different dimensions, so that more features are used and the documents are classified more comprehensively. The standard document set is divided into a plurality of document subsets, which improves the training efficiency of the subsequent model input. The plurality of document subsets are input into a pre-built original document multi-classification model for training, and the internal parameters of the original document multi-classification model are adjusted according to the training value set and the document label set to obtain a standard document multi-classification model. The document to be classified can then be classified along multiple dimensions using the standard document multi-classification model, yielding multiple classification results. Therefore, the multi-label document classification method, apparatus and computer-readable storage medium of the invention can improve the diversity of document classification.
Fig. 6 is a schematic structural diagram of an electronic device for implementing the method for classifying documents according to the present invention.
The electronic device 1 may comprise a processor 10, a memory 11 and a bus, and may further comprise a computer program stored in the memory 11 and executable on the processor 10, such as a document multi-label classification program 12.
The memory 11 includes at least one type of readable storage medium, including flash memory, a mobile hard disk, a multimedia card, a card-type memory (e.g., SD or DX memory), a magnetic memory, a magnetic disk, an optical disk, etc. The memory 11 may in some embodiments be an internal storage unit of the electronic device 1, such as a hard disk of the electronic device 1. The memory 11 may in other embodiments also be an external storage device of the electronic device 1, such as a plug-in mobile hard disk, a smart media card (Smart Media Card, SMC), a secure digital (Secure Digital, SD) card, or a flash card (Flash Card) provided on the electronic device 1. Further, the memory 11 may also include both an internal storage unit and an external storage device of the electronic device 1. The memory 11 may be used not only for storing application software installed in the electronic device 1 and various types of data, such as the code of the document multi-label classification program 12, but also for temporarily storing data that has been output or is to be output.
The processor 10 may in some embodiments be composed of integrated circuits, for example a single packaged integrated circuit, or of multiple integrated circuits packaged with the same or different functions, including one or more central processing units (Central Processing Unit, CPU), microprocessors, digital processing chips, graphics processors, combinations of various control chips, and the like. The processor 10 is the control unit (Control Unit) of the electronic device; it connects the parts of the entire electronic device using various interfaces and lines, and executes the various functions of the electronic device 1 and processes data by running or executing the programs or modules stored in the memory 11 (for example, executing the document multi-label classification program) and calling the data stored in the memory 11.
The bus may be a peripheral component interconnect (Peripheral Component Interconnect, PCI) bus, an extended industry standard architecture (Extended Industry Standard Architecture, EISA) bus, or the like. The bus may be classified as an address bus, a data bus, a control bus, etc. The bus is arranged to enable communication between the memory 11, the at least one processor 10, and the like.
Fig. 6 shows only an electronic device with components, it being understood by a person skilled in the art that the structure shown in fig. 6 does not constitute a limitation of the electronic device 1, and may comprise fewer or more components than shown, or may combine certain components, or may be arranged in different components.
For example, although not shown, the electronic device 1 may further include a power source (such as a battery) for supplying power to each component, and preferably, the power source may be logically connected to the at least one processor 10 through a power management device, so that functions of charge management, discharge management, power consumption management, and the like are implemented through the power management device. The power supply may also include one or more of any of a direct current or alternating current power supply, recharging device, power failure detection circuit, power converter or inverter, power status indicator, etc. The electronic device 1 may further include various sensors, bluetooth modules, wi-Fi modules, etc., which will not be described herein.
Further, the electronic device 1 may also comprise a network interface, optionally the network interface may comprise a wired interface and/or a wireless interface (e.g. WI-FI interface, bluetooth interface, etc.), typically used for establishing a communication connection between the electronic device 1 and other electronic devices.
The electronic device 1 may optionally further comprise a user interface, which may be a display or an input unit such as a keyboard (Keyboard), and which may be a standard wired interface or a wireless interface. Alternatively, in some embodiments, the display may be an LED display, a liquid crystal display, a touch-sensitive liquid crystal display, an OLED (Organic Light-Emitting Diode) touch device, or the like. The display may also be referred to as a display screen or display unit, as appropriate, for displaying information processed in the electronic device 1 and for displaying a visual user interface.
It should be understood that the embodiments described are for illustrative purposes only and do not limit the scope of the patent application to this configuration.
The document multi-label classification program 12 stored in the memory 11 of the electronic device 1 is a combination of instructions that, when executed by the processor 10, can implement:
obtaining an original document set, and preprocessing the original document set to obtain a standard document set;
Performing multi-label processing on the standard document set to obtain a document label set;
constructing an original document multi-classification model;
dividing the standard document set according to a preset number of batches to obtain a plurality of document subsets;
inputting a plurality of the document subsets into the original document multi-classification model for training to obtain a training value set;
calculating the difference value between the training value set and the document label set to obtain an error value;
when the error value is greater than a preset error threshold, adjusting internal parameters of the original document multi-classification model and returning to the step of dividing the standard document set according to the preset number of batches to obtain a plurality of document subsets, until the error value is less than or equal to the error threshold, thereby obtaining a standard document multi-classification model;
And acquiring the document to be classified, and inputting the document to be classified into the standard document multi-classification model to obtain various classification results.
Further, the modules/units integrated in the electronic device 1 may be stored in a computer readable storage medium if implemented in the form of software functional units and sold or used as separate products. The computer readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a U disk, a removable hard disk, a magnetic disk, an optical disk, a computer Memory, a Read-Only Memory (ROM).
Further, the computer-usable storage medium may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function, and the like; the storage data area may store data created from the use of blockchain nodes, and the like.
In the several embodiments provided in the present invention, it should be understood that the disclosed apparatus, device and method may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the modules is merely a logical function division, and there may be other manners of division when actually implemented.
The modules described as separate components may or may not be physically separate, and components shown as modules may or may not be physical units, may be located in one place, or may be distributed over multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional module in the embodiments of the present invention may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units can be realized in a form of hardware or a form of hardware and a form of software functional modules.
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from the spirit or essential characteristics thereof.
The present embodiments are, therefore, to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any accompanying diagram representation in the claims should not be considered as limiting the claim concerned.
Furthermore, it is evident that the word "comprising" does not exclude other elements or steps, and that the singular does not exclude the plural. A plurality of units or means recited in the system claims can also be implemented by one unit or means through software or hardware. Terms such as first and second are used to denote names, not any particular order.
Finally, it should be noted that the above-mentioned embodiments are merely for illustrating the technical solution of the present invention and not for limiting the same, and although the present invention has been described in detail with reference to the preferred embodiments, it should be understood by those skilled in the art that modifications and equivalents may be made to the technical solution of the present invention without departing from the spirit and scope of the technical solution of the present invention.
Claims (9)
1. A multi-label document classification method, the method comprising:
obtaining an original document set, and preprocessing the original document set to obtain a standard document set;
performing multi-label processing on the standard document set to obtain a document label set, wherein the multi-label processing is multi-dimensional category labeling of the standard document set;
constructing an original document multi-classification model;
dividing the standard document set according to a preset number of batches to obtain a plurality of document subsets;
inputting the plurality of document subsets into the original document multi-classification model for training, and performing byte coding on the document subsets using a coding layer in the original document multi-classification model in a byte-pair encoding mode to obtain an original byte code set; performing a padding and truncation operation on the original byte code set according to a preset length using a padding and truncation layer in the original document multi-classification model to obtain a standard byte code set; and performing an embedding operation on the standard byte code set using an embedding layer in the original document multi-classification model to obtain a standard byte sequence set, and calculating a training value set corresponding to the standard byte sequence set;
calculating the difference value between the training value set and the document label set to obtain an error value;
when the error value is greater than a preset error threshold, adjusting internal parameters of the original document multi-classification model and returning to the step of dividing the standard document set according to the preset number of batches to obtain a plurality of document subsets, until the error value is less than or equal to the error threshold, thereby obtaining a standard document multi-classification model;
And acquiring the document to be classified, and inputting the document to be classified into the standard document multi-classification model to obtain various classification results.
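By way of illustration only, the following minimal sketch shows how the training procedure of claim 1 could be realized in PyTorch with the transformers library. The model name bert-base-chinese, the batch size, the error threshold, and the use of BCELoss as the error value are assumptions of the sketch, not requirements of the patent; the tokenizer stands in for the byte-encoding, filling-and-cutting, and embedding layers.

```python
# Minimal sketch of the claim 1 training loop; all concrete choices here
# (model name, threshold, loss, batch size) are illustrative assumptions.
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset
from transformers import BertModel, BertTokenizerFast

MAX_LEN, NUM_LABELS, ERROR_THRESHOLD = 128, 5, 0.35

class DocumentMultiClassifier(nn.Module):
    """BERT encoder followed by a fully connected layer, one score per label."""
    def __init__(self, num_labels):
        super().__init__()
        self.bert = BertModel.from_pretrained("bert-base-chinese")
        self.fc = nn.Linear(self.bert.config.hidden_size, num_labels)

    def forward(self, input_ids, attention_mask):
        out = self.bert(input_ids=input_ids, attention_mask=attention_mask)
        return torch.sigmoid(self.fc(out.pooler_output))

tokenizer = BertTokenizerFast.from_pretrained("bert-base-chinese")
texts = ["第一份文书的正文", "第二份文书的正文"]            # standard document set
labels = torch.tensor([[1., 0., 1., 0., 0.],               # document label set:
                       [0., 1., 0., 0., 1.]])              # multi-dimensional categories

enc = tokenizer(texts, padding="max_length", truncation=True,
                max_length=MAX_LEN, return_tensors="pt")   # encode, fill, and cut
loader = DataLoader(TensorDataset(enc["input_ids"], enc["attention_mask"], labels),
                    batch_size=2)                           # preset batch number

model = DocumentMultiClassifier(NUM_LABELS)
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
criterion = nn.BCELoss()                                    # difference -> error value

error = float("inf")
while error > ERROR_THRESHOLD:                              # retrain until error <= threshold
    for input_ids, attention_mask, y in loader:
        preds = model(input_ids, attention_mask)            # training value set
        loss = criterion(preds, y)
        optimizer.zero_grad()
        loss.backward()                                     # adjust internal parameters
        optimizer.step()
        error = loss.item()
```

A new document would then be classified by encoding it the same way and thresholding each of the model's label scores, which yields the plurality of classification results of the final step.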
2. The method for multi-label classification of documents as claimed in claim 1, wherein the preprocessing the original document set to obtain a standard document set comprises:
removing non-text parts from the original document set to obtain a first document set;
performing word segmentation on the first document set to obtain a second document set;
and removing stop words from the second document set to obtain the standard document set.
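As a non-limiting sketch, the three preprocessing steps of claim 2 could be implemented as follows; the regular expression, the jieba segmenter, and the stop-word list are assumptions, since the patent names no specific tools.

```python
# Sketch of the claim 2 pipeline: strip non-text parts, segment words,
# remove stop words. jieba and the stop-word list are illustrative choices.
import re
import jieba

STOP_WORDS = {"的", "了", "在", "是"}  # illustrative stop-word list

def preprocess(original_document_set):
    standard_document_set = []
    for doc in original_document_set:
        text = re.sub(r"[^\u4e00-\u9fa5A-Za-z0-9]+", " ", doc)  # first document set
        words = jieba.lcut(text)                                 # second document set
        words = [w for w in words if w.strip() and w not in STOP_WORDS]
        standard_document_set.append(words)                      # standard document set
    return standard_document_set
```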
3. The method for multi-label classification of documents as claimed in claim 1, wherein the constructing an original document multi-classification model comprises:
constructing an original BERT model;
adding an attention mechanism to the original BERT model to obtain a primary BERT model;
and connecting the primary BERT model with a pre-constructed fully connected layer to obtain the original document multi-classification model.
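A possible PyTorch reading of claim 3 is sketched below. Placing an extra multi-head attention layer on top of BERT's final hidden states is one interpretation of "adding an attention mechanism", and the layer sizes are illustrative, not taken from the patent.

```python
# Sketch of claim 3: original BERT model + added attention mechanism +
# pre-constructed fully connected layer. Architecture details are assumed.
import torch
from torch import nn
from transformers import BertModel

class OriginalDocumentMultiClassificationModel(nn.Module):
    def __init__(self, num_labels, hidden_size=768, num_heads=8):
        super().__init__()
        self.bert = BertModel.from_pretrained("bert-base-chinese")  # original BERT model
        self.attention = nn.MultiheadAttention(hidden_size, num_heads,
                                               batch_first=True)    # added attention mechanism
        self.fc = nn.Linear(hidden_size, num_labels)                 # fully connected layer

    def forward(self, input_ids, attention_mask):
        hidden = self.bert(input_ids=input_ids,
                           attention_mask=attention_mask).last_hidden_state
        attended, _ = self.attention(hidden, hidden, hidden)         # primary BERT model
        return torch.sigmoid(self.fc(attended[:, 0]))                # one score per label
```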
4. The method for multi-label classification of documents as claimed in claim 1, wherein the performing a filling and cutting operation on the original byte code set according to a preset length by using a filling and cutting layer in the original document multi-classification model to obtain a standard byte code set comprises:
when the length of a byte code in the original byte code set is greater than the preset length, cutting out the middle of the byte code and retaining the head and tail information of the byte code to obtain a standard byte code;
and summarizing the standard byte codes to obtain the standard byte code set.
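A sketch of this filling-and-cutting rule follows; the even split between head and tail is an assumption, as the claim fixes no ratio.

```python
# Sketch of claim 4: byte codes longer than the preset length lose their
# middle and keep head and tail; shorter ones are filled with a pad value.
def fill_and_cut(original_byte_code_set, preset_length, pad_id=0):
    standard_byte_code_set = []
    for codes in original_byte_code_set:
        if len(codes) > preset_length:
            head = codes[: preset_length // 2]                       # keep head information
            tail = codes[-(preset_length - len(head)):]              # keep tail information
            codes = head + tail                                      # cut out the middle
        else:
            codes = codes + [pad_id] * (preset_length - len(codes))  # fill
        standard_byte_code_set.append(codes)                         # standard byte code
    return standard_byte_code_set
```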
5. The method for multi-label classification of documents as claimed in claim 1, wherein the embedding the standard byte code set by using an embedding layer in the original document multi-classification model to obtain a standard byte sequence set comprises:
embedding a preset code at the head of a standard byte code to obtain a first embedded byte code;
embedding the preset code at the tail of the first embedded byte code to obtain a second embedded byte code;
and summarizing the second embedded byte codes to obtain the standard byte sequence set.
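The embedding steps of claim 5 resemble adding BERT's special tokens at the head and tail of each sequence. The claim reuses one preset code at both positions; the sketch below substitutes BERT's distinct [CLS]/[SEP] pair as a more typical choice, which is an assumption.

```python
# Sketch of claim 5, assuming BERT-style special tokens as the preset codes.
CLS_ID, SEP_ID = 101, 102  # [CLS] and [SEP] ids in the standard BERT vocabulary

def embed(standard_byte_code_set):
    standard_byte_sequence_set = []
    for codes in standard_byte_code_set:
        first_embedded = [CLS_ID] + codes            # preset code at the head
        second_embedded = first_embedded + [SEP_ID]  # preset code at the tail
        standard_byte_sequence_set.append(second_embedded)
    return standard_byte_sequence_set                # standard byte sequence set
```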
6. The method for multi-label classification of documents as claimed in claim 1, wherein the calculating the difference between the training value set and the document label set to obtain an error value comprises:
calculating the error value using a preset error value calculation formula,
wherein the quantities in the formula are the error value, the number of document labels in the document label set, the total number of training values in the training value set, the training value set y, and the document label values.
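One standard error value consistent with the quantities defined above is the mean binary cross-entropy over all training values; the sketch below assumes this form, which the claim does not spell out.

```python
# Assumed reconstruction of the claim 6 error value as mean binary
# cross-entropy; the patent's exact formula is not reproduced here.
import numpy as np

def error_value(training_value_set, document_label_set):
    y = np.clip(np.asarray(training_value_set, dtype=float), 1e-7, 1 - 1e-7)
    t = np.asarray(document_label_set, dtype=float)   # document label values
    n = y.size                                        # total number of training values
    return float(-np.sum(t * np.log(y) + (1 - t) * np.log(1 - y)) / n)
```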
7. A device for multi-label classification of documents, the device comprising:
a data processing module, which is used for acquiring an original document set and preprocessing the original document set to obtain a standard document set, and for performing multi-label processing on the standard document set to obtain a document label set, wherein the multi-label processing is performing multi-dimensional category labeling on the standard document set;
a model construction module, which is used for constructing an original document multi-classification model;
a model training module, which is used for dividing the standard document set according to a preset batch number to obtain a plurality of document subsets; inputting the plurality of document subsets into the original document multi-classification model for training, and performing byte encoding on the document subsets in a double-byte encoding mode by using an encoding layer in the original document multi-classification model to obtain an original byte code set; performing a filling and cutting operation on the original byte code set according to a preset length by using a filling and cutting layer in the original document multi-classification model to obtain a standard byte code set; embedding the standard byte code set by using an embedding layer in the original document multi-classification model to obtain a standard byte sequence set, and calculating a training value set corresponding to the standard byte sequence set; calculating a difference between the training value set and the document label set to obtain an error value; and, when the error value is greater than a preset error threshold, adjusting internal parameters of the original document multi-classification model and returning to the step of dividing the standard document set according to the preset batch number to obtain a plurality of document subsets, until the error value is less than or equal to the error threshold, thereby obtaining a standard document multi-classification model;
and a classification module, which is used for acquiring a document to be classified and inputting the document to be classified into the standard document multi-classification model to obtain a plurality of classification results.
8. An electronic device, the electronic device comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores computer program instructions executable by the at least one processor to enable the at least one processor to perform the method for multi-label classification of documents according to any one of claims 1 to 6.
9. A computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the method for multi-label classification of documents according to any one of claims 1 to 6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011220204.8A CN112434157B (en) | 2020-11-05 | 2020-11-05 | Method and device for classifying documents in multiple labels, electronic equipment and storage medium |
Publications (2)
Publication Number | Publication Date
---|---
CN112434157A (en) | 2021-03-02
CN112434157B (en) | 2024-05-17
Family
ID=74695448
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011220204.8A Active CN112434157B (en) | 2020-11-05 | 2020-11-05 | Method and device for classifying documents in multiple labels, electronic equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112434157B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113076426B (en) * | 2021-06-07 | 2021-08-13 | 腾讯科技(深圳)有限公司 | Multi-label text classification and model training method, device, equipment and storage medium |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110334710A (en) * | 2019-07-10 | 2019-10-15 | 深圳市华云中盛科技有限公司 | Legal documents recognition methods, device, computer equipment and storage medium |
CN110442722A (en) * | 2019-08-13 | 2019-11-12 | 北京金山数字娱乐科技有限公司 | Method and device for training classification model and method and device for data classification |
CN110717333A (en) * | 2019-09-02 | 2020-01-21 | 平安科技(深圳)有限公司 | Method and device for automatically generating article abstract and computer readable storage medium |
CN110807495A (en) * | 2019-11-08 | 2020-02-18 | 腾讯科技(深圳)有限公司 | Multi-label classification method and device, electronic equipment and storage medium |
CN111177324A (en) * | 2019-12-31 | 2020-05-19 | 支付宝(杭州)信息技术有限公司 | Method and device for classifying intentions based on voice recognition result |
CN111291152A (en) * | 2018-12-07 | 2020-06-16 | 北大方正集团有限公司 | Case document recommendation method, device, equipment and storage medium |
CN111428485A (en) * | 2020-04-22 | 2020-07-17 | 深圳市华云中盛科技股份有限公司 | Method and device for classifying judicial literature paragraphs, computer equipment and storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN112434157A (en) | 2021-03-02 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |