WO2022143069A1 - Text clustering method and apparatus, electronic device, and storage medium - Google Patents
Text clustering method and apparatus, electronic device, and storage medium
- Publication number
- WO2022143069A1 · PCT/CN2021/136677
- Authority: WO — WIPO (PCT)
- Prior art keywords: word, text data, target, frequency, piece
Classifications
- G—PHYSICS · G06—COMPUTING; CALCULATING OR COUNTING · G06F—ELECTRIC DIGITAL DATA PROCESSING · G06F16/00—Information retrieval; G06F16/30—Information retrieval of unstructured textual data
- G06F16/35—Clustering; Classification
- G06F16/334—Query execution (under G06F16/33—Querying; G06F16/3331—Query processing)
- G06F16/374—Thesaurus (under G06F16/36—Creation of semantic tools, e.g. ontology or thesauri)
Definitions
- the embodiments of the present disclosure relate to the field of computer technology, for example, to a text clustering method, apparatus, electronic device, and storage medium.
- Text clustering divides similar text data into the same cluster and distinguishes different clusters; clusters may also be called "classes".
- Clustering methods are applied in different fields such as networking, medicine, biology, computer vision, and natural language processing.
- the text clustering method in the related art represents each text as a feature vector, calculates the similarity between texts from their feature vectors, and finally clusters the texts according to those similarities. As can be seen, the related-art method must first represent the text as a feature vector before any similarity can be computed, which makes the calculation process of text clustering complicated and its efficiency low.
- Embodiments of the present disclosure provide a text clustering method, apparatus, electronic device, and storage medium, which can effectively improve the efficiency and accuracy of text clustering.
- an embodiment of the present disclosure provides a text clustering method, including:
- acquiring a target text data set to be clustered, wherein the target text data set includes at least one piece of target text data;
- for each piece of target text data in the target text data set, calculating a first importance score of at least one word in the target text data, sorting the at least one word based on the first importance score, and generating a word sequence to be searched corresponding to the target text data;
- for each word sequence to be searched, searching a pre-built dictionary tree for a target word sequence adapted to the word sequence to be searched, wherein the target word sequence is a subsequence of the word sequence to be searched;
- clustering the target text data corresponding to at least one target word sequence according to the at least one target word sequence, to obtain a text clustering result.
- an embodiment of the present disclosure further provides a text clustering apparatus, including:
- a text data acquisition module configured to acquire a target text data set to be clustered; wherein, the target text data set includes at least one piece of target text data;
- a search word sequence generation module configured to, for each piece of target text data in the target text data set, calculate a first importance score of at least one word in the target text data, sort the at least one word based on the first importance score, and generate a word sequence to be searched corresponding to the piece of target text data;
- a target word sequence determination module configured to, for each word sequence to be searched, search a pre-built dictionary tree for a target word sequence adapted to the word sequence to be searched, wherein the target word sequence is a subsequence of the word sequence to be searched;
- the text clustering module is configured to cluster the target text data corresponding to the at least one target word sequence according to the at least one target word sequence, respectively, to obtain a text clustering result.
- an embodiment of the present disclosure further provides an electronic device, the electronic device comprising:
- at least one processing device;
- a storage device configured to store at least one program
- wherein, when the at least one program is executed by the at least one processing device, the at least one processing device implements the text clustering method according to the embodiments of the present disclosure.
- an embodiment of the present disclosure further provides a computer-readable medium on which a computer program is stored, and the program, when executed by a processing device, implements the text clustering method according to the embodiments of the present disclosure.
- FIG. 1 is a flowchart of a text clustering method in an embodiment of the present disclosure
- FIG. 2 is a schematic diagram of a dictionary tree in an embodiment of the present disclosure
- FIG. 3 is a flowchart of another text clustering method in an embodiment of the present disclosure.
- FIG. 4 is a flowchart of yet another text clustering method in an embodiment of the present disclosure.
- FIG. 5 is a flowchart of yet another text clustering method in an embodiment of the present disclosure.
- FIG. 6 is a schematic structural diagram of a text clustering apparatus in an embodiment of the present disclosure.
- FIG. 7 is a schematic structural diagram of an electronic device in an embodiment of the present disclosure.
- the term "including" and variations thereof are open-ended inclusions, i.e., "including but not limited to".
- the term “based on” is “based at least in part on.”
- the term “one embodiment” means “at least one embodiment”; the term “another embodiment” means “at least one additional embodiment”; the term “some embodiments” means “at least some embodiments”. Relevant definitions of other terms will be given in the description below.
- FIG. 1 is a flowchart of a text clustering method provided by an embodiment of the present disclosure.
- the embodiments of the present disclosure can be applied to text clustering scenarios. The method may be executed by a text clustering apparatus, which may be implemented in hardware and/or software and is generally integrated into a device with a text clustering function, such as a server, a mobile terminal, or a server cluster.
- the method includes the following steps:
- Step 110: Obtain a target text data set to be clustered, wherein the target text data set includes at least one piece of target text data.
- the target text data set includes at least one piece of target text data, where the target text may be various types of text data, such as news, advertisement, network, natural language, medical, etc. text data.
- the categories of at least one piece of target text data in the target text data set may be the same or different.
- the target text data may be English text, Chinese text, or Korean text.
- the target text data to be clustered can be collected through a web crawler technology, and the target text data can also be obtained through optical character recognition, speech recognition, handwriting recognition, and the like.
- the target text data input by the user may be collected in real time, and the collected text data may be used as the text data to be clustered.
- Step 120: For each piece of target text data in the target text data set, calculate a first importance score of at least one word in the target text data, sort the at least one word based on the first importance score, and generate a word sequence to be searched corresponding to the target text data.
- word segmentation processing is performed on each piece of target text data in the target text data set, so as to divide each piece of target text data into at least one word.
- word segmentation preprocessing may also be performed on each piece of target text data, such as removing punctuation and stop words. Then, the first importance score of at least one word in each piece of target text data is calculated. The first importance score reflects the importance of each word in the target text data: the larger the first importance score, the more important the word is in the target text data; conversely, the smaller the first importance score, the less important the word is.
- the number of occurrences of each word in the target text data may be counted, and the number of occurrences of the word in the target text data may be used as the first importance score.
- the word frequency-inverse document frequency of a word in the target text data may be used as the first importance score of the word.
- calculating the first importance score of at least one word in the target text data includes: for each piece of target text data in the target text data set, respectively calculating a first word frequency-inverse document frequency of at least one word in the target text data; and respectively calculating the first importance score of the at least one word according to the at least one first word frequency-inverse document frequency. It should be noted that the embodiments of the present disclosure do not limit the calculation method of the first importance score of the at least one word in the target text data.
- the at least one word in the target text data is sorted based on the first importance score; for example, the words may be sorted in descending order of the first importance score, and the sequence composed of the sorted words is used as the word sequence to be searched corresponding to the target text data. It can be understood that the higher a word ranks in the word sequence to be searched, the greater its first importance score, indicating that the word is more important in the target text data and better indicates the meaning and content the target text data expresses, or its category.
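The sorting step above can be sketched as follows. This is a minimal illustration that uses occurrence counts as the importance score; the function name and the tie-breaking rule are assumptions, and a tf-idf score could be substituted without changing the structure:

```python
from collections import Counter

def build_search_sequence(words):
    """Score each word by its occurrence count in the segmented text
    (one simple choice of first importance score) and sort the distinct
    words in descending score order, forming the word sequence to be
    searched."""
    counts = Counter(words)
    first_pos = {}
    for i, w in enumerate(words):
        first_pos.setdefault(w, i)  # stable tie-break: first occurrence wins
    return sorted(counts, key=lambda w: (-counts[w], first_pos[w]))
```

For example, a segmented text `["b", "a", "b", "c", "a", "b"]` yields the search sequence `["b", "a", "c"]`, since "b" occurs most often.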
- Step 130: For each word sequence to be searched, search a pre-built dictionary tree for a target word sequence adapted to the word sequence to be searched, wherein the target word sequence is a subsequence of the word sequence to be searched.
- a pre-built dictionary tree is acquired, wherein the dictionary tree is constructed based on a pre-configured target corpus.
- a dictionary tree is searched for a target word sequence adapted to the word sequence to be searched.
- searching a pre-built dictionary tree for a target word sequence adapted to the word sequence to be searched includes: for each word sequence to be searched, searching the pre-built dictionary tree, in order from the root node to the child nodes, for the target word sequence adapted to the word sequence to be searched.
- the dictionary tree is searched for a first target node matching the first word of the word sequence to be searched; then all child nodes connected to the first target node are searched for a second target node matching the second word; then all child nodes connected to the second target node are searched for a third target node matching the third word; and so on, until no node matching the (p+1)-th word of the word sequence to be searched is found among the child nodes connected to the p-th target node. The sequence composed of the words of the matched target nodes, that is, the words of the word sequence to be searched that can be matched against nodes in the dictionary tree, is taken as the target word sequence.
- the target word sequence is a subsequence of the word sequence to be searched.
- for example, the word sequence to be searched is [A-B-C-D-E], where A, B, C, D, and E respectively represent words in the sequence. Searching the dictionary tree in order from the root node to the child nodes, target nodes matching A, B, and C can be found: a first target node matching A is found in the dictionary tree, a second target node matching B is found among the child nodes connected to the first target node, and a third target node matching C is found among the child nodes connected to the second target node; however, no node matching D is found among the child nodes connected to the third target node. The sequence consisting of A, B, and C is therefore used as the target word sequence.
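Assuming the dictionary tree is represented as nested dictionaries (an illustrative representation; the patent does not prescribe one), the search described above is a longest-prefix walk:

```python
def find_target_sequence(trie, search_sequence):
    """Walk the dictionary tree from the root, matching the words of the
    word sequence to be searched in order; stop at the first word with no
    matching child node. The matched prefix is the target word sequence."""
    node, target = trie, []
    for word in search_sequence:
        if word not in node:
            break
        target.append(word)
        node = node[word]
    return target
```

With a tree containing the path A-B-C, searching [A, B, C, D, E] returns [A, B, C], matching the example in the text.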
- Step 140: Cluster the target text data corresponding to the at least one target word sequence according to the at least one target word sequence, respectively, to obtain a text clustering result.
- the target text data corresponding to the at least one target word sequence is clustered according to the at least one target word sequence. It can be understood that the target word sequence intuitively reflects the content or category of the target text data: if the target word sequences corresponding to pieces of target text data are the same or highly similar, the categories or expressed contents of those pieces are the same or similar. Therefore, the target text data can be clustered according to the target word sequences.
- target text data with the same target word sequence can be clustered into the same cluster, and target text data with different target word sequences into different clusters; alternatively, the similarity between target word sequences can be calculated, and target text data whose similarity is greater than a preset threshold are clustered into the same cluster while target text data whose similarity is below the threshold are placed in different clusters. It should be noted that the embodiments of the present disclosure do not limit the manner of clustering the corresponding target text data according to the target word sequences.
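The simplest variant, clustering texts whose target word sequences are identical, can be sketched as follows (the function name is an assumption; threshold-based similarity clustering would replace the exact-match key with pairwise comparisons):

```python
def cluster_by_target_sequence(texts, target_sequences):
    """Group pieces of target text data whose target word sequences are
    identical into the same cluster; each distinct target word sequence
    becomes one cluster key."""
    clusters = {}
    for text, sequence in zip(texts, target_sequences):
        clusters.setdefault(tuple(sequence), []).append(text)
    return list(clusters.values())
```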
- in summary: a target text data set to be clustered is obtained, wherein the target text data set includes at least one piece of target text data; for each piece of target text data, the first importance score of at least one word is calculated, the at least one word is sorted based on the first importance score, and a word sequence to be searched corresponding to the target text data is generated; for each word sequence to be searched, a pre-built dictionary tree is searched for a target word sequence adapted to it, wherein the target word sequence is a subsequence of the word sequence to be searched; and the target text data corresponding to the at least one target word sequence is clustered according to the at least one target word sequence to obtain a text clustering result.
- the text clustering method thus calculates importance scores for the words of the text data to be clustered, sorts the words based on those scores to generate the word sequence to be searched, and then finds the adapted target word sequence in the pre-built dictionary tree, so that the text data is clustered based on the target word sequence. This simplifies the process of text clustering, greatly reduces its time complexity, and effectively improves the efficiency and accuracy of text clustering.
- calculating the first importance score of at least one word in the target text data includes: for each piece of target text data in the target text data set, respectively calculating the first word frequency-inverse document frequency of at least one word in the target text data; and respectively calculating the first importance score of the at least one word according to the at least one first word frequency-inverse document frequency.
- the first word frequency-inverse document frequency can indirectly reflect the importance of each word in the target text data. Therefore, the first word frequency-inverse document frequency of each word in the target text data can be calculated, and the first importance score of each word can then be calculated from it.
- calculating the first word frequency-inverse document frequency of at least one word in the target text data respectively includes: respectively determining the first word frequency and the first inverse document frequency of each word in the target text data; and calculating the first word frequency-inverse document frequency of the corresponding word according to the first word frequency and the first inverse document frequency, wherein the first word frequency-inverse document frequency is the product of the first word frequency and the first inverse document frequency.
- respectively determining the first word frequency and the first inverse document frequency of each word in the target text data includes: determining the number of occurrences of each word in the target text data, and using the number of occurrences as the first word frequency of the corresponding word; obtain parameter configuration information corresponding to the dictionary tree; wherein, the parameter configuration information includes an inverse document frequency list, and the inverse document frequency list includes each The inverse document frequency of the word; in the inverse document frequency list, find the inverse document frequency corresponding to at least one word in the target text data respectively, as the first inverse document frequency of at least one word in the target text data .
- the parameter configuration information corresponding to the dictionary tree is acquired, wherein the parameter configuration information is the parameter information determined in the process of constructing the dictionary tree based on the target corpus.
- the parameter configuration information may include an inverse document frequency list composed of an inverse document frequency (Inverse Document Frequency, IDF) of each word contained in the dictionary tree.
- the parameter configuration information further includes a distribution deviation list, wherein the distribution deviation list includes the distribution deviation of each word included in the dictionary tree. Before calculating the first importance score of at least one word in the target text data according to the at least one first word frequency-inverse document frequency, the method further includes: in the distribution deviation list, respectively searching for the distribution deviation corresponding to each word in the target text data as the first distribution deviation of that word. Calculating the first importance score according to the at least one first word frequency-inverse document frequency then includes: calculating the first importance score of each word in the target text data according to each first word frequency-inverse document frequency and the corresponding first distribution deviation, wherein the first importance score is the product of the first word frequency-inverse document frequency and the first distribution deviation.
- the parameter configuration information may further include a distribution deviation list consisting of the distribution deviation of each word in the dictionary tree. It is understandable that, in the process of constructing the dictionary tree based on the target corpus, it is necessary to calculate not only the inverse document frequency of each word in the target corpus but also its distribution deviation, and then build the dictionary tree based on the inverse document frequencies and distribution deviations of multiple words. The distribution deviation reflects the deviation of each word's distribution between the target corpus and the total corpus.
- in the distribution deviation list corresponding to the dictionary tree, the distribution deviation corresponding to each word in the target text data is found, and the found distribution deviation is used as the first distribution deviation of that word in the target text data. Then, the first importance score of the word in the target text data is calculated according to the first word frequency-inverse document frequency and the corresponding first distribution deviation, wherein the first importance score is the product of the first word frequency-inverse document frequency and the first distribution deviation.
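A sketch of this lookup-based scoring, assuming the idf list and distribution deviation list from the parameter configuration are held as plain dictionaries, with out-of-vocabulary fallbacks of 0 (idf) and 1 (deviation). Both the representation and the fallbacks are assumptions, not prescribed by the text:

```python
from collections import Counter

def first_importance_scores(words, idf_list, deviation_list):
    """First importance score of each word = first word frequency (count
    in this text) x first inverse document frequency (looked up in the
    dictionary tree's idf list) x first distribution deviation (looked
    up in the distribution deviation list)."""
    counts = Counter(words)
    return {
        w: counts[w] * idf_list.get(w, 0.0) * deviation_list.get(w, 1.0)
        for w in counts
    }
```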
- before acquiring the target text data set to be clustered, the method further includes: acquiring a total corpus and a target corpus, wherein the total corpus includes the target corpus and the target corpus contains at least one piece of sample text data; calculating the second distribution deviation of each word contained in the target corpus relative to the total corpus; for each piece of sample text data in the target corpus, calculating the second importance score of each word according to its second distribution deviation, sorting at least one word in the piece of sample text data in descending order of the second importance score, and generating a sample word sequence corresponding to the piece of sample text data; and constructing the dictionary tree based on at least one sample word sequence. This setting can accurately and quickly build a dictionary tree corresponding to the target corpus.
- the target corpus may be a corpus belonging to a certain category or a certain field.
- the target corpus may be an advertising corpus, a network corpus, a legal corpus, or a medical corpus.
- the total corpus is a larger corpus that includes the target corpus.
- the target corpus is an advertising corpus
- the total corpus may include a network corpus, a legal corpus, a medical corpus, and an advertising corpus.
- the target corpus includes at least one piece of sample text data.
- the total corpus and the target corpus can be obtained through web crawling technology. It should be noted that, the embodiment of the present disclosure does not limit the type of the target corpus, nor does it limit other corpus contents in the total corpus except the target corpus.
- the second distribution deviation of each word contained in the target corpus relative to the total corpus can be calculated, wherein the second distribution deviation reflects the difference in the distribution of each word between the target corpus and the total corpus.
- calculating the second distribution deviation of each word included in the target corpus relative to the total corpus includes: calculating the second distribution deviation of each word according to the following formula, in which:
- b represents the second distribution deviation of the word w in the target corpus relative to the total corpus
- freq_a(w) = t/M represents the frequency of occurrence of the word w in the target corpus
- freq(w) = t'/M' represents the frequency of occurrence of the word w in the total corpus
- t represents the number of occurrences of the word w in the target corpus
- M represents the total number of words contained in the target corpus
- t' represents the number of occurrences of the word w in the total corpus
- M' represents the total number of words contained in the total corpus
- for example, if the total number of words contained in the target corpus is 1000 and the word "movement" appears 100 times in the target corpus, then the frequency of "movement" in the target corpus is 100/1000 = 0.1;
- if the total number of words contained in the total corpus is 5000 and the word "movement" appears 120 times in the total corpus, then the frequency of "movement" in the total corpus is 120/5000 = 0.024;
- the second distribution deviation of "movement" is then obtained by substituting these two frequencies into the above formula.
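The two frequencies in the example work out as below. Since the deviation formula itself is not reproduced here, the final combination is shown only as an illustrative ratio, not as the patent's formula:

```python
def corpus_frequency(occurrences, total_words):
    """freq(w) = t / M: occurrences of the word divided by the total
    number of words in the corpus."""
    return occurrences / total_words

freq_target = corpus_frequency(100, 1000)  # "movement" in the target corpus: 0.1
freq_total = corpus_frequency(120, 5000)   # "movement" in the total corpus: 0.024
# Illustrative only: one simple way to compare the two frequencies;
# the patent's own combination of freq_a(w) and freq(w) is not shown here.
illustrative_ratio = freq_target / freq_total
```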
- the second importance score of each word is calculated according to the second distribution deviation of at least one word in the sample text data, wherein the second importance score reflects the importance of each word in the sample text data: the larger the second importance score, the more important the word is in the sample text data; conversely, the smaller the second importance score, the less important the word is. Then the at least one word in the sample text data is sorted in descending order of the second importance score, and the sequence consisting of the sorted words is used as the sample word sequence corresponding to the sample text data.
- a dictionary tree is constructed based on a sample word sequence corresponding to at least one piece of sample text data in the target corpus.
- an empty node is used as the root node of the dictionary tree;
- the first word of each sample word sequence is used as a child node of the root node;
- the second word of each sample word sequence is used as a child node of the node where the first word of the same sample word sequence is located;
- the third word of each sample word sequence is used as a child node of the node where the second word of the same sample word sequence is located, and so on, until all words in all sample word sequences have been filled into nodes.
- for example, the sample word sequences corresponding to five pieces of sample text data in the target corpus are: [intermediate commodity], [intermediate bigbuy], [intermediate business Korean], [business middle], and [behind the middle]; the dictionary tree constructed from these five sample word sequences is shown in FIG. 2.
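The construction steps above can be sketched with nested dictionaries as nodes; the empty dictionary at the top plays the role of the empty root node (the representation is an assumption):

```python
def build_trie(sample_word_sequences):
    """Insert each sample word sequence into the dictionary tree: each
    word becomes a child of the node holding the previous word of the
    same sequence, so sequences with a common prefix share nodes."""
    root = {}
    for sequence in sample_word_sequences:
        node = root
        for word in sequence:
            node = node.setdefault(word, {})
    return root
```

For instance, the sequences [A, B] and [A, C] share the node for A and branch into children B and C.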
- calculating the second importance score of the corresponding word according to the second distribution deviation of each word in the sample text data includes: for each piece of sample text data in the target corpus, calculating the second word frequency-inverse document frequency of each word in the sample text data; and respectively calculating, according to each second word frequency-inverse document frequency and the corresponding second distribution deviation, the second importance score of each word in the sample text data.
- the second word frequency-inverse document frequency can indirectly reflect the importance of each word in the sample text data.
- the second word frequency-inverse document frequency of each word in the sample text data can be calculated, and the second importance score of each word can then be calculated according to each second word frequency-inverse document frequency and the corresponding second distribution deviation.
- the second importance score is the product of the second word frequency-inverse document frequency and the corresponding second distribution deviation.
- calculating the second importance score of each word in the sample text data according to each second word frequency-inverse document frequency and the corresponding second distribution deviation includes: calculating the second importance score of each word in the sample text data according to the following formula:
- s(w) = tf-idf_a(w) × b
- where s(w) represents the second importance score of the word w in the sample text data, tf-idf_a(w) represents the second word frequency-inverse document frequency of the word w in the sample text data, and b represents the corresponding second distribution deviation.
- determining the second word frequency and the second inverse document frequency of each word in the sample text data respectively includes: calculating the second word frequency and the second inverse document frequency of each word in the sample text data respectively according to the following formula: Document frequency:
- Calculating the second word frequency-inverse document frequency of the corresponding word in the sample text data according to the second word frequency and the second inverse document frequency includes: calculating the second word frequency of each word in the sample text data according to the following formula Term Frequency - Inverse Document Frequency:
- w represents any word in the sample text data
- tf(w) represents the second word frequency of the word w in the sample text data
- idf(w) represents the second inverse document frequency of the word w in the sample text data
- tf-idf(w) represents the second word frequency-inverse document frequency of the word w in the sample text data
- m represents the number of times the word w appears in the sample text data
- n represents the number of pieces of sample text data in the target corpus that contain the word w
- N represents the total number of pieces of sample text data contained in the target corpus.
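The formulas referred to above are not reproduced in this excerpt. A sketch using the symbol definitions just given, assuming the standard conventions tf(w) = m and idf(w) = log(N / n) (some implementations add smoothing, e.g. log(N / (n + 1)); the plain form is used here):

```python
import math

def tf_idf(m: int, n: int, N: int) -> float:
    # m: times word w appears in this piece of sample text data
    # n: pieces of sample text data in the target corpus containing w
    # N: total pieces of sample text data in the target corpus
    tf = m                 # second word frequency
    idf = math.log(N / n)  # second inverse document frequency (assumed form)
    return tf * idf        # second word frequency-inverse document frequency
```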
- the method further includes: determining the number of occurrences of the word of each node in the dictionary tree at the same position in all sample word sequences; and pruning the dictionary tree according to these numbers of occurrences until the number of nodes contained in the dictionary tree reaches a preset number.
- This setting can effectively improve the search speed of the target word sequence on the premise that the target word sequence corresponding to the target text data can be accurately determined based on the dictionary tree, thereby improving the efficiency of text clustering.
- the dictionary tree is traversed to determine the total number of occurrences of the word of each node in the dictionary tree at the same position in all sample word sequences.
- in the dictionary tree shown in FIG. 2, in the order from the root node to the child nodes: the number of occurrences of the word "middle" in the first level of the dictionary tree at the same position in all sample word sequences is 3, the number of occurrences of the word "quotient" in the first level is 1, and the number of occurrences of the word "behind" in the first level is 1;
- the number of occurrences of the word "quotient" in the second level is 2, the number of occurrences of the word "big" in the second level is 1, and the number of occurrences of the word "middle" in the second level is 2;
- the numbers of occurrences of the words "pin", "bu" and "han" in the third level are all 1. According to the number of occurrences of the word of each node in the dictionary tree at the same position in all sample word sequences, the dictionary tree is pruned until the number of nodes contained in the dictionary tree reaches a preset number.
- pruning the dictionary tree according to the number of occurrences of the word of each node in the dictionary tree at the same position in all sample word sequences, until the number of nodes contained in the dictionary tree reaches a preset number, includes: in increasing order of the number of occurrences of the words of the nodes at the same position in all sample word sequences, sequentially deleting the nodes corresponding to the same number of occurrences in the dictionary tree, until the number of nodes contained in the dictionary tree reaches the preset number.
- the nodes whose words appear 1 time at the same position in all sample word sequences can be deleted first, and then the nodes whose words appear 2 times at the same position in all sample word sequences, and so on.
- the nodes corresponding to the same number of occurrences in the dictionary tree can be deleted in sequence from the root node to the child nodes.
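The construction and pruning described above can be sketched as follows. This is one simple reading of the pruning rule (lowest occurrence counts deleted first, leaves only); the patent's exact deletion order within equal counts may differ, and all names are illustrative:

```python
class TrieNode:
    def __init__(self, word=""):
        self.word = word
        self.count = 0     # occurrences of this word at this position
        self.children = {}

def build_trie(sequences):
    # Insert each sample word sequence from the root downward,
    # counting occurrences of each word at each position (level).
    root = TrieNode()
    for seq in sequences:
        node = root
        for w in seq:
            node = node.children.setdefault(w, TrieNode(w))
            node.count += 1
    return root

def prune(root, max_nodes):
    # Repeatedly remove the leaf node with the smallest occurrence
    # count until at most max_nodes nodes remain.
    def edges(n):
        for c in n.children.values():
            yield n, c
            yield from edges(c)
    while True:
        all_edges = list(edges(root))
        if len(all_edges) <= max_nodes:
            return root
        leaves = [(p, c) for p, c in all_edges if not c.children]
        parent, child = min(leaves, key=lambda pc: pc[1].count)
        del parent.children[child.word]
```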
- FIG. 3 is a flowchart of a text clustering method in another embodiment of the present disclosure. As shown in FIG. 3 , the method includes the following steps:
- Step 310 Obtain a target text data set to be clustered; wherein, the target text data set includes at least one piece of target text data.
- Step 320 Obtain parameter configuration information corresponding to the pre-built dictionary tree; wherein, the parameter configuration information includes an inverse document frequency list and a distribution deviation list; the inverse document frequency list includes the inverse document frequency of each word contained in the dictionary tree, and the distribution deviation list includes the distribution deviation of each word contained in the dictionary tree.
- Step 330 For each piece of target text data in the target text data set, determine the number of occurrences of each word in the target text data, and use the number of occurrences as the first word frequency of the corresponding word.
- Step 340 in the inverse document frequency list, search for the inverse document frequency corresponding to at least one word in the target text data, respectively, as the first inverse document frequency of the at least one word in the target text data.
- Step 350 Calculate the first word frequency-inverse document frequency of the corresponding word according to the first word frequency and the first inverse document frequency; wherein the first word frequency-inverse document frequency is the product of the first word frequency and the first inverse document frequency.
- Step 360 in the distribution deviation list, search for the distribution deviation corresponding to each word in the target text data, as the first distribution deviation of each word in the target text data.
- Step 370 Calculate the first importance score of each word in the target text data according to each first word frequency-inverse document frequency and the corresponding first distribution deviation; wherein, the first importance score is the product of the first word frequency-inverse document frequency and the first distribution deviation.
- Step 380 Rank at least one word in the target text data based on the first importance score, and generate a to-be-searched word sequence corresponding to the target text data.
- Step 390 For each word sequence to be searched, in the pre-built dictionary tree, search for a target word sequence adapted to the word sequence to be searched in the order from the root node to the child nodes.
- Step 3100 Cluster the target text data corresponding to the at least one target word sequence according to the at least one target word sequence, respectively, to obtain a text clustering result.
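Steps 330 through 380 above can be sketched as one query-time scoring pass. The dict-based lookup tables stand in for the inverse document frequency list and the distribution deviation list; all names are illustrative:

```python
def build_search_sequence(words, idf_list, deviation_list):
    # Count each word's first word frequency (step 330), look up its
    # inverse document frequency (step 340) and distribution deviation
    # (step 360), multiply into an importance score (steps 350/370),
    # and sort words by descending score (step 380).
    tf = {}
    for w in words:
        tf[w] = tf.get(w, 0) + 1
    scores = {w: tf[w] * idf_list.get(w, 0.0) * deviation_list.get(w, 0.0)
              for w in tf}
    return sorted(scores, key=scores.get, reverse=True)
```

The returned list is the to-be-searched word sequence that the dictionary tree is then walked against.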
- the technical solution of the embodiment of the present disclosure calculates the importance score of each word by determining the word frequency, inverse document frequency and distribution deviation of each word in the text data to be clustered, sorts at least one word in the text data to be clustered based on the importance score to generate a sequence of words to be searched, and then searches a pre-built dictionary tree for a target word sequence that matches the sequence of words to be searched, so as to cluster the text data based on the target word sequence. This simplifies the text clustering process, greatly reduces the time complexity of text clustering, and effectively improves the efficiency and accuracy of text clustering.
- FIG. 4 is a flowchart of a text clustering method in another embodiment of the present disclosure. As shown in FIG. 4 , the method includes the following steps:
- Step 410 Obtain a total corpus and a target corpus; wherein, the total corpus includes the target corpus, and the target corpus includes at least one piece of sample text data.
- Step 420 Calculate the second distribution deviation of each word contained in the target corpus relative to the total corpus.
- calculating the second distribution deviation of each word included in the target corpus relative to the total corpus includes: calculating the second distribution deviation of each word included in the target corpus relative to the total corpus according to the following formula:
- b represents the second distribution deviation of the word w in the target corpus relative to the total corpus
- freq a (w) represents the frequency of the word w in the target corpus
- freq(w) represents the frequency of the word w in the total corpus
- t represents the number of occurrences of word w in the target corpus
- M represents the total number of words contained in the target corpus
- t' represents the number of occurrences of word w in the total corpus
- M' represents the total number of words contained in the total corpus.
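The deviation formula itself is not reproduced in this excerpt; the symbol list above suggests the ratio of the word's in-domain frequency to its overall frequency. A sketch under that assumption (function name is illustrative):

```python
def distribution_deviation(t, M, t_total, M_total):
    # Second distribution deviation b of a word w, following the symbol
    # definitions above: freq_a(w) = t / M (target corpus) and
    # freq(w) = t' / M' (total corpus). The ratio form
    # b = freq_a(w) / freq(w) is assumed here.
    freq_a = t / M
    freq = t_total / M_total
    return freq_a / freq
```

A word that is ten times more frequent inside the target corpus than in the total corpus thus gets deviation 10, boosting its importance score.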
- Step 430 For each piece of sample text data in the target corpus, calculate the second importance score of the corresponding word according to the second distribution deviation of each word in the sample text data, sort at least one word in each piece of sample text data in descending order of the second importance score, and generate a sample word sequence corresponding to the sample text data.
- calculating the second importance score of the corresponding word according to the second distribution deviation of each word in the sample text data includes: for each piece of sample text data in the target corpus, calculating the second word frequency-inverse document frequency of each word in the sample text data respectively; and calculating the second importance score of each word in the sample text data according to each second word frequency-inverse document frequency and the corresponding second distribution deviation.
- calculating the second word frequency-inverse document frequency of each word in the sample text data respectively includes: respectively determining the second word frequency and the second inverse document frequency of each word in the sample text data; The second inverse document frequency calculates the second word frequency-inverse document frequency of the corresponding word in the sample text data.
- determining the second word frequency and the second inverse document frequency of each word in the sample text data respectively includes: calculating the second word frequency and the second inverse document frequency of each word in the sample text data according to the following formula:
- Calculate the second word frequency-inverse document frequency of corresponding words in the sample text data according to the second word frequency and the second inverse document frequency comprising: calculating the second word frequency-inverse document frequency of each word in the sample text data according to the following formula:
- w represents any word in the sample text data
- tf(w) represents the second word frequency of the word w in the sample text data
- idf(w) represents the second inverse document frequency of the word w in the sample text data
- tf-idf(w) represents the second word frequency-inverse document frequency of the word w in the sample text data
- m represents the number of times the word w appears in the sample text data
- n represents the number of pieces of sample text data in the target corpus that contain the word w
- N represents the total number of pieces of sample text data contained in the target corpus.
- calculate the second importance score of each word in the sample text data according to each second word frequency-inverse document frequency and the corresponding second distribution deviation including:
- s(w) represents the second importance score of the word w in the sample text data
- tf-idf a (w) represents the second word frequency-inverse document frequency of the word w in the sample text data
- Step 440 construct a dictionary tree based on at least one sample word sequence.
- Step 450 Determine the number of occurrences of the word of each node in the dictionary tree at the same position in all sample word sequences.
- Step 460 Delete the nodes corresponding to the same number of occurrences in the dictionary tree in sequence, according to the number of occurrences of the word of each node in the dictionary tree at the same position in all sample word sequences, until the number of nodes contained in the dictionary tree reaches a preset number.
- Step 470 Obtain a target text data set to be clustered; wherein, the target text data set includes at least one piece of target text data.
- Step 480 For each piece of target text data in the target text data set, calculate the first importance score of at least one word in the target text data, sort at least one word in the target text data based on the first importance score, and generate the sequence of words to be searched corresponding to the target text data.
- Step 490 For each word sequence to be searched, search the pre-built dictionary tree, in the order from the root node to the child nodes, for a target word sequence that is adapted to the word sequence to be searched; wherein, the target word sequence belongs to a subsequence of the word sequence to be searched.
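The root-to-child search in Step 490 can be sketched as a greedy walk that follows each word of the search sequence found among the current node's children; the matched words form a subsequence of the search sequence. This is one plausible reading of "adapted to"; the patent's exact matching rule may differ, and the nested-dict trie representation is illustrative:

```python
def find_target_sequence(trie, search_sequence):
    # trie: nested dicts mapping word -> child subtree.
    # Walk from the root toward the children, collecting every word of
    # the to-be-searched sequence that exists at the current node.
    node, target = trie, []
    for w in search_sequence:
        if w in node:
            target.append(w)
            node = node[w]
    return target
```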
- Step 4100 Cluster the target text data corresponding to the at least one target word sequence according to the at least one target word sequence, respectively, to obtain a text clustering result.
- the text clustering method can build a dictionary tree matching the target corpus and prune the dictionary tree, then calculate the importance score of at least one word in the text data to be clustered, sort at least one word in the text data to be clustered based on the importance score to generate a sequence of words to be searched, and then search the dictionary tree for a target word sequence adapted to the sequence of words to be searched, thereby clustering the text data based on the target word sequence.
- by pruning the dictionary tree, the depth of the dictionary tree can be reduced, so the search speed of the target word sequence can be effectively improved and the time complexity of text clustering greatly reduced, which effectively improves the efficiency and accuracy of text clustering.
- FIG. 5 is a flowchart of a text clustering method in another embodiment of the present disclosure. As shown in FIG. 5 , the method includes the following steps:
- Step 510 Obtain a total corpus and a target corpus; wherein, the total corpus includes the target corpus, and the target corpus includes at least one piece of sample text data.
- Step 520 Calculate the second distribution deviation of each word contained in the target corpus relative to the total corpus.
- Step 530 For each piece of sample text data in the target corpus, determine the second word frequency and the second inverse document frequency of each word in the sample text data, respectively.
- Step 540 Calculate the second word frequency-inverse document frequency of the corresponding word in the sample text data according to the second word frequency and the second inverse document frequency.
- Step 550 Calculate the second importance score of each word in the sample text data according to each second word frequency-inverse document frequency and the corresponding second distribution deviation.
- Step 560 Sort at least one word in each piece of sample text data in descending order of the second importance score to generate a sample word sequence corresponding to the sample text data.
- Step 570 construct a dictionary tree based on at least one sample word sequence.
- Step 580 Store the distribution deviation list composed of at least one second distribution deviation and the inverse document frequency list composed of at least one second inverse document frequency as parameter configuration information of the dictionary tree.
- Step 590 Obtain a target text data set to be clustered; wherein, the target text data set includes at least one piece of target text data.
- Step 5100 For each piece of target text data in the target text data set, determine the number of occurrences of each word in the target text data, and use the number of occurrences as the first word frequency of each word.
- Step 5110 In the inverse document frequency list, search for the inverse document frequency corresponding to each word in the target text data, as the first inverse document frequency of each word in the target text data.
- Step 5120 Calculate the first word frequency-inverse document frequency of the corresponding word according to the first word frequency and the first inverse document frequency; wherein the first word frequency-inverse document frequency is the product of the first word frequency and the first inverse document frequency.
- Step 5130 In the distribution deviation list, search for the distribution deviation corresponding to each word in the target text data, as the first distribution deviation of each word in the target text data.
- Step 5140 Calculate the first importance score of each word in the target text data according to each first word frequency-inverse document frequency and the corresponding first distribution deviation; wherein, the first importance score is the product of the first word frequency-inverse document frequency and the first distribution deviation.
- Step 5150 Sort at least one word in the target text data based on the first importance score, and generate a to-be-searched word sequence corresponding to the target text data.
- Step 5160 For each word sequence to be searched, a pre-built dictionary tree is searched for a target word sequence adapted to the word sequence to be searched; wherein the target word sequence belongs to a subsequence of the word sequence to be searched.
- Step 5170 Cluster the target text data corresponding to the at least one target word sequence according to the at least one target word sequence, respectively, to obtain a text clustering result.
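The final clustering step (Step 5170) amounts to grouping target text data whose matched target word sequences coincide. A minimal sketch (names are illustrative):

```python
from collections import defaultdict

def cluster_by_target_sequence(texts_with_targets):
    # Group pieces of target text data whose matched target word
    # sequences are identical into the same cluster.
    # texts_with_targets: iterable of (text, target_word_sequence).
    clusters = defaultdict(list)
    for text, target in texts_with_targets:
        clusters[tuple(target)].append(text)
    return dict(clusters)
```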
- the text clustering method provided by the embodiments of the present disclosure clusters text data based on a dictionary tree, which simplifies the process of text clustering, greatly reduces the time complexity of text clustering, and effectively improves the efficiency and accuracy of text clustering.
- FIG. 6 is a schematic structural diagram of a text clustering apparatus according to another embodiment of the present disclosure. As shown in FIG. 6 , the apparatus includes: a text data acquisition module 610 , a search word sequence generation module 620 , a target word sequence determination module 630 and a text clustering module 640 .
- the text data acquisition module 610 is configured to acquire the target text data set to be clustered; wherein, the target text data set includes at least one piece of target text data;
- the search word sequence generation module 620 is configured to, for each piece of target text data in the target text data set, calculate the first importance score of at least one word in the target text data, and based on the first importance score Sort at least one word in the target text data to generate a to-be-searched word sequence corresponding to the target text data;
- the target word sequence determination module 630 is configured to search a pre-built dictionary tree for a target word sequence that is adapted to the to-be-searched word sequence for each to-be-searched word sequence; wherein, the target word sequence belongs to the to-be-searched word sequence. search for subsequences of word sequences;
- the text clustering module 640 is configured to cluster the target text data corresponding to the at least one target word sequence according to the at least one target word sequence, respectively, to obtain a text clustering result.
- a target text data set to be clustered is obtained; wherein, the target text data set includes at least one piece of target text data; for each piece of target text data in the target text data set, the first importance score of at least one word in the target text data is calculated, at least one word in the target text data is sorted based on the first importance score, and a sequence of words to be searched corresponding to the target text data is generated;
- for each word sequence to be searched, a pre-built dictionary tree is searched for a target word sequence adapted to the word sequence to be searched; wherein, the target word sequence belongs to a subsequence of the word sequence to be searched; and the target text data corresponding to the at least one target word sequence is clustered according to the at least one target word sequence to obtain a text clustering result.
- the text clustering apparatus calculates the importance score of each word in the text data to be clustered, sorts at least one word in the text data to be clustered based on the importance score, and generates the sequence of words to be searched; then, based on the pre-built dictionary tree, the target word sequence adapted to the sequence of words to be searched is found, so that the text data is clustered based on the target word sequence. This simplifies the process of text clustering, greatly reduces the time complexity of text clustering, and effectively improves the efficiency and accuracy of text clustering.
- the search word sequence generation module includes:
- a first word frequency-inverse document frequency calculation unit configured to calculate the first word frequency-inverse document frequency of at least one word in the target text data for each piece of target text data in the target text data set;
- the first importance score calculation unit is configured to calculate the first importance score of at least one word in the target text data according to at least one first word frequency-inverse document frequency, respectively.
- the first word frequency-inverse document frequency calculation unit includes:
- a first frequency determination subunit configured to respectively determine the first word frequency and the first inverse document frequency of each word in the target text data
- a first word frequency-inverse document frequency calculation subunit configured to calculate the first word frequency-inverse document frequency of the corresponding word according to the first word frequency and the first inverse document frequency; wherein, the first word frequency-inverse document frequency is the product of the first word frequency and the first inverse document frequency.
- the first frequency determination subunit is set to:
- the parameter configuration information includes an inverse document frequency list, and the inverse document frequency list includes the inverse document frequency of each word contained in the dictionary tree;
- in the inverse document frequency list, the inverse document frequency corresponding to each word in the target text data is respectively searched for, as the first inverse document frequency of each word in the target text data.
- the parameter configuration information further includes a distribution deviation list; wherein, the distribution deviation list includes the distribution deviation of each word contained in the dictionary tree;
- the device also includes:
- the distribution deviation determination module is set to, before calculating the first importance score of at least one word in the target text data according to the at least one first word frequency-inverse document frequency, search the distribution deviation list for the distribution deviation corresponding to each word in the target text data, as the first distribution deviation of each word in the target text data;
- the first importance score calculation unit is set to:
- the first importance score is the product of the first word frequency-inverse document frequency and the first distribution deviation.
- the target word sequence determination module is set to:
- a pre-built dictionary tree is searched for a target word sequence adapted to the word sequence to be searched in the order from the root node to the child node.
- the device further includes:
- the corpus acquisition module is configured to acquire a total corpus and a target corpus before acquiring the target text data set to be clustered; wherein, the total corpus includes the target corpus, and the target corpus contains at least one piece of sample text data;
- a distribution deviation calculation module configured to calculate the second distribution deviation of each word contained in the target corpus relative to the total corpus
- the sample word sequence generation module is configured to, for each piece of sample text data in the target corpus, calculate the second importance score of the corresponding word according to the second distribution deviation of each word in the sample text data, sort at least one word in each piece of sample text data in descending order of the second importance score, and generate a sample word sequence corresponding to the sample text data;
- a dictionary tree building module configured to build the dictionary tree based on at least one sample word sequence.
- the sample word sequence generation module includes:
- a second word frequency-inverse document frequency calculation unit configured to calculate the second word frequency-inverse document frequency of each word in the sample text data for each piece of sample text data in the target corpus;
- the second importance score calculation unit is configured to calculate the second importance score of each word in the sample text data according to each second word frequency-inverse document frequency and the corresponding second distribution deviation, respectively.
- the second word frequency-inverse document frequency calculation unit includes:
- a second frequency determination subunit configured to respectively determine the second word frequency and the second inverse document frequency of each word in the sample text data
- the second word frequency-inverse document frequency calculation subunit is configured to calculate the second word frequency-inverse document frequency of the corresponding word in the sample text data according to the second word frequency and the second inverse document frequency.
- the second frequency determination subunit is set to:
- the second word frequency-inverse document frequency calculation subunit is set to:
- w represents any word in the sample text data
- tf(w) represents the second word frequency of the word w in the sample text data
- idf(w) represents the second inverse document frequency of the word w in the sample text data
- tf-idf(w) represents the second word frequency-inverse document frequency of the word w in the sample text data
- m represents the number of times the word w appears in the sample text data
- n represents the number of pieces of sample text data in the target corpus that contain the word w
- N represents the total number of pieces of sample text data contained in the target corpus.
- the second importance score calculation unit is set to:
- s(w) represents the second importance score of the word w in the sample text data
- tf-idf a (w) represents the second word frequency-inverse document frequency of the word w in the sample text data
- the distribution deviation calculation module is set to:
- b represents the second distribution deviation of the word w in the target corpus relative to the total corpus
- freq a (w) represents the frequency of the word w in the target corpus
- freq(w) represents the frequency of the word w in the total corpus
- t represents the number of occurrences of the word w in the target corpus
- M represents the total number of words contained in the target corpus
- t' represents the number of occurrences of the word w in the total corpus
- M' represents the total number of words contained in the total corpus.
- the device further includes:
- a number of occurrence determination module configured to determine the number of occurrences of the word of each node in the dictionary tree at the same position in all sample word sequences after constructing the dictionary tree based on at least one sample word sequence;
- the dictionary tree pruning module is set to prune the dictionary tree according to the number of occurrences of the word of each node in the dictionary tree at the same position in all sample word sequences, until the number of nodes contained in the dictionary tree reaches a preset number.
- the dictionary tree pruning module is set to:
- the foregoing apparatus can execute the methods provided by all the foregoing embodiments of the present disclosure, and has functional modules corresponding to executing the foregoing methods.
- for technical details that are not described in detail in the embodiments of the present disclosure, reference may be made to the methods provided by all the foregoing embodiments of the present disclosure.
- FIG. 7 shows a schematic structural diagram of an electronic device 300 suitable for implementing an embodiment of the present disclosure.
- the electronic devices in the embodiments of the present disclosure may include, but are not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, personal digital assistants (PDAs), tablets (PADs), portable multimedia players (PMPs) and in-vehicle terminals (such as in-vehicle navigation terminals), stationary terminals such as digital televisions (TVs) and desktop computers, and various forms of servers, such as independent servers or server clusters.
- the electronic device shown in FIG. 7 is only an example, and should not impose any limitation on the functions and scope of use of the embodiments of the present disclosure.
- the electronic device 300 may include a processing device (e.g., a central processing unit, a graphics processor, etc.) 301, which may perform various appropriate actions and processes according to a program stored in a read-only memory (Read-Only Memory, ROM) 302 or a program loaded from a storage device 308 into a random access memory (RAM) 303.
- in the RAM 303, various programs and data required for the operation of the electronic device 300 are also stored.
- the processing device 301, the ROM 302, and the RAM 303 are connected to each other through a bus 304.
- An Input/Output (I/O) interface 305 is also connected to the bus 304 .
- the following devices can be connected to the I/O interface 305: input devices 306 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; output devices 307 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, etc.; storage devices 308 including, for example, a magnetic tape, a hard disk, etc.; and a communication device 309.
- Communication means 309 may allow electronic device 300 to communicate wirelessly or by wire with other devices to exchange data. While FIG. 7 illustrates electronic device 300 having various means, it should be understood that not all of the illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided.
- embodiments of the present disclosure include a computer program product comprising a computer program carried on a computer-readable medium, the computer program containing program code for performing the methods shown in the flowcharts.
- the computer program may be downloaded and installed from the network via the communication device 309, or installed from the storage device 308, or installed from the ROM 302.
- when the computer program is executed by the processing device 301, the above-mentioned functions defined in the methods of the embodiments of the present disclosure are executed.
- the computer-readable medium mentioned above in the present disclosure may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the above two.
- the computer-readable storage medium can be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus or device, or a combination of any of the above.
- Computer readable storage media may include, but are not limited to: an electrical connection having at least one wire, a portable computer disk, a hard disk, a random access memory (RAM), a read only memory (ROM), an erasable programmable read-only memory (Erasable Programmable Read-Only Memory, EPROM, or flash memory), an optical fiber, a portable compact disc read-only memory (Compact Disc-Read Only Memory, CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
- a computer-readable storage medium may be any tangible medium that contains or stores a program that can be used by or in conjunction with an instruction execution system, apparatus, or device.
- a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave with computer-readable program code embodied thereon. Such propagated data signals may take a variety of forms, including but not limited to electromagnetic signals, optical signals, or any suitable combination of the foregoing.
- a computer-readable signal medium can also be any computer-readable medium other than a computer-readable storage medium that can transmit, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.
- Program code embodied on a computer-readable medium may be transmitted using any suitable medium, including but not limited to electrical wire, optical fiber cable, radio frequency (RF), and the like, or any suitable combination of the foregoing.
- the client and the server can communicate using any currently known or future-developed network protocol, such as HTTP (HyperText Transfer Protocol), and can be interconnected with digital data communication in any form or medium (e.g., a communication network).
- Examples of communication networks include a local area network (LAN), a wide area network (WAN), an internetwork (e.g., the Internet), and a peer-to-peer network (e.g., an ad hoc peer-to-peer network), as well as any currently known or future-developed network.
- the above-mentioned computer-readable medium may be included in the above-mentioned electronic device; or may exist alone without being assembled into the electronic device.
- the computer-readable medium carries at least one program, and when the at least one program is executed by the electronic device, the electronic device is caused to: obtain a target text data set to be clustered, wherein the target text data set includes at least one piece of target text data; for each piece of target text data in the target text data set, calculate a first importance score of each word in the piece of target text data, sort each word in the piece of target text data based on the first importance score, and generate a to-be-searched word sequence corresponding to the piece of target text data; for each to-be-searched word sequence, search a pre-built dictionary tree for a target word sequence adapted to the to-be-searched word sequence, wherein the target word sequence is a subsequence of the to-be-searched word sequence; and cluster the corresponding target text data according to each target word sequence to obtain a text clustering result.
- Computer program code for performing the operations of the present disclosure may be written in one or more programming languages, including but not limited to object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages.
- the program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server.
- the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (e.g., through an Internet connection using an Internet service provider).
- each block in the flowchart or block diagrams may represent a module, program segment, or portion of code that contains at least one executable instruction for implementing the specified logical function.
- the functions noted in the blocks may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved.
- each block of the block diagrams and/or flowchart illustrations, and combinations of blocks in the block diagrams and/or flowchart illustrations, can be implemented by dedicated hardware-based systems that perform the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
- the units involved in the embodiments of the present disclosure may be implemented in software or in hardware. In some cases, the name of a unit does not constitute a limitation on the unit itself.
- exemplary types of hardware logic components include: Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on Chip (SOCs), Complex Programmable Logic Devices (CPLDs), and so on.
- a machine-readable medium may be a tangible medium that may contain or store a program for use by or in connection with the instruction execution system, apparatus or device.
- the machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium.
- Machine-readable media may include, but are not limited to, electronic, magnetic, optical, electromagnetic, infrared, or semiconductor systems, apparatuses, or devices, or any suitable combination of the foregoing.
- more specific examples of machine-readable storage media would include an electrical connection based on at least one wire, a portable computer disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
- the present disclosure provides a text clustering method, including:
- the target text data set includes at least one piece of target text data
- the words are sorted, and a sequence of words to be searched corresponding to the target text data is generated;
- a pre-built dictionary tree is searched for a target word sequence adapted to the word sequence to be searched; wherein, the target word sequence belongs to a subsequence of the word sequence to be searched;
- the target text data corresponding to the at least one target word sequence is clustered according to the at least one target word sequence, respectively, to obtain a text clustering result.
- calculating the first importance score of at least one word in the target text data including:
- a first importance score of at least one word in the target text data is calculated according to at least one first word frequency-inverse document frequency, respectively.
- calculating the first word frequency-inverse document frequency of at least one word in the target text data including:
- the first word frequency-inverse document frequency is the product of the first word frequency and the first inverse document frequency.
- determining the first word frequency and the first inverse document frequency of each word in the target text data including:
- the parameter configuration information includes an inverse document frequency list, and the inverse document frequency list includes the inverse document frequency of each word contained in the dictionary tree;
- the inverse document frequency corresponding to each word in the target text data is searched in the inverse document frequency list as the first inverse document frequency of each word in the target text data.
- the parameter configuration information further includes a distribution deviation list; wherein, the distribution deviation list includes the distribution deviation of each word contained in the dictionary tree;
- the method further includes:
- the distribution deviation corresponding to each word in the target text data is searched in the distribution deviation list as the first distribution deviation of each word in the target text data;
- the first importance score is the product of the first word frequency-inverse document frequency and the first distribution deviation.
- a pre-built dictionary tree is searched for a target word sequence adapted to the word sequence to be searched, including:
- a pre-built dictionary tree is searched for a target word sequence adapted to the word sequence to be searched in the order from the root node to the child node.
- before acquiring the target text data set to be clustered, the method further includes:
- the total corpus includes the target corpus, and the target corpus contains at least one piece of sample text data;
- the dictionary tree is constructed based on at least one sample word sequence.
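The construction step above can be sketched as follows. This is an illustrative reading, not the patent's implementation; it assumes each sample word sequence is a list of words already sorted by importance score, and names such as `build_trie` are hypothetical:

```python
class TrieNode:
    """A dictionary-tree node: children keyed by word, plus a count of how
    many sample word sequences passed through this word at this position."""
    def __init__(self):
        self.children = {}
        self.count = 0

def build_trie(sample_word_sequences):
    root = TrieNode()
    for sequence in sample_word_sequences:
        node = root
        for word in sequence:
            # Reuse an existing child for a shared prefix, else create one.
            node = node.children.setdefault(word, TrieNode())
            node.count += 1
    return root

# Two sample word sequences sharing the prefix ["price", "phone"]:
root = build_trie([["price", "phone", "drop"], ["price", "phone", "rise"]])
```

Sequences with a common prefix share the corresponding branch, which is what later makes prefix search and count-based pruning cheap.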
- calculate the second importance score of the corresponding word according to the second distribution deviation of each word in the sample text data including:
- a second importance score of each word in the sample text data is calculated according to each second word frequency-inverse document frequency and the corresponding second distribution deviation, respectively.
- separately calculating the second word frequency-inverse document frequency of each word in the sample text data including:
- the second word frequency-inverse document frequency of the corresponding word in the sample text data is calculated according to the second word frequency and the second inverse document frequency.
- determining the second word frequency and the second inverse document frequency of each word in the sample text data including:
- Calculate the second word frequency-inverse document frequency of the corresponding word in the sample text data according to the second word frequency and the second inverse document frequency including:
- w represents any word in the sample text data
- tf(w) represents the second word frequency of the word w in the sample text data
- idf(w) represents the second inverse document frequency of the word w in the sample text data
- tf-idf(w) represents the second word frequency-inverse document frequency of the word w in the sample text data
- m represents the number of times the word w appears in the sample text data
- n represents the number of pieces of sample text data in the target corpus that contain the word w
- N represents the total number of pieces of sample text data contained in the target corpus.
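The formula itself is not reproduced in this text; with the parameters defined above, the standard tf-idf reading would be tf(w) = m, idf(w) = log(N / n), and tf-idf(w) = tf(w) · idf(w). A sketch under that assumption:

```python
import math

def tf_idf(m, n, N):
    """m: occurrences of word w in this piece of sample text data;
    n: number of pieces of sample text data in the target corpus containing w;
    N: total number of pieces of sample text data in the target corpus.
    Assumes the common tf = m, idf = log(N / n) form; the patent's exact
    formula is not reproduced in this text."""
    return m * math.log(N / n)

score = tf_idf(m=3, n=2, N=100)  # w appears 3 times, in 2 of 100 texts
```

A word that appears often in one text but in few texts of the corpus gets a high score, which is the behavior the surrounding description relies on.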
- calculate the second importance score of each word in the sample text data according to each second word frequency-inverse document frequency and the corresponding second distribution deviation including:
- the second importance score of each word in the sample text data is calculated according to the following formula:
- s(w) represents the second importance score of the word w in the sample text data
- tf-idf_a(w) represents the second word frequency-inverse document frequency of the word w in the sample text data
- calculating the second distribution deviation of each word contained in the target corpus relative to the total corpus including:
- b represents the second distribution deviation of the word w in the target corpus relative to the total corpus
- freq_a(w) represents the frequency of occurrence of the word w in the target corpus
- freq(w) represents the frequency of occurrence of the word w in the total corpus
- t represents the number of occurrences of the word w in the target corpus
- M represents the total number of words contained in the target corpus
- t' represents the number of occurrences of the word w in the total corpus
- M' represents the total number of words contained in the total corpus.
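Reading from the parameter definitions above, a natural reconstruction is freq_a(w) = t / M, freq(w) = t' / M', b = freq_a(w) / freq(w), and s(w) as the product of tf-idf_a(w) and b. Since the formula images are not reproduced in this text, the following is a hedged sketch of that reading, not the claimed formulas:

```python
def distribution_deviation(t, M, t_prime, M_prime):
    """b(w): frequency of w in the target corpus divided by its
    frequency in the total corpus (assumed reading of the parameters)."""
    freq_a = t / M            # frequency of w in the target corpus
    freq = t_prime / M_prime  # frequency of w in the total corpus
    return freq_a / freq

def second_importance_score(tf_idf_a, b):
    """s(w) as the product of the word's tf-idf in the target corpus
    and its distribution deviation (assumed reading)."""
    return tf_idf_a * b

b = distribution_deviation(t=30, M=1_000, t_prime=50, M_prime=100_000)
# b == (30/1000) / (50/100000) == 60.0: w is far more frequent in the
# target corpus than in the total corpus, so it is boosted.
```

Under this reading, b > 1 boosts words over-represented in the target corpus, which matches the role the deviation plays in the importance score.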
- the method further includes:
- the dictionary tree is pruned according to the number of occurrences of the word of each node in the dictionary tree at the same position in all sample word sequences, until the number of nodes included in the dictionary tree reaches a preset number.
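The text only states that pruning is driven by the per-position occurrence counts of each node's word until a preset node budget is reached. One plausible greedy sketch (a hypothetical strategy, not the claimed algorithm) repeatedly removes the least-frequent leaf:

```python
class TrieNode:
    def __init__(self):
        self.children = {}  # word -> TrieNode
        self.count = 0      # occurrences of this word at this position

def node_total(node):
    """Count this node and all of its descendants."""
    return 1 + sum(node_total(c) for c in node.children.values())

def prune(root, max_nodes):
    """Greedily delete the leaf with the smallest occurrence count until
    the tree (excluding the root) fits within max_nodes nodes."""
    while node_total(root) - 1 > max_nodes:
        best = None  # (count, parent, word) of the rarest leaf
        stack = [root]
        while stack:
            node = stack.pop()
            for word, child in node.children.items():
                if child.children:
                    stack.append(child)
                elif best is None or child.count < best[0]:
                    best = (child.count, node, word)
        if best is None:
            break
        del best[1].children[best[2]]

# A tiny trie: "price" (count 5) with leaves "drop" (3) and "typo" (1).
root = TrieNode()
price = TrieNode(); price.count = 5
root.children["price"] = price
drop = TrieNode(); drop.count = 3
price.children["drop"] = drop
typo = TrieNode(); typo.count = 1
price.children["typo"] = typo
prune(root, max_nodes=2)  # removes the rarest leaf, "typo"
```

Removing leaves bottom-up keeps every surviving root-to-node path intact, so pruning never breaks a prefix that later searches rely on.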
Abstract
A text clustering method and apparatus, an electronic device, and a storage medium. The method comprises: acquiring a target text data set to be clustered (110); for each piece of target text data in the target text data set, calculating a first importance score of at least one word in each piece of target text data, and sorting the at least one word in each piece of target text data on the basis of the first importance score to generate a word sequence to be searched corresponding to each piece of target text data (120); for each word sequence, searching in a pre-constructed dictionary tree for a target word sequence adapted to each word sequence, the target word sequence belonging to a sub-sequence of each word sequence (130); and according to at least one target word sequence, respectively clustering the target text data corresponding to the at least one target word sequence to obtain a text clustering result (140).
Description
This application claims priority to Chinese Patent Application No. 202011630633.2 filed with the China Patent Office on December 31, 2020, the entire contents of which are incorporated herein by reference.

The embodiments of the present disclosure relate to the field of computer technology, for example, to a text clustering method and apparatus, an electronic device, and a storage medium.

Text clustering divides similar text data into the same cluster and distinguishes different text clusters from one another. Clustering methods are applied in different fields, such as networking, medicine, biology, computer vision, and natural language.

In text clustering methods in the related art, text is represented as a feature vector, and the similarity between texts is calculated from the corresponding feature vectors; finally, the texts are clustered according to the similarity between them. It can be seen that text clustering methods in the related art first need to represent the text as feature vectors before the similarity between texts can be calculated, which makes the text clustering calculation process complicated and inefficient.
SUMMARY OF THE INVENTION

Embodiments of the present disclosure provide a text clustering method and apparatus, an electronic device, and a storage medium, which can effectively improve the efficiency and accuracy of text clustering.

In a first aspect, an embodiment of the present disclosure provides a text clustering method, including:

obtaining a target text data set to be clustered, wherein the target text data set includes at least one piece of target text data;

for each piece of target text data in the target text data set, calculating a first importance score of at least one word in the piece of target text data, sorting the at least one word in the piece of target text data based on the first importance score, and generating a to-be-searched word sequence corresponding to the piece of target text data;

for each to-be-searched word sequence, searching a pre-built dictionary tree for a target word sequence adapted to the to-be-searched word sequence, wherein the target word sequence is a subsequence of the to-be-searched word sequence; and

clustering, according to at least one target word sequence, the target text data corresponding to the at least one target word sequence, respectively, to obtain a text clustering result.
In a second aspect, an embodiment of the present disclosure further provides a text clustering apparatus, including:

a text data acquisition module, configured to obtain a target text data set to be clustered, wherein the target text data set includes at least one piece of target text data;

a search word sequence generation module, configured to, for each piece of target text data in the target text data set, calculate a first importance score of at least one word in the piece of target text data, sort the at least one word in the piece of target text data based on the first importance score, and generate a to-be-searched word sequence corresponding to the piece of target text data;

a target word sequence determination module, configured to, for each to-be-searched word sequence, search a pre-built dictionary tree for a target word sequence adapted to the to-be-searched word sequence, wherein the target word sequence is a subsequence of the to-be-searched word sequence; and

a text clustering module, configured to cluster, according to at least one target word sequence, the target text data corresponding to the at least one target word sequence, respectively, to obtain a text clustering result.
In a third aspect, an embodiment of the present disclosure further provides an electronic device, including:

at least one processing device; and

a storage device configured to store at least one program,

wherein, when the at least one program is executed by the at least one processing device, the at least one processing device implements the text clustering method according to the embodiments of the present disclosure.

In a fourth aspect, an embodiment of the present disclosure further provides a computer-readable medium on which a computer program is stored, and the program, when executed by a processing device, implements the text clustering method according to the embodiments of the present disclosure.
FIG. 1 is a flowchart of a text clustering method in an embodiment of the present disclosure;

FIG. 2 is a schematic diagram of a dictionary tree in an embodiment of the present disclosure;

FIG. 3 is a flowchart of another text clustering method in an embodiment of the present disclosure;

FIG. 4 is a flowchart of yet another text clustering method in an embodiment of the present disclosure;

FIG. 5 is a flowchart of yet another text clustering method in an embodiment of the present disclosure;

FIG. 6 is a schematic structural diagram of a text clustering apparatus in an embodiment of the present disclosure;

FIG. 7 is a schematic structural diagram of an electronic device in an embodiment of the present disclosure.
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings.

It should be understood that the steps described in the method embodiments of the present disclosure may be performed in different orders and/or in parallel. Furthermore, method embodiments may include additional steps and/or omit performing the illustrated steps. The scope of the present disclosure is not limited in this regard.

As used herein, the term "including" and variations thereof are open-ended inclusions, that is, "including but not limited to". The term "based on" means "based at least in part on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments". Relevant definitions of other terms will be given in the description below.

It should be noted that concepts such as "first" and "second" mentioned in the present disclosure are only used to distinguish different apparatuses, modules, or units, and are not used to limit the order of the functions performed by these apparatuses, modules, or units, or their interdependence.

It should be noted that the modifiers "a" and "a plurality of" mentioned in the present disclosure are illustrative rather than restrictive, and those skilled in the art should understand that, unless the context clearly indicates otherwise, they should be understood as "at least one".

The names of the messages or information exchanged between multiple apparatuses in the embodiments of the present disclosure are used for illustrative purposes only and are not intended to limit the scope of these messages or information.
FIG. 1 is a flowchart of a text clustering method provided by an embodiment of the present disclosure. The embodiment of the present disclosure is applicable to the case of clustering text. The method may be executed by a text clustering apparatus, which may be composed of hardware and/or software and may generally be integrated into a device with a text clustering function; the device may be an electronic device such as a server, a mobile terminal, or a server cluster. As shown in FIG. 1, the method includes the following steps:
Step 110: Obtain a target text data set to be clustered, where the target text data set includes at least one piece of target text data.

In the embodiment of the present disclosure, the target text data set includes at least one piece of target text data, where the target text may be text data of various categories, such as news, advertising, network, natural language, or medical text. The categories of the pieces of target text data in the target text data set may be the same or different. The target text data may be English text, Chinese text, or Korean text. Exemplarily, the target text data to be clustered may be collected through web crawler technology, or obtained through optical character recognition, speech recognition, handwriting recognition, and the like. Optionally, when the target text data set to be clustered contains one piece of target text data, the text data input by the user may be collected in real time, and the collected text data may be used as the text data to be clustered.

It should be noted that the embodiments of the present disclosure do not limit the content category, language category, or acquisition method of the target text data.
Step 120: For each piece of target text data in the target text data set, calculate a first importance score of at least one word in the piece of target text data, sort the at least one word in the piece of target text data based on the first importance score, and generate a to-be-searched word sequence corresponding to the piece of target text data.

In the embodiment of the present disclosure, word segmentation is performed on each piece of target text data in the target text data set to divide each piece of target text data into at least one word. Optionally, before word segmentation, each piece of target text data may also be preprocessed, for example by removing punctuation and stop words. Then, the first importance score of at least one word in each piece of target text data is calculated. The first importance score reflects the importance of each word in the target text data: the larger the first importance score, the more important the word is in the target text data; conversely, the smaller the first importance score, the less important the word is.

Optionally, the number of occurrences of each word in the target text data may be counted and used as the first importance score. Optionally, the word frequency-inverse document frequency of a word in the target text data may be used as the first importance score of the word. Optionally, for each piece of target text data in the target text data set, calculating the first importance score of at least one word in the piece of target text data includes: calculating the first word frequency-inverse document frequency of at least one word in the piece of target text data, and calculating the first importance score of the at least one word according to the at least one first word frequency-inverse document frequency. It should be noted that the embodiments of the present disclosure do not limit the calculation method of the first importance score of at least one word in the target text data.
Exemplarily, the at least one word in the target text data is sorted based on the first importance score; for example, the words may be sorted in descending order of the first importance score, and the sequence of sorted words is used as the to-be-searched word sequence corresponding to the target text data. It can be understood that the earlier a word appears in the to-be-searched word sequence, the larger its first importance score, indicating that the word is more important in the target text data and better reflects the meaning, content, or category that the target text data expresses.
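Generating the to-be-searched word sequence is then just a descending sort by score. A minimal sketch, with hypothetical names and scores assumed to be already computed:

```python
def to_search_sequence(word_scores):
    """word_scores: {word: first importance score} for one piece of
    target text data. Returns the words in descending score order,
    i.e. the to-be-searched word sequence."""
    return sorted(word_scores, key=word_scores.get, reverse=True)

seq = to_search_sequence({"phone": 0.9, "today": 0.1, "price": 0.6})
# seq == ["phone", "price", "today"]
```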
Step 130: For each to-be-searched word sequence, search a pre-built dictionary tree for a target word sequence adapted to the to-be-searched word sequence, where the target word sequence is a subsequence of the to-be-searched word sequence.
In the embodiment of the present disclosure, a pre-built dictionary tree is obtained, where the dictionary tree is constructed based on a pre-configured target corpus. Exemplarily, for each to-be-searched word sequence, the dictionary tree is searched for a target word sequence adapted to that sequence. Optionally, the search proceeds in order from the root node to the child nodes: the dictionary tree is searched for a first target node matching the first word in the to-be-searched word sequence; then the child nodes connected to the first target node are searched for a second target node matching the second word; then the child nodes connected to the second target node are searched for a third target node matching the third word; and so on, until no node matching the (p+1)-th word in the to-be-searched word sequence can be found among the child nodes connected to the p-th target node. The sequence formed by the words of the matched target nodes is used as the target word sequence; that is, the target word sequence consists of the words in the to-be-searched word sequence for which matching nodes can be found in the dictionary tree. The target word sequence is a subsequence of the to-be-searched word sequence. Exemplarily, suppose the to-be-searched word sequence is [A-B-C-D-E], where A, B, C, D, and E are words. Searching the dictionary tree from the root node to the child nodes, target nodes matching A, B, and C can be found: a first target node matching A is found in the dictionary tree, a second target node matching B is found among the child nodes connected to the first target node, and a third target node matching C is found among the child nodes connected to the second target node, but no node matching D is found among the child nodes connected to the third target node. The sequence formed by A, B, and C is then used as the target word sequence.
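The [A-B-C-D-E] walk described above can be sketched with a nested-dict trie. This is an illustrative prefix match, not the patent's implementation:

```python
def build_trie(sequences):
    """Build a dictionary tree as nested dicts: each key is a word,
    each value is that word's child subtree."""
    root = {}
    for seq in sequences:
        node = root
        for word in seq:
            node = node.setdefault(word, {})
    return root

def search_target_sequence(root, words):
    """Follow the to-be-searched word sequence from the root; stop at the
    first word with no matching child node. The matched prefix is the
    target word sequence, a subsequence of the input."""
    target, node = [], root
    for word in words:
        if word not in node:
            break
        node = node[word]
        target.append(word)
    return target

root = build_trie([["A", "B", "C"], ["A", "B", "F"]])
search_target_sequence(root, ["A", "B", "C", "D", "E"])  # -> ["A", "B", "C"]
```

The search visits at most one node per word in the to-be-searched sequence, which is where the time-complexity advantage over pairwise similarity comparison comes from.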
Step 140: cluster the target text data corresponding to the at least one target word sequence according to the at least one target word sequence, respectively, to obtain a text clustering result.
In the embodiment of the present disclosure, the target text data corresponding to the at least one target word sequence is clustered according to the at least one target word sequence. It can be understood that the target word sequence intuitively reflects the category of the target text data or the content it expresses; if the target word sequences corresponding to pieces of target text data are identical or highly similar, the categories or expressed contents of those pieces of target text data are the same or similar. Therefore, the target text data can be clustered according to the target word sequences. Exemplarily, target text data with identical target word sequences may be clustered into the same cluster and target text data with different target word sequences into different clusters; alternatively, the similarity between target word sequences may be calculated, with target text data whose similarity is greater than a preset threshold clustered into the same cluster and target text data whose similarity is less than the preset threshold clustered into different clusters. It should be noted that the embodiment of the present disclosure does not limit the manner of clustering the corresponding target text data according to the target word sequences.
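A minimal sketch of the first clustering strategy mentioned above — grouping pieces of text whose target word sequences are identical — assuming nothing about the publication's actual data structures; the names are illustrative.

```python
from collections import defaultdict

def cluster_by_sequence(texts, target_sequences):
    """Group pieces of text whose target word sequences are identical."""
    clusters = defaultdict(list)
    for text, seq in zip(texts, target_sequences):
        clusters[tuple(seq)].append(text)  # the sequence itself is the cluster key
    return dict(clusters)

# Three texts; the first two share the same target word sequence.
clusters = cluster_by_sequence(["text1", "text2", "text3"],
                               [["a", "b"], ["a", "b"], ["c"]])
print(clusters)  # {('a', 'b'): ['text1', 'text2'], ('c',): ['text3']}
```

The similarity-threshold variant would instead compare sequences pairwise and merge clusters whose similarity exceeds the preset threshold.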
In the embodiment of the present disclosure, a target text data set to be clustered is acquired, where the target text data set includes at least one piece of target text data; for each piece of target text data in the target text data set, a first importance score of at least one word in the target text data is calculated, the at least one word in the target text data is sorted based on the first importance score, and a to-be-searched word sequence corresponding to the target text data is generated; for each to-be-searched word sequence, a pre-built dictionary tree is searched for a target word sequence adapted to the to-be-searched word sequence, where the target word sequence is a subsequence of the to-be-searched word sequence; and the target text data corresponding to the at least one target word sequence is clustered according to the at least one target word sequence, respectively, to obtain a text clustering result. The text clustering method provided by the embodiment of the present disclosure calculates the importance score of at least one word in the text data to be clustered, sorts the at least one word based on the importance score to generate a to-be-searched word sequence, and then searches the pre-built dictionary tree for the target word sequence adapted to the to-be-searched word sequence, so that the text data is clustered based on the target word sequence. This simplifies the text clustering process, greatly reduces the time complexity of text clustering, and effectively improves the efficiency and accuracy of text clustering.
In some embodiments, calculating, for each piece of target text data in the target text data set, the first importance score of at least one word in the target text data includes: for each piece of target text data in the target text data set, calculating a first term frequency-inverse document frequency of at least one word in the target text data, respectively; and calculating the first importance score of the at least one word in the target text data according to the at least one first term frequency-inverse document frequency, respectively. The first term frequency-inverse document frequency (TF-IDF) can indirectly reflect the importance of each word in the target text data; therefore, the first TF-IDF of each word in the target text data can be calculated, and the first importance score of each word can then be calculated from each first TF-IDF.
Optionally, calculating the first term frequency-inverse document frequency of at least one word in the target text data respectively includes: determining a first term frequency and a first inverse document frequency of each word in the target text data, respectively; and calculating the first term frequency-inverse document frequency of the corresponding word according to the first term frequency and the first inverse document frequency, where the first term frequency-inverse document frequency is the product of the first term frequency and the first inverse document frequency. Exemplarily, determining the first term frequency and the first inverse document frequency of each word in the target text data respectively includes: determining the number of occurrences of each word in the target text data and taking the number of occurrences as the first term frequency of the corresponding word; acquiring parameter configuration information corresponding to the dictionary tree, where the parameter configuration information includes an inverse document frequency list containing the inverse document frequency of each word contained in the dictionary tree; and looking up, in the inverse document frequency list, the inverse document frequency corresponding to the at least one word in the target text data as the first inverse document frequency of the at least one word in the target text data.
Exemplarily, the number of occurrences of each word in the target text data is counted, and the number of occurrences is taken as the first term frequency (TF) of the corresponding word. It can be understood that a word may appear in the target text data once or multiple times, and the more times it appears, the more important the word is to the content or language expression of the target text data. The parameter configuration information corresponding to the dictionary tree is acquired, where the parameter configuration information is parameter information determined in the process of constructing the dictionary tree based on the target corpus. The parameter configuration information may include an inverse document frequency list composed of the inverse document frequency (IDF) of each word contained in the dictionary tree. It can be understood that, in the process of constructing the dictionary tree based on the target corpus, the inverse document frequencies of multiple words in the target corpus need to be calculated, and the dictionary tree is then constructed based on these inverse document frequencies. The inverse document frequency corresponding to each word in the target text data is looked up in the inverse document frequency list corresponding to the dictionary tree, and the found inverse document frequency is taken as the first inverse document frequency of that word. Then, the product of the first term frequency and the first inverse document frequency is taken as the first term frequency-inverse document frequency of the corresponding word.
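The first TF-IDF computation described above — a raw occurrence count times an IDF looked up from the precomputed list — can be sketched as follows. The IDF list contents are hypothetical values, as if read from the trie's parameter configuration information.

```python
from collections import Counter

def first_tfidf(words, idf_list):
    """First TF = raw occurrence count of the word in this piece of text;
    first TF-IDF = TF * IDF, with IDF looked up from the precomputed list."""
    tf = Counter(words)                       # first term frequency per word
    return {w: tf[w] * idf_list[w] for w in tf}

# Hypothetical IDF list and a segmented piece of target text data.
idf_list = {"sports": 0.398, "middle": 0.1}
scores = first_tfidf(["sports", "sports", "middle"], idf_list)
print(scores)  # {'sports': 0.796, 'middle': 0.1}
```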
In some embodiments, the parameter configuration information further includes a distribution deviation list, where the distribution deviation list includes the distribution deviation of each word contained in the dictionary tree. Before calculating the first importance score of the at least one word in the target text data according to the at least one first term frequency-inverse document frequency, the method further includes: looking up, in the distribution deviation list, the distribution deviation corresponding to each word in the target text data as the first distribution deviation of that word in the target text data. Calculating the first importance score of the at least one word in the target text data according to the at least one first term frequency-inverse document frequency includes: calculating the first importance score of each word in the target text data according to each first term frequency-inverse document frequency and the corresponding first distribution deviation, where the first importance score is the product of the first term frequency-inverse document frequency and the first distribution deviation.
Exemplarily, the parameter configuration information may further include a distribution deviation list composed of the distribution deviation of each word in the dictionary tree. It can be understood that, in the process of constructing the dictionary tree based on the target corpus, not only the inverse document frequency but also the distribution deviation of each word in the target corpus needs to be calculated, and the dictionary tree is then constructed based on the inverse document frequencies and distribution deviations of the multiple words. The distribution deviation reflects how the distribution of each word in the target corpus deviates from its distribution in the total corpus. In the distribution deviation list corresponding to the dictionary tree, the distribution deviation corresponding to each word in the target text data is looked up, and the found distribution deviation is taken as the first distribution deviation of that word in the target text data. Then, the first importance score of the word in the target text data is calculated according to the first term frequency-inverse document frequency and the corresponding first distribution deviation, where the first importance score is the product of the first term frequency-inverse document frequency and the first distribution deviation.
In some embodiments, before acquiring the target text data set to be clustered, the method further includes: acquiring a total corpus and a target corpus, where the total corpus includes the target corpus and the target corpus contains at least one piece of sample text data; calculating a second distribution deviation of each word contained in the target corpus relative to the total corpus; for each piece of sample text data in the target corpus, calculating a second importance score of the corresponding word according to the second distribution deviation of each word in the sample text data, sorting at least one word in each piece of sample text data in descending order of the second importance score, and generating a sample word sequence corresponding to the sample text data; and constructing the dictionary tree based on at least one sample word sequence. With this arrangement, the dictionary tree corresponding to the target corpus can be constructed accurately and quickly.
Exemplarily, the target corpus may be a corpus belonging to a certain category or field; for example, the target corpus may be an advertising corpus, a network corpus, a legal corpus, or a medical corpus. The total corpus is the overall corpus containing the target corpus; for example, if the target corpus is an advertising corpus, the total corpus may be composed of a network corpus, a legal corpus, a medical corpus, and the advertising corpus. The target corpus includes at least one piece of sample text data. Exemplarily, the total corpus and the target corpus may be obtained through web crawler technology. It should be noted that the embodiment of the present disclosure does not limit the type of the target corpus, nor does it limit the corpus content of the total corpus other than the target corpus.
Exemplarily, since corpora of different fields or categories differ in the words they contain and in the importance of those words (for example, the words contained in an advertising corpus differ considerably from those in a legal corpus), the second distribution deviation of each word contained in the target corpus relative to the total corpus can be calculated, where the second distribution deviation reflects the difference of each word between the target corpus and the total corpus. Optionally, calculating the second distribution deviation of each word contained in the target corpus relative to the total corpus includes calculating it according to the following formulas:

b = freq_a(w) / freq(w)

freq_a(w) = t / M

freq(w) = t' / M'

where b denotes the second distribution deviation of word w in the target corpus relative to the total corpus, freq_a(w) denotes the occurrence frequency of word w in the target corpus, freq(w) denotes the occurrence frequency of word w in the total corpus, t denotes the number of occurrences of word w in the target corpus, M denotes the total number of words contained in the target corpus, t' denotes the number of occurrences of word w in the total corpus, and M' denotes the total number of words contained in the total corpus.
Exemplarily, the total number of words contained in the target corpus is 1000 and the word "运动" ("sports") appears 100 times in the target corpus, so the occurrence frequency of "运动" in the target corpus is freq_a(w) = 100/1000 = 0.1. The total number of words contained in the total corpus is 5000 and the word "运动" appears 120 times in the total corpus, so the occurrence frequency of "运动" in the total corpus is freq(w) = 120/5000 = 0.024. The second distribution deviation of "运动" is then b = 0.1/0.024 ≈ 4.17.
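Under the assumption, suggested by the surrounding definitions, that the second distribution deviation is the ratio of the word's frequency in the target corpus to its frequency in the total corpus, the calculation can be sketched as:

```python
def distribution_deviation(t, M, t_total, M_total):
    """b = freq_a(w) / freq(w): the word's frequency in the target corpus
    divided by its frequency in the total corpus (assumed ratio form)."""
    freq_target = t / M              # freq_a(w) = t / M
    freq_total = t_total / M_total   # freq(w)  = t' / M'
    return freq_target / freq_total

# The example from the text: the word appears 100 times among 1000 words in
# the target corpus and 120 times among 5000 words in the total corpus.
b = distribution_deviation(100, 1000, 120, 5000)
print(round(b, 2))  # 4.17
```

A deviation greater than 1 means the word is overrepresented in the target corpus relative to the total corpus, which is why it boosts the importance score.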
In the embodiment of the present disclosure, for each piece of sample text data in the target corpus, the second importance score of the corresponding word is calculated according to the second distribution deviation of at least one word in the sample text data, where the second importance score reflects the importance of each word in the sample text data: the larger the second importance score, the more important the word is in the sample text data; conversely, the smaller the second importance score, the less important the word. The at least one word in the sample text data is then sorted in descending order of the second importance score, and the sequence composed of the sorted words is taken as the sample word sequence corresponding to the sample text data. It can be understood that the earlier a word appears in the sample word sequence, the larger its second importance score, indicating that the word is more important in the sample text data and better represents the meaning, content, or category that the sample text data expresses.
The dictionary tree is constructed based on the sample word sequence corresponding to the at least one piece of sample text data in the target corpus. Exemplarily, when the first words of the sample word sequences differ, an empty node may be taken as the root node of the dictionary tree, the first word of each sample word sequence is taken as a child node of the root node, the second word of each sample word sequence is taken as a child node of the node containing the first word of the same sample word sequence, the third word of each sample word sequence is taken as a child node of the node containing the second word of the same sample word sequence, and so on, until all words of all sample word sequences have been filled into the nodes of the dictionary tree. When the first words of all sample word sequences are the same, that first word may be taken as the root node of the dictionary tree, the second word of each sample word sequence as a child node of the root node, the third word of each sample word sequence as a child node of the node containing the second word of the same sample word sequence, and so on, until all words of all sample word sequences have been filled into the nodes of the dictionary tree. Exemplarily, the sample word sequences corresponding to five pieces of sample text data in the target corpus are [中间-商-品], [中间-大-不], [中间-商-韩], [商-中间] and [后面-中间], and the dictionary tree constructed from these five sample word sequences is shown in FIG. 2.
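Trie construction from sample word sequences, as described above, can be sketched with an empty root node under which sequences sharing a prefix share nodes. The nested-dict node representation is an illustrative choice, not from the publication.

```python
def build_trie(sample_sequences):
    """Insert each sample word sequence as a root-to-child path; sequences
    sharing a prefix share the corresponding nodes (empty dict = empty root)."""
    root = {}
    for seq in sample_sequences:
        node = root
        for w in seq:
            node = node.setdefault(w, {})  # reuse the child node if it exists
    return root

# The five sample word sequences from the example above.
trie = build_trie([["中间", "商", "品"], ["中间", "大", "不"],
                   ["中间", "商", "韩"], ["商", "中间"], ["后面", "中间"]])
print(trie)
```

The three sequences starting with 中间 share the first-level node, and the two starting with 中间-商 additionally share the second-level node.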
In some embodiments, calculating, for each piece of sample text data in the target corpus, the second importance score of the corresponding word according to the second distribution deviation of each word in the sample text data includes: for each piece of sample text data in the target corpus, calculating a second term frequency-inverse document frequency of each word in the sample text data, respectively; and calculating the second importance score of each word in the sample text data according to each second term frequency-inverse document frequency and the corresponding second distribution deviation, respectively. The second term frequency-inverse document frequency can indirectly reflect the importance of each word in the sample text data; therefore, the second TF-IDF of each word in the sample text data can be calculated, and the second importance score of each word can then be calculated according to each second TF-IDF and the corresponding second distribution deviation, where the second importance score is the product of the second term frequency-inverse document frequency and the corresponding second distribution deviation. Exemplarily, calculating the second importance score of each word in the sample text data according to each second term frequency-inverse document frequency and the corresponding second distribution deviation respectively includes calculating it according to the following formula:
s(w) = tf-idf_a(w) * b(w)

where s(w) denotes the second importance score of word w in the sample text data, tf-idf_a(w) denotes the second term frequency-inverse document frequency of word w in the sample text data, and b(w) denotes the second distribution deviation of word w in the sample text data.
Optionally, determining the second term frequency and the second inverse document frequency of each word in the sample text data respectively includes calculating them according to the following formulas:

tf(w) = m

idf(w) = log(N/n)

Calculating the second term frequency-inverse document frequency of the corresponding word in the sample text data according to the second term frequency and the second inverse document frequency includes calculating it according to the following formula:

tf-idf(w) = tf(w) * idf(w)

where w denotes any word in the sample text data, tf(w) denotes the second term frequency of word w in the sample text data, idf(w) denotes the second inverse document frequency of word w in the sample text data, tf-idf(w) denotes the second term frequency-inverse document frequency of word w in the sample text data, m denotes the number of times word w appears in the sample text data, n denotes the number of pieces of sample text data in the target corpus that contain word w, and N denotes the total number of pieces of sample text data contained in the target corpus.
Exemplarily, the target corpus contains 200 pieces of sample text data in total, so N = 200. In a certain piece of sample text data, the word "运动" appears twice, so m = 2; and 80 of the 200 pieces of sample text data contain the word "运动", so n = 80. Therefore, in this piece of sample text data, the second term frequency of "运动" is tf(w) = m = 2, the second inverse document frequency is idf(w) = log(N/n) = log(200/80) ≈ 0.398, and the second term frequency-inverse document frequency of "运动" is tf-idf(w) = tf(w) * idf(w) = 2 * 0.398 = 0.796.
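The worked example above implies a base-10 logarithm (log(200/80) ≈ 0.398); under that assumption, the second TF-IDF calculation can be reproduced as:

```python
import math

def second_tfidf(m, n, N):
    """tf(w) = m; idf(w) = log10(N / n), base 10 inferred from the worked
    example; tf-idf(w) = tf(w) * idf(w)."""
    return m * math.log10(N / n)

# N = 200 sample texts; the word appears m = 2 times in this text and
# occurs in n = 80 of the texts.
score = second_tfidf(m=2, n=80, N=200)
print(round(score, 3))  # 0.796
```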
In some embodiments, after constructing the dictionary tree based on the at least one sample word sequence, the method further includes: determining the number of occurrences of the word of each node in the dictionary tree at the same position across all sample word sequences; and pruning the dictionary tree according to these numbers of occurrences until the number of nodes contained in the dictionary tree reaches a preset number. With this arrangement, on the premise that the target word sequence corresponding to the target text data can still be accurately determined based on the dictionary tree, the search speed for the target word sequence can be effectively increased, thereby improving the efficiency of text clustering. Exemplarily, the total number of occurrences of the word of each node at the same position across all sample word sequences is determined. In the dictionary tree shown in FIG. 2, in order from the root node to the child nodes: at the first level of the dictionary tree, the word "中间" occurs 3 times at the same position across all sample word sequences, the word "商" occurs once, and the word "后面" occurs once; at the second level, the word "商" occurs twice, the word "大" occurs once, and the word "中间" occurs twice; and at the third level, the words "品", "不" and "韩" each occur once. The dictionary tree is pruned according to the number of occurrences of the word of each node at the same position across all sample word sequences, until the number of nodes contained in the dictionary tree reaches the preset number.
Optionally, pruning the dictionary tree according to the number of occurrences of the word of each node at the same position across all sample word sequences, until the number of nodes contained in the dictionary tree reaches the preset number, includes: deleting the nodes of the dictionary tree corresponding to each occurrence count in ascending order of that count, until the number of nodes contained in the dictionary tree reaches the preset number. Exemplarily, the nodes whose words occur once at the same position across all sample word sequences may be deleted first, then the nodes whose words occur twice, and so on, until the number of nodes contained in the dictionary tree reaches the preset number. Nodes corresponding to the same occurrence count may be deleted in order from the root node to the child nodes.
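A simplified sketch of the pruning strategy described above, assuming each node stores how many sample sequences pass through it (its occurrence count at that position); the preset node count and the exact traversal details are illustrative assumptions.

```python
def build_counted_trie(sequences):
    """Each node records how many sample sequences pass through it, i.e. how
    often its word occurs at that position across all sample word sequences."""
    root = {}
    for seq in sequences:
        node = root
        for w in seq:
            entry = node.setdefault(w, {"count": 0, "children": {}})
            entry["count"] += 1
            node = entry["children"]
    return root

def count_nodes(node):
    return sum(1 + count_nodes(e["children"]) for e in node.values())

def prune(root, max_nodes):
    """Delete nodes in ascending order of occurrence count, root to child,
    until at most `max_nodes` nodes remain (`max_nodes` is a preset number)."""
    threshold = 1
    while count_nodes(root) > max_nodes:
        _drop(root, threshold)
        threshold += 1
    return root

def _drop(node, threshold):
    for w in list(node):
        if node[w]["count"] <= threshold:
            del node[w]  # removing a node removes its whole subtree
        else:
            _drop(node[w]["children"], threshold)

# The FIG. 2 example: pruning the count-1 nodes leaves only 中间 and 中间-商.
trie = build_counted_trie([["中间", "商", "品"], ["中间", "大", "不"],
                           ["中间", "商", "韩"], ["商", "中间"], ["后面", "中间"]])
prune(trie, max_nodes=3)
print(count_nodes(trie))  # 2
```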
图3是本公开另一实施例中的一种文本聚类方法的流程图,如图3所示,该方法包括如下步骤:FIG. 3 is a flowchart of a text clustering method in another embodiment of the present disclosure. As shown in FIG. 3 , the method includes the following steps:
步骤310,获取待聚类的目标文本数据集;其中,目标文本数据集中包括至少一条目标文本数据。Step 310: Obtain a target text data set to be clustered; wherein, the target text data set includes at least one piece of target text data.
Step 320: Obtain parameter configuration information corresponding to the pre-built dictionary tree, where the parameter configuration information includes an inverse document frequency list and a distribution deviation list; the inverse document frequency list includes the inverse document frequency of each word contained in the dictionary tree, and the distribution deviation list includes the distribution deviation of each word contained in the dictionary tree.
Step 330: For each piece of target text data in the target text data set, determine the number of occurrences of each word in the target text data, and use the number of occurrences as the first word frequency of the corresponding word.
Step 340: In the inverse document frequency list, look up the inverse document frequency corresponding to each of at least one word in the target text data, as the first inverse document frequency of that word.
Step 350: Calculate the first word frequency-inverse document frequency of the corresponding word from the first word frequency and the first inverse document frequency, where the first word frequency-inverse document frequency is the product of the first word frequency and the first inverse document frequency.
Step 360: In the distribution deviation list, look up the distribution deviation corresponding to each word in the target text data, as the first distribution deviation of that word.
Step 370: Calculate the first importance score of each word in the target text data according to each first word frequency-inverse document frequency and the corresponding first distribution deviation, where the first importance score is the product of the first word frequency-inverse document frequency and the first distribution deviation.
Step 380: Sort the at least one word in the target text data based on the first importance scores, and generate a to-be-searched word sequence corresponding to the target text data.
Step 390: For each to-be-searched word sequence, search the pre-built dictionary tree, in order from the root node to the child nodes, for a target word sequence adapted to the to-be-searched word sequence.
Step 3100: Cluster the target text data corresponding to at least one target word sequence according to the respective target word sequences, to obtain a text clustering result.
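Steps 320 through 380 above can be sketched on the query side as follows. This is an illustrative sketch only: the lookup tables, the toy vocabulary, and the idf and deviation values are hypothetical, not taken from the patent.

```python
# Query-side sketch: score each word with tf * idf * deviation, then sort.
import math

# Hypothetical parameter configuration information for the dictionary tree:
# an inverse document frequency list and a distribution deviation list.
idf_list = {"refund": math.log(4 / 1), "order": math.log(4 / 2), "help": math.log(4 / 3)}
deviation_list = {"refund": 2.0, "order": 1.5, "help": 1.0}

def to_search_sequence(words):
    """Steps 330-380: first word frequency, tf-idf, importance score, sort."""
    tf = {}
    for w in words:                 # Step 330: occurrence count is the first word frequency
        tf[w] = tf.get(w, 0) + 1
    scores = {}
    for w in tf:
        tf_idf = tf[w] * idf_list.get(w, 0.0)             # Steps 340-350
        scores[w] = tf_idf * deviation_list.get(w, 1.0)   # Steps 360-370
    # Step 380: sort words in descending order of first importance score
    return sorted(scores, key=lambda w: -scores[w])

seq = to_search_sequence(["order", "refund", "help", "order"])
```

The resulting sequence is what would then be matched against the pre-built dictionary tree in Steps 390 and 3100.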
In the technical solution of this embodiment of the present disclosure, the importance score of each word is calculated from the word frequency, inverse document frequency, and distribution deviation of that word in the text data to be clustered; the at least one word in the text data to be clustered is sorted based on the importance scores to generate a to-be-searched word sequence; a target word sequence adapted to the to-be-searched word sequence is then found based on the pre-built dictionary tree; and the text data is clustered based on the target word sequence. This simplifies the text clustering process, greatly reduces the time complexity of text clustering, and effectively improves the efficiency and accuracy of text clustering.
FIG. 4 is a flowchart of a text clustering method in another embodiment of the present disclosure. As shown in FIG. 4, the method includes the following steps:
Step 410: Obtain a total corpus and a target corpus, where the total corpus includes the target corpus, and the target corpus contains at least one piece of sample text data.
Step 420: Calculate the second distribution deviation of each word contained in the target corpus relative to the total corpus.
Optionally, calculating the second distribution deviation of each word contained in the target corpus relative to the total corpus includes calculating it according to the following formula:
b = freq_a(w) / freq(w), with freq_a(w) = t/M and freq(w) = t'/M'
where b represents the second distribution deviation of word w in the target corpus relative to the total corpus, freq_a(w) represents the frequency of occurrence of word w in the target corpus, freq(w) represents the frequency of occurrence of word w in the total corpus, t represents the number of occurrences of word w in the target corpus, M represents the total number of words contained in the target corpus, t' represents the number of occurrences of word w in the total corpus, and M' represents the total number of words contained in the total corpus.
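A minimal sketch of the second distribution deviation of Step 420, assuming it is the ratio of a word's frequency of occurrence in the target corpus, freq_a(w) = t/M, to its frequency in the total corpus, freq(w) = t'/M'. The ratio form is an assumption consistent with the variable definitions above (the original formula figure is not reproduced in the text), and the function name and toy counts are illustrative.

```python
def distribution_deviation(t, M, t_prime, M_prime):
    """Assumed ratio form: b = freq_a(w) / freq(w)."""
    freq_target = t / M              # freq_a(w): occurrences of w over target corpus size
    freq_total = t_prime / M_prime   # freq(w): occurrences of w over total corpus size
    return freq_target / freq_total

# A word appearing 10 times in a 1000-word target corpus but only 20 times in a
# 100000-word total corpus is strongly over-represented in the target corpus.
b = distribution_deviation(10, 1000, 20, 100000)
```

Under this reading, b > 1 boosts words that are characteristic of the target corpus, which matches the role the deviation plays in the importance score.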
Step 430: For each piece of sample text data in the target corpus, calculate the second importance score of the corresponding word according to the second distribution deviation of each word in the sample text data, sort the at least one word in each piece of sample text data in descending order of the second importance scores, and generate a sample word sequence corresponding to the sample text data.
Optionally, calculating, for each piece of sample text data in the target corpus, the second importance score of the corresponding word according to the second distribution deviation of each word in the sample text data includes: for each piece of sample text data in the target corpus, calculating the second word frequency-inverse document frequency of each word in the sample text data; and calculating the second importance score of each word in the sample text data according to each second word frequency-inverse document frequency and the corresponding second distribution deviation.
Optionally, calculating the second word frequency-inverse document frequency of each word in the sample text data includes: determining the second word frequency and the second inverse document frequency of each word in the sample text data; and calculating the second word frequency-inverse document frequency of the corresponding word in the sample text data from the second word frequency and the second inverse document frequency.
Optionally, determining the second word frequency and the second inverse document frequency of each word in the sample text data includes calculating them according to the following formulas:
tf(w) = m
idf(w) = log(N/n)
Calculating the second word frequency-inverse document frequency of the corresponding word in the sample text data from the second word frequency and the second inverse document frequency includes calculating it according to the following formula:
tf-idf(w) = tf(w) * idf(w)
where w represents any word in the sample text data, tf(w) represents the second word frequency of word w in the sample text data, idf(w) represents the second inverse document frequency of word w in the sample text data, tf-idf(w) represents the second word frequency-inverse document frequency of word w in the sample text data, m represents the number of times word w appears in the sample text data, n represents the number of pieces of sample text data in the target corpus that contain word w, and N represents the total number of pieces of sample text data contained in the target corpus.
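The formulas above translate directly into code. In this sketch the natural logarithm is assumed for log (the base is not specified in the text), and the function and argument names are illustrative.

```python
import math

def tf_idf(m, n, N):
    """tf(w) = m, idf(w) = log(N/n), tf-idf(w) = tf(w) * idf(w)."""
    tf = m                 # number of times w appears in this piece of sample text data
    idf = math.log(N / n)  # N: total pieces of sample text data; n: pieces containing w
    return tf * idf

# Example: w appears 3 times in a text; 5 of the 100 corpus texts contain w.
score = tf_idf(3, 5, 100)
```

Note that a word contained in every piece of sample text data gets idf(w) = log(1) = 0, so its tf-idf is zero regardless of how often it appears.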
Optionally, calculating the second importance score of each word in the sample text data according to each second word frequency-inverse document frequency and the corresponding second distribution deviation includes calculating it according to the following formula:
s(w) = tf-idf_a(w) * b
where s(w) represents the second importance score of word w in the sample text data, tf-idf_a(w) represents the second word frequency-inverse document frequency of word w in the sample text data, and b represents the second distribution deviation of word w in the sample text data.
Step 440: Construct a dictionary tree based on at least one sample word sequence.
Step 450: Determine the number of occurrences of the word of each node in the dictionary tree at the same position across all sample word sequences.
Step 460: In ascending order of these occurrence counts, successively delete the nodes in the dictionary tree corresponding to the same occurrence count, until the number of nodes contained in the dictionary tree reaches a preset number.
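A hypothetical sketch of Steps 440 to 460: build the dictionary tree (a trie) from the sample word sequences, count how often each node's word occurs at that position, then delete nodes in ascending order of occurrence count until the tree fits a preset node budget. The class and function names and the toy sequences are illustrative.

```python
class Node:
    def __init__(self):
        self.children = {}   # word -> Node
        self.count = 0       # occurrences of this word at this position (Step 450)

def build_trie(sequences):
    """Step 440: one root-to-leaf path per sample word sequence."""
    root = Node()
    for seq in sequences:
        node = root
        for word in seq:
            node = node.children.setdefault(word, Node())
            node.count += 1
    return root

def all_nodes(node):
    for child in node.children.values():
        yield child
        yield from all_nodes(child)

def prune(root, max_nodes):
    """Step 460: drop the lowest-count nodes until the node budget is met."""
    while sum(1 for _ in all_nodes(root)) > max_nodes:
        lowest = min(n.count for n in all_nodes(root))
        def strip(node):
            node.children = {w: c for w, c in node.children.items() if c.count != lowest}
            for c in node.children.values():
                strip(c)
        strip(root)

trie = build_trie([["a", "b", "c"], ["a", "b", "d"], ["a", "e"]])
prune(trie, 3)
```

Pruning removes whole subtrees rooted at low-count nodes, which is what reduces the depth of the dictionary tree and speeds up the later search.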
Step 470: Obtain a target text data set to be clustered, where the target text data set includes at least one piece of target text data.
Step 480: For each piece of target text data in the target text data set, calculate the first importance score of at least one word in the target text data, sort the at least one word in the target text data based on the first importance scores, and generate a to-be-searched word sequence corresponding to the target text data.
Step 490: For each to-be-searched word sequence, search the pre-built dictionary tree, in order from the root node to the child nodes, for a target word sequence adapted to the to-be-searched word sequence, where the target word sequence is a subsequence of the to-be-searched word sequence.
Step 4100: Cluster the target text data corresponding to at least one target word sequence according to the respective target word sequences, to obtain a text clustering result.
The text clustering method provided in this embodiment of the present disclosure constructs a dictionary tree matching the target corpus, prunes the dictionary tree, calculates the importance score of at least one word in the text data to be clustered, sorts the at least one word in the text data to be clustered based on the importance scores to generate a to-be-searched word sequence, finds a target word sequence adapted to the to-be-searched word sequence based on the dictionary tree, and clusters the text data based on the target word sequence. Pruning the dictionary tree reduces its depth; on the premise that the target word sequence corresponding to the target text data can still be accurately determined from the dictionary tree, this effectively increases the search speed for the target word sequence, greatly reduces the time complexity of text clustering, and effectively improves the efficiency and accuracy of text clustering.
FIG. 5 is a flowchart of a text clustering method in another embodiment of the present disclosure. As shown in FIG. 5, the method includes the following steps:
Step 510: Obtain a total corpus and a target corpus, where the total corpus includes the target corpus, and the target corpus contains at least one piece of sample text data.
Step 520: Calculate the second distribution deviation of each word contained in the target corpus relative to the total corpus.
Step 530: For each piece of sample text data in the target corpus, determine the second word frequency and the second inverse document frequency of each word in the sample text data.
Step 540: Calculate the second word frequency-inverse document frequency of the corresponding word in the sample text data from the second word frequency and the second inverse document frequency.
Step 550: Calculate the second importance score of each word in the sample text data according to each second word frequency-inverse document frequency and the corresponding second distribution deviation.
Step 560: Sort the at least one word in each piece of sample text data in descending order of the second importance scores, and generate a sample word sequence corresponding to the sample text data.
Step 570: Construct a dictionary tree based on at least one sample word sequence.
Step 580: Store a distribution deviation list composed of at least one second distribution deviation and an inverse document frequency list composed of at least one second inverse document frequency, as parameter configuration information of the dictionary tree.
Step 590: Obtain a target text data set to be clustered, where the target text data set includes at least one piece of target text data.
Step 5100: For each piece of target text data in the target text data set, determine the number of occurrences of each word in the target text data, and use the number of occurrences as the first word frequency of that word.
Step 5110: In the inverse document frequency list, look up the inverse document frequency corresponding to each word in the target text data, as the first inverse document frequency of that word.
Step 5120: Calculate the first word frequency-inverse document frequency of the corresponding word from the first word frequency and the first inverse document frequency, where the first word frequency-inverse document frequency is the product of the first word frequency and the first inverse document frequency.
Step 5130: In the distribution deviation list, look up the distribution deviation corresponding to each word in the target text data, as the first distribution deviation of that word.
Step 5140: Calculate the first importance score of each word in the target text data according to each first word frequency-inverse document frequency and the corresponding first distribution deviation, where the first importance score is the product of the first word frequency-inverse document frequency and the first distribution deviation.
Step 5150: Sort the at least one word in the target text data based on the first importance scores, and generate a to-be-searched word sequence corresponding to the target text data.
Step 5160: For each to-be-searched word sequence, search the pre-built dictionary tree for a target word sequence adapted to the to-be-searched word sequence, where the target word sequence is a subsequence of the to-be-searched word sequence.
Step 5170: Cluster the target text data corresponding to at least one target word sequence according to the respective target word sequences, to obtain a text clustering result.
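A hypothetical sketch of Steps 5160 and 5170: starting at the root of the dictionary tree, follow child nodes whose words appear in the to-be-searched word sequence; the path walked is the target word sequence (a subsequence of the search sequence), and target text data sharing the same target word sequence fall into one cluster. The nested-dict trie and the toy sequences are illustrative, not from the patent.

```python
def match_target_sequence(trie, search_seq):
    """Walk the trie from the root, following words of the search sequence."""
    node, target = trie, []
    for word in search_seq:      # root-to-child order
        if word in node:
            target.append(word)
            node = node[word]
    return tuple(target)         # the target word sequence (a subsequence)

# Nested-dict trie with two branches: refund -> order and refund -> address.
trie = {"refund": {"order": {}, "address": {}}}

# Step 5170: texts with the same target word sequence form one cluster.
clusters = {}
for text, seq in [("t1", ["refund", "order", "help"]),
                  ("t2", ["refund", "order"]),
                  ("t3", ["refund", "address"])]:
    clusters.setdefault(match_target_sequence(trie, seq), []).append(text)
```

Because matching is a single root-to-leaf walk per text, clustering avoids pairwise text comparisons, which is the source of the time-complexity reduction claimed above.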
The text clustering method provided in this embodiment of the present disclosure clusters text data based on a dictionary tree, which simplifies the text clustering process, greatly reduces the time complexity of text clustering, and effectively improves the efficiency and accuracy of text clustering.
FIG. 6 is a schematic structural diagram of a text clustering apparatus provided in another embodiment of the present disclosure. As shown in FIG. 6, the apparatus includes a text data acquisition module 610, a search word sequence generation module 620, a target word sequence determination module 630, and a text clustering module 640.
The text data acquisition module 610 is configured to obtain a target text data set to be clustered, where the target text data set includes at least one piece of target text data.
The search word sequence generation module 620 is configured to, for each piece of target text data in the target text data set, calculate the first importance score of at least one word in the target text data, sort the at least one word in the target text data based on the first importance scores, and generate a to-be-searched word sequence corresponding to the target text data.
The target word sequence determination module 630 is configured to, for each to-be-searched word sequence, search a pre-built dictionary tree for a target word sequence adapted to the to-be-searched word sequence, where the target word sequence is a subsequence of the to-be-searched word sequence.
The text clustering module 640 is configured to cluster the target text data corresponding to at least one target word sequence according to the respective target word sequences, to obtain a text clustering result.
In this embodiment of the present disclosure, a target text data set to be clustered is obtained, where the target text data set includes at least one piece of target text data; for each piece of target text data in the target text data set, the first importance score of at least one word in the target text data is calculated, the at least one word in the target text data is sorted based on the first importance scores, and a to-be-searched word sequence corresponding to the target text data is generated; for each to-be-searched word sequence, a pre-built dictionary tree is searched for a target word sequence adapted to the to-be-searched word sequence, where the target word sequence is a subsequence of the to-be-searched word sequence; and the target text data corresponding to at least one target word sequence is clustered according to the respective target word sequences, to obtain a text clustering result. The text clustering apparatus provided in this embodiment of the present disclosure calculates the importance score of each word in the text data to be clustered, sorts at least one word in the text data to be clustered based on the importance scores to generate a to-be-searched word sequence, finds a target word sequence adapted to the to-be-searched word sequence based on the pre-built dictionary tree, and clusters the text data based on the target word sequence. This simplifies the text clustering process, greatly reduces the time complexity of text clustering, and effectively improves the efficiency and accuracy of text clustering.
Optionally, the search word sequence generation module includes:
a first word frequency-inverse document frequency calculation unit, configured to, for each piece of target text data in the target text data set, calculate the first word frequency-inverse document frequency of at least one word in the target text data; and
a first importance score calculation unit, configured to calculate the first importance score of at least one word in the target text data according to at least one first word frequency-inverse document frequency.
Optionally, the first word frequency-inverse document frequency calculation unit includes:
a first frequency determination subunit, configured to determine the first word frequency and the first inverse document frequency of each word in the target text data; and
a first word frequency-inverse document frequency calculation subunit, configured to calculate the first word frequency-inverse document frequency of the corresponding word from the first word frequency and the first inverse document frequency, where the first word frequency-inverse document frequency is the product of the first word frequency and the first inverse document frequency.
Optionally, the first frequency determination subunit is configured to:
determine the number of occurrences of each word in the target text data, and use the number of occurrences as the first word frequency of the corresponding word;
obtain parameter configuration information corresponding to the dictionary tree, where the parameter configuration information includes an inverse document frequency list, and the inverse document frequency list includes the inverse document frequency of each word contained in the dictionary tree; and
in the inverse document frequency list, look up the inverse document frequency corresponding to each word in the target text data, as the first inverse document frequency of that word.
Optionally, the parameter configuration information further includes a distribution deviation list, where the distribution deviation list includes the distribution deviation of each word contained in the dictionary tree.
The apparatus further includes:
a distribution deviation determination module, configured to, before the first importance score of at least one word in the target text data is calculated according to at least one first word frequency-inverse document frequency, look up, in the distribution deviation list, the distribution deviation corresponding to each word in the target text data, as the first distribution deviation of that word.
The first importance score calculation unit is configured to:
calculate the first importance score of each word in the target text data according to each first word frequency-inverse document frequency and the corresponding first distribution deviation, where the first importance score is the product of the first word frequency-inverse document frequency and the first distribution deviation.
Optionally, the target word sequence determination module is configured to:
for each to-be-searched word sequence, search the pre-built dictionary tree, in order from the root node to the child nodes, for a target word sequence adapted to the to-be-searched word sequence.
Optionally, the apparatus further includes:
a corpus acquisition module, configured to obtain a total corpus and a target corpus before the target text data set to be clustered is obtained, where the total corpus includes the target corpus, and the target corpus contains at least one piece of sample text data;
a distribution deviation calculation module, configured to calculate the second distribution deviation of each word contained in the target corpus relative to the total corpus;
a sample word sequence generation module, configured to, for each piece of sample text data in the target corpus, calculate the second importance score of the corresponding word according to the second distribution deviation of each word in the sample text data, sort the at least one word in each piece of sample text data in descending order of the second importance scores, and generate a sample word sequence corresponding to the sample text data; and
a dictionary tree construction module, configured to construct the dictionary tree based on at least one sample word sequence.
Optionally, the sample word sequence generation module includes:
a second word frequency-inverse document frequency calculation unit, configured to, for each piece of sample text data in the target corpus, calculate the second word frequency-inverse document frequency of each word in the sample text data; and
a second importance score calculation unit, configured to calculate the second importance score of each word in the sample text data according to each second word frequency-inverse document frequency and the corresponding second distribution deviation.
Optionally, the second word frequency-inverse document frequency calculation unit includes:
a second frequency determination subunit, configured to determine the second word frequency and the second inverse document frequency of each word in the sample text data; and
a second word frequency-inverse document frequency calculation subunit, configured to calculate the second word frequency-inverse document frequency of the corresponding word in the sample text data from the second word frequency and the second inverse document frequency.
Optionally, the second frequency determination subunit is configured to calculate the second word frequency and the second inverse document frequency of each word in the sample text data according to the following formulas:
tf(w) = m
idf(w) = log(N/n)
The second word frequency-inverse document frequency calculation subunit is configured to calculate the second word frequency-inverse document frequency of each word in the sample text data according to the following formula:
tf-idf(w) = tf(w) * idf(w)
where w represents any word in the sample text data, tf(w) represents the second word frequency of word w in the sample text data, idf(w) represents the second inverse document frequency of word w in the sample text data, tf-idf(w) represents the second word frequency-inverse document frequency of word w in the sample text data, m represents the number of times word w appears in the sample text data, n represents the number of pieces of sample text data in the target corpus that contain word w, and N represents the total number of pieces of sample text data contained in the target corpus.
Optionally, the second importance score calculation unit is configured to calculate the second importance score of each word in the sample text data according to the following formula:
s(w) = tf-idf_a(w) * b
where s(w) represents the second importance score of word w in the sample text data, tf-idf_a(w) represents the second word frequency-inverse document frequency of word w in the sample text data, and b represents the second distribution deviation of word w in the sample text data.
Optionally, the distribution deviation calculation module is configured to calculate the second distribution deviation of each word contained in the target corpus relative to the total corpus according to the following formula:
b = freq_a(w) / freq(w), with freq_a(w) = t/M and freq(w) = t'/M'
where b represents the second distribution deviation of word w in the target corpus relative to the total corpus, freq_a(w) represents the frequency of occurrence of word w in the target corpus, freq(w) represents the frequency of occurrence of word w in the total corpus, t represents the number of occurrences of word w in the target corpus, M represents the total number of words contained in the target corpus, t' represents the number of occurrences of word w in the total corpus, and M' represents the total number of words contained in the total corpus.
Optionally, the apparatus further includes:
an occurrence count determination module, configured to, after the dictionary tree is constructed based on at least one sample word sequence, determine the number of occurrences of the word of each node in the dictionary tree at the same position across all sample word sequences; and
a dictionary tree pruning module, configured to prune the dictionary tree according to the number of occurrences of the word of each node in the dictionary tree at the same position across all sample word sequences, until the number of nodes contained in the dictionary tree reaches a preset number.
Optionally, the dictionary tree pruning module is configured to:
in ascending order of the number of occurrences of the word of each node in the dictionary tree at the same position across all sample word sequences, successively delete the nodes in the dictionary tree corresponding to the same occurrence count, until the number of nodes contained in the dictionary tree reaches the preset number.
The above apparatus can execute the methods provided in all the foregoing embodiments of the present disclosure, and has the functional modules corresponding to executing those methods. For technical details not described in detail in this embodiment of the present disclosure, reference may be made to the methods provided in all the foregoing embodiments of the present disclosure.
Referring now to FIG. 7, a schematic structural diagram of an electronic device 300 suitable for implementing an embodiment of the present disclosure is shown. Electronic devices in the embodiments of the present disclosure may include, but are not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, personal digital assistants (PDAs), PADs (tablet computers), portable multimedia players (PMPs), and vehicle-mounted terminals (for example, vehicle-mounted navigation terminals); fixed terminals such as digital televisions (TVs) and desktop computers; and various forms of servers, such as standalone servers or server clusters. The electronic device shown in FIG. 7 is merely an example and should not impose any limitation on the functions and scope of use of the embodiments of the present disclosure.
As shown in FIG. 7, the electronic device 300 may include a processing apparatus (e.g., a central processing unit, a graphics processor, etc.) 301, which can perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 302 or a program loaded from a storage apparatus 305 into a random access memory (RAM) 303. The RAM 303 also stores various programs and data required for the operation of the electronic device 300. The processing apparatus 301, the ROM 302, and the RAM 303 are connected to one another through a bus 304. An input/output (I/O) interface 305 is also connected to the bus 304.
Typically, the following apparatuses may be connected to the I/O interface 305: an input apparatus 306 including, for example, a touch screen, a touch pad, a keyboard, a mouse, a camera, a microphone, an accelerometer, or a gyroscope; an output apparatus 307 including, for example, a liquid crystal display (LCD), a speaker, or a vibrator; a storage apparatus 308 including, for example, a magnetic tape or a hard disk; and a communication apparatus 309. The communication apparatus 309 may allow the electronic device 300 to communicate wirelessly or by wire with other devices to exchange data. Although FIG. 7 shows the electronic device 300 as having various apparatuses, it should be understood that it is not required to implement or provide all of the apparatuses shown; more or fewer apparatuses may alternatively be implemented or provided.
In particular, according to embodiments of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, an embodiment of the present disclosure includes a computer program product comprising a computer program carried on a computer-readable medium, the computer program containing program code for performing the word recommendation method. In such an embodiment, the computer program may be downloaded and installed from a network through the communication apparatus 309, installed from the storage apparatus 305, or installed from the ROM 302. When the computer program is executed by the processing apparatus 301, the above-described functions defined in the methods of the embodiments of the present disclosure are performed.
It should be noted that the computer-readable medium described above in the present disclosure may be a computer-readable signal medium, a computer-readable storage medium, or any combination of the two. The computer-readable storage medium may be, for example, but is not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having at least one wire, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
In the present disclosure, a computer-readable storage medium may be any tangible medium that contains or stores a program for use by or in connection with an instruction execution system, apparatus, or device. In the present disclosure, by contrast, a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, in which computer-readable program code is carried. Such a propagated data signal may take many forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium; the computer-readable signal medium can send, propagate, or transmit a program for use by or in connection with an instruction execution system, apparatus, or device. Program code contained on a computer-readable medium may be transmitted using any appropriate medium, including but not limited to a wire, an optical cable, radio frequency (RF), or any suitable combination of the above.
In some embodiments, the client and the server may communicate using any currently known or future-developed network protocol, such as HTTP (HyperText Transfer Protocol), and may be interconnected with digital data communication in any form or medium (e.g., a communication network). Examples of communication networks include a local area network (LAN), a wide area network (WAN), an internetwork (e.g., the Internet), and a peer-to-peer network (e.g., an ad hoc peer-to-peer network), as well as any currently known or future-developed network.
The above-described computer-readable medium may be included in the above-described electronic device, or it may exist separately without being assembled into the electronic device.
The above-described computer-readable medium carries at least one program that, when executed by the electronic device, causes the electronic device to: acquire a target text data set to be clustered, wherein the target text data set includes at least one piece of target text data; for each piece of target text data in the target text data set, calculate a first importance score of each word in the piece of target text data, and sort the words in the piece of target text data based on the first importance scores to generate a to-be-searched word sequence corresponding to the piece of target text data; for each to-be-searched word sequence, search a pre-built dictionary tree for a target word sequence adapted to the to-be-searched word sequence, wherein the target word sequence is a subsequence of the to-be-searched word sequence; and cluster, according to each target word sequence, the target text data corresponding to that target word sequence, to obtain a text clustering result.
Computer program code for performing the operations of the present disclosure may be written in one or more programming languages or a combination thereof, including but not limited to object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a standalone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case involving a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (e.g., through the Internet using an Internet service provider).
The flowcharts and block diagrams in the accompanying drawings illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowcharts or block diagrams may represent a module, a program segment, or a portion of code that contains at least one executable instruction for implementing the specified logical function. It should also be noted that, in some alternative implementations, the functions noted in the blocks may occur in an order different from that noted in the drawings. For example, two blocks shown in succession may in fact be executed substantially in parallel, or they may sometimes be executed in the reverse order, depending on the functionality involved. It should also be noted that each block of the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, may be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented by software or by hardware. The name of a unit does not, under certain circumstances, constitute a limitation on the unit itself.
The functions described herein above may be performed, at least in part, by at least one hardware logic component. For example, and without limitation, exemplary types of hardware logic components that may be used include field programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), application-specific standard products (ASSPs), systems on chip (SOCs), and complex programmable logic devices (CPLDs).
In the context of the present disclosure, a machine-readable medium may be a tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the above. More specific examples of machine-readable storage media include an electrical connection based on at least one wire, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
According to at least one embodiment of the present disclosure, an embodiment of the present disclosure provides a text clustering method, including:
acquiring a target text data set to be clustered, wherein the target text data set includes at least one piece of target text data;
for each piece of target text data in the target text data set, calculating a first importance score of at least one word in the piece of target text data, and sorting the at least one word in the piece of target text data based on the first importance score, to generate a to-be-searched word sequence corresponding to the piece of target text data;
for each to-be-searched word sequence, searching a pre-built dictionary tree for a target word sequence adapted to the to-be-searched word sequence, wherein the target word sequence is a subsequence of the to-be-searched word sequence; and
clustering, according to each of at least one target word sequence, the target text data corresponding to that target word sequence, to obtain a text clustering result.
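The final clustering step above can be sketched in a few lines of Python. This is a minimal illustration, not the disclosure's implementation; all names and the toy data are assumptions. Texts whose dictionary-tree searches returned the same target word sequence fall into the same cluster:

```python
from collections import defaultdict

def cluster_by_sequence(texts, target_sequences):
    """Group texts by the target word sequence matched for each one.

    texts[i] corresponds to target_sequences[i]; texts that share a
    matched target word sequence end up in the same cluster.
    """
    clusters = defaultdict(list)
    for text, seq in zip(texts, target_sequences):
        clusters[tuple(seq)].append(text)
    return dict(clusters)

# Illustrative inputs: each text paired with its matched target word sequence.
texts = ["order refund delayed", "refund still pending", "app crashes on login"]
matched = [("refund", "delay"), ("refund", "delay"), ("crash", "login")]
clusters = cluster_by_sequence(texts, matched)
```

The first two texts share the matched sequence `("refund", "delay")`, so they form one cluster; the third forms its own.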
Optionally, for each piece of target text data in the target text data set, calculating the first importance score of at least one word in the piece of target text data includes:
for each piece of target text data in the target text data set, separately calculating a first term frequency-inverse document frequency of at least one word in the piece of target text data; and
calculating the first importance score of the at least one word in the piece of target text data according to the at least one first term frequency-inverse document frequency.
Optionally, separately calculating the first term frequency-inverse document frequency of at least one word in the target text data includes:
separately determining a first term frequency and a first inverse document frequency of each word in the target text data; and
calculating the first term frequency-inverse document frequency of the corresponding word according to the first term frequency and the first inverse document frequency, wherein the first term frequency-inverse document frequency is the product of the first term frequency and the first inverse document frequency.
Optionally, separately determining the first term frequency and the first inverse document frequency of each word in the target text data includes:
determining the number of occurrences of each word in the target text data, and taking the number of occurrences as the first term frequency of the corresponding word;
acquiring parameter configuration information corresponding to the dictionary tree, wherein the parameter configuration information includes an inverse document frequency list that contains the inverse document frequency of each word contained in the dictionary tree; and
looking up, in the inverse document frequency list, the inverse document frequency corresponding to each word in the target text data, as the first inverse document frequency of that word.
Optionally, the parameter configuration information further includes a distribution deviation list, wherein the distribution deviation list contains the distribution deviation of each word contained in the dictionary tree.
Before calculating the first importance score of at least one word in the target text data according to the at least one first term frequency-inverse document frequency, the method further includes:
looking up, in the distribution deviation list, the distribution deviation corresponding to each word in the target text data, as the first distribution deviation of that word.
Calculating the first importance score of at least one word in the target text data according to the at least one first term frequency-inverse document frequency includes:
calculating the first importance score of each word in the target text data according to each first term frequency-inverse document frequency and the corresponding first distribution deviation, wherein the first importance score is the product of the first term frequency-inverse document frequency and the first distribution deviation.
Optionally, for each to-be-searched word sequence, searching the pre-built dictionary tree for a target word sequence adapted to the to-be-searched word sequence includes:
for each to-be-searched word sequence, searching the pre-built dictionary tree, in order from the root node to the child nodes, for a target word sequence adapted to the to-be-searched word sequence.
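One plausible reading of this root-to-child search can be sketched with a nested-dict trie. The exact matching rule is an assumption on my part; the disclosure only requires that the result be a subsequence of the to-be-searched word sequence. Here, query words with no matching child are simply skipped:

```python
def search_trie(root, query_words):
    """Walk the trie from the root, matching query words in order.

    Query words with no matching child node are skipped, so the
    returned target word sequence is a subsequence of the query.
    `root` is a nested dict mapping each word to its child subtree.
    """
    matched = []
    node = root
    for word in query_words:
        if word in node:          # descend to the matching child node
            matched.append(word)
            node = node[word]
    return matched

# Illustrative trie built from two sample word sequences.
trie = {"refund": {"delay": {}, "fail": {}}, "crash": {"login": {}}}
result = search_trie(trie, ["refund", "order", "delay"])
```

With the query `["refund", "order", "delay"]`, the word "order" has no child under "refund" and is skipped, so the matched target word sequence is `["refund", "delay"]`.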
Optionally, before acquiring the target text data set to be clustered, the method further includes:
acquiring a total corpus and a target corpus, wherein the total corpus includes the target corpus, and the target corpus contains at least one piece of sample text data;
calculating a second distribution deviation of each word contained in the target corpus relative to the total corpus;
for each piece of sample text data in the target corpus, calculating a second importance score of the corresponding word according to the second distribution deviation of each word in the piece of sample text data, and sorting the at least one word in each piece of sample text data in descending order of the second importance scores, to generate a sample word sequence corresponding to the piece of sample text data; and
building the dictionary tree based on the at least one sample word sequence.
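The construction step can be sketched as a minimal nested-dict trie. This assumes each sample word sequence has already been sorted by descending second importance score, so shared high-importance prefixes merge near the root; the data is illustrative:

```python
def build_trie(sample_sequences):
    """Insert each sample word sequence into a nested-dict trie.

    Each sequence is assumed to be pre-sorted by descending importance
    score, so sequences sharing their most important words share a
    path near the root.
    """
    root = {}
    for seq in sample_sequences:
        node = root
        for word in seq:
            # Descend into the child for this word, creating it if absent.
            node = node.setdefault(word, {})
    return root

samples = [["refund", "delay"], ["refund", "fail"], ["crash", "login"]]
trie = build_trie(samples)
```

The first two sample sequences share the root-level node "refund" and branch into "delay" and "fail" below it.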
Optionally, for each piece of sample text data in the target corpus, calculating the second importance score of the corresponding word according to the second distribution deviation of each word in the piece of sample text data includes:
for each piece of sample text data in the target corpus, separately calculating a second term frequency-inverse document frequency of each word in the piece of sample text data; and
calculating the second importance score of each word in the piece of sample text data according to each second term frequency-inverse document frequency and the corresponding second distribution deviation.
Optionally, separately calculating the second term frequency-inverse document frequency of each word in the sample text data includes:
separately determining a second term frequency and a second inverse document frequency of each word in the sample text data; and
calculating the second term frequency-inverse document frequency of the corresponding word in the sample text data according to the second term frequency and the second inverse document frequency.
Optionally, separately determining the second term frequency and the second inverse document frequency of each word in the sample text data includes:
calculating the second term frequency and the second inverse document frequency of each word in the sample text data according to the following formulas:
tf(w) = m
idf(w) = log(N / n)
Calculating the second term frequency-inverse document frequency of the corresponding word in the sample text data according to the second term frequency and the second inverse document frequency includes:
calculating the second term frequency-inverse document frequency of each word in the sample text data according to the following formula:
tf-idf(w) = tf(w) * idf(w)
where w denotes any word in the sample text data, tf(w) denotes the second term frequency of the word w in the sample text data, idf(w) denotes the second inverse document frequency of the word w in the sample text data, tf-idf(w) denotes the second term frequency-inverse document frequency of the word w in the sample text data, m denotes the number of times the word w occurs in the sample text data, n denotes the number of pieces of sample text data in the target corpus that contain the word w, and N denotes the total number of pieces of sample text data contained in the target corpus.
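The formulas above translate directly into code. The following is a small sketch over a toy corpus (the corpus and word lists are illustrative, not from the disclosure):

```python
import math

def tf(word, document):
    """tf(w) = m: the number of times w occurs in the document (a word list)."""
    return document.count(word)

def idf(word, corpus):
    """idf(w) = log(N / n): N pieces of sample text data in total,
    n of which contain the word w."""
    n_total = len(corpus)
    n_containing = sum(1 for doc in corpus if word in doc)
    return math.log(n_total / n_containing)

def tf_idf(word, document, corpus):
    """tf-idf(w) = tf(w) * idf(w)."""
    return tf(word, document) * idf(word, corpus)

# Three pieces of sample text data, already segmented into words.
corpus = [["refund", "delay", "refund"], ["crash", "login"], ["refund", "fail"]]
score = tf_idf("refund", corpus[0], corpus)  # 2 * log(3 / 2)
```

Note that, as written, idf(w) is only defined for words that occur in at least one piece of sample text data (n > 0), which holds for every word in the corpus itself.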
Optionally, calculating the second importance score of each word in the sample text data according to each second term frequency-inverse document frequency and the corresponding second distribution deviation includes:
calculating the second importance score of each word in the sample text data according to the following formula:
s(w) = tf-idf_a(w) * b
where s(w) denotes the second importance score of the word w in the sample text data, tf-idf_a(w) denotes the second term frequency-inverse document frequency of the word w in the sample text data, and b denotes the second distribution deviation of the word w in the sample text data.
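The scoring and the descending sort that produce a sample word sequence can be sketched as follows. The pairing of each word with a precomputed (tf-idf, deviation) tuple is an illustrative assumption:

```python
def importance_score(tf_idf_value, bias):
    """s(w) = tf-idf_a(w) * b: the product of a word's tf-idf within the
    sample text data and its distribution deviation."""
    return tf_idf_value * bias

def sample_word_sequence(word_stats):
    """Sort words by descending s(w) to form the sample word sequence.

    word_stats maps each word to its (tf-idf, distribution deviation) pair.
    """
    scored = {w: importance_score(t, b) for w, (t, b) in word_stats.items()}
    return sorted(scored, key=scored.get, reverse=True)

# Illustrative stats: a common function word like "the" gets a low
# deviation, pushing it to the end of the sequence despite a high tf-idf.
stats = {"refund": (2.0, 1.5), "delay": (1.0, 2.0), "the": (3.0, 0.1)}
seq = sample_word_sequence(stats)
```

Here "refund" scores 3.0, "delay" 2.0, and "the" 0.3, so the resulting sample word sequence is `["refund", "delay", "the"]`.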
Optionally, calculating the second distribution deviation of each word contained in the target corpus relative to the total corpus includes:
calculating the second distribution deviation of each word contained in the target corpus relative to the total corpus according to the following formula:
b = freq_a(w) / freq(w) = (t / M) / (t' / M')
where b denotes the second distribution deviation of the word w in the target corpus relative to the total corpus, freq_a(w) denotes the frequency of occurrence of the word w in the target corpus, freq(w) denotes the frequency of occurrence of the word w in the total corpus, t denotes the number of occurrences of the word w in the target corpus, M denotes the total number of words contained in the target corpus, t' denotes the number of occurrences of the word w in the total corpus, and M' denotes the total number of words contained in the total corpus.
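The deviation formula is a ratio of two relative frequencies; a one-function sketch (the example counts are illustrative):

```python
def distribution_bias(t, M, t_prime, M_prime):
    """b = freq_a(w) / freq(w) = (t / M) / (t' / M').

    t / M is the word's relative frequency in the target corpus and
    t' / M' its relative frequency in the total corpus, so b > 1 marks
    a word that is over-represented in the target corpus.
    """
    return (t / M) / (t_prime / M_prime)

# A word seen 30 times among 1,000 target-corpus words, but only 60 times
# among 10,000 total-corpus words, is five times over-represented.
b = distribution_bias(t=30, M=1000, t_prime=60, M_prime=10000)
```

Words spread evenly across both corpora get b close to 1 and therefore contribute little beyond their plain tf-idf.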
Optionally, after building the dictionary tree based on each sample word sequence, the method further includes:
determining the number of occurrences of the word of each node in the dictionary tree at the same position in all sample word sequences; and
pruning the dictionary tree according to the number of occurrences of the word of each node at the same position in all sample word sequences, until the number of nodes contained in the dictionary tree reaches a preset number.
Optionally, pruning the dictionary tree according to the number of occurrences of the word of each node at the same position in all sample word sequences, until the number of nodes contained in the dictionary tree reaches the preset number, includes:
deleting, in ascending order of the number of times the word of each node occurs at the same position in all sample word sequences, the nodes in the dictionary tree corresponding to each occurrence count in turn, until the number of nodes contained in the dictionary tree reaches the preset number.
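A hedged sketch of this count-based pruning follows. The node layout and toy sequences are assumptions; the idea is that each node records how many sample word sequences pass through it (i.e., how often its word occurs at that depth), and each pass removes every node carrying the current lowest count, together with its subtree, until the node budget is met:

```python
def build_counted_trie(sequences):
    """Nested-dict trie whose nodes record how many sequences pass
    through them (= occurrences of the node's word at that position)."""
    root = {"children": {}, "count": 0}
    for seq in sequences:
        node = root
        for word in seq:
            child = node["children"].setdefault(word, {"children": {}, "count": 0})
            child["count"] += 1
            node = child
    return root

def count_nodes(node):
    """Total number of nodes in the trie, including this one."""
    return 1 + sum(count_nodes(c) for c in node["children"].values())

def prune(root, max_nodes):
    """Delete nodes in ascending order of occurrence count, one count
    value (and all nodes sharing it) per pass, until the trie holds at
    most max_nodes nodes (root included)."""
    def collect_counts(node, counts):
        for child in node["children"].values():
            counts.append(child["count"])
            collect_counts(child, counts)

    def drop(node, threshold):
        # Keep only children above the threshold; their subtrees go too.
        node["children"] = {w: c for w, c in node["children"].items()
                            if c["count"] > threshold}
        for child in node["children"].values():
            drop(child, threshold)

    while count_nodes(root) > max_nodes:
        counts = []
        collect_counts(root, counts)
        if not counts:
            break
        drop(root, min(counts))  # remove every node with the lowest count
    return root

# Toy data: "c" appears only once at its position, so it is pruned first.
trie = build_counted_trie([["a", "b"], ["a", "c"], ["a", "b"]])
pruned = prune(trie, max_nodes=3)
```

Deleting whole count-classes at a time matches the "nodes corresponding to the same occurrence count" wording, which means the final node count can undershoot the preset number rather than land on it exactly.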
Claims (17)
- A text clustering method, comprising: acquiring a target text data set to be clustered, wherein the target text data set includes at least one piece of target text data; for each piece of target text data in the target text data set, calculating a first importance score of at least one word in the piece of target text data, and sorting the at least one word in the piece of target text data based on the first importance score, to generate a to-be-searched word sequence corresponding to the piece of target text data; for each to-be-searched word sequence, searching a pre-built dictionary tree for a target word sequence adapted to the to-be-searched word sequence, wherein the target word sequence is a subsequence of the to-be-searched word sequence; and clustering, according to each of at least one target word sequence, the target text data corresponding to that target word sequence, to obtain a text clustering result.
- The method according to claim 1, wherein, for each piece of target text data in the target text data set, calculating the first importance score of at least one word in the piece of target text data comprises: for each piece of target text data in the target text data set, separately calculating a first term frequency-inverse document frequency of at least one word in the piece of target text data; and calculating the first importance score of the at least one word in the piece of target text data according to the at least one first term frequency-inverse document frequency.
- The method according to claim 2, wherein separately calculating the first term frequency-inverse document frequency of at least one word in each piece of target text data comprises: separately determining a first term frequency and a first inverse document frequency of each word in the piece of target text data; and calculating the first term frequency-inverse document frequency of the corresponding word according to the first term frequency and the first inverse document frequency, wherein the first term frequency-inverse document frequency is the product of the first term frequency and the first inverse document frequency.
- The method according to claim 3, wherein separately determining the first term frequency and the first inverse document frequency of each word in each piece of target text data comprises: determining the number of occurrences of each word in the piece of target text data, and taking the number of occurrences as the first term frequency of the corresponding word; acquiring parameter configuration information corresponding to the dictionary tree, wherein the parameter configuration information includes an inverse document frequency list that contains the inverse document frequency of each word contained in the dictionary tree; and looking up, in the inverse document frequency list, the inverse document frequency corresponding to each word in the piece of target text data, as the first inverse document frequency of that word.
- The method according to claim 4, wherein the parameter configuration information further includes a distribution deviation list, the distribution deviation list containing the distribution deviation of each word contained in the dictionary tree; before calculating the first importance score of at least one word in each piece of target text data according to the at least one first term frequency-inverse document frequency, the method further comprises: looking up, in the distribution deviation list, the distribution deviation corresponding to each word in the piece of target text data, as the first distribution deviation of that word; and calculating the first importance score of at least one word in each piece of target text data according to the at least one first term frequency-inverse document frequency comprises: calculating the first importance score of each word in the piece of target text data according to each first term frequency-inverse document frequency and the corresponding first distribution deviation, wherein the first importance score is the product of the first term frequency-inverse document frequency and the first distribution deviation.
- The method according to claim 1, wherein, for each to-be-searched word sequence, searching the pre-built dictionary tree for a target word sequence adapted to the to-be-searched word sequence comprises: for each to-be-searched word sequence, searching the pre-built dictionary tree, in order from the root node to the child nodes, for a target word sequence adapted to the to-be-searched word sequence.
- The method according to claim 1, further comprising, before acquiring the target text data set to be clustered: acquiring a total corpus and a target corpus, wherein the total corpus includes the target corpus, and the target corpus contains at least one piece of sample text data; calculating the second distribution deviation of each word contained in the target corpus relative to the total corpus; for each piece of sample text data in the target corpus, calculating the second importance score of each word according to the second distribution deviation of that word, sorting at least one word in each piece of sample text data in descending order of the second importance score, and generating a sample word sequence corresponding to that piece of sample text data; and constructing the dictionary tree based on at least one sample word sequence.
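The sequence-generation step in this claim (sorting each text's words by descending importance score before feeding them into the dictionary tree) might be sketched as follows; the concrete scores are a stand-in assumption for illustration only.

```python
def sample_word_sequence(words, score):
    # Sort the distinct words of one piece of sample text data by descending
    # importance score; this ordered list seeds one root-to-leaf trie path.
    return sorted(set(words), key=score, reverse=True)

# Hypothetical precomputed importance scores (tf-idf * distribution deviation).
scores = {"market": 3.2, "stock": 2.5, "today": 0.4}
print(sample_word_sequence(["stock", "market", "today"], scores.get))
# ['market', 'stock', 'today']
```

Because every sequence is sorted the same way, texts sharing their most important words share a trie prefix, which is what makes the later prefix search meaningful.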
- The method according to claim 7, wherein, for each piece of sample text data in the target corpus, calculating the second importance score of each word according to the second distribution deviation of that word comprises: for each piece of sample text data in the target corpus, calculating the second word frequency-inverse document frequency of each word in that piece of sample text data; and calculating the second importance score of each word in each piece of sample text data according to each second word frequency-inverse document frequency and the corresponding second distribution deviation.
- The method according to claim 8, wherein calculating the second word frequency-inverse document frequency of each word in each piece of sample text data comprises: determining the second word frequency and the second inverse document frequency of each word in each piece of sample text data; and calculating the second word frequency-inverse document frequency of the corresponding word in each piece of sample text data according to the second word frequency and the second inverse document frequency.
- The method according to claim 9, wherein determining the second word frequency and the second inverse document frequency of each word in each piece of sample text data comprises: calculating the second word frequency and the second inverse document frequency of each word in each piece of sample text data according to the following formulas: tf(w) = m and idf(w) = log(N/n); and wherein calculating the second word frequency-inverse document frequency of the corresponding word in each piece of sample text data according to the second word frequency and the second inverse document frequency comprises: calculating the second word frequency-inverse document frequency of each word according to the following formula: tf-idf(w) = tf(w) * idf(w); where w denotes any word in a piece of sample text data, tf(w) denotes the second word frequency of the word w in that piece of sample text data, idf(w) denotes the second inverse document frequency of the word w, tf-idf(w) denotes the second word frequency-inverse document frequency of the word w, m denotes the number of occurrences of the word w in that piece of sample text data, n denotes the number of pieces of sample text data in the target corpus that contain the word w, and N denotes the total number of pieces of sample text data contained in the target corpus.
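The formulas in this claim (tf(w) = m, idf(w) = log(N/n), tf-idf(w) = tf(w) * idf(w)) can be checked with a short sketch; the natural-log base is an assumption, since the claim does not fix it.

```python
import math

def tf_idf(word, document, corpus):
    # tf(w) = m: raw count of the word in this piece of sample text data.
    m = document.count(word)
    # idf(w) = log(N / n): N pieces of text in the corpus, n of them contain w.
    N = len(corpus)
    n = sum(1 for doc in corpus if word in doc)
    return m * math.log(N / n)

corpus = [["a", "b", "a"], ["b", "c"], ["c", "d"]]
print(tf_idf("a", corpus[0], corpus))  # 2 * log(3/1) ≈ 2.197
```

Note the claim's tf is a raw count rather than a normalized frequency, so longer texts weight their words more heavily.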
- The method according to claim 8, wherein calculating the second importance score of each word in each piece of sample text data according to each second word frequency-inverse document frequency and the corresponding second distribution deviation comprises: calculating the second importance score of each word in each piece of sample text data according to the following formula: s(w) = tf-idf_a(w) * b(w); where s(w) denotes the second importance score of the word w in each piece of sample text data, tf-idf_a(w) denotes the second word frequency-inverse document frequency of the word w in that piece of sample text data, and b(w) denotes the second distribution deviation of the word w.
- The method according to any one of claims 8-11, wherein calculating the second distribution deviation of each word contained in the target corpus relative to the total corpus comprises: calculating the second distribution deviation of each word contained in the target corpus relative to the total corpus according to the following formula: b = freq_a(w) / freq(w), with freq_a(w) = t/M and freq(w) = t'/M'; where b denotes the second distribution deviation of the word w of the target corpus relative to the total corpus, freq_a(w) denotes the frequency of occurrence of the word w in the target corpus, freq(w) denotes the frequency of occurrence of the word w in the total corpus, t denotes the number of occurrences of the word w in the target corpus, M denotes the total number of words contained in the target corpus, t' denotes the number of occurrences of the word w in the total corpus, and M' denotes the total number of words contained in the total corpus.
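A minimal sketch of the deviation described in this claim, assuming b is the ratio of the word's in-domain frequency t/M to its overall frequency t'/M' (the reading consistent with the variable definitions given for the formula), together with the product-form importance score:

```python
def distribution_deviation(t, M, t_total, M_total):
    # freq_a(w) = t / M: frequency of the word in the target corpus.
    # freq(w) = t' / M': frequency of the word in the total corpus.
    # b = freq_a(w) / freq(w): values > 1 mean the word is over-represented
    # in the target domain relative to the general corpus.
    return (t / M) / (t_total / M_total)

def importance_score(tf_idf_value, deviation):
    # Importance score: product of tf-idf and the distribution deviation.
    return tf_idf_value * deviation

print(distribution_deviation(t=30, M=1000, t_total=60, M_total=10000))  # 5.0
```

The deviation factor boosts domain-specific words over words that are merely frequent everywhere, which plain tf-idf cannot distinguish.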
- The method according to claim 7, further comprising, after constructing the dictionary tree based on at least one sample word sequence: determining the number of occurrences of the word of each node of the dictionary tree at the same position across all sample word sequences; and pruning the dictionary tree according to the number of occurrences of the word of each node at the same position across all sample word sequences, until the number of nodes contained in the dictionary tree reaches a preset number.
- The method according to claim 13, wherein pruning the dictionary tree according to the number of occurrences of the word of each node at the same position across all sample word sequences, until the number of nodes contained in the dictionary tree reaches a preset number, comprises: deleting, in ascending order of the occurrence counts, the nodes of the dictionary tree corresponding to each occurrence count in turn, until the number of nodes contained in the dictionary tree reaches the preset number.
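The counting and pruning in these two claims can be sketched as follows; modelling a node as a (depth, word) pair and deleting nodes in ascending count order is an illustrative reading, not the patent's reference implementation.

```python
from collections import Counter

def build_position_counts(sequences):
    # Occurrences of each word at the same position (trie depth)
    # across all sample word sequences.
    counts = Counter()
    for seq in sequences:
        for depth, word in enumerate(seq):
            counts[(depth, word)] += 1
    return counts

def prune(nodes, counts, max_nodes):
    # nodes: set of (depth, word) trie nodes. Delete the rarest nodes first
    # until at most max_nodes remain.
    for node in sorted(nodes, key=lambda n: counts[n]):
        if len(nodes) <= max_nodes:
            break
        nodes.discard(node)
    return nodes

seqs = [["a", "b"], ["a", "c"], ["a", "b"]]
counts = build_position_counts(seqs)
print(sorted(prune(set(counts), counts, max_nodes=2)))  # [(0, 'a'), (1, 'b')]
```

Capping the node count keeps the trie to the frequent, discriminative prefixes, so rare paths do not fragment the clustering.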
- A text clustering apparatus, comprising: a text data acquisition module, configured to acquire a target text data set to be clustered, wherein the target text data set includes at least one piece of target text data; a search word sequence generation module, configured to, for each piece of target text data in the target text data set, calculate the first importance score of at least one word in that piece of target text data, sort the at least one word based on the first importance score, and generate a word sequence to be searched corresponding to that piece of target text data; a target word sequence determination module, configured to, for each word sequence to be searched, search a pre-built dictionary tree for a target word sequence adapted to that word sequence to be searched, wherein the target word sequence is a subsequence of the word sequence to be searched; and a text clustering module, configured to cluster, according to at least one target word sequence, the target text data corresponding to the at least one target word sequence, to obtain a text clustering result.
- An electronic device, comprising: at least one processing device; and a storage device configured to store at least one program, wherein, when the at least one program is executed by the at least one processing device, the at least one processing device implements the text clustering method according to any one of claims 1-14.
- A computer-readable medium storing a computer program which, when executed by a processing device, implements the text clustering method according to any one of claims 1-14.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011630633.2A CN112632285A (en) | 2020-12-31 | 2020-12-31 | Text clustering method and device, electronic equipment and storage medium |
CN202011630633.2 | 2020-12-31 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2022143069A1 true WO2022143069A1 (en) | 2022-07-07 |
Family
ID=75290541
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2021/136677 WO2022143069A1 (en) | 2020-12-31 | 2021-12-09 | Text clustering method and apparatus, electronic device, and storage medium |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN112632285A (en) |
WO (1) | WO2022143069A1 (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112632285A (en) * | 2020-12-31 | 2021-04-09 | 北京有竹居网络技术有限公司 | Text clustering method and device, electronic equipment and storage medium |
CN117811851B (en) * | 2024-03-01 | 2024-05-17 | 深圳市聚亚科技有限公司 | Data transmission method for 4G communication module |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20120166441A1 (en) * | 2010-12-23 | 2012-06-28 | Microsoft Corporation | Keywords extraction and enrichment via categorization systems |
CN109508456A (en) * | 2018-10-22 | 2019-03-22 | 网易(杭州)网络有限公司 | A kind of text handling method and device |
CN110472043A (en) * | 2019-07-03 | 2019-11-19 | 阿里巴巴集团控股有限公司 | A kind of clustering method and device for comment text |
CN111651596A (en) * | 2020-05-27 | 2020-09-11 | 软通动力信息技术有限公司 | Text clustering method, text clustering device, server and storage medium |
CN112632285A (en) * | 2020-12-31 | 2021-04-09 | 北京有竹居网络技术有限公司 | Text clustering method and device, electronic equipment and storage medium |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106713273B (en) * | 2016-11-23 | 2019-08-09 | 中国空间技术研究院 | A kind of protocol keyword recognition methods based on dictionary tree pruning search |
CN109740165A (en) * | 2019-01-09 | 2019-05-10 | 网易(杭州)网络有限公司 | Dictionary tree constructing method, sentence data search method, apparatus, equipment and storage medium |
CN111090719B (en) * | 2019-10-11 | 2024-05-03 | 平安科技(上海)有限公司 | Text classification method, apparatus, computer device and storage medium |
CN110826605A (en) * | 2019-10-24 | 2020-02-21 | 北京明略软件系统有限公司 | Method and device for identifying user in cross-platform manner |
CN111221968B (en) * | 2019-12-31 | 2023-07-21 | 北京航空航天大学 | Author disambiguation method and device based on subject tree clustering |
CN112115232A (en) * | 2020-09-24 | 2020-12-22 | 腾讯科技(深圳)有限公司 | Data error correction method and device and server |
- 2020-12-31: CN application CN202011630633.2A filed | patent CN112632285A | active, Pending
- 2021-12-09: WO application PCT/CN2021/136677 filed | publication WO2022143069A1 | active, Application Filing
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117875262A (en) * | 2024-03-12 | 2024-04-12 | 青岛天一红旗软控科技有限公司 | Data processing method based on management platform |
CN117875262B (en) * | 2024-03-12 | 2024-06-04 | 青岛天一红旗软控科技有限公司 | Data processing method based on management platform |
CN117891411A (en) * | 2024-03-14 | 2024-04-16 | 济宁蜗牛软件科技有限公司 | Optimized storage method for massive archive data |
CN118012979A (en) * | 2024-04-10 | 2024-05-10 | 济南宝林信息技术有限公司 | Intelligent acquisition and storage system for common surgical operation |
Also Published As
Publication number | Publication date |
---|---|
CN112632285A (en) | 2021-04-09 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2022143069A1 (en) | Text clustering method and apparatus, electronic device, and storage medium | |
US10649770B2 (en) | κ-selection using parallel processing | |
CN111221984A (en) | Multimodal content processing method, device, equipment and storage medium | |
CN112840336A (en) | Techniques for ranking content item recommendations | |
US8930342B2 (en) | Enabling multidimensional search on non-PC devices | |
US20210374344A1 (en) | Method for resource sorting, method for training sorting model and corresponding apparatuses | |
CN107301195B (en) | Method and device for generating classification model for searching content and data processing system | |
WO2023160500A1 (en) | Encyclopedia information display method and apparatus, device and medium | |
US9407589B2 (en) | System and method for following topics in an electronic textual conversation | |
JP2022046759A (en) | Retrieval method, device, electronic apparatus and storage medium | |
JP2022191412A (en) | Method for training multi-target image-text matching model and image-text retrieval method and apparatus | |
US11836174B2 (en) | Method and apparatus of establishing similarity model for retrieving geographic location | |
WO2022156730A1 (en) | Text processing method and apparatus, device, and medium | |
CN110275962B (en) | Method and apparatus for outputting information | |
CN114385780B (en) | Program interface information recommendation method and device, electronic equipment and readable medium | |
JP7140913B2 (en) | Video distribution statute of limitations determination method and device | |
CN113407814B (en) | Text searching method and device, readable medium and electronic equipment | |
CN113204691B (en) | Information display method, device, equipment and medium | |
CN110209781B (en) | Text processing method and device and related equipment | |
CN117131281B (en) | Public opinion event processing method, apparatus, electronic device and computer readable medium | |
CN113536763A (en) | Information processing method, device, equipment and storage medium | |
CN111400456A (en) | Information recommendation method and device | |
CN111555960A (en) | Method for generating information | |
CN114298007A (en) | Text similarity determination method, device, equipment and medium | |
WO2021196470A1 (en) | Information pushing method and apparatus, device, and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 21913810; Country of ref document: EP; Kind code of ref document: A1 |
| NENP | Non-entry into the national phase | Ref country code: DE |
| 122 | Ep: pct application non-entry in european phase | Ref document number: 21913810; Country of ref document: EP; Kind code of ref document: A1 |