WO2017161899A1 - Text processing method, apparatus, and computing device - Google Patents

Text processing method, apparatus, and computing device

Info

Publication number
WO2017161899A1
WO2017161899A1 · PCT/CN2016/105951 · CN2016105951W
Authority
WO
WIPO (PCT)
Prior art keywords
keyword
corrected
text
word
model
Prior art date
Application number
PCT/CN2016/105951
Other languages
English (en)
French (fr)
Inventor
贾应波
周文礼
刘若曦
Original Assignee
华为技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 华为技术有限公司 filed Critical 华为技术有限公司
Publication of WO2017161899A1 publication Critical patent/WO2017161899A1/zh

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30 Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/33 Querying
    • G06F16/3331 Query processing
    • G06F16/3332 Query translation
    • G06F16/3335 Syntactic pre-processing, e.g. stopword elimination, stemming
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting

Definitions

  • the present invention relates to the field of computer technologies, and in particular, to a text processing method, apparatus, and computing device.
  • Full-text search technology has been developed for decades and is now a relatively mature technology.
  • Traditional search systems generally assume by default that the files imported into the system are correct, so after some pre-processing of the imported files, the system can build an index in its own way and provide it to applications for search services.
  • However, as technology advances, the content and sources that need to be searched become more diverse, and much of the content is converted several times before being imported into the search engine for indexing. Conversion errors may therefore cause original key information to be lost, so that no results can be found by searching.
  • In view of the above technical problems, the present invention provides a text processing method, apparatus, and computing device that detect and correct errors in text before the text is imported into a search engine, thereby improving keyword search results.
  • the present invention provides a text processing method, the method running on a text processing system, the text processing system comprising at least one computing device, the method comprising:
  • Source text is obtained, where the source text includes sample text and test text. The source text may be part of the historical text collected by the text processing system, and the text processing system trains the text correction models based on the source text to make the text more accurate.
  • The sample text is segmented, and at least one keyword in the sample text and an associated word corresponding to each keyword are obtained.
  • A first model is trained according to the at least one keyword and the associated word corresponding to each keyword; this model is the model used to correct keywords to be corrected. The test text is segmented, and the keyword to be corrected in the test text and the associated word corresponding to the keyword to be corrected are obtained.
  • The keyword to be corrected and the associated word corresponding to the keyword to be corrected are input into the first model, and the corrected keyword corresponding to the keyword to be corrected is obtained.
  • A second model is trained according to the keyword to be corrected and its corrected keyword.
  • The text to be corrected is segmented, and the word segmentation result of the text to be corrected is input into the second model to correct the text to be corrected.
  • Before the text is imported into the search engine, the first model is used to detect keywords in the text that may need correction; the second model is trained with the keywords to be corrected and their corrected keywords, and is then used to correct erroneous keywords in other texts to be corrected. This improves the accuracy of the keywords in the text and also helps improve the accuracy of subsequent keyword-based searches.
  • Segmenting the sample text and obtaining at least one keyword in the sample text and the associated word corresponding to each keyword includes: segmenting the sample text to obtain a word segmentation result of the sample text, where the word segmentation result includes at least one sample text word; obtaining at least one keyword from the at least one sample text word, where the word frequency of each such keyword in the sample text is greater than a first threshold; and obtaining candidate associated words for each keyword and obtaining, from the candidate associated words of each keyword, the associated word corresponding to that keyword, where the joint probability of the associated word and the keyword is greater than a second threshold.
  • By extracting keywords whose word frequency exceeds the first threshold, keywords of higher value in the text and higher subsequent use are obtained, and the associated words of each keyword are filtered by their joint probability with the keyword for later training.
  • Training the first model according to the at least one keyword and the associated word corresponding to each keyword includes: training the first model according to the at least one keyword, the associated word corresponding to each keyword, and the joint probability of each keyword and its corresponding associated word.
  • Inputting the keyword to be corrected and its corresponding associated word into the first model and obtaining the corrected keyword corresponding to the keyword to be corrected specifically includes: correcting the keyword to be corrected into at least one candidate correction keyword by using the first model; forming the at least one candidate correction keyword into a candidate correction keyword group; and selecting, from the candidate correction keyword group, the corrected keyword corresponding to the keyword to be corrected, where the first correction probability value corresponding to the corrected keyword is the maximum of the correction probability values corresponding to the candidate correction keywords in the group, and the correction probability value of each candidate correction keyword is the joint probability between that candidate correction keyword and the associated word corresponding to the keyword to be corrected.
  • Because the first model does not simply perform exact matching when determining the corrected keyword for each keyword to be corrected but is influenced by probabilities, each keyword to be corrected may correspond to multiple candidate correction keywords; the candidate correction keyword with the highest correction probability value is taken as the final corrected keyword.
  • Training the second model according to the keyword to be corrected and its corrected keyword includes: training the second model according to the corrected keyword corresponding to the keyword to be corrected, the keyword to be corrected, and the first correction probability value corresponding to the corrected keyword.
  • The method may further include: obtaining log keywords from a query log, where a log keyword is a word whose word frequency in the query log is greater than a third threshold; and using the log keywords as keywords of the sample text. Obtaining keywords that interest users from their query logs, as one of the means of extracting keywords from the sample text, improves the precision of extracting the most valuable keywords, so that keyword extraction no longer depends only on the word frequency of each word in the sample text.
  • an embodiment of the present invention provides a text processing apparatus, where the apparatus includes:
  • a word segmentation module, configured to obtain source text, where the source text includes sample text and test text, and to segment the sample text to obtain at least one keyword in the sample text and an associated word corresponding to each keyword; and a processing module, configured to train a first model according to the at least one keyword and the associated word corresponding to each keyword. The word segmentation module is further configured to segment the test text and obtain the keyword to be corrected in the test text and the associated word corresponding to the keyword to be corrected. The processing module is further configured to: input the keyword to be corrected and its associated word into the first model and obtain the corrected keyword corresponding to the keyword to be corrected; and train a second model according to the corrected keyword and the keyword to be corrected. The word segmentation module is further configured to segment the text to be corrected, and the processing module is further configured to input the word segmentation result of the text to be corrected into the second model and correct the text to be corrected.
  • The word segmentation module is specifically configured to segment the sample text and obtain a word segmentation result of the sample text, where the word segmentation result includes at least one sample text word; to obtain at least one keyword from the at least one sample text word, where the word frequency of the keyword in the sample text is greater than the first threshold; and to obtain candidate associated words for each keyword and, from them, the associated word corresponding to each keyword, where the joint probability of the associated word and the keyword is greater than the second threshold.
  • The processing module is specifically configured to train the first model according to the at least one keyword, the associated word corresponding to each keyword, and the joint probability of each keyword and its corresponding associated word.
  • The processing module is specifically configured to correct the keyword to be corrected into at least one candidate correction keyword by using the first model; form the at least one candidate correction keyword into a candidate correction keyword group; and select, from the candidate correction keyword group, the corrected keyword corresponding to the keyword to be corrected, where the first correction probability value corresponding to the corrected keyword is the maximum of the correction probability values corresponding to the candidate correction keywords in the group, and the correction probability value of each candidate correction keyword is the joint probability between that candidate correction keyword and the associated word corresponding to the keyword to be corrected.
  • The processing module is specifically configured to train the second model according to the corrected keyword corresponding to the keyword to be corrected, the keyword to be corrected, and the first correction probability value corresponding to the corrected keyword.
  • The processing module is further configured to obtain log keywords from a query log, where a log keyword is a word whose word frequency in the query log is greater than a third threshold, and to use the log keywords as keywords of the sample text.
  • An embodiment of the present invention provides a computing device, including a processor, a memory, a bus, and a communication interface.
  • The processor, the memory, and the communication interface establish a communication connection through the bus, and the memory is used to store instructions to be executed by the processor.
  • The instructions are executed by the processor to implement any one of the text processing methods described in the first aspect.
  • FIG. 1 is a schematic structural diagram of a text processing system according to an embodiment of the present invention
  • FIG. 2 is a flowchart of a text processing method according to an embodiment of the present invention.
  • FIG. 3 is a schematic structural diagram of a text processing apparatus according to an embodiment of the present invention.
  • The application environment of the embodiments of the present invention may be text obtained by processing a voice file with speech recognition software, or text obtained in other forms.
  • In the following specific embodiments, correcting text converted from a voice file is used as an example.
  • the embodiment of the present invention provides a system architecture diagram of a text processing system.
  • As shown in FIG. 1, the system includes a recorder 110 and a computing device 120.
  • The computing device 120 includes a processor 1201, a memory 1202, a bus 1203, and a communication interface 1204.
  • the recorder 110 can be a microphone or other device that can record, and the recorder 110 receives the sound signal sent by the user and records it to generate a voice file.
  • the processor 1201, the memory 1202, and the communication interface 1204 in the computing device may establish a communication connection through the bus 1203, or may perform communication through other means such as wireless transmission.
  • the processor 1201 can be a central processing unit (English: central processing unit, abbreviation: CPU).
  • the memory 1202 may include a volatile memory (English: volatile memory), such as a random access memory (English: random-access memory, abbreviation: RAM); the memory may also include a non-volatile memory (English: non-volatile memory) , for example, read-only memory (English: read-only memory, abbreviation: ROM), flash memory, hard disk (English: hard disk drive, abbreviation: HDD) or solid state drive (English: solid state drive, abbreviation: SSD); memory 1202 may also include a combination of the above types of memory.
  • the program code for implementing the text processing method provided by FIG. 2 of the present application is saved in the memory 1202 and executed by the processor 1201.
  • Computing device 120 communicates with recorder 110 via communication interface 1204.
  • FIG. 2 is a flowchart 200 of a text processing method according to Embodiment 1 of the present invention, including:
  • Step 210 Acquire source text, and the source text includes sample text and test text.
  • the recorder 110 transmits the voice file to the computing device 120, which converts the voice file into a plurality of texts and divides the plurality of text into sample text and test text.
  • the computing device 120 can convert the voice file into text by using Automatic Speech Recognition (ASR) technology.
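  • As a purely illustrative sketch of this conversion step (the embodiment does not prescribe any particular ASR engine), the following Python snippet converts a recorded voice file into text, assuming the open-source SpeechRecognition package and hypothetical file names:

```python
# Minimal sketch of step 210: converting a recorded voice file into source text.
# Assumes the third-party SpeechRecognition package; the file names, language code
# and choice of recognizer backend are illustrative, not part of the embodiment.
import speech_recognition as sr

def voice_file_to_text(path: str, language: str = "zh-CN") -> str:
    recognizer = sr.Recognizer()
    with sr.AudioFile(path) as source:      # open the recorded voice file
        audio = recognizer.record(source)   # read the whole recording
    # Any ASR backend could be plugged in here; the free Google Web Speech API
    # is simply the default shipped with the package.
    return recognizer.recognize_google(audio, language=language)

# texts = [voice_file_to_text(p) for p in ["call_001.wav", "call_002.wav"]]
# sample_text, test_text = texts[0], texts[1]   # later split into sample/test text
```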
  • Step 220: Segment the sample text, and obtain at least one keyword in the sample text and the associated word corresponding to each keyword.
  • the computing device 120 may perform word segmentation processing on the sample text by using a Natural Language Processing (NLP) technique to obtain a word segmentation result of the sample text.
  • The word segmentation result of the sample text includes at least one sample text word. At least one keyword is obtained from the at least one sample text word, where the word frequency of each keyword in the sample text is greater than a first threshold.
  • For example, one text is related to credit cards.
  • The text contains the sentence "a new user who wants to apply for a credit card must carry a personal ID card".
  • When segmenting with NLP technology, computing device 120 segments this sentence as "new / user / if / want / to / apply for / a / credit card / , / must / carry / personal / 's / ID card /".
  • computing device 120 also counts the number of times the word appears in the sample text.
  • When the number of times a word appears in the sample text, that is, its word frequency, is greater than the first threshold (for example, the first threshold is 20, meaning the word appears more than 20 times), the word may be defined as a keyword of the sample text.
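  • A minimal sketch of this keyword-extraction step is shown below; it assumes the open-source jieba segmenter as a stand-in for the NLP segmentation described above, and uses the example threshold of 20 occurrences:

```python
# Sketch of step 220: segment the sample text and keep the words whose word
# frequency exceeds the first threshold as keywords. jieba stands in for the
# NLP segmenter; the threshold of 20 is the example value from the description.
from collections import Counter
import jieba

FIRST_THRESHOLD = 20

def extract_keywords(sample_text: str, first_threshold: int = FIRST_THRESHOLD):
    words = [w for w in jieba.lcut(sample_text) if w.strip()]   # word segmentation
    frequency = Counter(words)                                  # word frequency statistics
    keywords = [w for w, n in frequency.items() if n > first_threshold]
    return words, keywords
```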
  • Optionally, when segmenting text with NLP technology, the text can be placed in a sub-scene.
  • The sub-scene here corresponds to a domain-specific lexicon.
  • For example, for a text about credit cards, when the text is placed in a lexicon that includes "credit card", the system can more easily treat "credit card" as one word instead of treating "credit" and "card" as separate words. That is, in a sub-scene, NLP-based word segmentation better fits the context of the text, and the segmentation accuracy can be higher.
  • Optionally, candidate associated words of each keyword may also be obtained, and the associated word corresponding to each keyword is obtained from the candidate associated words of that keyword, where the joint probability value of the associated word and the keyword is greater than the second threshold.
  • the joint probability value between the keyword and the related word of the keyword may be calculated by using a word vector method.
  • Following the example above, one of the keywords in the sample text is "credit card".
  • The candidate associated words may be "to", "apply for", "a", "must", "carry", and so on, and the joint probability value between a candidate associated word and the keyword can be calculated with the Bayes formula.
  • Let the keyword event be denoted A and the associated-word event be denoted B.
  • P(A) is the probability that the keyword appears alone, P(B) is the probability that the associated word appears alone, and P(AB) is the probability that the keyword and the associated word appear together.
  • P(A|B) represents the probability that the keyword appears given that the associated word appears, that is, the joint probability between the keyword and the associated word.
  • The specific calculation formula is: P(A|B) = P(AB) / P(B).
  • Optionally, when the joint probability values between a keyword and all of its associated words are less than the second threshold (for example, the second threshold is 1%), the keyword should be eliminated.
  • Although some keywords have already been determined after NLP word segmentation, they are obtained only through statistical calculation and are not necessarily completely accurate.
  • Considering the context, if the joint probability values between a keyword and its associated words are all less than the second threshold, the keyword is a pseudo keyword and should therefore be eliminated.
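  • The joint-probability filter can be sketched as follows; the co-occurrence window of three words on either side of the keyword is an illustrative assumption, since the embodiment only says that candidate associated words come from the keyword's context:

```python
# Sketch of the joint-probability filter: P(A|B) = P(AB) / P(B), where A is the
# keyword event and B an associated-word event. Counting co-occurrences within
# a window of 3 words on either side of the keyword is an illustrative assumption.
from collections import Counter

def joint_probabilities(words, keyword, window=3):
    word_count = Counter(words)
    cooccurrence = Counter()
    for i, w in enumerate(words):
        if w != keyword:
            continue
        for b in words[max(0, i - window):i] + words[i + 1:i + 1 + window]:
            cooccurrence[b] += 1
    # P(A|B) = count(A and B together) / count(B)
    return {b: cooccurrence[b] / word_count[b] for b in cooccurrence if word_count[b]}

def associated_words(words, keyword, second_threshold=0.01):
    probs = joint_probabilities(words, keyword)
    selected = {b: p for b, p in probs.items() if p > second_threshold}
    # If no candidate clears the second threshold, the keyword is a pseudo keyword.
    return selected or None
```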
  • Step 230 Train the first model according to at least one keyword and the associated word corresponding to each keyword.
  • the at least one keyword and the associated word corresponding to the keyword are used as input parameters to train the first model, so as to use the first model to obtain the corrected keyword corresponding to the keyword to be corrected in the text to be corrected.
  • the first model may also be trained according to at least one keyword, a related word corresponding to each keyword, and a joint probability between the related word and the keyword corresponding to each keyword.
  • the first model may employ a machine learning model, such as a naive Bayes or a Support Vector Machine (SVM).
  • For example, the keywords include "automated teller machine", the associated words include "fault" and "repair", and the joint probabilities between the keyword and these two associated words are 0.8546702 and 0.4326960 respectively, as shown in Table 1.
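  • Under the simplest reading of Table 1, the first model can be represented as a lookup table from (keyword, associated word) pairs to joint probabilities; the sketch below uses the example figures quoted in this description and is only one possible realisation, since the text also allows classifier models such as naive Bayes or SVM:

```python
# Sketch of step 230 with the first model held as a probability table like Table 1.
# The figures are the example values quoted in this description.
first_model = {
    ("自动取款机", "故障"): 0.8546702,   # automated teller machine / fault
    ("自动取款机", "修理"): 0.4326960,   # automated teller machine / repair
    ("自动存款机", "故障"): 0.6543890,   # automatic deposit machine / fault
}

def train_first_model(samples):
    """samples: iterable of (keyword, associated_word, joint_probability) triples."""
    return {(keyword, assoc): p for keyword, assoc, p in samples}
```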
  • Step 240: Segment the test text, and obtain the keyword to be corrected in the test text and the associated word corresponding to the keyword to be corrected. The specific segmentation and extraction steps are similar to those performed on the sample text in steps 210 and 220 and are not repeated here.
  • Step 250: Input the keyword to be corrected and its corresponding associated word into the first model, and obtain the corrected keyword corresponding to the keyword to be corrected.
  • Optionally, step 250 includes: correcting the keyword to be corrected into at least one candidate correction keyword by using the first model; and selecting, from the at least one candidate correction keyword, the corrected keyword corresponding to the keyword to be corrected, where the correction probability value corresponding to the corrected keyword is the maximum of the correction probability values corresponding to the at least one candidate correction keyword, and the correction probability value corresponding to each candidate correction keyword is the joint probability between that candidate correction keyword and the associated word corresponding to the keyword to be corrected.
  • Specifically, the keyword to be corrected that appears in the test text is first determined, and the associated word corresponding to it may be determined from its context in the test text; the number of associated words corresponding to the keyword to be corrected may likewise be one or more.
  • For each keyword to be corrected, the first model may correct it into one or more candidate correction keywords, and these one or more candidate correction keywords form a candidate correction keyword group; the corrected keyword is then obtained from the candidate correction keyword group.
  • In one specific example, the first model is a table, as shown in Table 1.
  • The first model matches the keyword to be corrected against each keyword in Table 1, and matches the associated word of the keyword to be corrected against the associated word corresponding to each keyword in Table 1; when both match successfully, the one or more matched keywords are taken as the candidate correction keywords corresponding to the keyword to be corrected and form a candidate correction keyword group.
  • From the candidate correction keyword group, the candidate correction keyword with the largest joint probability value with its corresponding associated word is determined as the corrected keyword.
  • the "automatic teller machine failure” is converted into “automatic machine failure” (** can be any word after misspelling, or you can use ** or other characters instead For example, "automatic journal machine”, “automatic #&machine”, etc., and the keyword “automatic machine fault” to be corrected appears multiple times in the text.
  • the related words corresponding to the keywords to be corrected may also be repairs (assuming that the content recorded in the original text is: the automatic machine is detected to be malfunctioning, so the automatic ** machine needs to be repaired immediately, and the obtained related words are to be corrected.
  • the second word on the right side of the keyword so when the first automatic machine appears, the second related word is the word occurrence and failure; and the second time the automatic machine appears, The second related word is taken, and the two words are repaired).
  • the processor will match the erroneous keyword "automatic machine” with one or more keywords in the first model, and at the same time, the associated word "fault" of the keyword to be modified and the first model The corresponding words corresponding to each keyword match. Get the revised keyword group.
  • the related words are "faulty”
  • the keywords including the automatic machine include two keywords, the first is the automatic teller machine, and the second is the automatic deposit. machine.
  • the joint probability value between "automated teller machine” and “fault” is 0.8546702, that is, the correction probability value corresponding to "automated teller machine” is 0.8546702, and the joint probability value between "automatic deposit machine” and “fault” It is 0.6543890, that is, the correction probability value corresponding to the "automatic deposit machine” is 0.6543890.
  • the keyword “automatic machine” also includes a “repair”. When matching with the first model, it is found that the related word “repair” is also present. The keyword corresponding to the related word is also “automated teller machine”. The probability value is 0.4326960.
  • the processor will use the two keywords as candidate correction keywords corresponding to the keywords to be modified, and form the two candidate modified keywords into a candidate correction keyword group. And from the candidate correction keyword group, the candidate correction keyword with the highest correction probability value is selected as the corrected keyword.
  • In another specific example, the first model may also be a classifier model, or another model similar to a classifier model.
  • The keyword to be corrected is input into the first model, and the first model outputs one or more candidate correction keywords corresponding to the keyword to be corrected, which form a candidate correction keyword group; from this group, the candidate correction keyword with the largest joint probability value with its corresponding associated word is determined as the corrected keyword.
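  • A minimal sketch of this matching-and-selection step, assuming the table form of the first model given above and treating "**" as a wildcard (an illustrative assumption), might look as follows:

```python
# Sketch of step 250: match the keyword to be corrected (e.g. "自动**机") and its
# associated words against the table-form first model, collect the candidate
# correction keyword group, and keep the candidate with the highest correction
# probability value. Treating "**" as a wildcard is an illustrative assumption.
import re

def candidate_correction_group(first_model, keyword_to_correct, assoc_words):
    pattern = re.compile("^" + re.escape(keyword_to_correct).replace(r"\*\*", ".+") + "$")
    group = {}
    for (keyword, assoc), probability in first_model.items():
        if assoc in assoc_words and pattern.match(keyword):
            group[keyword] = max(group.get(keyword, 0.0), probability)
    return group

def corrected_keyword(first_model, keyword_to_correct, assoc_words):
    group = candidate_correction_group(first_model, keyword_to_correct, assoc_words)
    return max(group, key=group.get) if group else None

# corrected_keyword(first_model, "自动**机", {"故障", "修理"})  ->  "自动取款机"
```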
  • The first model is used to perform multiple iterations on the keywords to be corrected in the test text.
  • For example, in the test text the total number of occurrences of "automatic ** machine" is 100. After the first pass, "automatic ** machine" is corrected to "automated teller machine" 60 times and to "automatic deposit machine" 40 times.
  • The keywords to be corrected in the test text after the first pass are input into the first model again for a second pass, after which "automatic ** machine" is corrected to "automated teller machine" 79 times and to "automatic deposit machine" 31 times.
  • The iteration is repeated: at pass n-2, "automatic ** machine" is corrected to "automated teller machine" 78 times and to "automatic deposit machine" 32 times; at pass n-1, 80 times and 20 times; at pass n, again 80 times and 20 times. At pass n+1 the result still does not change, or changes less than expected; that is, compared with the n-th iteration, the correction results of the (n+1)-th iteration show no change or a smaller change than expected, with "automatic ** machine" corrected to "automated teller machine" 80 times and to "automatic deposit machine" 20 times.
  • The correction probability value corresponding to each candidate correction keyword may also be the proportion of occurrences of the keyword to be corrected that the candidate correction keyword corrects.
  • Continuing the example, after multiple iterations the result is that "automatic ** machine" is corrected to "automated teller machine" 80 times, that is, the correction probability value is 80%, and corrected to "automatic deposit machine" 20 times, that is, the correction probability value is 20%.
  • When determining the corrected keyword, the correction probability value of each candidate correction keyword in the candidate correction keyword group can be used, and the candidate correction keyword with the largest correction probability value is selected as the corrected keyword.
  • Here the correction probability value of the candidate correction keyword "automated teller machine" is the largest, 80%, so the selected corrected keyword is "automated teller machine".
  • It should be noted that this embodiment lists only two candidate correction keywords matched with the keyword to be corrected. When many candidate correction keywords can match the keyword to be corrected, the joint probability value between each candidate correction keyword and its corresponding associated word can be used as the metric when building the candidate correction keyword group.
  • For example, a keyword to be corrected and its associated word are "automatic ** machine / arrears", and in the first model there are multiple candidate keywords whose associated word is likewise "arrears" and which also contain "automatic ... machine".
  • In that case the choice depends on the joint probability values between the candidate keywords and the associated word "arrears": the first few words with larger probability values can be taken as recommended candidate correction keywords and added to the candidate correction keyword group, and the candidate keywords in the group are then processed iteratively.
  • Finally, the correction probability value corresponding to each candidate correction keyword is determined, and the candidate keyword with the largest correction probability value is determined as the corrected keyword corresponding to the keyword to be corrected.
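  • The iterative processing described above can be summarised as repeating the correction pass until the distribution of corrections stops changing (or changes less than expected) and then using each candidate's share of corrections as its correction probability value; the sketch below is one possible realisation, with the convergence tolerance and the per-occurrence correction function left as assumptions:

```python
# Sketch of the iteration described above: repeat the correction pass over every
# occurrence of the keyword to be corrected until the counts stop changing (or
# change by no more than an assumed tolerance), then turn the final counts into
# correction probability values (e.g. 80 of 100 occurrences -> 0.8).
from collections import Counter

def iterate_corrections(occurrences, correct_once, max_rounds=50, tolerance=0):
    """occurrences: list of (keyword_to_correct, assoc_words) for each appearance.
    correct_once: callable returning one candidate keyword for one occurrence,
    given the counts from the previous round (an assumed interface)."""
    previous = Counter()
    for _ in range(max_rounds):
        counts = Counter(correct_once(kw, assoc, previous) for kw, assoc in occurrences)
        if previous and all(abs(counts[c] - previous[c]) <= tolerance
                            for c in set(counts) | set(previous)):
            break                                   # the (n+1)-th pass no longer changes
        previous = counts
    total = sum(previous.values())
    ratios = {c: n / total for c, n in previous.items()}
    return max(ratios, key=ratios.get), ratios      # corrected keyword + probabilities
```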
  • Step 260: Train the second model according to the keyword to be corrected and the corrected keyword corresponding to the keyword to be corrected.
  • Specifically, the second model may be trained using the keyword to be corrected and its corresponding corrected keyword as input parameters; the second model may be a machine learning model, such as an SVM or a neural network.
  • Optionally, the first correction probability value, together with the keyword to be corrected and the corrected keyword, may also be used as input parameters to train the second model.
  • Optionally, during training the input parameters may further include the associated word of the keyword to be corrected or the correction probability value corresponding to the corrected keyword. For example, ATM**/machine/ATM deposit/90%.
  • Here, "ATM**" is the keyword to be corrected, "machine" is the associated word corresponding to the keyword to be corrected, "ATM deposit" is the corrected keyword, and 90% is the correction probability value.
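  • One possible sketch of this training step is shown below; representing the keyword to be corrected and its associated word as character n-gram features and using a linear SVM from scikit-learn is an illustrative choice, since the description only names SVMs and neural networks in general, and the sample records are the example values quoted above:

```python
# Sketch of step 260: train the second model from records of the form
# (keyword to be corrected, associated word, corrected keyword, correction
# probability), such as "ATM**/机/ATM存款/90%". Character n-gram features and a
# linear SVM are an illustrative choice; the description only names SVMs and
# neural networks in general.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

training_records = [
    ("ATM**", "机", "ATM存款", 0.90),
    ("自动**机", "故障", "自动取款机", 0.80),
    ("自动**机", "故障", "自动存款机", 0.20),
]

X = [f"{kw} {assoc}" for kw, assoc, _, _ in training_records]
y = [corrected for _, _, corrected, _ in training_records]
weights = [p for _, _, _, p in training_records]      # correction probability values

second_model = make_pipeline(
    CountVectorizer(analyzer="char", ngram_range=(1, 2)),
    LinearSVC(),
)
second_model.fit(X, y, linearsvc__sample_weight=weights)
# e.g. second_model.predict(["自动**机 故障"]) proposes a corrected keyword
```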
  • It should be understood that for an erroneous keyword, the number of characters filled in or corrected during correction may be one or more; in the example above it is quite possible that the keyword is actually "ATM deposit and withdrawal", that is, four characters are filled in rather than two.
  • As another example, a keyword is "credit card", but when the voice file is converted into text, "credit card" is converted into "credit ah", so "credit ah / arrears" appears in the sentence. The word to be corrected is a single character, and in the first model the joint probability value of the two words "credit card" and "arrears" has been found to be 90%; in that case, when this keyword is used as the recommended correction keyword, it can be specified that the joint probability value between the keyword and the associated word does not change, and remains 90%.
  • However, when the voice file is converted into text and "credit card" is converted into "credit a-la", so that "credit a-la / arrears" appears in the sentence, then even though the joint probability value between "credit card" and "arrears" is 90%, if "credit card" is to be used as the recommended keyword for the keyword to be corrected "credit a-la", the joint probability value between the keyword and its associated word is no longer taken as 90% but is multiplied by a correction factor.
  • The purpose of doing this is to improve the precision of correction, because there will always be words for which what is missing is two words rather than one. For example, an ATM deposit machine is converted into an "ATM ** machine" during conversion, and during keyword matching the match may be "ATM deposit machine" or "ATM deposit and withdrawal machine", while the correct one is actually the ATM deposit and withdrawal machine rather than the ATM deposit machine.
  • The probability of such cases is generally not very large, which is why the joint probability value between the keyword and the associated word is multiplied by a correction factor.
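  • The correction-factor rule described above can be sketched as follows; the factor value of 0.5 and the use of a simple length comparison are assumptions, since the description does not fix either:

```python
# Sketch of the correction-factor rule: if the candidate keyword implies the same
# number of characters as the text to be corrected (e.g. "信用啊" -> "信用卡"),
# the joint probability is kept unchanged; if the number of characters differs
# (e.g. "信用阿拉" -> "信用卡", or "ATM**机" -> "ATM存款取款机"), the probability
# is multiplied by a correction factor. The factor of 0.5 is purely an assumption.
CORRECTION_FACTOR = 0.5

def adjusted_probability(keyword_to_correct, candidate, joint_probability,
                         factor=CORRECTION_FACTOR):
    if len(candidate) == len(keyword_to_correct):
        return joint_probability            # e.g. stays at 90%
    return joint_probability * factor       # less likely case, scaled down
```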
  • Step 270: Segment the text to be corrected, input the word segmentation result of the text to be corrected into the second model, and correct the text to be corrected.
  • Specifically, the process of segmenting the text to be corrected is similar to the process of segmenting the sample text and the test text, and is not described again here.
  • It should be understood that the text to be corrected here is generally text other than the test text and the sample text.
  • When text is obtained, the first model is first used to detect whether the text contains keywords to be corrected; when such keywords exist, the second model is used to correct the keywords to be corrected in the text.
  • Further optionally, because in any technical field the keywords are not fixed, and the keywords obtained from the sample text cannot completely cover the keywords of the whole field, the method may further include step 280: obtaining log keywords from the query log, where a log keyword is a word whose word frequency in the query log is greater than a third threshold, and using the log keywords as keywords of the sample text.
  • Step 280 may be performed after step 270, that is, after the current round of correcting the text to be corrected ends, the log keywords extracted from the log are used as a basis for selecting keywords for new sample text; for example, the word frequency of each word in the new sample text can be combined with whether the word is a log keyword to determine the keywords of the new sample text.
  • Step 280 may also be performed before step 210, that is, the log keywords are extracted before keywords are extracted from the sample text, and are used in the extraction of sample-text keywords in step 210.
  • Specifically, when users query information, the log keywords in the query log may be obtained, for example keywords that do not yet exist in the current first model and popular words in the technical field concerned; these log keywords are all words whose word frequency in the query log is greater than the third threshold.
  • These log keywords can be used to update the first model, so that a better second model can be obtained and the accuracy of the words in the text improved.
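  • A minimal sketch of step 280, with the third threshold value assumed for illustration:

```python
# Sketch of step 280: take the words whose frequency in the query log exceeds the
# third threshold as log keywords and merge them into the sample-text keyword set.
# The threshold of 50 is an assumed example value.
from collections import Counter

THIRD_THRESHOLD = 50

def log_keywords(query_log_terms, third_threshold=THIRD_THRESHOLD):
    frequency = Counter(query_log_terms)
    return {w for w, n in frequency.items() if n > third_threshold}

def merge_keywords(sample_keywords, query_log_terms):
    # keywords mined from the sample text plus the words users actually search for
    return set(sample_keywords) | log_keywords(query_log_terms)
```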
  • In one specific embodiment, for example at a bank call center, a salesperson has recorded the conversation with a customer into a voice file through a recording device, and the computing device then converts the voice file into text.
  • The processor uses the first model to find the keywords to be corrected that appear in the text, and their corresponding associated words. For example, the text contains the specific phrase "credit la ah", where the filler "la ah" appears because the ASR system recognized that syllables exist at that position but, owing to noise interference or jitter, could not correctly identify the specific content and filled it with modal particles; the correct content should be "credit card".
  • The real business scenario is that the customer is consulting about credit card matters.
  • "Credit card" is a complete word in the search index, and this word may become a search keyword, so the recognition error would cause the search to fail.
  • In this case, the second model can be used to correct the keyword "credit la ah" to "credit card".
  • The corrected text is then stored in the memory.
  • In this embodiment of the present invention, the memory includes a data warehouse component.
  • The corrected text is stored in the data warehouse component, and an indexing task is then built.
  • When a user needs to search, search software such as Baidu can be used; the application software invokes a program through an API to interact with the full-text search engine, and the search engine can find, in the data warehouse component and according to the index, the text corresponding to the keyword entered by the user, send it to the search software, and display it to the user on the display screen.
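  • Purely for illustration, the following sketch replaces the data warehouse component and the full-text search engine with an in-memory inverted index, so that a keyword query over the corrected texts can be traced end to end; none of this is prescribed by the embodiment:

```python
# Illustrative sketch only: an in-memory inverted index standing in for the data
# warehouse component and the full-text search engine, so that a keyword such as
# "信用卡" can be traced from corrected text to a search hit.
from collections import defaultdict
import jieba

def build_index(corrected_texts):
    index = defaultdict(set)
    for doc_id, text in enumerate(corrected_texts):
        for word in jieba.lcut(text):
            index[word].add(doc_id)
    return index

def search(index, corrected_texts, keyword):
    return [corrected_texts[i] for i in sorted(index.get(keyword, ()))]

# docs = ["客户咨询信用卡还款事宜", "自动取款机故障需要修理"]
# idx = build_index(docs)
# search(idx, docs, "信用卡")   # -> ["客户咨询信用卡还款事宜"]
```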
  • The text processing method provided in this embodiment of the present invention trains the first model according to at least one keyword in the sample text and the associated word corresponding to the at least one keyword, and obtains, by using the first model, the corrected keyword corresponding to the keyword to be corrected.
  • According to the keyword to be corrected and the corresponding corrected keyword, the second model is trained, and the second model is used to correct the text to be corrected, thereby improving the accuracy of the text.
  • It should also be understood that, in the embodiments of the present invention, the existing classification of the business can be used to obtain a correction model for each sub-scene, and the correction model of each sub-scene is used to correct the keyword content in erroneous data text, making full use of context information.
  • The keyword vocabulary of each sub-field is small, which makes training the correction model relatively easy.
  • In a specific sub-scene, erroneous input from the data source is avoided, and the accuracy of business searches can likewise be effectively improved, which is very practical.
  • Corresponding to the text processing method above, an embodiment of the present invention further provides a text processing apparatus 300, which can be implemented by the computing device 120 shown in FIG. 1, by an application-specific integrated circuit (ASIC), or by a programmable logic device (PLD).
  • The PLD may be a complex programmable logic device (CPLD), a field-programmable gate array (FPGA), generic array logic (GAL), or any combination thereof.
  • the text processing apparatus 300 is for implementing the text processing method shown in FIG. 2. When the text processing method shown in FIG. 2 is implemented by software, the text processing apparatus 300 and its respective modules may also be software modules.
  • the specific text processing device is shown in FIG. 3, and the device includes: a word segmentation module 301, and a processing module 302.
  • the word segmentation module 301 is configured to obtain the source text, wherein the source text includes the sample text and the test text; and the sample text is segmented, and at least one keyword in the sample text and the associated word corresponding to each keyword are obtained.
  • Specifically, the word segmentation module 301 segments the sample text and obtains a word segmentation result of the sample text.
  • The word segmentation result of the sample text includes at least one sample text word; at least one keyword is obtained from the at least one sample text word, where the word frequency of the keyword in the sample text is greater than the first threshold.
  • the processing module 302 is configured to train the first model according to the at least one keyword and the associated word corresponding to each keyword.
  • the processing module 302 trains the first model according to at least one keyword, a related word corresponding to each keyword, and a joint probability of each keyword corresponding to each keyword.
  • the word segmentation module 301 is further configured to perform word segmentation on the test text, and obtain the to-be-corrected keyword in the test text and the associated word corresponding to the keyword to be modified.
  • the processing module 302 is further configured to input the to-be-corrected keyword and the related word corresponding to the keyword to be modified into the first model, and obtain the corrected keyword corresponding to the keyword to be modified.
  • Specifically, the first model is used to correct the keyword to be corrected into at least one candidate correction keyword, and the corrected keyword corresponding to the keyword to be corrected is selected from the at least one candidate correction keyword.
  • The correction probability value corresponding to the corrected keyword is the maximum of the correction probability values corresponding to the at least one candidate correction keyword, and the correction probability value corresponding to each candidate correction keyword is the joint probability between that candidate correction keyword and the associated word corresponding to the keyword to be corrected.
  • the processing module 302 may train the second model according to the modified keyword corresponding to the keyword to be modified, the to-be-corrected keyword, and the first modified probability value corresponding to the modified keyword.
  • The word segmentation module 301 is further configured to segment the text to be corrected.
  • the processing module 302 is further configured to input the word segmentation result of the text to be corrected into the second model, and correct the text to be corrected.
  • After the text is corrected, the method further includes obtaining log keywords from the query log, where a log keyword is a word whose word frequency in the query log is greater than a third threshold, and using the log keywords as keywords of the sample text.
  • When the apparatus provided in Embodiment 2 of this application runs, it performs the method provided in Embodiment 1 of this application; for its working details, refer to the method provided in Embodiment 1.
  • the first model is trained according to a keyword in a sample text and a related word associated with the keyword, and the corrected keyword corresponding to the keyword to be corrected is obtained by the first model;
  • the modified keyword and the corresponding modified keyword are used to train the second model, and the second model is used to correct the text to be corrected, thereby improving the accuracy of the text.
  • It should also be understood that, in the embodiments of the present invention, the existing classification of the business can be used to obtain the correction model corresponding to each sub-scene, and the correction model corresponding to each sub-scene is used to correct the keyword content in erroneous data text, making full use of context information.
  • The keyword vocabulary in the lexicon corresponding to each sub-field is small, which makes training the correction model relatively easy.
  • In a specific sub-scene, erroneous input from the data source is avoided, and the accuracy of business searches can likewise be effectively improved, which is very practical.
  • Corresponding to the keyword correction method above, an embodiment of the present invention further provides a computing device, where the computing device includes a processor, a memory, a bus, and a communication interface, and the processor, the memory, and the communication interface establish communication connections with one another through the bus.
  • The components of the processor and the memory, and the method steps they perform, have been described in detail in the text processing system and the text processing method introduced above, and are not described again here.
  • the steps of a method or algorithm described in connection with the embodiments disclosed herein can be implemented in hardware, a software module executed by a processor, or a combination of both.
  • The software module can be located in random access memory (RAM), memory, read-only memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the technical field.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Computational Linguistics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Machine Translation (AREA)

Abstract

A text processing method and apparatus, and a computing device. The method runs on a text processing system and comprises: performing word segmentation on sample text, and obtaining at least one keyword in the sample text and an associated word corresponding to each keyword (220); training a first model according to the at least one keyword and the associated word corresponding to each keyword (230); performing word segmentation on test text, and obtaining a keyword to be corrected in the test text and an associated word corresponding to the keyword to be corrected (240); inputting the keyword to be corrected and the associated word corresponding to the keyword to be corrected into the first model, and obtaining a corrected keyword corresponding to the keyword to be corrected (250); training a second model according to the corrected keyword and the keyword to be corrected (260); and performing word segmentation on text to be corrected, inputting a word segmentation result of the text to be corrected into the second model, and correcting the text to be corrected (270).

Description

一种文本处理方法、装置及计算设备 技术领域
本发明涉及计算机技术领域,尤其涉及一种文本处理方法、装置及计算设备。
背景技术
全文搜索技术已经发展了数十年,目前已经是一种较为成熟的技术。而传统的搜索系统一般都会默认导入系统内部的文件是正确无误的。因此,只要对导入系统内的文件做一些预处理后,就可以按照自己的方法对导入系统建立索引,提供给应用程序做搜索业务。然而,随着技术的进步,需要搜索的内容和来源变得更加多元化,很多内容本身在导入搜索引擎建立索引之前,都是经过多次转换的。所以,可能会存在一些转换的错误导致原有的关键信息丢失而搜索不到结果。
发明内容
针对上述技术问题,本发明提供了一种文本处理方法、装置及计算设备。在文本导入搜索引擎之前,发现并修正文本中存在的错误来提高关键词搜索结果的方法和装置。
第一方面,本发明提供了一种文本处理方法,所述方法运行于文本处理系统,所述文本处理系统包括至少一个计算设备,所述方法包括:
获取源文本,其中,源文本包括样本文本和测试文本;源文本可以为文本处理系统收集到的历史文本中的一部分,文本处理系统根据源文本来训练文本修正模型,以使文本更为精确。对样本文本进行分词,获取样本文本中的至少一个关键词以及每个关键词对应的关联词。根据至少一个关键词以及每个关键词对应的关联词,训练第一模型;该模型也即用于修正待修正关键词的模型。对测试文本进行分词,获取测试文本中的待修正关键词以及待修 正关键词对应的关联词。将待修正关键词以及待修正关键词对应的关联词输入第一模型,获取待修正关键词对应的修正后关键词。根据待修正关键词对应的修正后关键词和待修正关键词,训练第二模型。对待修正文本进行分词,将待修正文本的分词结果输入第二模型,修正待修正文本。
在文本导入到搜索引擎之前,利用第一模型检测出文本中可能存在待修正的错误关键词;利用待修正关键词和与之对应的修正后关键词,训练第二模型,利用第二模型,将其他待修正文本中存在错误的关键词进行修正,提升了文本内关键词的精度,也有助于提高后续根据关键词进行搜索的精确度。
结合第一方面,在第一方面的第一种可能的实现方式中,对样本文本进行分词,获取样本文本中的至少一个关键词以及每个关键词对应的关联词包括:对样本文本进行分词,获取样本文本的分词结果,样本文本的分词结果中包括至少一个样本文本词;从至少一个样本文本词中获取至少一个关键词,至少一个关键词在样本文本中的词频大于第一阈值;获取每个关键词的待选关联词,从每个关键词的待选关联词中获取每个关键词对应的关联词,每个关键词对应的关联词与每个关键词的联合概率大于第二阈值。
通过对词频高于第一阈值的关键词的提取,获取了文本中价值较高,后续使用频率较高的关键词,并通过与关键词的联合概率来筛选出各个关键词的关联词,以供后续训练。
结合第一方面的第一种可能的实现方式,在第一方面的第二种可能的实现方式中,根据至少一个关键词以及每个关键词对应的关联词,训练第一模型包括:根据至少一个关键词、每个关键词对应的关联词以及每个关键词对应的关联词与每个关键词的联合概率,训练第一模型。
结合第一方面至第一方面的第二种可能的实现方式中的任一种,在第一方面的第三种可能的实现方式中,将待修正关键词以及待修正关键词对应的关联词输入第一模型,获取待修正关键词对应的修正后关键词,具体包括:利用第一模型将待修正关键词修正为至少一个待选修正关键词;将至少一个 待选修正关键词构成待选修正关键词组;在待选修正关键词组中选取与待修正关键词对应的修正后关键词,其中,修正后关键词所对应的第一修正概率值,为待选修正关键词组中与待选修正关键词对应的修正概率值中的最大值,修正概率值为至少一个待选修正关键词中的每一个待选修正关键词和待选修正关键词对应的关联词之间的联合概率。
由于第一模型在判断每个待修正关键词的修正后关键词的过程中,不是简单的进行匹配,而是有一定概率的影响,因此每个待修正关键词可能对应有多个待选修正关键词,通过获取修正概率值最高的待选修正关键词作为最终的修正后关键词。
结合第一方面的第三种可能的实现方式,在第一方面的第四种可能的实现方式中,根据待修正关键词对应的修正后关键词和待修正关键词,训练第二模型,包括:根据待修正关键词对应的修正后关键词,待修正关键词以及修正后关键词对应的第一修正概率值,训练第二模型。
结合第一方面至第一方面的第二种可能的实现方式,以及第一方面的第四种可能的实现方式中的任一种,在第一方面的第五种可能的实现方式中,该方法还包括:获取查询日志中的日志关键词,日志关键词为查询日志中词频大于第三阈值的词;将日志关键词作为样本文本的关键词。
从用户的日志中,获取用户感兴趣的关键词,并将其作为样本文本中提取关键词的手段之一,提升了从样本文本中提取最有价值的关键词的精度,与第一方面的前几种实现方式相比,关键词的提取不仅仅依赖于样本文本中各个词的词频。
第二方面,本发明实施例提供了一种文本处理装置,该装置包括:
分词模块,用于获取源文本,源文本包括样本文本和测试文本;对样本文本进行分词,获取样本文本中的至少一个关键词以及每个关键词对应的关联词;处理模块,用于根据至少一个关键词以及每个关键词对应的关联词, 训练第一模型;分词模块还用于,对测试文本进行分词,获取测试文本中的待修正关键词以及待修正关键词对应的关联词;处理模块还用于,将待修正关键词以及待修正关键词对应的关联词输入第一模型,获取待修正关键词对应的修正后关键词;根据待修正关键词对应的修正后关键词和待修正关键词,训练第二模型;分词模块还用于,对待修正文本进行分词;处理模块还用于,将待修正文本的分词结果输入第二模型,修正待修正文本。
结合第二方面,在第二方面的第一种可能的实现方式中,分词模块具体用于,对样本文本进行分词,获取样本文本的分词结果,样本文本的分词结果中包括至少一个样本文本词;从至少一个样本文本词中获取至少一个关键词,至少一个关键词在样本文本中的词频大于第一阈值;获取每个关键词的待选关联词,从每个关键词的待选关联词中获取每个关键词对应的关联词,每个关键词对应的关联词与每个关键词的联合概率大于第二阈值。
结合第二方面的第一种可能的实现方式,在第二方面的第二种可能的实现方式中,处理模块具体用于:根据所述至少一个关键词、所述每个关键词对应的关联词以及所述每个关键词对应的关联词与所述每个关键词的联合概率,训练所述第一模型。
结合第二方面至第二方面的第二种可能的实现方式中的任一种实现方式,在第二方面的第三种可能的实现方式中,处理模块具体用于,利用第一模型将待修正关键词修正为至少一个待选修正关键词;将至少一个待选修正关键词构成待选修正关键词组;在待选修正关键词组中选取与待修正关键词对应的修正后关键词,其中,修正后关键词所对应的第一修正概率值,为待选修正关键词组中与待选修正关键词对应的修正概率值中的最大值,修正概率值为至少一个待选修正关键词中的每一个待选修正关键词和待选修正关键词对应的关联词之间的联合概率。
结合第二方面的第三种可能的实现方式,在第二方面的第四种可能的实现方式中,处理模块具体用于,根据待修正关键词对应的修正后关键词,待 修正关键词以及修正后关键词对应的第一修正概率值,训练第二模型。
结合第二方面至第二方面的第二种可能的实现方式,以及第二方面的第四种可能的实现方式中的任一种实现方式,在第二方面的第五种可能的实现方式中,处理模块还用于:获取查询日志中的日志关键词,日志关键词为查询日志中词频大于第三阈值的词;将日志关键词作为样本文本的关键词。
第三方面,本发明实施例提供了一种计算设备,该计算设备包括:处理器,存储器,总线及通信接口,处理器、存储器和通信接口通过总线实现通信连接,存储器用于存储处理器需要执行的指令,指令被处理器执行以用于实现在第一方面中所介绍的文本处理方法中的任一项所述的方法。
附图说明
图1为本发明实施例提供的一种文本处理系统的架构图；
图2为本发明实施例提供的一种文本处理方法流程图;
图3为本发明实施例提供的一种文本处理装置的结构示意图。
具体实施方式
下面通过附图和实施例,对本发明实施例的技术方案做进一步的详细描述。
为使本发明实施例的目的、技术方案和优点更加清楚,下面将结合本发明实施例中的附图,对本发明实施例中的技术方案进行清楚、完整地描述,显然,所描述的实施例是本发明一部分实施例,而不是全部的实施例。基于本发明中的实施例,本领域普通技术人员在没有做出创造性劳动前提下所获得的所有其他实施例,都属于本发明保护的范围。
本发明实施例的应用环境可以是利用语音识别软件处理语音文件获得的文本,也可以是以其他形式获取的文本。在下面的具体实施例中,以通过语 音文件转换得来的文本进行修正为例说明。
本发明实施例提供了一种文本处理系统的系统架构图,如图1所示,该系统包括:录音器110,和计算设备120,计算设备120中包括处理器1201,存储器1202,总线1203以及通信接口1204。
录音器110可以是麦克风或者其他可以录音的设备,录音器110接收用户发来的声音信号,并对其进行记录生成语音文件。
计算设备中的处理器1201、存储器1202和通信接口1204可以通过总线1203建立通信连接,也可以通过无线传输等其他手段实现通信。
处理器1201可以为中央处理器(英文:central processing unit,缩写:CPU)。
存储器1202可以包括易失性存储器(英文:volatile memory),例如随机存取存储器(英文:random-access memory,缩写:RAM);存储器也可以包括非易失性存储器(英文:non-volatile memory),例如只读存储器(英文:read-only memory,缩写:ROM),快闪存储器,硬盘(英文:hard disk drive,缩写:HDD)或固态硬盘(英文:solid state drive,缩写:SSD);存储器1202还可以包括上述种类的存储器的组合。
在通过软件来实现本申请提供的技术方案时,用于实现本申请图2提供的文本处理方法的程序代码保存在存储器1202中,并由处理器1201来执行。计算设备120通过通信接口1204与录音器110通信。
图2为本发明实施例一提供的一种文本处理方法流程图200,包括:
步骤210,获取源文本,源文本包括样本文本和测试文本。
具体的,录音器110将语音文件发送到计算设备120中,计算设备120将语音文件转换成多个文本,并将多个文本分为样本文本和测试文本。
计算设备120中可以利用自动语音识别(Automatic Speech Recognition,简称ASR)技术,将语音文件转换成文本。
步骤220,对样本文本进行分词,获取样本文本中的至少一个关键词以 及每个关键词对应的关联词。
具体的,计算设备120可以采用自然语言处理(Natural Language Processing,简称NLP)技术对样本文本进行分词处理,获取样本文本的分词结果。
样本文本的分词结果中包括至少一个样本文本词。从至少一个样本文本次中获取至少一个关键词,其中,每一个关键词在样本文本中的词频大于第一阈值。
例如,一篇文本是与信用卡有关的文本。在文本中出现了“新用户如果想要办理一张信用卡,必须携带个人的身份证”的语句。在采用NLP技术进行分词时,计算设备120会将这句话分词为“新/用户/如果/想/要/办理/一张/信用卡/,/必须/携带/个人/的/身份证/”。而每出现一个词,计算设备120还会统计该词在样本文本中所出现的次数。当一个词在样本文本中出现的次数,也即是词频大于第一阈值(例如第一阈值为20,即一个词出现次数大于20次)时,可以将该词定义为样本文本中的关键词。
可选的,在利用NLP技术对文本分词时,可以将该文本置于一个子场景中。这里的子场景就是一个专业词库。例如关于信用卡的文本,将该文本置于一个包括信用卡的专业词库中时,系统可以更容易将“信用卡”作为一个词,而不是单单的将“信用”作为一个词,“卡”单独作为一个词。也就是说,在子场景中,利用NLP技术分词时,可以更加的符合文本的情景,分词的准确率能够更高一些。
可选的,还可以获取每个关键词的待选关联词,从每个关键词的待选关联词中获取每个关键词对应的关联词,其中每个关键词对应的关联词与关键词的联合概率值大于第二阈值。
具体的,可以利用词向量的方法,计算关键词与该关键词的关联词之间的联合概率值。
例如,沿用上文中所举的例子,样本文本中的一个关键词是“信用卡”, 而待选关联词可以为“要”、“办理”、“一张”、必须”和“携带”等等,可以通过贝叶斯公式计算待选关联词和关键词之间的联合概率值。例如,将关键词事件用A表示,关联词事件用B表示。P(A)是关键词单独出现的概率,P(B)是关联词单独出现的概率,而P(AB)则是关键词和关联词同时出现的概率。P(A|B)代表关联词出现的条件下关键词出现的概率,也即是关键词和关联词之间的联合概率,具体计算公式如下:
P(A|B) = P(AB) / P(B)
可选的,当关键词与所有该关键词的关联词之间的联合概率值均小于第二阈值时(例如,第二阈值为1%),则应该剔除掉该关键词。
因为,虽然利用NLP技术分词后,已经确定了一些关键词,但是关键词也仅仅是通过统计学的算法计算而得到的,不一定完全的准确。而联系上下文,如果获取的某一关键词与其相关联的关联词之间的联合概率值均小于第二阈值,那么则说明该关键词是伪关键词,所以应该剔除掉。
步骤230,根据至少一个关键词以及每个关键词对应的关联词,训练第一模型。
具体的,将至少一个关键词和关键词对应的关联词,作为输入参数对第一模型进行训练,以便于后续利用第一模型,获取待修正文本中的待修正关键词对应的修正后关键词。
可选的,也可以根据至少一个关键词,每个关键词对应的关联词以及每个关键词对应的关联词与关键词之间的联合概率,训练第一模型。第一模型可以采用机器学习模型,例如采用朴素贝叶斯或者支持向量机(Support Vector Machine,简称SVM)等。例如,关键词是“自动取款机”等,关联词是“故障”和“修理”等,关键词与两个关联词之间的联合概率分别是0.8546702,0.4326960等,具体参见表1。
表1
关键词 | 关联词 | 联合概率
自动取款机 | 故障 | 0.8546702
自动取款机 | 修理 | 0.4326960
自动存款机 | 故障 | 0.6543890
步骤240,对测试文本进行分词,获取测试文本中的待修正关键词以及待修正关键词对应的关联词。
具体的分词步骤,以及获取测试文本中的待修正关键词以及待修正关键词对应的关联词等分别与步骤210和步骤220中对样本文本进行分词,和获取关键词以及关键词对应的关联词的步骤类似,这里不再赘述。
步骤250,将待修正关键词以及待修正关键词对应的关联词输入第一模型,获取待修正关键词对应的修正后关键词。
可选的,步骤250包括:利用所述第一模型将所述待修正关键词修正为至少一个待选修正关键词;从所述至少一个待选修正关键词中选取与所述待修正关键词对应的所述修正后关键词,其中,所述修正后关键词所对应的修正概率值,为所述至少一个待选修正关键词对应的修正概率值中的最大值,每个待选修正关键词对应的修正概率值为该待选修正关键词和所述待修正关键词对应的关联词之间的联合概率。
具体的,首先确定测试文本中出现的待修正关键词,同时,还可以根据待修正关键词在测试文本中的上下文,确定与待修正关键词对应的关联词,其中,与待修正关键词对应的关联词的个数同样可以一个或者多个。第一模 型针对每一个待修正关键词,可能会将该待修正关键词修正为一个或者多个待选修正关键词,这一个或者多个待选修正关键词组成了待选修正关键词组。然后从待选修正关键词组中获取修正后关键词。
在一个具体的例子中,例如该第一模型是一个表格,如表1所示。那么,通过第一模型将待修正关键词与表格1中的每一个关键词进行匹配,同时,将待修正关键词的关联词与表格1中的关键词所对应的关联词相匹配,当二者皆匹配成功时,则会将匹配成功的一个或者多个关键词作为该待修正关键词对应的待选修正关键词,构成一个待选修正关键词组。而从待选修正关键词组中,确定待选修正关键词和待选修正关键词对应的关联词之间的联合概率值最大的一个,作为修正后关键词。
例如在语音文档转换成文本时,将“自动取款机故障”转换成了“自动**机故障”(**可以是拼错的后的任意词,或者,也可以使用**或其他字符代替,例如“自动期刊机”,“自动#&机”等),并且,该待修正关键词“自动**机故障”在该文本中出现了多次。而与待修正关键词对应的关联词还可以是修理(假设,原文中所记载的内容是:检测到自动**机发生故障,所以需要立即对自动**机进行修理,所取得关联词是待修正关键词右边的第二个词,所以,在第一次自动**机出现时,所取的第二关联词是发生和故障这两个词;而在第二次出现自动**机时,所取的第二关联词是进行,和修理这两个词)。处理器将会将该错误的关键词“自动**机”与第一模型中的一个或者多个关键词进行匹配,同时将该待修正关键词的关联词“故障”与第一模型中的与每一个关键词分别对应的关联词相匹配。获取修正关键词组。由表1中可以知道,在第一模型中,关联词是“故障”的,并且关键词包括自动**机的包括了两个关键词,第一个是自动取款机,第二个是自动存款机。并且,“自动取款机”与“故障”之间的联合概率值是0.8546702,即“自动取款机”对应的修正概率值为0.8546702,而“自动存款机”与“故障”之间的联合概率值是0.6543890,即“自动存款机”对应的修正概率值为0.6543890。而待修正 关键词“自动**机”的关联词还包括一个“修理”,在与第一模型相匹配时,发现同样存在关联词“修理”,与该关联词对应的关键词同样是“自动取款机”,联合概率值是0.4326960。处理器则会将这两个关键词作为待修正关键词对应的待选修正关键词,将这两个待选修正关键词构成一个待选修正关键词组。而从待选修正关键词组中,选出修正概率值最高的待选修正关键词作为修正后关键词。
在另一个具体的例子中,第一模型也可以是一个分类器模型,或者类似分类器模型的其他模型。将待修正关键词输入到第一模型中,第一模型输出与待修正模型对应的一个或者多个待选修正关键词。构成待选修正关键词组。而从待选修正关键词组中,确定待选修正关键词和待选修正关键词对应的关联词之间的联合概率值最大的一个,作为修正后关键词。
利用第一模型对测试文本中的待修正关键词进行多次的迭代处理。例如,在测试文本中,“自动**机”出现的总次数为100次,在第一次处理后,将“自动**机”修正为“自动取款机”的次数为60次,将“自动**机”修正为“自动存款机”的次数为40次。将经过第一次处理后的,测试文本中的待修正关键词再次输入第一模型进行第二次处理后,测试文本中的待修正关键词“自动**机”被修正为“自动取款机”的次数为79次,“自动**机”被修正为“自动存款机”的次数为31次,进行多次前述迭代处理,直至第n-2次,将“自动**机”修正为“自动取款机”的次数为78次,将“自动**机”修正为“自动存款机”的次数为32次;第n-1次,将“自动**机”修正为“自动取款机”的次数为80次,将“自动**机”修正为“自动存款机”的次数为20次,第n次,将“自动**机”修正为“自动取款机”的次数为80次,将“自动**机”修正为“自动存款机”的次数为20次,第n+1次时,处理后的结果仍然没有变化或变化幅度小于预期,也即第n+1词迭代处理与第n次迭代处理中待修正关键词的修正结果无变化或变化幅度小于预期,将“自动**机”修正为“自 动取款机”的次数为80次,将“自动**机”修正为“自动存款机”的次数为20次。
每个待选修正关键词对应的修正概率值还可以为该待选修正关键词修正待修正关键词的比例。承接上例,经过多次迭代处理后,迭代结果为将“自动**机”修正为“自动取款机”的次数为80次,即修正概率值为80%;将“自动**机”修正为“自动存款机”的次数为20次,则说明修正概率值为20%。在确定修正后关键词时,可以根据修正关键词组中各个待选修正关键词的修正概率值确定,选取修正概率值最大的待选修正关键词,作为修正后关键词。由此,将待选修正关键词“自动取款机”的修正概率值最大,为80%。因此,选取的修正后关键词为“自动取款机”。需要说明的是,在本实施例中,仅仅是列举了两个待选修正关键词与待修正关键词进行匹配,而在一种情况中,若与待修正关键词可以匹配的待选修正关键词为多个时,在构建待选修正关键词组时,可以按照每一个待选修正关键词与待选修正关键词对应的关联词之间的联合概率值来作为一个衡量标准。例如,一个待修正关键词和待修正关键词对应的关联词分别是自动**机/欠费,而在第一模型中与关联词“欠费”对应的待选关键词包括多个,即关联词同样是欠费,而在待选关键词中同样包括自动……机的包括多个词。此时,就取决于待选关键词与关联词“欠费”之间的联合概率值,可以取概率值较大的前几个词作为推荐的待选修正关键词,加入到待选修正关键词组中,再将待选修正关键词组中的待选关键词进行迭代处理。最后确定每一个待选修正关键词对应的修正概率值,确定修正概率值最大的待选关键词作为待修正关键词对应的修正后关键词。
步骤260,根据待修正关键词对应的修正后关键词和待修正关键词,训练第二模型。
具体的,可以利用待修正关键词和与之对应的修正后关键词作为输入参数训练第二模型,第二模型可以机械学习模型,例如SVM,神经网络等。
可选的,还可以将第一修正概率值,与待修正关键词和修正后关键词一 起,作为输入参数对第二模型进行训练。
可选的,在训练过程中,输入参数还可以包括待修正关键词的关联词或与修正后关键词对应的修正概率值。例如,ATM**/机/ATM存款/90%。
其中,ATM**为待修正关键词,机是待修正关键词对应的关联词,ATM存款是修正后关键词,90%是修正概率值。
应当理解,一个存在错误的关键词,在修正的过程中,可能填充或者修正的字节为一个或者多个,如上文中,很有可能关键词是“ATM存款取款”,即填充的字节是4个,而不是两个。
再或者,例如一个关键词是“信用卡”,但是在语音文件转换为文本时,将信用卡转化成了“信用啊”,所以在语句中出现了“信用啊/欠费”,因为,需要修正的词时一个字,而在第一模型中,已经查询到信用卡和欠费这两个词的联合概率值是90%;那么,当这个关键词作为推荐的修正关键词时,可以规定该关键词与关联词之间的联合概率值不发生改变,就是90%。
而当语音文件转换成文本时,将“信用卡”转换成了“信用阿拉”,即语句中出现了“信用阿拉/欠费”,那么,即使信用卡和欠费这两个词之间的联合概率值是90%,此时若想将“信用卡”作为待修正关键词“信用阿拉”的推荐关键词时,则规定该关键词语关联词之间的联合概率值不再是90%,而是要再乘以一个修正系数。
这样做的目的是,可以提高修正的精确率。因为总会有一些词,可能缺少的本身就是两个词,而不是一个词。例如ATM存款机,在转换时转换成了ATM**机,而在关键词匹配时,可以是ATM存款机,或者也可以是ATM存款取款机。而正确的其实是ATM存款取款机,而不是ATM存款机。当然,这样的概率一般不会很大,所以才会在算关键词与关联词之间的联合概率值时,乘以一个修正系数。
步骤270,对待修正文本进行分词,将待修正文本的分词结果输入第二模型,修正待修正文本。
具体的,对待修正文本进行分词过程与对样本文本、对测试文本进行分词过程类似,这里不再赘述。
应理解,这里的待修正文本一般为测试文本和样本文本之外的文本。在获取文本时,首先利用第一模型检测文本中是否存在待修正关键词,在存在待修正关键词时,则利用第二修正模型,对该文本中的待修正关键词进行修正。
进一步可选的,因为在任何一个技术领域中,关键词都不是固定的,而根据样本文本所获取的关键词也不能完全覆盖整个技术领域的关键词。因此,该方法还可以包括步骤280,获取查询日志中的日志关键词,其中,日志关键词为查询日志中词频大于第三阈值的词;将日志关键词作为样本文本的关键词。步骤280可以执行于步骤270之后,即在本次对待修正文本的修正结束后,将从日志中提取的日志关键词作为新的样本文本的关键词的选择依据,例如,可以结合新的样本文本中词的词频与各个词是否为日志关键词结合判断新的样本文本中的关键词。步骤280还可以执行于步骤210之前,也即在本次从样本文本中提取关键词之前就提取日志关键词,并用于步骤210中对样本文本的关键词的提取中。
具体的,在用户查询信息时,可以获取日志中的日志关键词,例如在当前第一模型中不存在的关键词和关键词所在的技术领域内的热门词。这些日志关键词均是在查询日志中词频大于第三阈值的词。
可以将这些日志关键词,更新第一模型,进而可以获得更好的第二模型,提高文本中词的精确度。
在一个具体的实施例中,例如某银行的呼叫中心,业务员在跟客户沟通的过程中,已经将沟通的语音通过录音设备录制成了语音文件,然后计算设备又将语音文件转换成了文本。处理器利用第一模型查找出该文本中出现的待修正关键词,以及与之对应的关联词。例如,在文本中具体语句为“信用啦啊”,其中“信用啦啊”中的“啦啊”出现是因为ASR系统识别出该处有 音节存在,但是因为噪音干扰或者抖动,未能正确的识别具体内容,使用语气词进行填充,其正确的内容应该是“信用卡”。
真实的业务场景是客户在咨询信用卡的相关事宜,“信用卡”在搜索索引中是一个完整的词语,而该词可能成为搜索的关键词,因为识别的错误,将会导致搜索失败。此时,则可以利用第二修正模型,将该关键词“信用啦啊”修正为“信用卡”。然后,将修正后的文本存储在存储器中。而在本发明实施例中,存储器中包含数据仓库组件。将修正后的文本则存储在数据仓库组件中,然后建立索引任务。当用户需要进行搜索应用时,可以利用搜索软件,例如百度等搜索,而应用软件则会通过API接口调用程序,与全文搜索引擎进行交互,搜索引擎则可以根据索引在数据仓库组件中找到与用户输入的关键词对应的文本,发送到搜索软件中,并通过显示屏显示给用户。
本发明实施例提供的文本处理方法,根据样本文本中的至少一个关键词以及与至少一个关键词对应的的关联词训练第一模型,并且通过第一模型获取待修正关键词对应的修正后关键词;根据待修正关键词,以及对应的修正后关键词,训练第二模型,利用第二模型修正待修正文本,提升了文本精度。
还应理解,在本发明的实施例中,可以利用业务已有的分类得到各个子场景的修正模型,利用每个子场景的修正模型来纠正错误数据文本中的关键词内容,充分利用了上下文信息。而每个子领域中的关键词词量较小,在训练修正模型时,相对容易。在特定的子场景中,避免了数据源的错误引入,同样可以有效的提高业务搜索的精确度,非常实用。
与上述文本处理方法相对应的,本发明实施例还提供了一种文本处理装置300,该文本处理装置300可以通过图1所示的计算设备120实现,还可以通过专用集成电路(英文:application-specific integrated circuit,缩写:ASIC)实现,或可编程逻辑器件(英文:programmable logic device,缩写:PLD)实现。上述PLD可以是复杂可编程逻辑器件(英文:complex programmable logic device,缩写:CPLD),即现场可编程门阵列(英文:field programmable  gate array,缩写FPGA),通用阵列逻辑(英文:generic array logic,缩写:GAL)或其任意组合。该文本处理装置300用于实现图2所示的文本处理方法。通过软件实现图2所示的文本处理方法时,文本处理装置300及其各个模块也可以为软件模块。
具体的文本处理装置如图3所示,所述装置包括:分词模块301,处理模块302。
分词模块301,用于获取源文本,其中,源文本包括样本文本和测试文本;并对样本文本进行分词,获取样本文本中的至少一个关键词以及每个关键词对应的关联词。
具体的,分词模块301对样本文本进行分词,获取样本文本的分词结果,样本文本的分词结果中包括至少一个样本文本词;从至少一个样本文本词中获取至少一个关键词,至少一个关键词在样本文本中的词频大于第一阈值。
获取每个关键词的待选关联词,从每个关键词的待选关联词中获取每个关键词对应的关联词,每个关键词对应的关联词与每个关键词的联合概率大于第二阈值。
处理模块302,用于根据至少一个关键词以及每个关键词对应的关联词,训练第一模型。
具体的,处理模块302根据至少一个关键词、每个关键词对应的关联词以及每个关键词对应的关联词与每个关键词的联合概率,训练第一模型。
分词模块301还用于,对测试文本进行分词,获取测试文本中的待修正关键词以及待修正关键词对应的关联词。
处理模块302还用于,将待修正关键词以及待修正关键词对应的关联词输入第一模型,获取待修正关键词对应的修正后关键词。
根据待修正关键词对应的修正后关键词和待修正关键词,训练第二模型;
具体的,利用所述第一模型将所述待修正关键词修正为至少一个待选修正关键词;从所述至少一个待选修正关键词中选取与所述待修正关键词对应 的所述修正后关键词,其中,所述修正后关键词所对应的修正概率值,为所述至少一个待选修正关键词对应的修正概率值中的最大值,每个待选修正关键词对应的修正概率值为该待选修正关键词和所述待修正关键词对应的关联词之间的联合概率。
可选的,处理模块302可以根据待修正关键词对应的修正后关键词,待修正关键词以及修正后关键词对应的第一修正概率值,训练第二模型。
分词模块301还用于,对待修正文本进行分词。
处理模块302还用于,将待修正文本的分词结果输入第二模型,修正待修正文本。
修正文本之后,还包括获取查询日志中的日志关键词,其中,日志关键词为查询日志中词频大于第三阈值的词;将日志关键词作为样本文本的关键词。
本申请实施例二提供的装置运行时执行本申请实施例一提供的方法,其工作细节参考本申请实施例一提供的方法。
本发明实施例提供的一种文本处理装置,根据样本文本中的关键词以及与关键词相关联的关联词训练第一模型,并且通过第一模型获取待修正关键词对应的修正后关键词;根据待修正关键词,以及对应的修正后关键词,训练第二模型,利用第二模型修正待修正文本,提升了文本的精度。
还应理解,在本发明的实施例中,可以利用业务已有的分类得到各个子场景对应的修正模型,利用每个子场景对应的修正模型来纠正错误数据文本中的关键词内容,充分利用了上下文信息。而每个子领域对应的词库中的关键词词量较小,在训练修正模型时,相对容易。在特定的子场景中,避免了数据源的错误引入,同样可以有效的提高业务搜索的精确度,非常实用。
与上述修正关键词的方法相对应的,本发明实施例还提供了一种计算设备,该计算设备包括:处理器和存储器总线及通信接口,其中,处理器、存储器和通信接口通过总线实现彼此之间的通信连接。处理器和存储器的组成 部件以及所执行的方法步骤已经分别在上文中所介绍的文本处理系统和文本处理方法流程中做了详细的介绍,这里不再赘述。
专业人员应该还可以进一步意识到,结合本文中所公开的实施例描述的各示例的单元及算法步骤,能够以电子硬件、获取机软件或者二者的结合来实现,为了清楚地说明硬件和软件的可互换性,在上述说明中已经按照功能一般性地描述了各示例的组成及步骤。这些功能究竟以硬件还是软件方式来执行,取决于技术方案的特定应用和设计约束条件。专业技术人员可以对每个特定的应用来使用不同方法来实现所描述的功能,但是这种实现不应认为超出本发明的范围。
结合本文中所公开的实施例描述的方法或算法的步骤可以用硬件、处理器执行的软件模块,或者二者的结合来实施。软件模块可以置于随机存储器(RAM)、内存、只读存储器(ROM)、电可编程ROM、电可擦除可编程ROM、寄存器、硬盘、可移动磁盘、CD-ROM、或技术领域内所公知的任意其它形式的存储介质中。
以上所述的具体实施方式,对本发明的目的、技术方案和有益效果进行了进一步详细说明,所应理解的是,以上所述仅为本发明的具体实施方式而已,并不用于限定本发明的保护范围,凡在本发明的精神和原则之内,所做的任何修改、等同替换、改进等,均应包含在本发明的保护范围之内。

Claims (13)

  1. A text processing method, wherein the method runs on a text processing system, the text processing system comprises at least one computing device, and the method comprises:
    obtaining source text, wherein the source text comprises sample text and test text;
    performing word segmentation on the sample text, and obtaining at least one keyword in the sample text and an associated word corresponding to each keyword;
    training a first model according to the at least one keyword and the associated word corresponding to each keyword;
    performing word segmentation on the test text, and obtaining a keyword to be corrected in the test text and an associated word corresponding to the keyword to be corrected;
    inputting the keyword to be corrected and the associated word corresponding to the keyword to be corrected into the first model, and obtaining a corrected keyword corresponding to the keyword to be corrected;
    training a second model according to the corrected keyword corresponding to the keyword to be corrected and the keyword to be corrected; and
    performing word segmentation on text to be corrected, inputting a word segmentation result of the text to be corrected into the second model, and correcting the text to be corrected.
  2. The method according to claim 1, wherein performing word segmentation on the sample text and obtaining at least one keyword in the sample text and an associated word corresponding to each keyword comprises:
    performing word segmentation on the sample text to obtain a word segmentation result of the sample text, wherein the word segmentation result of the sample text comprises at least one sample text word;
    obtaining the at least one keyword from the at least one sample text word, wherein a word frequency of the at least one keyword in the sample text is greater than a first threshold;
    obtaining candidate associated words of each keyword, and obtaining, from the candidate associated words of each keyword, the associated word corresponding to the keyword, wherein a joint probability of the associated word corresponding to each keyword and the keyword is greater than a second threshold.
  3. The method according to claim 2, wherein training a first model according to the at least one keyword and the associated word corresponding to each keyword comprises:
    training the first model according to the at least one keyword, the associated word corresponding to each keyword, and the joint probability of the associated word corresponding to each keyword and the keyword.
  4. The method according to any one of claims 1 to 3, wherein inputting the keyword to be corrected and the associated word corresponding to the keyword to be corrected into the first model and obtaining the corrected keyword corresponding to the keyword to be corrected comprises:
    correcting the keyword to be corrected into at least one candidate correction keyword by using the first model;
    selecting, from the at least one candidate correction keyword, the corrected keyword corresponding to the keyword to be corrected, wherein a correction probability value corresponding to the corrected keyword is a maximum of correction probability values corresponding to the at least one candidate correction keyword, and the correction probability value corresponding to each candidate correction keyword is a joint probability between the candidate correction keyword and the associated word corresponding to the keyword to be corrected.
  5. The method according to claim 4, wherein training a second model according to the corrected keyword corresponding to the keyword to be corrected and the keyword to be corrected comprises:
    training the second model according to the corrected keyword corresponding to the keyword to be corrected, the keyword to be corrected, and the correction probability value corresponding to the corrected keyword.
  6. The method according to any one of claims 1 to 3 or claim 5, wherein the method further comprises:
    obtaining log keywords from a query log, wherein the log keywords are words whose word frequency in the query log is greater than a third threshold;
    using the log keywords as keywords of the sample text.
  7. A text processing apparatus, wherein the apparatus comprises:
    a word segmentation module, configured to obtain source text, wherein the source text comprises sample text and test text;
    and to perform word segmentation on the sample text and obtain at least one keyword in the sample text and an associated word corresponding to each keyword;
    a processing module, configured to train a first model according to the at least one keyword and the associated word corresponding to each keyword;
    the word segmentation module is further configured to perform word segmentation on the test text, and obtain a keyword to be corrected in the test text and an associated word corresponding to the keyword to be corrected;
    the processing module is further configured to input the keyword to be corrected and the associated word corresponding to the keyword to be corrected into the first model, and obtain a corrected keyword corresponding to the keyword to be corrected;
    and to train a second model according to the corrected keyword corresponding to the keyword to be corrected and the keyword to be corrected;
    the word segmentation module is further configured to perform word segmentation on text to be corrected;
    the processing module is further configured to input a word segmentation result of the text to be corrected into the second model, and correct the text to be corrected.
  8. The apparatus according to claim 7, wherein the word segmentation module is specifically configured to perform word segmentation on the sample text and obtain a word segmentation result of the sample text, wherein the word segmentation result of the sample text comprises at least one sample text word;
    obtain the at least one keyword from the at least one sample text word, wherein a word frequency of the at least one keyword in the sample text is greater than a first threshold;
    obtain candidate associated words of each keyword, and obtain, from the candidate associated words of each keyword, the associated word corresponding to the keyword, wherein a joint probability of the associated word corresponding to each keyword and the keyword is greater than a second threshold.
  9. The apparatus according to claim 8, wherein the processing module is specifically configured to train the first model according to the at least one keyword, the associated word corresponding to each keyword, and the joint probability of the associated word corresponding to each keyword and the keyword.
  10. The apparatus according to any one of claims 7 to 9, wherein the processing module is specifically configured to correct the keyword to be corrected into at least one candidate correction keyword by using the first model;
    and select, from the at least one candidate correction keyword, the corrected keyword corresponding to the keyword to be corrected, wherein a correction probability value corresponding to the corrected keyword is a maximum of correction probability values corresponding to the at least one candidate correction keyword, and the correction probability value corresponding to each candidate correction keyword is a joint probability between the candidate correction keyword and the associated word corresponding to the keyword to be corrected.
  11. The apparatus according to claim 10, wherein the processing module is specifically configured to train the second model according to the corrected keyword corresponding to the keyword to be corrected, the keyword to be corrected, and the correction probability value corresponding to the corrected keyword.
  12. The apparatus according to any one of claims 7 to 9 or claim 11, wherein the processing module is further configured to: obtain log keywords from a query log, wherein the log keywords are words whose word frequency in the query log is greater than a third threshold;
    and use the log keywords as keywords of the sample text.
  13. A computing device, wherein the computing device comprises a processor, a memory, a bus, and a communication interface, the processor, the memory, and the communication interface establish a communication connection through the bus, the memory is configured to store instructions, and the processor executes the instructions at runtime to implement the method according to any one of claims 1 to 6.
PCT/CN2016/105951 2016-03-24 2016-11-15 一种文本处理方法、装置及计算设备 WO2017161899A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201610171019.1 2016-03-24
CN201610171019.1A CN107229627B (zh) 2016-03-24 2016-03-24 一种文本处理方法、装置及计算设备

Publications (1)

Publication Number Publication Date
WO2017161899A1 true WO2017161899A1 (zh) 2017-09-28

Family

ID=59899332

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/105951 WO2017161899A1 (zh) 2016-03-24 2016-11-15 一种文本处理方法、装置及计算设备

Country Status (2)

Country Link
CN (1) CN107229627B (zh)
WO (1) WO2017161899A1 (zh)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111291561A (zh) * 2018-12-07 2020-06-16 阿里巴巴集团控股有限公司 文本识别方法、装置和系统
CN113806542A (zh) * 2021-09-18 2021-12-17 上海幻电信息科技有限公司 文本分析方法及系统
CN116842169A (zh) * 2023-09-01 2023-10-03 国网山东省电力公司聊城供电公司 电力网格会话管理方法、系统、终端及存储介质

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108304868A (zh) * 2018-01-25 2018-07-20 阿里巴巴集团控股有限公司 模型训练方法、数据类型识别方法和计算机设备
CN111667813B (zh) * 2019-03-06 2024-04-19 北京精鸿软件科技有限公司 处理文件的方法和装置
CN111696545B (zh) * 2019-03-15 2023-11-03 北京汇钧科技有限公司 语音识别纠错方法、装置以及存储介质
CN110689891A (zh) * 2019-11-20 2020-01-14 广东奥园奥买家电子商务有限公司 一种基于公众显示装置的语音交互方法以及设备
CN111783424B (zh) * 2020-06-17 2024-02-13 泰康保险集团股份有限公司 一种文本分句方法和装置
CN111737979B (zh) * 2020-06-18 2021-01-12 龙马智芯(珠海横琴)科技有限公司 语音文本的关键词修正方法、装置、修正设备及存储介质
CN113190675A (zh) * 2021-05-12 2021-07-30 平安国际智慧城市科技股份有限公司 文本摘要生成方法、装置、计算机设备和存储介质

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102929925A (zh) * 2012-09-20 2013-02-13 百度在线网络技术(北京)有限公司 一种基于浏览内容的搜索方法及装置
CN103336765A (zh) * 2013-06-20 2013-10-02 上海大学 一种文本关键词的马尔可夫矩阵离线修正方法
CN104882139A (zh) * 2015-05-28 2015-09-02 百度在线网络技术(北京)有限公司 语音合成的方法和装置
CN105279149A (zh) * 2015-10-21 2016-01-27 上海应用技术学院 一种中文文本自动校正方法

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101655837B (zh) * 2009-09-08 2010-10-13 北京邮电大学 一种对语音识别后文本进行检错并纠错的方法
CN103366741B (zh) * 2012-03-31 2019-05-17 上海果壳电子有限公司 语音输入纠错方法及系统
US8713433B1 (en) * 2012-10-16 2014-04-29 Google Inc. Feature-based autocorrection
CN103914444B (zh) * 2012-12-29 2018-07-24 高德软件有限公司 一种纠错方法及其装置
CN104462085B (zh) * 2013-09-12 2019-04-12 腾讯科技(深圳)有限公司 检索关键词纠错方法及装置
KR101573854B1 (ko) * 2014-07-15 2015-12-02 부산대학교 산학협력단 관계어 기반 확률추정 방법을 이용한 통계적 문맥의존 철자오류 교정 장치 및 방법

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102929925A (zh) * 2012-09-20 2013-02-13 百度在线网络技术(北京)有限公司 一种基于浏览内容的搜索方法及装置
CN103336765A (zh) * 2013-06-20 2013-10-02 上海大学 一种文本关键词的马尔可夫矩阵离线修正方法
CN104882139A (zh) * 2015-05-28 2015-09-02 百度在线网络技术(北京)有限公司 语音合成的方法和装置
CN105279149A (zh) * 2015-10-21 2016-01-27 上海应用技术学院 一种中文文本自动校正方法

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111291561A (zh) * 2018-12-07 2020-06-16 阿里巴巴集团控股有限公司 文本识别方法、装置和系统
CN111291561B (zh) * 2018-12-07 2023-04-18 阿里巴巴集团控股有限公司 文本识别方法、装置和系统
CN113806542A (zh) * 2021-09-18 2021-12-17 上海幻电信息科技有限公司 文本分析方法及系统
CN113806542B (zh) * 2021-09-18 2024-05-17 上海幻电信息科技有限公司 文本分析方法及系统
CN116842169A (zh) * 2023-09-01 2023-10-03 国网山东省电力公司聊城供电公司 电力网格会话管理方法、系统、终端及存储介质
CN116842169B (zh) * 2023-09-01 2024-01-12 国网山东省电力公司聊城供电公司 电力网格会话管理方法、系统、终端及存储介质

Also Published As

Publication number Publication date
CN107229627B (zh) 2020-12-22
CN107229627A (zh) 2017-10-03

Similar Documents

Publication Publication Date Title
WO2017161899A1 (zh) 一种文本处理方法、装置及计算设备
US11636264B2 (en) Stylistic text rewriting for a target author
US11182568B2 (en) Sentence evaluation apparatus and sentence evaluation method
US11244167B2 (en) Generating a response to a user query utilizing visual features of a video segment and a query-response-neural network
CN107480143B (zh) 基于上下文相关性的对话话题分割方法和系统
US11734514B1 (en) Automated translation of subject matter specific documents
US10713571B2 (en) Displaying quality of question being asked a question answering system
WO2018157805A1 (zh) 一种自动问答处理方法及自动问答系统
CN105988990B (zh) 汉语零指代消解装置和方法、模型训练方法和存储介质
US11031009B2 (en) Method for creating a knowledge base of components and their problems from short text utterances
WO2020211720A1 (zh) 数据处理方法和代词消解神经网络训练方法
CN110727839A (zh) 自然语言查询的语义解析
CN108027814B (zh) 停用词识别方法与装置
US20230128497A1 (en) Machine learning-implemented chat bot database query system for multi-format database queries
CN109299227B (zh) 基于语音识别的信息查询方法和装置
CN110210041B (zh) 互译句对齐方法、装置及设备
EP4060526A1 (en) Text processing method and device
CN110888946A (zh) 一种基于知识驱动的查询的实体链接方法
CN114330251B (zh) 文本生成方法、模型的训练方法、设备及存储介质
US11990131B2 (en) Method for processing a video file comprising audio content and visual content comprising text content
US20230061731A1 (en) Significance-based prediction from unstructured text
CN111324705A (zh) 自适应性调整关连搜索词的系统及其方法
CN111310452A (zh) 一种分词方法和装置
Shi et al. Dual-feedback knowledge retrieval for task-oriented dialogue systems
Dinarelli et al. Concept segmentation and labeling for conversational speech

Legal Events

Date Code Title Description
NENP Non-entry into the national phase

Ref country code: DE

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16895244

Country of ref document: EP

Kind code of ref document: A1

122 Ep: pct application non-entry in european phase

Ref document number: 16895244

Country of ref document: EP

Kind code of ref document: A1