WO2021129411A1 - Text processing method and device - Google Patents

Text processing method and device

Info

Publication number
WO2021129411A1
WO2021129411A1 · PCT/CN2020/135636 · CN2020135636W
Authority
WO
WIPO (PCT)
Prior art keywords
word
text
words
processed
candidate
Prior art date
Application number
PCT/CN2020/135636
Other languages
English (en)
Chinese (zh)
Inventor
刘杰
祝官文
Original Assignee
华为技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from CN201911335070.1A
Application filed by 华为技术有限公司
Priority to EP20905268.7A
Priority to US17/788,052
Publication of WO2021129411A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/30 Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F 16/35 Clustering; Classification
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/0464 Convolutional networks [CNN, ConvNet]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 40/00 Handling natural language data
    • G06F 40/10 Text processing
    • G06F 40/166 Editing, e.g. inserting or deleting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 40/00 Handling natural language data
    • G06F 40/20 Natural language analysis
    • G06F 40/232 Orthographic correction, e.g. spell checking or vowelisation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 40/00 Handling natural language data
    • G06F 40/40 Processing or translation of natural language
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/048 Activation functions
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G06N 3/084 Backpropagation, e.g. using gradient descent
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 40/00 Handling natural language data
    • G06F 40/20 Natural language analysis
    • G06F 40/237 Lexical tools
    • G06F 40/242 Dictionaries
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 40/00 Handling natural language data
    • G06F 40/20 Natural language analysis
    • G06F 40/279 Recognition of textual entities
    • G06F 40/284 Lexical analysis, e.g. tokenisation or collocates
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 40/00 Handling natural language data
    • G06F 40/30 Semantic analysis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G06N 3/09 Supervised learning

Definitions

  • This application relates to the field of natural language processing, and more specifically, to a text processing method and device.
  • Artificial intelligence is the theory, method, technology, and application system that uses digital computers, or machines controlled by digital computers, to simulate, extend, and expand human intelligence, perceive the environment, acquire knowledge, and use knowledge to obtain the best results.
  • In other words, artificial intelligence is a branch of computer science that attempts to understand the essence of intelligence and to produce a new kind of intelligent machine that can react in a way similar to human intelligence.
  • Artificial intelligence studies the design principles and implementation methods of various intelligent machines, so that the machines have the capabilities of perception, reasoning, and decision-making.
  • Text error correction detects errors in the original text and corrects them using natural language processing technology.
  • However, the original text may contain multiple types of characters, and existing methods can only check and correct a single type of character, which reduces the accuracy of error correction.
  • In view of this, the present application provides a text processing method and device that can detect and filter multiple types of characters, improving the accuracy of text error correction.
  • In a first aspect, a text processing method is provided, including: obtaining text to be processed; performing error detection on the text to be processed to obtain the non-words in it; if a non-word in the text belongs to the first type of non-word, using the non-word itself as its correction result; and if the non-word belongs to the second, third, or fourth type of non-word, selecting the correction method that matches the category of the non-word and correcting the non-word with it to obtain the correction result.
  • The first type of non-word includes non-words consisting entirely of capital letters, non-words whose word length is within a preset word length, and non-words found in a first preset lexicon; the second type includes merged-error non-words; the third type includes non-words that contain non-letter characters; and the fourth type includes all non-words other than the first, second, and third types.
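To make the four-way split concrete, here is a minimal Python sketch of the category test described above. The lexicon contents, the short-word length bound, and all names are illustrative assumptions, not values from the patent.

```python
# Hypothetical classifier for the four non-word categories described above.
FIRST_PRESET_LEXICON = {"NASA", "GmbH"}  # assumed domain terms kept as-is
MAX_SHORT_LEN = 2                        # assumed "preset word length" bound

def classify_non_word(token, true_words):
    """Return the category (1-4) of a detected non-word."""
    if (token.isupper() or len(token) <= MAX_SHORT_LEN
            or token in FIRST_PRESET_LEXICON):
        return 1  # first type: keep the non-word unchanged
    if any(token[:i].lower() in true_words and token[i:].lower() in true_words
           for i in range(1, len(token))):
        return 2  # second type: merged-error non-word, e.g. "inChina"
    if not token.isalpha():
        return 3  # third type: contains non-letter characters, e.g. "w0rld"
    return 4      # fourth type: everything else
```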
  • The text to be processed may be text output by optical character recognition (OCR), or text input by a user.
  • The text input by the user may include content posted on a social network, content entered in the search box of a search engine, and so on.
  • The text to be processed can be any text that needs error correction; this application does not limit its specific form.
  • Non-word error detection may be performed on the text to be processed based on a second preset lexicon to obtain the non-words in the text.
  • Non-words are words that do not exist in the second preset lexicon.
  • The first preset lexicon is different from the second preset lexicon.
  • For example, the second preset lexicon may be an English vocabulary.
  • A non-word is then a word that does not exist in the English vocabulary, for example, "wasld".
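A minimal sketch of this detection step, assuming the second preset lexicon is simply a set of known English words (the toy lexicon below stands in for a real vocabulary):

```python
def find_non_words(tokens, english_lexicon):
    # A non-word is any token absent from the lexicon, e.g. "wasld".
    return [t for t in tokens if t.lower() not in english_lexicon]

english_lexicon = {"the", "world", "is", "in", "china"}  # toy stand-in
print(find_non_words(["the", "wasld"], english_lexicon))  # ['wasld']
```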
  • In this way, multiple types of characters in the text to be processed can be detected and handled separately, which reduces the interference of the different character types with the error-correction process, improves the accuracy of text error correction, and improves the robustness of the error-correction method to the input text.
  • Correcting a non-word with the correction method matching its category to obtain the correction result includes: if the non-word in the text to be processed belongs to the fourth type of non-word, generating candidate words corresponding to the non-word; determining, from the candidate words, the target candidate word corresponding to the non-word; and correcting the non-word according to the target candidate word to obtain the correction result.
  • Determining the target candidate word from among the multiple candidate words corresponding to the non-word includes: scoring each candidate word according to the similarity between the non-word and the candidate word and the perplexity of the candidate word, where the perplexity indicates the likelihood of the candidate word appearing in the text to be processed; and determining the candidate word with the highest score as the target candidate word corresponding to the non-word.
  • The perplexity of the candidate words corresponding to the non-word can be scored with a language model.
  • The overall score of each candidate word can be obtained by weighting the scores of the above items, that is, a weight is set for each item's score.
  • The weights can be preset or obtained through training.
  • Scoring the candidate words on both their similarity to the non-word and their perplexity takes the semantic information of the text to be processed into account, conforms better to the original intent of the input text, yields better candidate words, and improves the accuracy of text error correction.
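The scoring step could look roughly like the sketch below. The weighted-sum form follows the description above, but the particular weights and the 1/(1 + perplexity) transform are assumptions; `similarity` and `perplexity` stand in for the similarity measure and the language model.

```python
def pick_target_candidate(non_word, candidates, similarity, perplexity,
                          w_sim=0.5, w_lm=0.5):
    scored = []
    for cand in candidates:
        sim = similarity(non_word, cand)     # higher means more similar
        lm = 1.0 / (1.0 + perplexity(cand))  # lower perplexity scores higher
        scored.append((w_sim * sim + w_lm * lm, cand))
    # The candidate with the highest score becomes the target candidate word.
    return max(scored)[1]
```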
  • The similarity between a non-word in the text to be processed and its candidate words satisfies a first preset condition.
  • The similarity between a non-word and a candidate word may include the edit distance and/or the common character string between them; that is, the candidate words corresponding to a non-word can be determined based on edit distance and/or common character strings.
  • Edit distance is the number of editing operations required to convert one word into another; editing operations include inserting, deleting, transposing, and replacing characters in a word.
  • The common character string refers to the number of consecutive identical characters contained in the two words.
  • For example, the first preset condition may be that the edit distance is less than a first preset value.
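For reference, a standard dynamic-programming edit distance covering insertion, deletion, and replacement; the patent also lists transposition among the editing operations, which the Damerau-Levenshtein variant would add.

```python
def edit_distance(a, b):
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                # deletion
                           cur[j - 1] + 1,             # insertion
                           prev[j - 1] + (ca != cb)))  # replacement
        prev = cur
    return prev[-1]

assert edit_distance("wasld", "world") == 2  # two replacements
```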
  • Correcting a non-word in the text to be processed according to its target candidate word to obtain the correction result includes:
  • when the perplexity of the target candidate word corresponding to the non-word is lower than or equal to the first perplexity threshold, replacing the non-word in the text to be processed with the target candidate word as the correction result of the non-word.
  • In this way, only a target candidate word whose perplexity is lower than or equal to the first perplexity threshold is used to replace a non-word, which makes full use of the semantic information of the text and further improves the accuracy of text error correction.
  • A merged-error non-word is a non-word that includes at least two true words. If the non-word in the text to be processed belongs to the second, third, or fourth type of non-word, selecting the correction method that matches its category includes: if the non-word belongs to the second type of non-word, correcting the non-word to obtain at least two true words as its correction result.
  • Specifically, a space can be added at an appropriate position in the non-word to modify it into at least two true words.
  • For example, if the non-word in the text to be processed is "inChina", it belongs to the second type of non-word; adding a space modifies it to "in China".
  • The modified true words are input into the language model, and if they reduce the perplexity, the modified true words are used as the correction result of the non-word.
  • Using the language model to further check the perplexity of the modified true words exploits the semantic information of the text to be processed, conforms better to the original intent of the input text, yields better candidates, and improves the accuracy of text error correction.
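A sketch of this split-and-verify procedure, assuming a helper `lm_perplexity` that scores a sentence with the language model; the acceptance rule (keep the lowest-perplexity split that improves on the original sentence) is one illustrative reading of the description.

```python
def split_merged(token, true_words, sentence, lm_perplexity):
    base = lm_perplexity(sentence)
    best = None
    for i in range(1, len(token)):
        left, right = token[:i], token[i:]
        if left.lower() in true_words and right.lower() in true_words:
            candidate = sentence.replace(token, f"{left} {right}")
            ppl = lm_perplexity(candidate)
            if ppl < base and (best is None or ppl < best[0]):
                best = (ppl, f"{left} {right}")
    return best[1] if best else token  # e.g. "inChina" -> "in China"
```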
  • If the non-word in the text to be processed belongs to the third type of non-word, the matching correction method modifies the non-letter characters in the non-word into letters and uses the modified word as the correction result of the non-word.
  • Specifically, the modified word is input into the language model, and if it reduces the perplexity, the modified word is taken as the correction result of the non-word.
  • Using the language model to further check the perplexity of the modified word exploits the semantic information of the text to be processed, conforms better to the original intent of the input text, yields better candidates, and improves the accuracy of text error correction.
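A sketch of the third-type correction, with a small, assumed table of visually confusable characters; a full implementation would additionally verify the result with the language model, as described above.

```python
CONFUSABLE = {"0": "o", "1": "l", "5": "s", "@": "a", "$": "s"}  # assumed map

def fix_non_letters(token, true_words):
    fixed = "".join(CONFUSABLE.get(ch, ch) for ch in token)
    return fixed if fixed.lower() in true_words else token

print(fix_non_letters("w0rld", {"world"}))  # world
```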
  • The method further includes: performing true-word error detection on the text to be processed to obtain the false true words in it; generating candidate words corresponding to each false true word; determining the target candidate word among the candidate words corresponding to the false true word; and correcting the false true word according to its target candidate word.
  • The true-word error detection can be performed on the text to be processed based on a language model.
  • The language model may be a statistical language model or a neural network model.
  • Determining the target candidate word corresponding to a false true word includes: scoring each candidate word according to the similarity between the false true word and the candidate word and the perplexity of the candidate word, where the perplexity indicates the likelihood of the candidate word appearing in the text to be processed; and determining the candidate word with the highest score as the target candidate word corresponding to the false true word.
  • The perplexity of the candidate words corresponding to the false true word can be scored with the language model.
  • The score of each candidate word can be obtained by weighting the scores of the above items, that is, a weight is set for each item's score.
  • The weights can be preset or obtained through training.
  • Scoring the candidate words on both their similarity to the false true word and their perplexity takes the semantic information of the text to be processed into account, conforms better to the original intent of the input text, yields better candidate words, and improves the accuracy of text error correction.
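The detection itself could be sketched as below, assuming a helper `lm_word_prob` that returns the contextual probability of the i-th word under the language model; the threshold is illustrative.

```python
def find_false_true_words(tokens, lm_word_prob, threshold=1e-5):
    # A true word whose contextual probability is very low is suspect.
    return [(i, tok) for i, tok in enumerate(tokens)
            if lm_word_prob(tokens, i) < threshold]
```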
  • The similarity between a false true word in the text to be processed and its candidate words satisfies a second preset condition.
  • The similarity between a false true word and a candidate word may likewise include the edit distance and/or the common character string between them.
  • For example, the second preset condition may be that the edit distance is less than a second preset value.
  • The first preset condition and the second preset condition may be the same or different.
  • The first preset value and the second preset value may be the same or different.
  • Correcting the false true word according to its target candidate word to obtain the correction result includes: when the perplexity of the target candidate word corresponding to the false true word is lower than or equal to the second perplexity threshold, replacing the false true word with the target candidate word as the correction result of the false true word.
  • In this way, only a target candidate word whose perplexity is lower than or equal to the second perplexity threshold is used to replace a false true word, which makes full use of the semantic information of the text and further improves the accuracy of text error correction.
  • In a second aspect, a text processing device is provided, including an acquisition unit and a processing unit.
  • The acquisition unit is used to obtain the text to be processed.
  • The processing unit is used to: perform error detection on the text to be processed to obtain the non-words in it; if a non-word belongs to the first type of non-word, use the non-word itself as its correction result; and if a non-word belongs to the second, third, or fourth type of non-word, select the correction method that matches the category of the non-word and correct the non-word to obtain the correction result.
  • The first type of non-word includes non-words consisting entirely of capital letters, non-words whose word length is within a preset word length, and non-words found in the first preset lexicon; the second type includes merged-error non-words; the third type includes non-words that contain non-letter characters; and the fourth type includes all non-words other than the first, second, and third types.
  • The processing unit is configured to: if a non-word in the text to be processed belongs to the fourth type of non-word, generate candidate words corresponding to the non-word; determine, from those candidate words, the target candidate word corresponding to the non-word; and correct the non-word according to the target candidate word to obtain the correction result.
  • The processing unit is configured to: score each candidate word according to the similarity between the non-word and the candidate word and the perplexity of the candidate word, where the perplexity indicates the likelihood of the candidate word appearing in the text to be processed; and determine the candidate word with the highest score as the target candidate word corresponding to the non-word.
  • The similarity between the non-word in the text to be processed and its candidate words satisfies the first preset condition.
  • The processing unit is configured to: when the perplexity of the target candidate word corresponding to the non-word is lower than or equal to the first perplexity threshold, replace the non-word with the target candidate word as the correction result of the non-word.
  • A merged-error non-word is a non-word that includes at least two true words.
  • The processing unit is configured to: if the non-word in the text to be processed belongs to the second type of non-word, correct the non-word to obtain at least two true words as the correction result.
  • The processing unit is configured to: if the non-word belongs to the third type of non-word, modify the non-letter characters in the non-word into letters and use the modified word as the correction result.
  • The processing unit is further used to: perform true-word error detection on the text to be processed to obtain the false true words in it; generate candidate words corresponding to each false true word; determine the target candidate word among those candidates; and correct the false true word according to its target candidate word.
  • The processing unit is used to: score each candidate word according to the similarity between the false true word and the candidate word and the perplexity of the candidate word, where the perplexity indicates the likelihood of the candidate word appearing in the text to be processed; and determine the candidate word with the highest score as the target candidate word corresponding to the false true word.
  • The similarity between the false true word and its candidate words satisfies the second preset condition.
  • The processing unit is configured to: when the perplexity of the target candidate word corresponding to the false true word is lower than or equal to the second perplexity threshold, replace the false true word with the target candidate word as the correction result of the false true word.
  • In a third aspect, a text processing device is provided, including: a memory for storing a program; and a processor for executing the program stored in the memory.
  • When the program stored in the memory is executed, the processor is configured to execute the text processing method in the first aspect or any one of its implementations.
  • In a fourth aspect, a computer-readable medium is provided that stores program code for execution by a device, the program code including instructions for executing the text processing method in the first aspect or any one of its implementations.
  • In a fifth aspect, a computer program product is provided that includes computer program code which, when run on a computer, causes the computer to execute the methods in the foregoing aspects.
  • It should be noted that the above computer program code may be stored in whole or in part on a first storage medium, where the first storage medium may be packaged together with the processor or packaged separately from the processor.
  • In a sixth aspect, a chip is provided that includes a processor and a data interface; the processor reads instructions stored in a memory through the data interface and executes the text processing method in the first aspect or any one of its implementations.
  • the chip may further include a memory in which instructions are stored, and the processor is configured to execute instructions stored on the memory.
  • the processor is configured to execute the text processing method in the first aspect or any one of the implementation manners in the first aspect.
  • FIG. 1 is a schematic diagram of an application scenario of natural language processing provided by an embodiment of the present application.
  • FIG. 2 is a schematic diagram of another application scenario of natural language processing provided by an embodiment of the present application.
  • FIG. 3 is a schematic diagram of a natural language processing related device provided by an embodiment of the present application.
  • FIG. 4 is a schematic diagram of a system architecture provided by an embodiment of the present application.
  • FIG. 5 is a schematic diagram of text processing according to a CNN model provided by an embodiment of the present application.
  • FIG. 6 is another schematic diagram of text processing according to a CNN model provided by an embodiment of the present application.
  • FIG. 7 is a schematic diagram of the hardware structure of a chip provided by an embodiment of the present application.
  • FIG. 8 is a schematic diagram of an application scenario provided by an embodiment of the present application.
  • FIG. 9 is a schematic flowchart of a text processing method provided by an embodiment of the present application.
  • FIG. 10 is a schematic flowchart of another text processing method provided by an embodiment of the present application.
  • FIG. 11 is a schematic flowchart of yet another text processing method provided by an embodiment of the present application.
  • FIG. 12 is a schematic block diagram of a text processing apparatus provided by an embodiment of the present application.
  • FIG. 13 is a schematic block diagram of another text processing apparatus provided by an embodiment of the present application.
  • Figure 1 shows a natural language processing system that includes user equipment and data processing equipment.
  • user equipment includes smart terminals such as mobile phones, personal computers, or information processing centers.
  • The user equipment is the initiator of the natural language data processing: as the originator of a language question-answer or query request, the user usually initiates the request through the user equipment.
  • The above data processing device may be a device or server with data processing functions, such as a cloud server, a network server, an application server, or a management server.
  • The data processing device receives the query sentence / voice / text question from the smart terminal through an interactive interface, and then performs language data processing such as machine learning, deep learning, search, reasoning, and decision-making, using the memory that stores data and the processor that processes data.
  • The memory in the data processing device is a general term that may include a local storage and a database storing historical data.
  • The database may be on the data processing device or on another network server.
  • The user equipment can receive instructions from the user. For example, the user equipment can receive a piece of text input by the user and then initiate a request to the data processing device, so that the data processing device executes a natural language processing application (for example, text classification, text sequence labeling, or translation) on the piece of text, thereby obtaining the corresponding processing result.
  • For example, the user equipment may receive text to be processed input by the user and then initiate a request to the data processing device, so that the data processing device classifies the text to be processed, thereby obtaining a classification result.
  • The classification result can refer to the semantic intention of the user indicated by the text to be processed, for example an intention to sing, to set a time, or to open navigation; alternatively, the classification result can indicate an emotion classification of the user, for example that the user emotion corresponding to the text is classified as depressed, happy, or angry.
  • the data processing device in FIG. 1 can execute the text processing method of the embodiment of the present application.
  • Figure 2 shows another natural language processing system.
  • the user equipment is directly used as a data processing device.
  • the user equipment can directly receive input from the user and process it directly by the hardware of the user equipment itself.
  • This is similar to Figure 1; refer to the description above, which will not be repeated here.
  • the user equipment can receive instructions from the user, and the user equipment itself classifies the text to be processed to obtain the classification result of the text to be processed.
  • Specifically, the user equipment can receive instructions from the user. For example, the user equipment can receive a piece of text input by the user, and then the user equipment itself executes a natural language processing application (for example, text classification, text sequence labeling, or translation) on the piece of text, thereby obtaining the corresponding processing result.
  • the user equipment itself can execute the text processing method of the embodiment of the present application.
  • Fig. 3 is a schematic diagram of a natural language processing related device provided by an embodiment of the present application.
  • The user equipment in FIG. 1 and FIG. 2 may specifically be the local device 120 or the local device 130 in FIG. 3, and the data processing device in FIG. 1 may specifically be the execution device 110 in FIG. 3. The data storage system 150 may store the data to be processed by the execution device 110; it may be integrated on the execution device 110, or set on the cloud or on another network server.
  • The processors in FIG. 1 and FIG. 2 can perform data training / machine learning / deep learning through a neural network model or another model, and use the model finally obtained by training or learning to process the input text to be processed, thereby obtaining the text processing result.
  • a neural network can be composed of neural units.
  • A neural unit can refer to an operation unit that takes $x_s$ and an intercept of 1 as inputs, and whose output can be: $h_{W,b}(x) = f(W^T x) = f\left(\sum_{s=1}^{n} W_s x_s + b\right)$, where $s = 1, 2, \ldots, n$, $n$ is a natural number greater than 1, $W_s$ is the weight of $x_s$, $b$ is the bias of the neural unit, and $f$ is the activation function of the neural unit, which introduces nonlinear characteristics into the neural network to convert the input signal of the unit into an output signal.
  • the output signal of the activation function can be used as the input of the next convolutional layer, and the activation function can be a sigmoid function.
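A one-unit sketch of the formula above with a sigmoid activation:

```python
import math

def neural_unit(xs, ws, b):
    # f(sum_s(W_s * x_s) + b) with f = sigmoid
    return 1.0 / (1.0 + math.exp(-(sum(w * x for w, x in zip(ws, xs)) + b)))

print(neural_unit([1.0, 2.0], [0.5, -0.25], 0.1))
```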
  • a neural network is a network formed by connecting multiple above-mentioned single neural units together, that is, the output of one neural unit can be the input of another neural unit.
  • the input of each neural unit can be connected with the local receptive field of the previous layer to extract the characteristics of the local receptive field.
  • the local receptive field can be a region composed of several neural units.
  • A deep neural network (DNN) is also known as a multi-layer neural network.
  • The DNN is divided according to the positions of its different layers.
  • The layers inside the DNN can be divided into three categories: input layer, hidden layers, and output layer.
  • Generally, the first layer is the input layer, the last layer is the output layer, and all the layers in the middle are hidden layers.
  • The layers are fully connected; that is, any neuron in the i-th layer is connected to every neuron in the (i+1)-th layer.
  • Although the DNN looks complicated, the work of each layer is not complicated. Simply put, each layer is the following linear relationship expression: $\vec{y} = \alpha(W \vec{x} + \vec{b})$, where $\vec{x}$ is the input vector, $\vec{y}$ is the output vector, $\vec{b}$ is the offset vector, $W$ is the weight matrix (also called coefficients), and $\alpha()$ is the activation function.
  • Each layer simply performs this operation on the input vector $\vec{x}$ to obtain the output vector $\vec{y}$. Because a DNN has a large number of layers, the number of coefficients $W$ and offset vectors $\vec{b}$ is also large.
  • These parameters are defined in the DNN as follows, taking the coefficient $W$ as an example. Suppose that in a three-layer DNN, the linear coefficient from the fourth neuron of the second layer to the second neuron of the third layer is defined as $W^3_{24}$: the superscript 3 represents the layer of the coefficient, and the subscript corresponds to the output index 2 of the third layer and the input index 4 of the second layer.
  • In summary, the coefficient from the k-th neuron of the (L-1)-th layer to the j-th neuron of the L-th layer is defined as $W^L_{jk}$.
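In code, one fully connected layer is a single matrix operation; this numpy sketch mirrors the notation above.

```python
import numpy as np

def dense_layer(x, W, b, alpha=np.tanh):
    # y = alpha(W x + b)
    return alpha(W @ x + b)

W = np.random.randn(3, 4)  # W[j, k]: neuron k of layer L-1 -> neuron j of layer L
b = np.zeros(3)
y = dense_layer(np.ones(4), W, b)
```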
  • Convolutional neural network (convolutional neuron network, CNN) is a deep neural network with a convolutional structure.
  • the convolutional neural network contains a feature extractor composed of a convolutional layer and a sub-sampling layer.
  • the feature extractor can be regarded as a filter.
  • the convolutional layer refers to the neuron layer that performs convolution processing on the input signal in the convolutional neural network.
  • a neuron can be connected to only part of the neighboring neurons.
  • a convolutional layer usually contains several feature planes, and each feature plane can be composed of some rectangularly arranged neural units. Neural units in the same feature plane share weights, and the shared weights here are the convolution kernels.
  • Weight sharing can be understood as meaning that the way image information is extracted is independent of location.
  • The convolution kernel can be initialized in the form of a matrix of random size; during the training of the convolutional neural network, the kernel obtains reasonable weights through learning. In addition, a direct benefit of weight sharing is reducing the connections between the layers of the convolutional neural network while also reducing the risk of overfitting.
  • During training, a neural network can use the error back propagation (BP) algorithm to correct the parameters of the initial neural network model, so that the reconstruction error loss of the model becomes smaller and smaller. Specifically, forward-passing the input signal to the output produces an error loss, and the parameters of the initial model are updated by back-propagating the error loss information, so that the error loss converges.
  • The back propagation algorithm is a back propagation movement dominated by the error loss, aiming to obtain the optimal parameters of the neural network model, for example the weight matrix.
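A toy illustration of the idea on a single linear unit: the squared-error loss is propagated backward to update w and b by gradient descent until the error converges.

```python
w, b, lr = 0.0, 0.0, 0.1
x, target = 1.5, 3.0
for _ in range(100):
    y = w * x + b              # forward pass
    grad_y = 2 * (y - target)  # d(loss)/dy for squared error
    w -= lr * grad_y * x       # back-propagated gradient for w
    b -= lr * grad_y           # back-propagated gradient for b
print(round(w * x + b, 3))     # approaches 3.0
```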
  • Natural language processing (NLP): natural language is human language, and natural language processing is the processing of human language.
  • Natural language processing is a process of systematic analysis, understanding, and information extraction of text data in an intelligent and efficient way.
  • Applications of natural language processing include automatic summarization, machine translation (MT), named entity recognition (NER), relation extraction (RE), information extraction (IE), sentiment analysis, speech recognition, question answering, topic segmentation, and so on.
  • The language model (LM) is a basic model in NLP.
  • Through training and learning on a large corpus, the LM can infer the probability of an unknown word based on existing information (for example, text information such as the words that have already appeared in the context); it can also be understood as a probability model used to calculate the probability of a sentence.
  • In other words, the language model is a probability distribution over sequences of natural language text, representing the likelihood that a text sequence of a certain length exists.
  • In short, the language model predicts what the next word will be based on the context. Since no manual labeling of the corpus is needed, the language model can learn rich semantic knowledge from an unlimited large-scale corpus.
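As an illustration, sentence perplexity under a simple bigram model with add-one smoothing could be computed as below; the counts would come from a training corpus (the toy counts here are stand-ins).

```python
import math
from collections import Counter

def perplexity(tokens, bigrams, unigrams, vocab_size):
    log_p = 0.0
    for prev, cur in zip(tokens, tokens[1:]):
        p = (bigrams[(prev, cur)] + 1) / (unigrams[prev] + vocab_size)
        log_p += math.log(p)
    return math.exp(-log_p / max(len(tokens) - 1, 1))

unigrams = Counter({"i": 2, "love": 1})
bigrams = Counter({("i", "love"): 1, ("love", "nlp"): 1})
print(perplexity(["i", "love", "nlp"], bigrams, unigrams, vocab_size=3))
```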
  • As shown in FIG. 4, an embodiment of the present application provides a system architecture 200.
  • the data collection device 260 is used to collect training data.
  • the training data in the embodiment of the present application may be training text of a training text processing model.
  • the data collection device 260 stores the training data in the database 230, and the training device 220 trains based on the training data maintained in the database 230 to obtain the target model/rule 201 (ie, the text processing model of the present application).
  • The target model/rule 201 can be used to implement the text processing method provided in the embodiments of the present application: after relevant preprocessing (by the preprocessing module 213 and/or the preprocessing module 214), the text to be processed is input into the target model/rule 201 for processing, and the processing result corresponding to the target task executed by the target processing model is obtained.
  • For example, the target processing model can be a text error correction model: the text to be processed is input into the target model/rule 201 (that is, the text processing model of the present application) for error correction processing, and the error-corrected text of the text to be processed is obtained.
  • Alternatively, the target processing model can be a text translation model: the text to be processed is input into the target model/rule 201 (that is, the text processing model of this application) for translation processing, and the translated text of the text to be processed is obtained.
  • The target model/rule 201 is obtained by training an original processing model. It should be noted that, in actual applications, the training data maintained in the database 230 may not all come from the data collection device 260; it may also be received from other devices.
  • In addition, the training device 220 does not necessarily train the target model/rule 201 entirely based on the training data maintained in the database 230; it may also obtain training data from the cloud or elsewhere for model training. The above description should not be construed as a limitation on the embodiments of this application. It should also be noted that at least part of the training data maintained in the database 230 may also be used in the process in which the execution device 210 processes the text to be processed.
  • The target model/rule 201 trained by the training device 220 can be applied to different systems or devices, such as the execution device 210 shown in FIG. 4, which can be a terminal such as a mobile phone, a tablet computer, a notebook computer, an augmented reality (AR)/virtual reality (VR) device, or an in-vehicle terminal, and can also be a server or a cloud.
  • the execution device 210 is configured with an input/output (input/output, I/O) interface 212 for data interaction with external devices.
  • the user can input data to the I/O interface 212 through the client device 240.
  • the input data in this embodiment of the present application may include: text to be processed.
  • the preprocessing module 213 and/or the preprocessing module 214 are used for preprocessing according to the input data received by the I/O interface 212.
  • In this embodiment of the application, the preprocessing module 213 and the preprocessing module 214 may be absent (or only one preprocessing module may be present), and the calculation module 211 may be used directly to process the input data. It should be noted that the preprocessing module 213 or the preprocessing module 214 can preprocess all of the input data or only part of it.
  • The preprocessing module 213 and/or the preprocessing module 214 may also be trained in the training device 220.
  • the calculation module 211 may be used to perform calculations and other related processing on the input data from the preprocessing module 213 or the I/O interface 212 according to the target model/rule 201 described above.
  • When the execution device 210 preprocesses the input data, or when the calculation module 211 of the execution device 210 performs calculation and other related processing, the execution device 210 can call data, code, and the like in the data storage system 250 for the corresponding processing, and the data, instructions, and the like obtained by that processing may also be stored in the data storage system 250.
  • Finally, the I/O interface 212 feeds the processing results (such as error correction results or translation results) back to the client device 240.
  • It is worth noting that the training device 220 can generate a corresponding target model/rule 201 for each different downstream system, and the corresponding target model/rule 201 can be used to achieve the above goals or complete the above tasks, thereby providing the user with the desired results. It should also be noted that the training device 220 may generate corresponding preprocessing models for the target models/rules 201 of different downstream systems, such as the corresponding preprocessing models in the preprocessing module 213 and/or the preprocessing module 214.
  • The user can manually set the input data (for example, the text to be processed), and the manual setting can be operated through the interface provided by the I/O interface 212.
  • In another case, the client device 240 can automatically send input data (for example, text to be processed) to the I/O interface 212. If the user's authorization is required for the client device 240 to send input data automatically, the user can set the corresponding permission in the client device 240. The user can view the result output by the execution device 210 on the client device 240, and the specific presentation can take forms such as display, sound, or action.
  • The client device 240 can also serve as a data collection terminal, collecting the input data of the I/O interface 212 and the output results of the I/O interface 212 as new sample data, and storing them in the database 230, as shown in the figure.
  • Alternatively, without collection by the client device 240, the I/O interface 212 can directly store the input data of the I/O interface 212 and the output results of the I/O interface 212 as new sample data in the database 230, as shown in the figure.
  • FIG. 4 is only a schematic diagram of a system architecture provided by an embodiment of the present application, and the positional relationship among the devices, devices, modules, etc. shown in the figure does not constitute any limitation.
  • the data storage system 250 is an external memory relative to the execution device 210. In other cases, the data storage system 250 may also be placed in the execution device 210.
  • the target model/rule 201 is obtained by training according to the training device 220.
  • the target model/rule 201 may be the target processing model in the embodiment of the present application.
  • The target processing model provided in the embodiments of the present application may be a neural network model, for example a CNN or a deep convolutional neural network (DCNN).
  • Since CNN is a very common neural network, the structure of CNN is introduced in detail below in conjunction with Figure 5.
  • A convolutional neural network is a deep neural network with a convolutional structure and is a deep learning architecture.
  • A deep learning architecture refers to performing multiple levels of learning at different levels of abstraction through machine learning algorithms.
  • As a deep learning architecture, CNN is a feed-forward artificial neural network in which each neuron can respond to the input image.
  • a convolutional neural network (CNN) 300 may include an input layer 310, a convolutional layer/pooling layer 320 (the pooling layer is optional), and a neural network layer 330.
  • The convolutional layer/pooling layer 320 may include layers 321 to 326. In one implementation, layer 321 is a convolutional layer, layer 322 a pooling layer, layer 323 a convolutional layer, layer 324 a pooling layer, layer 325 a convolutional layer, and layer 326 a pooling layer; in another implementation, layers 321 and 322 are convolutional layers, 323 is a pooling layer, 324 and 325 are convolutional layers, and 326 is a pooling layer. That is, the output of a convolutional layer can be used as the input of a subsequent pooling layer, or as the input of another convolutional layer to continue the convolution operation.
  • the convolution layer 321 can include many convolution operators.
  • the convolution operator is also called a kernel. Its role in natural language processing is equivalent to a filter that extracts specific information from the input speech or semantic information.
  • the operator can essentially be a weight matrix, which is usually predefined.
  • weight values in these weight matrices need to be obtained through a lot of training in practical applications.
  • Each weight matrix formed by the weight values obtained through training can extract information from the input data, thereby helping the convolutional neural network 300 to make correct predictions.
  • When the convolutional neural network 300 has multiple convolutional layers, the initial convolutional layer (such as 321) often extracts more general features, which can also be called low-level features; as the depth of the convolutional neural network 300 increases, the features extracted by the later convolutional layers (such as 326) become more and more complex, for example features with high-level semantics. Features with higher semantics are more suitable for the problem to be solved.
  • Since it is often necessary to reduce the number of training parameters, a pooling layer often needs to be periodically introduced after a convolutional layer. In the layers 321 to 326 illustrated by 320 in Figure 5, one convolutional layer may be followed by one pooling layer, or multiple convolutional layers may be followed by one or more pooling layers.
  • The sole purpose of the pooling layer is to reduce the spatial size of the data.
  • Neural network layer 330
  • After processing by the convolutional layer/pooling layer 320, the convolutional neural network 300 is still not able to output the required output information, because, as mentioned above, the convolutional layer/pooling layer 320 only extracts features and reduces the parameters brought by the input data. To generate the final output information (the required class information or other related information), the convolutional neural network 300 uses the neural network layer 330 to generate one output, or a group of outputs, of the required number of classes. Therefore, the neural network layer 330 may include multiple hidden layers (331, 332 to 33n as shown in FIG. 5) and an output layer 340; the parameters contained in the hidden layers may be obtained by pre-training on relevant training data of a specific task type. For example, the task type may include speech or semantic recognition, classification, or generation.
  • the output layer 340 has a loss function similar to the classification cross entropy, which is specifically used to calculate the prediction error.
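A hedged PyTorch sketch of a CNN with this overall shape (input layer, alternating convolution/pooling layers, hidden layers, output layer), written for token sequences; all sizes are illustrative assumptions rather than the patent's configuration.

```python
import torch
import torch.nn as nn

class TextCNN(nn.Module):
    def __init__(self, vocab=10000, emb=128, classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb)     # input layer 310
        self.conv = nn.Sequential(                # conv/pool layers 321-326
            nn.Conv1d(emb, 64, 3, padding=1), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(64, 64, 3, padding=1), nn.ReLU(), nn.MaxPool1d(2),
        )
        self.head = nn.Sequential(                # neural network layer 330
            nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(64, classes),
        )

    def forward(self, token_ids):                  # (batch, seq_len)
        x = self.embed(token_ids).transpose(1, 2)  # (batch, emb, seq_len)
        return self.head(self.conv(x))

logits = TextCNN()(torch.randint(0, 10000, (4, 32)))  # shape (4, 2)
```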
  • the convolutional neural network 300 shown in FIG. 5 is only used as an example of a convolutional neural network. In specific applications, the convolutional neural network may also exist in the form of other network models.
  • As shown in FIG. 6, a convolutional neural network (CNN) 300 may include an input layer 310, a convolutional layer/pooling layer 320 (the pooling layer is optional), and a neural network layer 330.
  • Compared with FIG. 5, the multiple convolutional layers/pooling layers in the convolutional layer/pooling layer 320 of FIG. 6 are parallel, and the separately extracted features are all input to the neural network layer 330 for processing.
  • FIG. 7 is a schematic diagram of the hardware structure of a chip provided by an embodiment of the application.
  • the chip includes a neural network processor (neural processing unit, NPU) 40.
  • The chip can be set in the execution device 210 shown in FIG. 4 to complete the calculation work of the calculation module 211.
  • The chip can also be set in the training device 220 shown in FIG. 4 to complete the training work of the training device 220 and output the target model/rule 201.
  • the algorithms of each layer in the convolutional neural network as shown in FIG. 5 and FIG. 6 can all be implemented in the chip as shown in FIG. 7.
  • the NPU 40 can be mounted on a host CPU, and the host CPU distributes tasks.
  • the core part of the NPU 40 is the arithmetic circuit 403.
  • the controller 404 in the NPU 40 can control the arithmetic circuit 403 to extract the data in the memory (weight memory or input memory) and perform calculations.
  • the arithmetic circuit 403 includes multiple processing units (process engines, PE). In some implementations, the arithmetic circuit 403 is a two-dimensional systolic array. The arithmetic circuit 403 may also be a one-dimensional systolic array or other electronic circuit capable of performing mathematical operations such as multiplication and addition. In some implementations, the arithmetic circuit 403 is a general-purpose matrix processor.
  • Suppose there are an input matrix A and a weight matrix B. The arithmetic circuit fetches the data corresponding to matrix B from the weight memory 402 and caches it on each PE in the arithmetic circuit.
  • The arithmetic circuit then fetches the data of matrix A from the input memory 401, performs the matrix operation with matrix B, and stores the partial or final result of the matrix in the accumulator 408.
  • the vector calculation unit 407 can perform further processing on the output of the arithmetic circuit, such as vector multiplication, vector addition, exponential operation, logarithmic operation, size comparison, and so on.
  • In some implementations, the vector calculation unit 407 can be used for network calculation in the non-convolutional/fully connected (FC) layers of the neural network, such as pooling, batch normalization, and local response normalization.
  • the vector calculation unit 407 can store the processed output vector to the unified buffer 406.
  • the vector calculation unit 407 may apply a nonlinear function to the output of the arithmetic circuit 403, such as a vector of accumulated values, to generate the activation value.
  • the vector calculation unit 407 generates a normalized value, a combined value, or both.
  • the processed output vector can be used as an activation input to the arithmetic circuit 403, for example for use in subsequent layers in a neural network.
  • the unified memory 406 is used to store input data and output data.
  • The direct memory access controller (DMAC) 405 transfers the input data in the external memory to the input memory 401 and/or the unified memory 406, stores the weight data in the external memory into the weight memory 402, and stores the data in the unified memory 406 into the external memory.
  • the bus interface unit (BIU) 410 is used to implement interaction between the main CPU, the DMAC, and the fetch memory 409 through the bus.
  • An instruction fetch buffer 409 connected to the controller 404 is used to store instructions used by the controller 404;
  • the controller 404 is used to call the instructions cached in the memory 409 to control the working process of the computing accelerator.
  • the unified memory 406, the input memory 401, the weight memory 402, and the fetch memory 409 may all be on-chip memories.
  • The external memory of the NPU may be a memory external to the NPU, and it may be a double data rate synchronous dynamic random access memory (DDR SDRAM), a high bandwidth memory (HBM), or another readable and writable memory.
  • It should be understood that the chip hardware structure shown in FIG. 7 is only an exemplary illustration, and the application is not limited thereto.
  • FIG. 8 is a schematic diagram of a system structure in a translation scenario provided by an embodiment of the application. As shown in FIG. 8, the text processing method in the embodiment of the present application may be executed by a natural language understanding (NLU) cloud-side module.
  • the system includes a vision module, an OCR engine module, an OCR recognition module, an NLU module, an NLU cloud-side module, a translation module, and a translation cloud module.
  • the vision module is used to collect pictures.
  • the vision module can collect pictures by taking photos.
  • the OCR engine module is used for scheduling OCR tasks.
  • the OCR recognition module is used to implement character recognition based on an OCR algorithm.
  • the NLU module is used for scheduling NLU-related tasks.
  • the NLU cloud-side module is used to correct wrong words/grammar in the received text.
  • the translation module is used for scheduling translation tasks among multiple languages.
  • the translation cloud module is used to translate the received text.
  • the vision module transmits the collected pictures to the OCR engine module.
  • the OCR engine module transmits the picture to the OCR recognition module through scheduling.
  • the OCR recognition module recognizes the text in the picture, that is, the original text, and returns the original text to the OCR engine module.
  • the OCR engine module transmits the original text to the NLU module.
  • the NLU module transmits the original text to the NLU cloud-side module through scheduling.
  • the NLU cloud-side module corrects the wrong words/grammar in the original text to obtain the corrected original text.
  • the OCR engine module transmits the corrected original text to the translation module.
  • the translation module transmits the corrected original text to the translation cloud module through scheduling.
  • the translation cloud module performs translation, obtains the translation, and sends it back to the translation module.
  • the translation module returns the translation to the OCR engine module.
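  • As an illustration of this dispatch flow, the following is a minimal sketch of the FIG. 8 pipeline, assuming hypothetical module interfaces (recognize, correct, translate) that stand in for the actual vision/OCR/NLU/translation module APIs:

```python
# A minimal sketch of the FIG. 8 pipeline. The three method names are
# hypothetical stand-ins; the real modules communicate through the
# scheduling layers described above.

def process_picture(picture, ocr_engine, nlu_cloud, translation_cloud):
    # the OCR recognition module extracts the original text from the picture
    original_text = ocr_engine.recognize(picture)
    # the NLU cloud-side module corrects wrong words/grammar in the original text
    corrected_text = nlu_cloud.correct(original_text)
    # the translation cloud module translates the corrected text
    translation = translation_cloud.translate(corrected_text)
    return translation
```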
  • the text processing method is used for text error correction, that is, the text processing model can be a text error correction model. The text to be processed is input into the text processing model for error correction processing, and the correction result of the text to be processed is then obtained.
  • FIG. 8 is only an example of the text processing method in the embodiment of the present application.
  • the text processing model may be a text translation model; the text to be processed is input into the text translation model for error correction processing, and the error correction result is then translated to obtain the translated text of the text to be processed.
  • the text processing model in FIG. 8 is deployed on a cloud server. It should be understood that the text processing model can also be deployed on a smart terminal device.
  • the smart terminal may be an electronic device with a camera.
  • the smart terminal may be a mobile phone with an image processing function, a tablet personal computer (TPC), a media player, a smart TV, a laptop computer (LC), a personal digital assistant (PDA), a personal computer (PC), or a vehicle-mounted terminal in an autonomous vehicle, etc., which is not limited in the embodiment of the present application.
  • FIG. 9 is a schematic flowchart of a text processing method provided by an embodiment of the present application.
  • the text processing method shown in FIG. 9 can be executed by a text processing device, which can be the data processing device in FIG. 1, the user equipment in FIG. 2, the execution device 110 or the local device in FIG. , or the execution device 210 in FIG. 4.
  • the method shown in FIG. 9 includes steps 510 to 530, which are respectively described in detail below.
  • the text to be processed may be OCR output text, or may be text input by the user.
  • the OCR output text may include the OCR output text corresponding to the portable document format (pdf).
  • the OCR output text may also include the OCR output text corresponding to a presentation (PowerPoint, PPT).
  • the OCR output text may also include the OCR output text corresponding to the photographed picture.
  • the text input by the user may include content published in a social network, or may be content entered in a search box of a search engine, and so on.
  • the text to be processed may be any text that requires error correction, and the embodiment of the present application does not limit the specific form of the text to be processed.
  • Errors in the text to be processed may include non-word errors and true word errors (real-word errors).
  • a non-word error means that the word in the text to be processed is not in the second preset vocabulary.
  • A true word error means that the word in the text to be processed exists in the second preset vocabulary but causes problems with the context and semantics, and is not the word required by the current context. That is to say, the wrong words in the text to be processed can include non-words and false true words.
  • non-word error detection may be performed on the text to be processed based on the second preset vocabulary to obtain the non-word in the text to be processed.
  • the second preset vocabulary can be used to distinguish true words from non-words.
  • True words refer to words that exist in the second preset thesaurus, and correspondingly, non-words refer to words that do not exist in the second preset thesaurus.
  • the vocabulary that can be used to detect non-word errors can be understood as the second preset vocabulary.
  • the second preset vocabulary may be an English vocabulary.
  • Non-words are words that do not exist in the English thesaurus, for example, wasld.
  • the embodiment of the present application does not limit the type of the second preset word database.
  • the "thesaurus” may also be referred to as a "dictionary” or a “vocabulary”.
  • if the non-word in the text to be processed belongs to the first category of non-words, the non-word itself is used as the correction result of the non-word in the text to be processed, that is, the non-word is not processed.
  • the first category of non-words includes non-words with all capital letters, non-words with a character length within the preset character length range, and non-words belonging to the first preset thesaurus.
  • if the non-word in the text to be processed belongs to the second, third, or fourth category of non-words, a correction method matching the category of the non-word is selected to correct the non-word, so as to obtain the correction result of the non-word in the text to be processed.
  • the second category of non-words includes merging-error non-words.
  • the third category of non-words includes non-words that contain non-letter characters.
  • the fourth category of non-words includes other non-words except the first category of non-words, the second category of non-words, and the third category of non-words.
  • the fourth type of non-words may also be referred to as normal type non-words.
  • the first category of non-words includes non-words with a character length within the preset character length range.
  • non-words whose character length is within the preset character length range may include non-words whose length is greater than a first preset length and/or less than a second preset length.
  • non-words whose character length is within the preset character length range can include non-words that are too long or too short.
  • a non-word that is too long may be a URL.
  • Non-words that are too short may include only one or two characters, etc.
  • Too-long non-words are usually proper nouns, and generally do not need to be processed. By detecting this type of non-word, processing can be skipped and correction errors avoided.
  • Too-short non-words carry too little effective information, so a correction would have low credibility. Detecting and skipping this type of non-word can improve the speed of text processing.
  • the first category of non-words includes non-words belonging to the first preset thesaurus.
  • the first preset vocabulary may include a preset low-frequency vocabulary.
  • the preset low-frequency vocabulary can be set according to application needs.
  • the preset low-frequency vocabulary may include names of people, places, etc.
  • the first preset thesaurus may also include other language thesaurus.
  • other language thesauruses may include Russian, French, German, Italian, Portuguese and other language thesaurus and/or pinyin thesaurus.
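  • A minimal sketch of the first-category check is given below; the length bounds and the contents of the two preset lexicons are illustrative assumptions, since the embodiment leaves the concrete values open:

```python
def is_first_category(word, low_freq_words, other_lang_words,
                      min_len=3, max_len=20):
    """Return True when the non-word should be left unchanged."""
    if word.isupper():                       # all-capital non-word
        return True
    if len(word) < min_len or len(word) > max_len:
        return True                          # too short or too long (e.g. a URL)
    if word.lower() in low_freq_words:       # preset low-frequency words
        return True
    if word.lower() in other_lang_words:     # other-language or pinyin words
        return True
    return False
```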
  • the second category of non-words includes merging wrong non-words.
  • a merged wrong non-word is a non-word that includes at least two true words.
  • if the non-word in the text to be processed belongs to the second category of non-words, the non-word is corrected to obtain at least two true words as its correction result. That is, the non-word is replaced with the at least two true words obtained.
  • a space can be added at an appropriate position in the non-word to modify the non-word to at least two true words.
  • for example, the non-word in the text to be processed is inChina, which belongs to the second category of non-words; a space is added within the non-word to modify it to in China.
  • the modified at least two true words can be input into the language model, and if they reduce the perplexity, they can be used as the correction result of the non-word in the text to be processed, as in the sketch below.
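  • A minimal sketch of this splitting step, assuming a perplexity callable that scores a whole sentence with the language model (both the callable and the replacement strategy are assumptions):

```python
def split_merged_non_word(non_word, vocabulary, perplexity, sentence):
    """Try every internal split point; keep splits where both parts are
    true words, and accept the split that lowers the perplexity most."""
    best, best_ppl = None, perplexity(sentence)
    for i in range(1, len(non_word)):
        left, right = non_word[:i], non_word[i:]
        if left.lower() in vocabulary and right.lower() in vocabulary:
            modified = sentence.replace(non_word, f"{left} {right}")
            ppl = perplexity(modified)
            if ppl < best_ppl:               # only keep perplexity-reducing splits
                best, best_ppl = f"{left} {right}", ppl
    return best                               # None -> leave the non-word as-is

# e.g. "inChina" -> "in China" when both parts are true words and the
# modified sentence has a lower perplexity.
```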
  • the third category of non-words includes non-words that contain non-letter characters.
  • the third category of non-words includes non-words containing numbers, for example, a5ses.
  • the non-letter characters in the non-word in the text to be processed are modified to letters, and the modified word is used as the correction result of the non-word in the text to be processed. That is, replace the non-word with the modified word.
  • characters other than letters in the non-words in the text to be processed can be changed to letters through a preset character misjudgment dictionary, and the modified words are used as the correction result of the non-words in the text to be processed.
  • alternatively, if the preset character misjudgment thesaurus provides no letter for a character, the non-word in the text to be processed can be kept as its own correction result, that is, the non-word is not processed.
  • the character misjudgment lexicon may be determined according to the probability of the OCR recognition error.
  • the probability of OCR recognition error refers to the probability of misrecognizing a letter as a number.
  • the character misjudgment vocabulary database may be determined based on the recognition result of the OCR recognition module in FIG. 8 through historical experience.
  • the number 0 is similar to the letter O, and the probability of OCR recognition error is greater.
  • the number 5 and the letter s are similar, and the probability of OCR recognition error is greater.
  • 5 can be replaced with s; for example, a5sess can be replaced with assess.
  • the number 4 does not have a corresponding letter in the character misjudgment dictionary, so the non-word iphone4 may not be processed.
  • the modified word can be input into the language model, and if the modified word can reduce the degree of confusion, the modified word is used as the correction result of the non-word in the text to be processed.
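  • The following is a minimal sketch of this repair, with a toy character misjudgment thesaurus; the concrete digit-to-letter pairs are assumptions, whereas a real thesaurus would be derived from the OCR engine's error statistics as described above:

```python
# toy character misjudgment thesaurus (assumed OCR confusion pairs)
MISJUDGED = {"0": "o", "1": "l", "5": "s", "8": "b"}

def fix_special_non_word(word):
    """Replace misjudged digits with letters; return None when a digit
    (e.g. the 4 in iphone4) has no mapping, i.e. leave it unprocessed."""
    out = []
    for ch in word:
        if ch.isdigit():
            if ch not in MISJUDGED:
                return None                  # no replacement -> do not process
            out.append(MISJUDGED[ch])
        else:
            out.append(ch)
    return "".join(out)

print(fix_special_non_word("a5sess"))    # assess
print(fix_special_non_word("iphone4"))   # None
```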
  • the correction of the non-word in the text to be processed includes steps A1 to A3.
  • A1 generate candidate words corresponding to non-words in the text to be processed.
  • the similarity between the non-word in the text to be processed and the candidate word corresponding to the non-word in the text to be processed satisfies the first preset condition.
  • the similarity between the non-word in the text to be processed and its candidate word may include the edit distance between them and/or their common character string. That is, the candidate words corresponding to the non-words in the text to be processed can be determined based on the edit distance and/or the common character string.
  • Edit distance refers to the number of editing operations required to convert one word to another. Editing operations include operations such as insertion, deletion, translocation, and replacement of characters in a word.
  • the common character string refers to the number of consecutive identical characters contained in two words.
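  • Both quantities can be computed with standard dynamic programming; a minimal sketch follows (the embodiment also mentions transposition, which this basic Levenshtein variant omits):

```python
def edit_distance(a, b):
    """Minimum number of insertions, deletions and substitutions."""
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1,          # deletion
                                     dp[j - 1] + 1,      # insertion
                                     prev + (ca != cb))  # substitution
    return dp[-1]

def longest_common_substring(a, b):
    """Length of the longest run of consecutive identical characters."""
    best, dp = 0, [0] * (len(b) + 1)
    for ca in a:
        prev = 0
        for j, cb in enumerate(b, 1):
            cur = dp[j]
            dp[j] = prev + 1 if ca == cb else 0
            best = max(best, dp[j])
            prev = cur
    return best

print(edit_distance("wasld", "world"))             # 2
print(longest_common_substring("wasld", "world"))  # 2 ("ld")
```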
  • a BK tree (Burkhard Keller tree) may be used to generate candidate words corresponding to non-words in the text to be processed.
  • step A1 will first be described by taking, as an example, the case where the similarity between the non-word in the text to be processed and its candidate words includes the edit distance between them.
  • the first preset condition may be that the edit distance is less than the first preset value.
  • the edit distance between the non-word in the text to be processed and the candidate word corresponding to the non-word in the text to be processed is restricted to be less than 3, and the candidate word corresponding to the non-word in the text to be processed is generated. That is, the number of operations in the process of generating candidate words corresponding to non-words in the text to be processed from non-words is less than three times.
  • step A1 will next be described by taking, as an example, the case where the similarity between the non-word in the text to be processed and its candidate words includes both the edit distance and the common character string.
  • the first preset condition may include preset condition one and preset condition two.
  • the preset condition one can be that the edit distance is less than the preset value.
  • preset condition two may be that the length of the largest common character string is greater than a preset length.
  • the candidate words corresponding to the non-words in the text to be processed can be generated through preset condition one or preset condition two separately, that is, the similarity between the non-word and its candidate word meets preset condition one or preset condition two.
  • the candidate words corresponding to the non-words in the text to be processed can also be generated by applying preset condition one and preset condition two simultaneously, that is, the similarity between the non-word and its candidate word meets both preset condition one and preset condition two.
  • restrict the edit distance between the non-word in the text to be processed and the candidate word corresponding to the non-word in the text to be processed to be less than 3, and generate a candidate word A corresponding to the non-word in the text to be processed. That is, the number of operations in the process of generating candidate words corresponding to the non-words in the text to be processed from non-words is less than three times.
  • the maximum common character string length of the candidate words corresponding to the non-words in the text to be processed and the non-words in the text to be processed is restricted to be greater than 3, and a candidate word B corresponding to the non-words in the text to be processed is generated.
  • the candidate word corresponding to the non-word in the text to be processed and the non-word include more than three consecutive identical characters.
  • Candidate words corresponding to non-words in the text to be processed may include candidate word A and candidate word B.
  • the candidate words corresponding to the non-words in the text to be processed may include the same candidate words in the candidate word A and the candidate word B.
  • the similarity can also be other forms of similarity, such as character similarity, etc.
  • the present application does not limit the manner of determining the similarity between the non-word in the text to be processed and its candidate words.
  • for example, if the non-word in the text to be processed is wasld, the candidate words determined based on the minimum edit distance and/or the maximum common character string may include world, word, sword, and so on.
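  • A minimal BK tree sketch for candidate generation is given below; it reuses the edit_distance function sketched above, and the toy word list is an assumption:

```python
class BKTree:
    """BK tree keyed by edit distance; query prunes subtrees using the
    triangle inequality, so only a fraction of the thesaurus is scanned."""
    def __init__(self, words):
        it = iter(words)
        self.root = (next(it), {})
        for w in it:
            self.add(w)

    def add(self, word):
        node, children = self.root
        while True:
            d = edit_distance(word, node)
            if d == 0:
                return                        # already present
            if d not in children:
                children[d] = (word, {})
                return
            node, children = children[d]

    def query(self, word, max_dist):
        out, stack = [], [self.root]
        while stack:
            node, children = stack.pop()
            d = edit_distance(word, node)
            if d <= max_dist:
                out.append((d, node))
            for k in range(d - max_dist, d + max_dist + 1):
                if k in children:             # only branches that can qualify
                    stack.append(children[k])
        return out

tree = BKTree(["world", "word", "sword", "would", "melon"])
print(tree.query("wasld", 2))  # [(2, 'world'), (2, 'would')] (order may vary)
```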
  • A2 Determine the target candidate word corresponding to the non-word in the text to be processed among the candidate words corresponding to the non-word in the text to be processed.
  • step A2 may be to randomly determine the target candidate word corresponding to the non-word in the text to be processed from among the candidate words corresponding to the non-word in the text to be processed.
  • step A2 may include step A21 and step A22.
  • A21 Score the candidate words corresponding to the non-word in the text to be processed according to the similarity between the non-word and its candidate words and the perplexity of those candidate words, where the perplexity of a candidate word is used to indicate the possibility of the candidate word appearing in the text to be processed.
  • the similarity between a non-word in the text to be processed and a candidate word corresponding to the non-word in the text to be processed may include: a non-word in the text to be processed and a candidate word corresponding to the non-word in the text to be processed Edit distance between.
  • the similarity between the non-words in the text to be processed and the candidate words corresponding to the non-words in the text to be processed can be scored based on the edit distance.
  • the perplexity of the candidate words corresponding to the non-words in the text to be processed can be scored by the language model.
  • the language model may be a statistical language model, for example, an n-gram model.
  • the statistical language model has an advantage in extracting the semantic information of short and medium-length texts, and is suitable for scenarios that rely less on long-distance semantic information, such as text error correction in OCR scenarios.
  • the language model may also be a neural network model, for example, a recurrent neural network (recurrent neural network, RNN) model.
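  • As an illustration of such a statistical language model, the following is a toy add-one-smoothed bigram model with a perplexity method; the tiny corpus and the smoothing choice are assumptions made purely for the sketch:

```python
import math
from collections import Counter

class BigramLM:
    """Toy add-one-smoothed bigram model standing in for the n-gram model."""
    def __init__(self, corpus_sentences):
        self.uni, self.bi = Counter(), Counter()
        for s in corpus_sentences:
            toks = ["<s>"] + s.lower().split()
            self.uni.update(toks)
            self.bi.update(zip(toks, toks[1:]))
        self.v = len(self.uni)

    def perplexity(self, sentence):
        toks = ["<s>"] + sentence.lower().split()
        logp = 0.0
        for a, b in zip(toks, toks[1:]):
            p = (self.bi[(a, b)] + 1) / (self.uni[a] + self.v)
            logp += math.log(p)
        return math.exp(-logp / max(len(toks) - 1, 1))

lm = BigramLM(["hello world", "the world is big"])
# text matching the corpus scores a lower perplexity than corrupted text,
# which is the property the error-correction steps rely on
assert lm.perplexity("hello world") < lm.perplexity("world hello")
```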
  • the score corresponding to each candidate word can be obtained by weighting the scores corresponding to the above items, that is, the weights are set for the scores corresponding to each item.
  • the weight can be preset or obtained through training.
  • because scoring is based on both the similarity and the perplexity, the similarity between the non-word and the candidate word and the semantic information of the text to be processed are considered at the same time, so more accurate scoring results can be obtained.
  • A22 Determine the candidate word with the highest score among the candidate words corresponding to the non-word in the text to be processed as the target candidate word corresponding to the non-word in the text to be processed.
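  • A minimal sketch of steps A21 and A22 follows, reusing the edit_distance and BigramLM sketches above; the weights and the score normalizations are illustrative assumptions (the embodiment states only that weights may be preset or obtained through training):

```python
def score_candidates(non_word, candidates, sentence, lm,
                     w_sim=0.5, w_ppl=0.5):
    """Weighted score combining edit-distance similarity with the
    perplexity of the sentence after substituting each candidate."""
    scored = []
    for cand in candidates:
        sim = 1.0 / (1.0 + edit_distance(non_word, cand))
        ppl = lm.perplexity(sentence.replace(non_word, cand))
        fluency = 1.0 / (1.0 + ppl)          # lower perplexity -> higher score
        scored.append((w_sim * sim + w_ppl * fluency, cand))
    return max(scored)                        # step A22: highest score wins

score, target = score_candidates("wasld", ["world", "word", "sword"],
                                 "the wasld is big", lm)
```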
  • the method for determining "similarity" in step A1 and the method for determining "similarity" in step A2 may be the same or different.
  • step A2 can be omitted, that is, the candidate word is directly used as the target candidate word corresponding to the non-word in the text to be processed.
  • A3 Correct the non-word in the text to be processed according to the target candidate word corresponding to the non-word in the text to be processed, and obtain the correction result of the non-word in the text to be processed.
  • correcting the non-word in the text to be processed may include replacing the non-word with its target candidate word, or not processing the non-word, that is, not replacing it.
  • step A3 may be step A31.
  • A31 Directly use the target candidate word corresponding to the non-word in the text to be processed to replace the non-word in the text to be processed as a correction result of the non-word in the text to be processed.
  • step A3 may be step A32.
  • A32 Detect, through the language model, the perplexity of the text containing the target candidate word corresponding to the non-word; when the perplexity is lower than or equal to the first perplexity threshold, replace the non-word in the text to be processed with the target candidate word, as the correction result of the non-word in the text to be processed.
  • otherwise, the non-word in the text to be processed is not corrected; that is, the target candidate word is not used to replace the non-word. This can reduce time consumption and quickly implement text error correction.
  • alternatively, the candidate word with the second highest score in step A21 can be used as the target candidate word corresponding to the non-word, and step A32 is repeated until a target candidate word whose perplexity meets the first perplexity threshold is found and used to replace the non-word in the text to be processed, as sketched below.
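  • A minimal sketch of step A32 with this fallback behaviour, assuming the candidates are already sorted by score and assuming an illustrative threshold value:

```python
def correct_non_word(non_word, ranked_candidates, sentence, lm,
                     ppl_threshold=50.0):
    """Accept the first candidate whose substituted sentence has a
    perplexity at or below the first perplexity threshold; otherwise
    leave the non-word uncorrected."""
    for cand in ranked_candidates:            # highest score first
        corrected = sentence.replace(non_word, cand)
        if lm.perplexity(corrected) <= ppl_threshold:
            return corrected
    return sentence                           # no acceptable candidate
```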
  • the method 500 may further include step 540.
  • step 540 includes step 541 to step 543.
  • 541 Perform true word error detection on the true word in the text to be processed to obtain the false true word in the text to be processed.
  • the true word error detection can be performed on the true word in the text to be processed based on the language model to obtain the false true word in the text to be processed. For example, when the perplexity of the text corresponding to a word is higher than a set threshold, the word is judged to be a false true word.
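  • A minimal sketch of this detection step, assuming a sliding context window and an illustrative threshold (the embodiment does not fix either value):

```python
def detect_false_true_words(words, lm, ppl_threshold=50.0):
    """Flag a true word as a false true word when the perplexity of its
    surrounding window exceeds the set threshold."""
    flagged = []
    for i, w in enumerate(words):
        window = " ".join(words[max(0, i - 2): i + 3])
        if lm.perplexity(window) > ppl_threshold:
            flagged.append(w)
    return flagged
```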
  • the similarity between the false true word and the candidate word corresponding to the false true word satisfies the second preset condition.
  • the similarity between the false true word and the candidate word corresponding to the false true word may include the edit distance and/or the common character string between the false true word and the candidate word corresponding to the false true word. That is, the candidate word corresponding to the wrong true word can be determined based on the edit distance and/or the common character string.
  • the candidate words corresponding to the false true words can be generated through the BK tree.
  • Step 542 will be described by taking the similarity between the false true word and the candidate words corresponding to the false true word including the edit distance between the false true word and the candidate word corresponding to the false true word as an example.
  • the second preset condition may be that the edit distance is smaller than the second preset value.
  • the second preset value may be 2.
  • for example, the edit distance between the false true word and its candidate word is restricted to be less than 3, and the candidate words corresponding to the false true word are generated. That is, the number of operations in the process of generating a candidate word from the false true word is less than three.
  • Step 542 is described by taking the similarity between the false true word and the candidate words corresponding to the false true word including the edit distance between the false true word and the candidate word corresponding to the false true word and the common character string as an example.
  • the second preset condition may include preset condition three and preset condition four.
  • preset condition three may be that the edit distance is less than a preset value.
  • preset condition four may be that the length of the largest common character string is greater than a preset length.
  • the candidate words corresponding to the false true word can be generated through preset condition three or preset condition four separately, that is, the similarity between the false true word and its candidate word satisfies preset condition three or preset condition four.
  • the candidate words corresponding to the false true word can also be generated by applying preset condition three and preset condition four simultaneously, that is, the similarity between the false true word and its candidate word satisfies both preset condition three and preset condition four.
  • the edit distance between the false true word and its candidate word is limited to less than 3, and a candidate word C corresponding to the false true word is generated. That is, the number of operations in the process of generating the candidate word from the false true word is less than three.
  • Limit the length of the maximum common character string of the candidate word corresponding to the false true word and the false true word to be greater than 3, and generate the candidate word D corresponding to the false true word. That is, the candidate word corresponding to the false true word and the false true word include more than three consecutive identical characters.
  • the candidate words corresponding to the false true word may include candidate word C and candidate word D.
  • the candidate word corresponding to the false true word may include the same candidate word in the candidate word C and the candidate word D.
  • the first preset condition and the second preset condition may be the same or different.
  • the first preset value and the second preset value may be the same or different.
  • the similarity can also be other forms of similarity, such as character similarity, etc.
  • the present application does not limit the manner of determining the similarity between the false true word and the candidate word corresponding to the false true word.
  • for example, if word is a false true word in the text to be processed, its candidate words may include world, words, sword, etc.
  • step 543 may be to randomly determine the target candidate word corresponding to the false true word among the candidate words corresponding to the false true word.
  • step 543 may include step 543a and step 543b.
  • 543a Score the candidate words corresponding to the false true words according to the similarity between the false true words and the candidate words corresponding to the false true words and the perplexity of the candidate words corresponding to the false true words. Among them, the perplexity of the candidate word corresponding to the false true word is used to indicate the possibility of the candidate word corresponding to the false true word appearing in the text to be processed.
  • the similarity between the false true word and the candidate word corresponding to the false true word may include: the edit distance between the false true word and the candidate word corresponding to the false true word.
  • the similarity between the false true word and the candidate word corresponding to the false true word can be scored based on the edit distance.
  • the perplexity of the candidate words corresponding to the wrong true word can be scored by the language model.
  • the score corresponding to each candidate word can be obtained by weighting the scores corresponding to the above items, that is, the weights are set for the scores corresponding to each item.
  • the weight can be preset or obtained through training.
  • because the score is based on both the similarity and the perplexity, the similarity between the false true word and the candidate word and the semantic information of the text to be processed are considered at the same time, so more accurate scoring results can be obtained.
  • 543b Determine the candidate word with the highest score among the candidate words corresponding to the false true word as the target candidate word corresponding to the false true word.
  • the method for determining "similarity" in step 542 and the method for determining "similarity" in step 543a may be the same or different.
  • correcting the false true word may include replacing the false true word with its target candidate word, or not processing the false true word, that is, not replacing it.
  • step 544 may be step 544a.
  • 544a Directly use the target candidate word corresponding to the false true word to replace the false true word as the correction result of the false true word.
  • step 544 may be step 544b.
  • 544b Detect, through the language model, the perplexity of the text containing the target candidate word corresponding to the false true word, and replace the false true word with the target candidate word when the perplexity is lower than or equal to the second perplexity threshold, as the correction result of the false true word.
  • the first perplexity threshold and the second perplexity threshold may be the same or different.
  • otherwise, the false true word is not corrected; that is, the target candidate word corresponding to the false true word is not used to replace it. This can reduce time consumption and quickly implement text error correction.
  • alternatively, the candidate word with the second highest score in step 543a can be used as the target candidate word corresponding to the false true word, and step 544b is repeated until a target candidate word whose perplexity satisfies the second perplexity threshold is found and used to replace the false true word.
  • multiple types of characters in the text to be processed can be detected and processed separately, which reduces the interference of multiple character types on the error correction process, improves the accuracy of text error correction, and improves the robustness of the error correction method to the input text.
  • the candidate words are scored using both the similarity between the candidate word and the wrong word and the perplexity of the candidate word; because the similarity and the semantic information of the text to be processed are considered at the same time, candidate words more in line with the original intent of the input text can be obtained, which improves the accuracy of text error correction.
  • FIG. 10 is a schematic flowchart of a text processing method 600 provided by an embodiment of the present application.
  • the method 600 is an example of a method for processing normal non-words and false true words through the method 500.
  • the method 600 includes steps 610 to 6120. Steps 610 to 6120 will be described in detail below.
  • Step 620 Perform non-word error detection on the text to be processed. Step 620 corresponds to step 520 in method 500.
  • non-word error detection may be performed on the text to be processed based on the English thesaurus.
  • the English thesaurus is an example of the second preset thesaurus in the method 500.
  • Step 620 is used to obtain non-words and true words in the text to be processed.
  • Non-words are words that do not exist in the English thesaurus.
  • True words are words that exist in the English thesaurus.
  • Step 630 is performed for non-words in the text to be processed.
  • Step 680 is performed for the true words in the text to be processed.
  • This normal type of non-word is an example of the fourth type of non-word in the method 500.
  • Non-word 1# can include one non-word or multiple non-words.
  • the similarity between the candidate words corresponding to the non-word 1# and the non-word 1# satisfies the first preset condition.
  • the candidate word corresponding to the non-word 1# can be generated through the BK tree.
  • the detailed process is as described in step A1 in method 500, and will not be repeated here.
  • the candidate words corresponding to non-word 1# can be scored through language model and edit distance.
  • the detailed process is as described in step A21 in method 500, and will not be repeated here.
  • the candidate word with the highest score among the candidate words corresponding to non-word 1# may be determined as the target candidate word corresponding to non-word 1#.
  • step 650 can be omitted, that is, the candidate word corresponding to non-word 1# is directly used as the target candidate word corresponding to non-word 1#.
  • the perplexity of the text containing the target candidate word corresponding to non-word 1# can be detected through the language model, and when the perplexity is lower than or equal to the first perplexity threshold, the target candidate corresponding to non-word 1# can be used The word replacement non-word 1# is used as the correction result of the non-word 1#.
  • the detailed process is as described in step A32 in method 500.
  • Step 630 to step 670 correspond to step 530 in method 500.
  • the true word error detection is performed on the true word in the text to be processed based on the language model to obtain the false true word in the text to be processed. For example, when the perplexity of the text corresponding to a word is higher than a set threshold, the word is judged to be a false true word.
  • Step 680 corresponds to step 541 in method 500.
  • Step 690 is executed for the wrong true word in the text to be processed.
  • the false true word in the text to be processed is called false true word 1#.
  • the false true word 1# can include one false true word or multiple false true words.
  • the candidate words corresponding to false true word 1# can be generated through the BK tree.
  • the detailed process is as described in step 542 in method 500, which will not be repeated here.
  • the candidate words corresponding to the wrong true word 1# can be scored through language model and edit distance.
  • the detailed process is as described in step 543a in the method 500, which will not be repeated here.
  • the candidate word with the highest score among the candidate words corresponding to the wrong true word 1# is determined as the target candidate word corresponding to the wrong true word 1#.
  • step 6110 can be omitted, that is, the candidate word corresponding to the wrong true word 1# is directly used as the target candidate word corresponding to the wrong true word 1#.
  • the perplexity of the text containing the target candidate word corresponding to false true word 1# can be detected through the language model, and in the case that the perplexity is lower than or equal to the second perplexity threshold, the target candidate word corresponding to false true word 1# replaces false true word 1# as the correction result of false true word 1#.
  • the detailed process is as described in step 544 in method 500.
  • normal types of non-words are obtained through the judgment of non-word categories, so as to avoid the influence of other types of non-words on text error correction.
  • the candidate words of the wrong words are scored through language models and other methods, making full use of the semantic information of the input text, which can be more in line with the original intent of the input text and further improves the accuracy of text error correction.
  • FIG. 11 is a schematic flowchart of a text processing method 700 provided by an embodiment of the present application.
  • the method 700 is an example of the method 500.
  • the method 700 includes steps 710 to 740. Steps 710 to 740 will be described in detail below.
  • method 700 may further include step 711.
  • the length of the text to be processed refers to the number of words in the text to be processed.
  • the preset length can be 2.
  • Step 720 Perform non-word error detection on the text to be processed based on the English thesaurus.
  • the English vocabulary is an example of the second preset vocabulary in the method 500.
  • Step 720 corresponds to step 520 in method 500.
  • Step 720 is used to obtain non-words and true words in the text to be processed.
  • Non-words are words that do not exist in the English thesaurus.
  • True words are words that exist in the English thesaurus.
  • Step 730 is performed for non-words in the text to be processed.
  • Step 740 is performed for the true words in the text to be processed.
  • non-words include all-letter non-words and special non-words. All-letter non-words refer to non-words consisting only of the 52 uppercase and lowercase English letters. The all-letter non-words may include the first, second, and fourth categories of non-words in method 500. Special non-words refer to non-words that contain non-letter characters. A special non-word may be an example of the third category of non-words in method 500.
  • step 730 includes step 731 to step 732.
  • if the non-word in the text to be processed belongs to special non-words, the non-word can be processed in a targeted manner.
  • the non-letter characters in the non-words in the text to be processed are modified to letters, and the modified words are used as the correction result of the non-words in the text to be processed. That is, replace the non-word with the modified word.
  • characters other than letters in the non-words in the text to be processed can be changed to letters through a preset character misjudgment dictionary, and the modified words are used as the correction result of the non-words in the text to be processed.
  • the modified word can be input into the language model, and if the modified word can reduce the degree of confusion, the modified word is used as the correction result of the non-word in the text to be processed.
  • otherwise, step 732 is executed.
  • the non-words in the text to be processed can be checked for uppercase and lowercase letters. If a non-word consists of all uppercase letters, that is, it belongs to the first category of non-words, then the non-word itself is used as its correction result, that is, it is not processed.
  • the word length of the non-word in the text to be processed can be determined. If the non-word has a word length within the preset word length range, that is, it belongs to the first category of non-words, then the non-word itself is used as its correction result, that is, it is not processed.
  • it can be determined whether the non-word in the text to be processed belongs to the pinyin thesaurus, that is, whether it belongs to the first preset thesaurus. Non-words belonging to the pinyin thesaurus can be called pinyin non-words. If the non-word belongs to pinyin non-words, that is, the first category of non-words, then the non-word itself is used as its correction result, that is, it is not processed.
  • it can be determined whether the non-word in the text to be processed belongs to the preset low-frequency thesaurus, that is, whether it belongs to the first preset thesaurus.
  • Non-words belonging to the preset low-frequency thesaurus can be called low-frequency non-words. If the non-word belongs to low-frequency non-words, that is, the first category of non-words, then the non-word itself is used as its correction result, that is, it is not processed.
  • it can be determined whether the non-word in the text to be processed belongs to another language thesaurus, that is, whether it belongs to the first preset thesaurus.
  • Non-words belonging to other language thesauruses can be called other-language non-words. If the non-word belongs to other-language non-words, that is, the first category of non-words, then the non-word itself is used as its correction result, that is, it is not processed.
  • it can be determined whether the non-word in the text to be processed is a merging-error non-word, that is, a second-category non-word. If so, the non-word can be processed in a targeted manner. For example, a space is added at an appropriate position in the non-word to modify it into at least two true words. Further, the modified at least two true words can be input into the language model, and if they reduce the perplexity, they can be used as the correction result of the non-word in the text to be processed.
  • if the non-word in the text to be processed does not belong to the above first, second, or third categories of non-words, then it belongs to the normal type of non-word, that is, the fourth category of non-words in method 500, and can be corrected according to steps A1 to A3 in method 500.
  • step 740 Perform true word error detection on the true word in the text to be processed based on the language model to obtain the false true word in the text to be processed.
  • the candidate words corresponding to the false true words are generated, and the false true words are corrected according to the candidate words corresponding to the false true words.
  • the detailed process of correcting the false true word can follow step 540 in method 500.
  • multiple types of characters in the text to be processed can be detected and processed separately, which reduces the interference of multiple character types on the error correction process, improves the accuracy of text error correction, and improves the robustness of the error correction method to the input text.
  • Fig. 12 is a schematic block diagram of a text processing apparatus provided by an embodiment of the present application. It should be understood that the text processing apparatus 1000 can execute the text processing method shown in FIG. 9, FIG. 10 or FIG. 11.
  • the text processing device 1000 includes: an acquiring unit 1010 and a processing unit 1020.
  • the obtaining unit 1010 is used to obtain the text to be processed.
  • the processing unit 1020 is configured to: perform error detection processing on the text to be processed to obtain the non-words in the text to be processed; if a non-word belongs to the first category of non-words, use the non-word itself as its correction result; if a non-word belongs to the second, third, or fourth category of non-words, select a correction method matching the category of the non-word to correct it, so as to obtain the correction result of the non-word in the text to be processed.
  • the first category of non-words includes non-words with all capital letters, non-words with a word length within the preset word length range, and non-words belonging to the first preset thesaurus; the second category of non-words includes merging-error non-words; the third category of non-words includes non-words that contain non-letter characters; and the fourth category of non-words includes non-words other than the first, second, and third categories.
  • the processing unit 1020 is configured to: if the non-word in the text to be processed belongs to the fourth category of non-words, generate candidate words corresponding to the non-word; determine, among the candidate words, the target candidate word corresponding to the non-word; and correct the non-word according to the target candidate word, to obtain the correction result of the non-word in the text to be processed.
  • the processing unit 1020 is configured to: score the candidate words corresponding to the non-word in the text to be processed according to the similarity between the non-word and its candidate words and the perplexity of those candidate words, where the perplexity of a candidate word is used to indicate the possibility of the candidate word appearing in the text to be processed.
  • the candidate word with the highest score among the candidate words corresponding to the non-word in the text to be processed is determined as the target candidate word corresponding to the non-word in the text to be processed.
  • the similarity between the non-word in the text to be processed and the candidate word corresponding to the non-word in the text to be processed satisfies the first preset condition.
  • the processing unit 1020 is configured to: when the perplexity of the target candidate word corresponding to the non-word in the text to be processed is lower than or equal to the first perplexity threshold, use the target corresponding to the non-word in the text to be processed The candidate word replaces the non-word in the text to be processed as a correction result of the non-word in the text to be processed.
  • the merging wrong non-word is a non-word that includes at least two true words
  • the processing unit 1020 is configured to: if the non-word in the text to be processed belongs to the second category of non-words, correct the non-word in the text to be processed to obtain at least two true words as the correction result of the non-word in the text to be processed.
  • the processing unit 1020 is configured to: if the non-word in the text to be processed belongs to the third category of non-words, modify the non-letter characters in the non-word into letters, and use the modified word as the correction result of the non-word in the text to be processed.
  • the processing unit 1020 is further configured to: perform true word error detection on the text to be processed to obtain false true words in the text to be processed; generate candidate words corresponding to the false true words; determine the error in the candidate words corresponding to the false true words The target candidate word corresponding to the true word; correct the false true word according to the target candidate word corresponding to the false true word.
  • the processing unit 1020 is configured to: score the candidate words corresponding to the false true word according to the similarity between the false true word and its candidate words and the perplexity of those candidate words, where the perplexity of a candidate word is used to indicate the possibility of the candidate word appearing in the text to be processed; and determine the candidate word with the highest score among the candidate words corresponding to the false true word as the target candidate word corresponding to the false true word.
  • the similarity between the false true word and the candidate word corresponding to the false true word satisfies the second preset condition.
  • the processing unit 1020 is configured to: in the case where the perplexity of the target candidate word corresponding to the false true word is lower than or equal to the second perplexity threshold, replace the false true word with the target candidate word corresponding to the false true word as Correction result of wrong true word.
  • a "unit” can be a software program, a hardware circuit, or a combination of the two that realize the above-mentioned functions.
  • the hardware circuit may include an application specific integrated circuit (ASIC), an electronic circuit, a processor for executing one or more software or firmware programs (such as a shared processor, a dedicated processor, or a group processor, etc.) and memory, a merged logic circuit, and/or other suitable components that support the described functions.
  • the units of the examples described in the embodiments of the present application can be implemented by electronic hardware or a combination of computer software and electronic hardware. Whether these functions are executed by hardware or software depends on the specific application and design constraint conditions of the technical solution. Professionals and technicians can use different methods for each specific application to implement the described functions, but such implementation should not be considered beyond the scope of this application.
  • FIG. 13 is a schematic diagram of the hardware structure of a text processing device provided by an embodiment of the present application.
  • the text processing apparatus 1200 shown in FIG. 13 includes a memory 1201, a processor 1202, a communication interface 1203, and a bus 1204.
  • the memory 1201, the processor 1202, and the communication interface 1203 implement communication connections between each other through the bus 1204.
  • the memory 1201 may be a read only memory (ROM), a static storage device, a dynamic storage device, or a random access memory (RAM).
  • the memory 1201 may store a program; when the program stored in the memory 1201 is executed by the processor 1202, the processor 1202 is configured to execute each step of the text processing method of the embodiment of the present application, for example, the steps shown in FIG. 9, FIG. 10, or FIG. 11.
  • the text processing apparatus shown in the embodiment of the present application may be a smart terminal or a chip configured in the smart terminal.
  • the text processing method disclosed in the foregoing embodiments of the present application may be applied to the processor 1202 or implemented by the processor 1202.
  • the processor 1202 may be an integrated circuit chip with signal processing capabilities.
  • the steps of the above-mentioned text processing method can be completed by an integrated logic circuit of hardware in the processor 1202 or instructions in the form of software.
  • the processor 1202 may be a chip including the NPU shown in FIG. 7.
  • the aforementioned processor 1202 may be a central processing unit (CPU), a graphics processing unit (GPU), a general-purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
  • the general-purpose processor may be a microprocessor or the processor may also be any conventional processor or the like.
  • the steps of the method disclosed in the embodiments of the present application may be directly embodied as being executed and completed by a hardware decoding processor, or executed and completed by a combination of hardware and software modules in the decoding processor.
  • the software module can be located in a storage medium mature in the field, such as random access memory (RAM), flash memory, read-only memory (ROM), programmable read-only memory, electrically erasable programmable memory, or registers.
  • the storage medium is located in the memory 1201, and the processor 1202 reads the information in the memory 1201 and, in combination with its hardware, completes the functions required by the units included in the text processing device shown in FIG. 12, or executes the text processing method of the method embodiments of the present application.
  • the communication interface 1203 uses a transceiver device such as but not limited to a transceiver to implement communication between the device 1200 and other devices or a communication network.
  • the bus 1204 may include a path for transferring information between various components of the text processing apparatus 1200 (for example, the memory 1201, the processor 1202, and the communication interface 1203).
  • although the text processing apparatus 1200 only shows a memory, a processor, and a communication interface, in a specific implementation process, those skilled in the art should understand that the text processing apparatus 1200 may also include other devices necessary for normal operation. At the same time, according to specific needs, those skilled in the art should understand that the text processing apparatus 1200 may also include hardware devices that implement other additional functions. In addition, those skilled in the art should understand that the text processing apparatus 1200 may also include only the devices necessary for implementing the embodiments of the present application, and not necessarily all the devices shown in FIG. 13.
  • the embodiment of the present application also provides a chip, which includes a transceiver unit and a processing unit.
  • the transceiver unit may be an input/output circuit or a communication interface;
  • the processing unit is a processor, microprocessor, or integrated circuit integrated on the chip.
  • the chip can execute the method in the above method embodiment.
  • the embodiment of the present application also provides a computer-readable storage medium on which an instruction is stored, and the method in the foregoing method embodiment is executed when the instruction is executed.
  • the embodiments of the present application also provide a computer program product containing instructions, which execute the methods in the above method embodiments when the instructions are executed.
  • the memory may include a read-only memory and a random access memory, and provide instructions and data to the processor.
  • a part of the processor may also include a non-volatile random access memory.
  • the processor may also store device type information.
  • the size of the sequence numbers of the above-mentioned processes does not imply an order of execution; the execution order of each process should be determined by its function and internal logic, and should not constitute any limitation on the implementation process of the embodiments of the present application.
  • the disclosed system, device, and method can be implemented in other ways.
  • the device embodiments described above are merely illustrative. For example, the division of the units is only a logical function division, and there may be other divisions in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • the displayed or discussed mutual coupling or direct coupling or communication connection may be indirect coupling or communication connection through some interfaces, devices or units, and may be in electrical, mechanical or other forms.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, they may be located in one place, or they may be distributed on multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
  • the functional units in the various embodiments of the present application may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit.
  • if the function is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium.
  • based on such an understanding, the technical solutions of the present application essentially, or the part contributing to the prior art, or a part of the technical solutions, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for enabling a computer device (which may be a personal computer, a server, or a network device, etc.) to execute all or part of the steps of the methods described in the various embodiments of the present application.
  • the aforementioned storage media include: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc, or other media that can store program code.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Evolutionary Computation (AREA)
  • Biophysics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Databases & Information Systems (AREA)
  • Machine Translation (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The present invention relates to natural language processing in the field of artificial intelligence, and provides a text processing method and device. The method comprises: acquiring a text to be processed (510); performing error-detection processing on the text to obtain a non-word in the text (520); and, if the non-word in the text belongs to a first non-word category, refraining from correcting the non-word, or, if the non-word belongs to a second, third, or fourth non-word category, selecting the correction scheme corresponding to the category to which the non-word belongs, correcting the non-word, and obtaining a correction result for it (530). The method can detect and filter multiple types of character strings, thereby improving the accuracy of text error correction.
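For illustration only, the following minimal sketch mirrors the flow summarized above: detection (520) yields non-words tagged with one of the four categories, first-category non-words are deliberately left uncorrected, and the remaining categories are dispatched to the correction scheme selected per category (530). All names here (NonWordCategory, the Detector and Corrector callables) are hypothetical, since the publication defines no programming interface.

from enum import Enum
from typing import Callable, Dict, Iterable, List, Tuple

class NonWordCategory(Enum):
    FIRST = 1   # hypothetical: strings that are detected but intentionally not corrected
    SECOND = 2
    THIRD = 3
    FOURTH = 4

# Hypothetical detector for step 520: yields (non_word, category) pairs found in the text.
Detector = Callable[[str], Iterable[Tuple[str, NonWordCategory]]]
# Hypothetical corrector: maps a non-word and its surrounding text to a corrected word.
Corrector = Callable[[str, str], str]

def process_text(text: str, detect: Detector,
                 correctors: Dict[NonWordCategory, Corrector]) -> List[Tuple[str, str]]:
    # Step 510: the text to be processed has already been acquired by the caller.
    results = []
    for non_word, category in detect(text):               # step 520: error detection
        if category is NonWordCategory.FIRST:
            continue                                      # first category: no correction
        corrected = correctors[category](non_word, text)  # step 530: per-category scheme
        results.append((non_word, corrected))
    return results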
PCT/CN2020/135636 2019-12-23 2020-12-11 Text processing method and device WO2021129411A1 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP20905268.7A EP4060526A4 (fr) 2019-12-23 2020-12-11 Text processing method and device
US17/788,052 US20230065965A1 (en) 2019-12-23 2020-12-11 Text processing method and apparatus

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201911335070.1A CN113095072B (zh) 2019-12-23 Text processing method and apparatus
CN201911335070.1 2019-12-23

Publications (1)

Publication Number Publication Date
WO2021129411A1 true WO2021129411A1 (fr) 2021-07-01

Family

ID=76572944

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/135636 WO2021129411A1 (fr) 2019-12-23 2020-12-11 Text processing method and device

Country Status (3)

Country Link
US (1) US20230065965A1 (fr)
EP (1) EP4060526A4 (fr)
WO (1) WO2021129411A1 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113779970A (zh) * 2021-09-24 2021-12-10 北京字跳网络技术有限公司 Text error correction method and related device

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112541342B (zh) * 2020-12-08 2022-07-22 北京百度网讯科技有限公司 Text error correction method and apparatus, electronic device, and storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101655837A (zh) * 2009-09-08 2010-02-24 北京邮电大学 Method for detecting and correcting errors in speech-recognized text
US20100138210A1 (en) * 2008-12-02 2010-06-03 Electronics And Telecommunications Research Institute Post-editing apparatus and method for correcting translation errors
CN103136196A (zh) * 2008-04-18 2013-06-05 上海触乐信息科技有限公司 Method for inputting text into an electronic device and correcting errors
CN105975625A (zh) * 2016-05-26 2016-09-28 同方知网数字出版技术股份有限公司 Chinglish query error correction method and system for English search engines
CN106202153A (zh) * 2016-06-21 2016-12-07 广州智索信息科技有限公司 Spelling error correction method and system for an ES search engine
CN107577668A (zh) * 2017-09-15 2018-01-12 电子科技大学 Semantics-based correction method for non-standard words in social media

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2003223437A (ja) * 2002-01-29 2003-08-08 Internatl Business Mach Corp <Ibm> Method for displaying candidate correct words, spell checking method, computer apparatus, and program
US7406201B2 (en) * 2003-12-04 2008-07-29 International Business Machines Corporation Correcting segmentation errors in OCR
US20080022198A1 (en) * 2006-07-19 2008-01-24 Brian Lee King System and Method for Adding Proper Names and Email Addresses to a Spell Check Definition List
US8341520B2 (en) * 2007-09-24 2012-12-25 Ghotit Ltd. Method and system for spell checking
US20090254817A1 (en) * 2008-04-03 2009-10-08 International Business Machines Corporation Enhanced spell checking utilizing a social network
TWI391832B (zh) * 2008-09-09 2013-04-01 Inst Information Industry Chinese text error detection apparatus, Chinese text error detection method, and storage medium
US9489372B2 (en) * 2013-03-15 2016-11-08 Apple Inc. Web-based spell checker
US10255273B2 (en) * 2017-06-15 2019-04-09 Microsoft Technology Licensing, Llc Method and system for ranking and summarizing natural language passages
US20200356626A1 (en) * 2019-05-07 2020-11-12 Microsoft Technology Licensing, Llc Enhanced spelling correction
US10936813B1 (en) * 2019-05-31 2021-03-02 Amazon Technologies, Inc. Context-aware spell checker

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103136196A (zh) * 2008-04-18 2013-06-05 上海触乐信息科技有限公司 Method for inputting text into an electronic device and correcting errors
US20100138210A1 (en) * 2008-12-02 2010-06-03 Electronics And Telecommunications Research Institute Post-editing apparatus and method for correcting translation errors
CN101655837A (zh) * 2009-09-08 2010-02-24 北京邮电大学 Method for detecting and correcting errors in speech-recognized text
CN105975625A (zh) * 2016-05-26 2016-09-28 同方知网数字出版技术股份有限公司 Chinglish query error correction method and system for English search engines
CN106202153A (zh) * 2016-06-21 2016-12-07 广州智索信息科技有限公司 Spelling error correction method and system for an ES search engine
CN107577668A (zh) * 2017-09-15 2018-01-12 电子科技大学 Semantics-based correction method for non-standard words in social media

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP4060526A4

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113779970A (zh) * 2021-09-24 2021-12-10 北京字跳网络技术有限公司 Text error correction method and related device

Also Published As

Publication number Publication date
CN113095072A (zh) 2021-07-09
EP4060526A4 (fr) 2022-12-28
EP4060526A1 (fr) 2022-09-21
US20230065965A1 (en) 2023-03-02

Similar Documents

Publication Publication Date Title
WO2021047286A1 Text processing model training method, and text processing method and apparatus
WO2022007823A1 Text data processing method and device
US20230016365A1 Method and apparatus for training text classification model
WO2020228376A1 Text processing method, and model training method and apparatus
WO2021233112A1 Multimodal machine learning-based translation method, device, equipment and storage medium
Zhang et al. Top-down tree long short-term memory networks
US20220180073A1 Linguistically rich cross-lingual text event embeddings
CN111984766B Missing semantics completion method and apparatus
CN111930942B Text classification method, language model training method, apparatus and device
WO2022068627A1 Data processing method and related device
CN113239700A Text semantic matching device, system, method and storage medium based on improved BERT
US11106873B2 Context-based translation retrieval via multilingual space
CN111898636B Data processing method and apparatus
WO2021129411A1 Text processing method and device
WO2020192523A1 Translation quality detection method and apparatus, machine translation system and storage medium
US20240152770A1 Neural network search method and related device
EP4390753A1 Text data processing method, neural network training method and related devices
CN116432019A Data processing method and related device
CN114782722A Method and apparatus for determining image-text similarity, and electronic device
CN110781666A Natural language processing text modeling based on generative adversarial networks
CN116821307B Content interaction method and apparatus, electronic device and storage medium
CN116757195B Implicit emotion recognition method based on prompt learning
WO2021129410A1 Text processing method and device
Ding et al. Event extraction with deep contextualized word representation and multi-attention layer
WO2023116572A1 Word or sentence generation method and related device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20905268

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2020905268

Country of ref document: EP

Effective date: 20220614

NENP Non-entry into the national phase

Ref country code: DE