CN111914540A - Statement identification method and device, storage medium and processor

Statement identification method and device, storage medium and processor

Info

Publication number: CN111914540A
Authority: CN (China)
Prior art keywords: statement, type, processed, parameters, error detection
Application number: CN201910390785.0A
Other languages: Chinese (zh)
Inventors: 包祖贻, 李辰, 刘恒友, 李林琳, 司罗
Current Assignee: Alibaba Group Holding Ltd
Original Assignee: Alibaba Group Holding Ltd
Application filed by Alibaba Group Holding Ltd
Priority to CN201910390785.0A
Publication of CN111914540A
Legal status: Pending

Landscapes

  • Machine Translation (AREA)
  • Document Processing Apparatus (AREA)

Abstract

The invention discloses a sentence identification method and device, a storage medium, and a processor. The method comprises: obtaining a statement to be processed; inputting the statement to be processed into a target encoder and processing it to obtain a semantic representation of the statement, wherein the target encoder is an encoder shared by a syntax error detection task and a syntax error correction task; and inputting the semantic representation of the statement into a processing model to obtain a processing result, wherein the processing result comprises one of the following: a syntax error detection result or a syntax error correction result. The invention solves the technical problem that system task-processing performance is poor because the labeled data of different tasks differ and are difficult to mix and use directly.

Description

Statement identification method and device, storage medium and processor
Technical Field
The invention relates to the technical field of information processing, in particular to a statement identification method and device, a storage medium and a processor.
Background
In grammar error detection and grammar error correction tasks, the labeled data differ to some extent because, historically, the tasks were divided differently and had different specific requirements. In the grammar error detection scenario, similar to spell checking in Word, the system only needs to discover and mark the errors present in the user's input and give prompts. For this purpose, some data are labeled with the location and type of each error; for example, the sentence "I like eating pingguo" (where "pingguo", 平果, is a misspelling of "apple", 苹果) is labeled with an error at the 4th word, "pingguo". In the other scenario, grammar error correction, the user's input needs to be corrected, so another set of data is labeled with grammatically correct forms; for example, the sentence "I like eating pingguo" is paired with its correct form "I like eating apple". These two kinds of labeled data differ from each other and cannot be mixed directly, so they are called heterogeneous labeled data. However, grammar error detection and grammar error correction on the two kinds of data are interrelated, and the performance of the system on these tasks suffers because the labeled data of the different tasks are difficult to mix and use directly.
In view of the above problems, no effective solution has been proposed.
Disclosure of Invention
The embodiments of the invention provide a statement identification method and device, a storage medium, and a processor, so as to at least solve the technical problem that system task-processing performance is poor because the labeled data of different tasks differ and are difficult to mix and use directly.
According to an aspect of an embodiment of the present invention, there is provided a sentence identification method, including: obtaining a statement to be processed; inputting the statement to be processed into a target encoder and processing it to obtain a semantic representation of the statement to be processed, wherein the target encoder is an encoder shared by a syntax error detection task and a syntax error correction task; and inputting the semantic representation of the statement to be processed into a processing model to obtain a processing result, wherein the processing result comprises one of the following: a syntax error detection result or a syntax error correction result.
Further, the method further comprises: before the sentence to be processed is input to a target encoder and processed to obtain a semantic representation of the sentence to be processed, acquiring a plurality of labeled sentences, wherein the plurality of labeled sentences comprise: the first type of statement marked with syntax errors and the second type of statement corrected with syntax errors; inputting the first type of statement into the target encoder, processing to obtain semantic representation of the first type of statement, inputting the second type of statement into the target encoder, processing to obtain semantic representation of the second type of statement, inputting the semantic representation of the first type of statement into a syntax error detection sub-model to obtain syntax error detection results of the first type of statement, inputting the semantic representation of the second type of statement into a target decoder to obtain syntax error correction results of the second type of statement; comparing the grammar error detection result of the first type of statement with the marked grammar error to obtain a first comparison result, and comparing the grammar error correction result of the second type of statement with the error-corrected grammar error to obtain a second comparison result; and adjusting parameters of the target encoder according to the first comparison result and the second comparison result.
Further, the method further comprises: adjusting parameters of the syntax error detection submodel according to the first comparison result; and adjusting the parameters of the target decoder according to the second comparison result.
Further, adjusting parameters of the target encoder according to the first comparison result and the second comparison result includes: calculating a first loss value according to the first comparison result and a cross entropy loss function; calculating a second loss value according to the second comparison result and the cross entropy loss function; calculating a first parameter to be adjusted by the first loss value through a back propagation algorithm, and calculating a second parameter to be adjusted by the second loss value through the back propagation algorithm; and adjusting the parameters of the target encoder according to the first parameters and the second parameters.
Further, adjusting the parameters of the target encoder according to the first parameters and the second parameters comprises: determining a first weight value of the syntax error detection task and a second weight value of the syntax error correction task; adjusting parameters of the target encoder according to the first parameter, the first weight value, the second parameter and the second weight value.
Further, the target encoder is a multi-layer Bi-LSTM encoder, and the target decoder is a multi-layer Bi-LSTM decoder.
Further, if the to-be-processed statement is a statement to be subjected to syntax error detection, the processing model is a syntax error detection submodel, and if the to-be-processed statement is a statement to be subjected to syntax error correction, the processing model is a multilayer Bi-LSTM decoder.
According to another aspect of the embodiments of the present invention, there is also provided a sentence identification apparatus, including: a first acquisition unit, configured to acquire the statement to be processed; a first processing unit, configured to input the statement to be processed into a target encoder and process it to obtain a semantic representation of the statement to be processed, wherein the target encoder is an encoder shared by a syntax error detection task and a syntax error correction task; and a second processing unit, configured to input the semantic representation of the statement to be processed into a processing model and obtain a processing result, where the processing result includes one of the following: a syntax error detection result or a syntax error correction result.
Further, the apparatus further comprises: a second obtaining unit, configured to obtain a plurality of labeled sentences before the to-be-processed sentences are input to a target encoder and processed to obtain a semantic representation of the to-be-processed sentences, where the plurality of labeled sentences include: the first type of statement marked with syntax errors and the second type of statement corrected with syntax errors; a third processing unit, configured to input the first type of statement to the target encoder, process the first type of statement to obtain a semantic representation of the first type of statement, input the second type of statement to the target encoder, and process the second type of statement to obtain a semantic representation of the second type of statement, and a third obtaining unit, configured to input the semantic representation of the first type of statement to a syntax error detection submodel to obtain a syntax error detection result of the first type of statement, and input the semantic representation of the second type of statement to a target decoder to obtain a syntax error correction result of the second type of statement; the comparison unit is used for comparing the grammar error detection result of the first type of statement with the marked grammar error to obtain a first comparison result, and comparing the grammar error correction result of the second type of statement with the error-corrected grammar error to obtain a second comparison result; and the first adjusting unit is used for adjusting the parameters of the target encoder according to the first comparison result and the second comparison result.
Further, the apparatus further comprises: a second adjusting unit, configured to adjust parameters of the syntax error detection submodel according to the first comparison result; and the third adjusting unit is used for adjusting the parameters of the target decoder according to the second comparison result.
Further, the first adjusting unit includes: the first calculation module is used for calculating a first loss value according to the first comparison result and a cross entropy loss function and calculating a second loss value according to the second comparison result and the cross entropy loss function; the second calculation module is used for calculating a first parameter to be adjusted according to the first loss value through a back propagation algorithm and calculating a second parameter to be adjusted according to the second loss value through the back propagation algorithm; and the adjusting module is used for adjusting the parameters of the target encoder according to the first parameters and the second parameters.
Further, the adjustment module includes: the determining submodule is used for determining a first weight value of the grammar error detection task and a second weight value of the grammar error correction task; an adjusting submodule, configured to adjust a parameter of the target encoder according to the first parameter, the first weight value, the second parameter, and the second weight value.
According to another aspect of the embodiments of the present invention, there is also provided a storage medium, which is characterized in that the storage medium includes a stored program, wherein when the program runs, the apparatus on which the storage medium is located is controlled to execute any one of the above statement identification methods.
According to another aspect of the embodiments of the present invention, there is further provided a processor, wherein the processor is configured to execute a program which, when run, performs the statement identification method according to any one of the above.
In the embodiments of the invention, the statement to be processed is obtained; the statement to be processed is input into a target encoder and processed to obtain a semantic representation of the statement, wherein the target encoder is an encoder shared by a syntax error detection task and a syntax error correction task; and the semantic representation of the statement is input into a processing model to obtain a processing result, wherein the processing result comprises one of the following: a syntax error detection result or a syntax error correction result. Through the encoder shared by the syntax error detection task and the syntax error correction task, heterogeneous data can obtain information from each other, which achieves the technical effect of improving system performance and solves the technical problem that system task-processing performance is poor because the labeled data of different tasks differ and are difficult to mix and use directly.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the invention without limiting the invention. In the drawings:
fig. 1 is a block diagram of a hardware configuration of a computer terminal provided according to an embodiment of the present invention;
FIG. 2 is a flow chart of a sentence identification method provided according to an embodiment of the invention;
FIG. 3 is a diagram illustrating a sentence identification method according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of a sentence identification apparatus provided in accordance with an embodiment of the present invention; and
fig. 5 is a block diagram of an alternative computer terminal according to an embodiment of the present invention.
Detailed Description
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
First, some terms or terms appearing in the description of the embodiments of the present application are applicable to the following explanations:
heterogeneous labeling data: refers to data that is labeled for the same or related tasks and has certain relevance, but the labeled content of the data itself is different. Taking grammar error correction as an example, one part of data marks some sentences and corresponding grammar correct forms, one part of data marks the sentences and the positions of grammar errors, and the two parts of data are heterogeneous marked data of grammar error correction.
Syntax error detection: refers to detecting grammatical errors that exist in a sentence.
Syntax error correction: refers to correcting grammatical errors present in a sentence.
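For illustration only, the heterogeneous labeled data defined above could be represented as two kinds of records such as in the following Python sketch; the field names and example values are hypothetical and are not part of the patent:

    # Hypothetical representation of heterogeneous labeled data for grammar error
    # detection and correction; field names and values are illustrative only.

    # Detection-style annotation: the sentence plus the position (and type) of each error.
    detection_sample = {
        "tokens": ["I", "like", "eating", "pingguo"],     # "pingguo" is a misspelling of "apple"
        "errors": [{"position": 3, "type": "spelling"}],  # 0-based index of the erroneous word
    }

    # Correction-style annotation: the sentence plus its grammatically correct form.
    correction_sample = {
        "tokens": ["I", "like", "eating", "pingguo"],
        "corrected_tokens": ["I", "like", "eating", "apple"],
    }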
Example 1
In accordance with an embodiment of the present invention, there is provided a method embodiment of statement identification, it being noted that the steps illustrated in the flowchart of the drawings may be performed in a computer system such as a set of computer-executable instructions and that, although a logical order is illustrated in the flowchart, in some cases the steps illustrated or described may be performed in an order different than here.
The method provided by the first embodiment of the present application may be executed in a mobile terminal, a computer terminal, or a similar computing device. Fig. 1 shows a hardware configuration block diagram of a computer terminal (or mobile device) for implementing the sentence identification method. As shown in fig. 1, the computer terminal 10 (or mobile device 10) may include one or more processors 102 (shown as 102a, 102b, ..., 102n; the processors 102 may include, but are not limited to, a processing device such as a microprocessor (MCU) or a programmable logic device (FPGA)), a memory 104 for storing data, and a transmission device for communication functions. In addition, the computer terminal may further include: a display, an input/output interface (I/O interface), a Universal Serial Bus (USB) port (which may be included as one of the ports of the I/O interface), a network interface, a power source, and/or a camera. It will be understood by those skilled in the art that the structure shown in fig. 1 is only an illustration and is not intended to limit the structure of the electronic device. For example, the computer terminal 10 may also include more or fewer components than shown in FIG. 1, or have a different configuration than shown in FIG. 1.
It should be noted that the one or more processors 102 and/or other data processing circuitry described above may be referred to generally herein as "data processing circuitry". The data processing circuitry may be embodied in whole or in part in software, hardware, firmware, or any combination thereof. Further, the data processing circuit may be a single stand-alone processing module, or incorporated in whole or in part into any of the other elements in the computer terminal 10 (or mobile device). As referred to in the embodiments of the application, the data processing circuit acts as a processor control (e.g. selection of a variable resistance termination path connected to the interface).
The memory 104 may be used to store software programs and modules of application software, such as program instructions/data storage devices corresponding to the sentence identification method in the embodiment of the present invention, and the processor 102 executes various functional applications and data processing by running the software programs and modules stored in the memory 104, that is, implementing the sentence identification method of the application program. The memory 104 may include high speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 104 may further include memory located remotely from the processor 102, which may be connected to the computer terminal 10 via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission device 106 is used for receiving or transmitting data via a network. Specific examples of the network described above may include a wireless network provided by a communication provider of the computer terminal 10. In one example, the transmission device 106 includes a Network adapter (NIC) that can be connected to other Network devices through a base station to communicate with the internet. In one example, the transmission device 106 can be a Radio Frequency (RF) module, which is used to communicate with the internet in a wireless manner.
The display may be, for example, a touch screen type Liquid Crystal Display (LCD) that may enable a user to interact with a user interface of the computer terminal 10 (or mobile device).
Under the above operating environment, the present application provides a sentence identification method as shown in fig. 2. Fig. 2 is a flowchart of the sentence identification method according to the first embodiment of the present invention.
Step S201, a to-be-processed sentence is acquired.
For example, the statement to be processed is: "I like eating pingguo" (where "pingguo" is a misspelling of "apple").
Step S202, inputting the statement to be processed into a target encoder, and processing to obtain semantic representation of the statement to be processed, wherein the target encoder is an encoder shared by a syntax error detection task and a syntax error correction task.
The statement to be processed may be a statement from the syntax error detection data or the syntax error correction data, and the semantic representation of each word in the statement is obtained through the shared target encoder (e.g., a multi-layer Bi-LSTM encoder). Internally, the multi-layer Bi-LSTM encoder performs two steps: 1) the words of the statement are mapped into vector representations through a word vector matrix, yielding the word vector sequence of the statement; 2) the word vector sequence is passed through a multi-layer Bi-LSTM network to obtain the semantic representation of each word.
That is, the statement to be processed is input into the target encoder to obtain its semantic representation as an N x D matrix, where N and D are natural numbers. For example, the statement to be processed, "I like eating pingguo", contains four words: "I", "like", "eating", and "pingguo"; its semantic representation is therefore a 4 x 100-dimensional matrix.
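As a minimal, non-authoritative sketch of the two encoder steps just described (word-vector lookup followed by a multi-layer Bi-LSTM), assuming PyTorch, which the patent does not name, and hypothetical dimensions (an embedding size of 100 and a hidden size of 50 per direction, so that D = 100 as in the 4 x 100 example):

    import torch
    import torch.nn as nn

    class SharedBiLSTMEncoder(nn.Module):
        """Sketch of the shared target encoder: word vector matrix + multi-layer Bi-LSTM."""

        def __init__(self, vocab_size, embed_dim=100, hidden_dim=50, num_layers=2):
            super().__init__()
            # Step 1: word vector matrix mapping each word to a vector representation.
            self.embedding = nn.Embedding(vocab_size, embed_dim)
            # Step 2: multi-layer bidirectional LSTM producing a semantic representation per word.
            self.bilstm = nn.LSTM(embed_dim, hidden_dim, num_layers=num_layers,
                                  bidirectional=True, batch_first=True)

        def forward(self, token_ids):                      # token_ids: (batch, N) word indices
            word_vectors = self.embedding(token_ids)       # (batch, N, embed_dim)
            semantic_repr, _ = self.bilstm(word_vectors)   # (batch, N, 2 * hidden_dim)
            return semantic_repr                           # an N x D matrix per statement, D = 2 * hidden_dim

With these hypothetical sizes, a four-word statement such as "I like eating pingguo" yields a 4 x 100 matrix, as in the example above.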
Optionally, in the statement identification method provided in the embodiment of the present invention, the method further includes: before a sentence to be processed is input to a target encoder and processed to obtain a semantic representation of the sentence to be processed, acquiring a plurality of labeled sentences, wherein the plurality of labeled sentences comprise: the first type of statement marked with syntax errors and the second type of statement corrected with syntax errors; inputting the first type of statement into a target encoder, processing to obtain semantic representation of the first type of statement, inputting the second type of statement into the target encoder, processing to obtain semantic representation of the second type of statement, inputting the semantic representation of the first type of statement into a syntax error detection sub-model to obtain a syntax error detection result of the first type of statement, and inputting the semantic representation of the second type of statement into a target decoder to obtain a syntax error correction result of the second type of statement; comparing the grammar error detection result of the first type of sentences with the marked grammar errors to obtain a first comparison result, and comparing the grammar error correction result of the second type of sentences with the corrected grammar errors to obtain a second comparison result; and adjusting parameters of the target encoder according to the first comparison result and the second comparison result.
The plurality of labeled sentences include first-type sentences labeled with syntax errors and second-type sentences whose syntax errors have been corrected. For example, the plurality of labeled sentences include: sentence one, "today's weather is really Qinlang", whose syntax error is labeled as an error at the 4th word "Qinlang", and whose corrected form replaces "Qinlang" with the correctly written word; and sentence two, "poor signal causes the call to be cut-short", whose syntax error is labeled as an error at the 5th word "cut-short", and whose corrected form is "poor signal causes the call to be interrupted". Sentence one, labeled with the syntax error at the 4th word "Qinlang", is input into the target encoder and processed to obtain a first semantic representation of sentence one; sentence one, as annotated with its corrected form, is input into the target encoder and processed to obtain a second semantic representation of sentence one. The first semantic representation of sentence one is input into the syntax error detection submodel to obtain the syntax error detection result of sentence one, and the second semantic representation of sentence one is input into the target decoder to obtain the syntax error correction result of sentence one. The syntax error detection result of sentence one is compared with the labeled syntax error to obtain a first comparison result, and the syntax error correction result of sentence one is compared with the corrected form to obtain a second comparison result; the parameters of the target encoder are then adjusted according to the first comparison result and the second comparison result.
In this scheme, the parameters of the target encoder are adjusted according to the first and second comparison results of sentence one, then according to the first and second comparison results of sentence two, and so on up to the first and second comparison results of sentence N. A cross-entropy loss is calculated from the first comparison result and the second comparison result, and the parameters of the target encoder are updated by back-propagating the gradients.
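A minimal sketch of one such joint training step on the heterogeneous data, continuing the hypothetical PyTorch encoder above; the detection head and decoder here are simplified stand-ins for the syntax error detection submodel and the multi-layer Bi-LSTM decoder, not the actual submodels of the invention, and the correction branch assumes for simplicity that the target sequence has the same length as the input:

    import torch
    import torch.nn as nn

    VOCAB = 30000                                              # hypothetical vocabulary size
    encoder = SharedBiLSTMEncoder(vocab_size=VOCAB)            # shared target encoder (D = 100)
    detection_head = nn.Linear(100, 2)                         # per-word correct/error classifier (stand-in)
    decoder = nn.LSTM(100, 100, num_layers=2,                  # stand-in for the multi-layer Bi-LSTM decoder
                      bidirectional=True, batch_first=True)
    generator = nn.Linear(200, VOCAB)                          # maps decoder states to output words
    loss_fn = nn.CrossEntropyLoss()
    optimizer = torch.optim.Adam(
        list(encoder.parameters()) + list(detection_head.parameters())
        + list(decoder.parameters()) + list(generator.parameters()))

    def joint_training_step(det_tokens, det_labels, cor_tokens, cor_targets):
        """One step over a detection-labeled batch and a correction-labeled batch (LongTensors)."""
        # First type of statement -> shared encoder -> detection submodel -> first loss.
        det_repr = encoder(det_tokens)                                      # (B, N, 100)
        det_logits = detection_head(det_repr)                               # (B, N, 2)
        first_loss = loss_fn(det_logits.reshape(-1, 2), det_labels.reshape(-1))

        # Second type of statement -> shared encoder -> decoder -> second loss.
        cor_repr = encoder(cor_tokens)                                      # (B, M, 100)
        dec_states, _ = decoder(cor_repr)                                   # (B, M, 200)
        cor_logits = generator(dec_states)                                  # (B, M, VOCAB)
        second_loss = loss_fn(cor_logits.reshape(-1, VOCAB), cor_targets.reshape(-1))

        # Back-propagate both cross-entropy losses; the gradients of both tasks
        # reach and update the parameters of the shared encoder.
        (first_loss + second_loss).backward()
        optimizer.step()
        optimizer.zero_grad()
        return first_loss.item(), second_loss.item()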
As shown in FIG. 3, the method uses a shared multi-layer Bi-LSTM encoder to extract semantic representations of sentences from both the syntax error detection data and the syntax error correction data, and trains the encoder jointly on the heterogeneous data, so that the encoder is fully trained and can extract both error detection and error correction information. The syntax error detection submodel and the multi-layer Bi-LSTM decoder, which are independent for the two kinds of data, preserve the differences between the outputs required for each kind of data. At the same time, because the syntax error detection task and the syntax error correction task of this scheme are not cascaded, error accumulation is avoided; the different data can be trained jointly, the heterogeneous data can obtain information from each other, and system performance is ultimately improved.
Step S203, inputting the semantic representation of the sentence to be processed into a processing model to obtain a processing result, wherein the processing result comprises one of the following: syntax error detection results and syntax error correction results.
In the sentence identification method provided in the embodiment of the present invention, if the to-be-processed sentence is a sentence to be subjected to syntax error detection, the processing model is a syntax error detection submodel, and if the to-be-processed sentence is a sentence to be subjected to syntax error correction, the processing model is a multilayer Bi-LSTM decoder.
If the semantic representation of the statement to be processed, a 4 x 100-dimensional matrix, is input into the syntax error detection submodel, the syntax error detection result of the statement is obtained, namely that the 4th word "pingguo" is erroneous. If the same 4 x 100-dimensional semantic representation is input into the multi-layer Bi-LSTM decoder, the syntax error correction result of the statement "I like eating pingguo" is obtained, namely "I like eating apple".
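At inference time, the routing just described (detection submodel for detection, decoder for correction) could look like the following sketch, reusing the hypothetical modules defined in the training sketch above:

    import torch

    def identify(token_ids, task):
        """Sketch: encode one statement, then apply the task-specific model."""
        with torch.no_grad():
            semantic_repr = encoder(token_ids)                       # N x D semantic representation
            if task == "detection":
                logits = detection_head(semantic_repr)               # per-word error scores
                return torch.argmax(logits, dim=-1)                  # 1 marks an erroneous word, e.g. "pingguo"
            if task == "correction":
                dec_states, _ = decoder(semantic_repr)
                return torch.argmax(generator(dec_states), dim=-1)   # indices of corrected words, e.g. "apple"
        raise ValueError("task must be 'detection' or 'correction'")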
In summary, the sentence identification method provided in the embodiment of the present invention obtains the statement to be processed; inputs the statement to be processed into a target encoder and processes it to obtain a semantic representation of the statement, wherein the target encoder is an encoder shared by a syntax error detection task and a syntax error correction task; and inputs the semantic representation of the statement into a processing model to obtain a processing result, wherein the processing result comprises one of the following: a syntax error detection result or a syntax error correction result. Through the encoder shared by the syntax error detection task and the syntax error correction task, heterogeneous data can obtain information from each other, which achieves the technical effect of improving system performance and solves the technical problem that system task-processing performance is poor because the labeled data of different tasks differ and are difficult to mix and use directly.
Optionally, in the statement identification method provided in the embodiment of the present invention, the method further includes: adjusting parameters of the syntax error detection submodel according to the first comparison result; and adjusting the parameters of the target decoder according to the second comparison result.
In the above scheme, the parameters of the syntax error detection submodel are adjusted according to the first comparison result, and the parameters of the target decoder are adjusted according to the second comparison result. Continuously updating the parameters of the syntax error detection submodel and the target decoder in this way ensures the accuracy of their output results.
Optionally, in the sentence identification method provided in the embodiment of the present invention, adjusting a parameter of the target encoder according to the first comparison result and the second comparison result includes: calculating a first loss value according to the first comparison result and the cross entropy loss function; calculating a second loss value according to the second comparison result and the cross entropy loss function; calculating a first parameter to be adjusted by the first loss value through a back propagation algorithm, and calculating a second parameter to be adjusted by the second loss value through the back propagation algorithm; and adjusting the parameters of the target encoder according to the first parameters and the second parameters.
In the above scheme, the first parameter to be adjusted and the second parameter to be adjusted each specify which parameters are to be adjusted and the values with which they are to be updated; adjusting the parameters of the target encoder according to the determined first and second parameters serves to train the target encoder jointly on both tasks.
Optionally, in the sentence identification method provided in the embodiment of the present invention, adjusting the parameters of the target encoder according to the first parameter and the second parameter includes: determining a first weight value of the grammar error detection task and a second weight value of the grammar error correction task; and adjusting the parameters of the target encoder according to the first parameter, the first weight value, the second parameter, and the second weight value.
For example, if the first weight value of the syntax error detection task is 0.5 and the second weight value of the syntax error correction task is 1.0, the final adjustment to the parameters of the target encoder is obtained by adding 0.5 times the first parameter to be adjusted and 1.0 times the second parameter to be adjusted. By presetting the weight values of the different tasks, the training of the target encoder becomes more targeted, which ensures the accuracy of the trained target encoder's output.
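In the hypothetical joint_training_step sketch above, this weighting could be applied by scaling the two losses before back-propagation, replacing the line "(first_loss + second_loss).backward()"; scaling a task's loss scales the gradients, and hence the parameter adjustments, that the task contributes to the shared encoder, which matches the weighted combination described here. The weights 0.5 and 1.0 are the example values only:

    # Weighted joint loss: detection task weighted 0.5, correction task weighted 1.0 (example values).
    total_loss = 0.5 * first_loss + 1.0 * second_loss
    total_loss.backward()   # the shared encoder receives correspondingly scaled gradients from each task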
It should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present invention is not limited by the order of acts, as some steps may occur in other orders or concurrently in accordance with the invention. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that the acts and modules referred to are not necessarily required by the invention.
Through the above description of the embodiments, those skilled in the art can clearly understand that the method according to the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but the former is a better implementation mode in many cases. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal device (e.g., a mobile phone, a computer, a server, or a network device) to execute the method according to the embodiments of the present invention.
Example 2
According to an embodiment of the present invention, there is also provided an apparatus for implementing the above sentence identification method, as shown in fig. 4, the apparatus includes:
a first obtaining unit 401, configured to obtain a statement to be processed;
a first processing unit 402, configured to input the to-be-processed statement to a target encoder, and process the to-be-processed statement to obtain a semantic representation of the to-be-processed statement, where the target encoder is an encoder shared by a syntax error detection task and a syntax error correction task;
a second processing unit 403, configured to input the semantic representation of the to-be-processed sentence into a processing model, and obtain a processing result, where the processing result includes one of: syntax error detection results and syntax error correction results.
In summary, in the sentence identification apparatus provided in the embodiment of the present invention, the first obtaining unit 401 obtains the statement to be processed; the first processing unit 402 inputs the statement to be processed into a target encoder and processes it to obtain a semantic representation of the statement, wherein the target encoder is an encoder shared by a syntax error detection task and a syntax error correction task; and the second processing unit 403 inputs the semantic representation of the statement into a processing model to obtain a processing result, wherein the processing result comprises one of the following: a syntax error detection result or a syntax error correction result. Through the encoder shared by the syntax error detection task and the syntax error correction task, heterogeneous data can obtain information from each other, which achieves the technical effect of improving system performance and solves the technical problem that system task-processing performance is poor because the labeled data of different tasks differ and are difficult to mix and use directly.
Optionally, in the sentence identification apparatus provided in the embodiment of the present invention, the apparatus further includes: a second obtaining unit, configured to obtain a plurality of labeled sentences before the to-be-processed sentences are input to a target encoder and processed to obtain a semantic representation of the to-be-processed sentences, where the plurality of labeled sentences include: the first type of statement marked with syntax errors and the second type of statement corrected with syntax errors; a third processing unit, configured to input the first type of statement to the target encoder, process the first type of statement to obtain a semantic representation of the first type of statement, input the second type of statement to the target encoder, and process the second type of statement to obtain a semantic representation of the second type of statement, and a third obtaining unit, configured to input the semantic representation of the first type of statement to a syntax error detection submodel to obtain a syntax error detection result of the first type of statement, and input the semantic representation of the second type of statement to a target decoder to obtain a syntax error correction result of the second type of statement; the comparison unit is used for comparing the grammar error detection result of the first type of statement with the marked grammar error to obtain a first comparison result, and comparing the grammar error correction result of the second type of statement with the error-corrected grammar error to obtain a second comparison result; and the first adjusting unit is used for adjusting the parameters of the target encoder according to the first comparison result and the second comparison result.
Optionally, in the sentence identification apparatus provided in the embodiment of the present invention, the apparatus further includes: a second adjusting unit, configured to adjust parameters of the syntax error detection submodel according to the first comparison result; and the third adjusting unit is used for adjusting the parameters of the target decoder according to the second comparison result.
Optionally, in a sentence identification apparatus provided in an embodiment of the present invention, the first adjusting unit includes: the first calculation module is used for calculating a first loss value according to the first comparison result and a cross entropy loss function and calculating a second loss value according to the second comparison result and the cross entropy loss function; the second calculation module is used for calculating a first parameter to be adjusted according to the first loss value through a back propagation algorithm and calculating a second parameter to be adjusted according to the second loss value through the back propagation algorithm; and the adjusting module is used for adjusting the parameters of the target encoder according to the first parameters and the second parameters.
Optionally, in the sentence identification apparatus provided in the embodiment of the present invention, the adjusting module includes: the determining submodule is used for determining a first weight value of the grammar error detection task and a second weight value of the grammar error correction task; an adjusting submodule, configured to adjust a parameter of the target encoder according to the first parameter, the first weight value, the second parameter, and the second weight value.
Optionally, in the statement identification apparatus provided in this embodiment of the present invention, the target encoder is a multi-layer Bi-LSTM encoder, and the target decoder is a multi-layer Bi-LSTM decoder.
Optionally, in the statement identifying apparatus according to the embodiment of the present invention, if the to-be-processed statement is a statement to be subjected to syntax error detection, the processing model is a syntax error detection sub-model, and if the to-be-processed statement is a statement to be subjected to syntax error correction, the processing model is a multilayer Bi-LSTM decoder.
It should be noted here that the first obtaining unit 401, the first processing unit 402, and the second processing unit 403 correspond to steps S201 to S203 in Embodiment 1; these modules are the same as the corresponding steps in their implementation examples and application scenarios, but are not limited to the disclosure of the first embodiment. It should be noted that these modules, as part of the apparatus, may run in the computer terminal 10 provided in the first embodiment.
Example 3
The embodiment of the invention can provide a computer terminal which can be any computer terminal device in a computer terminal group. Optionally, in this embodiment, the computer terminal may also be replaced with a terminal device such as a mobile terminal.
Optionally, in this embodiment, the computer terminal may be located in at least one network device of a plurality of network devices of a computer network.
In this embodiment, the computer terminal may execute the program code of the following steps in the sentence identification method of the application program: obtaining a statement to be processed; inputting the statement to be processed into a target encoder, and processing to obtain semantic representation of the statement to be processed, wherein the target encoder is an encoder shared by a syntax error detection task and a syntax error correction task; inputting the semantic representation of the statement to be processed into a processing model to obtain a processing result, wherein the processing result comprises one of the following: syntax error detection results and syntax error correction results.
The computer terminal may further execute program codes of the following steps in the sentence identification method of the application program: the method further comprises the following steps: before a sentence to be processed is input to a target encoder and processed to obtain a semantic representation of the sentence to be processed, acquiring a plurality of labeled sentences, wherein the plurality of labeled sentences comprise: the first type of statement marked with syntax errors and the second type of statement corrected with syntax errors; inputting the first type of statement into a target encoder, processing to obtain semantic representation of the first type of statement, inputting the second type of statement into the target encoder, processing to obtain semantic representation of the second type of statement, inputting the semantic representation of the first type of statement into a syntax error detection sub-model to obtain a syntax error detection result of the first type of statement, and inputting the semantic representation of the second type of statement into a target decoder to obtain a syntax error correction result of the second type of statement; comparing the grammar error detection result of the first type of sentences with the marked grammar errors to obtain a first comparison result, and comparing the grammar error correction result of the second type of sentences with the corrected grammar errors to obtain a second comparison result; and adjusting parameters of the target encoder according to the first comparison result and the second comparison result.
The computer terminal may further execute program codes of the following steps in the sentence identification method of the application program: the method further comprises the following steps: adjusting parameters of the syntax error detection submodel according to the first comparison result; and adjusting the parameters of the target decoder according to the second comparison result.
The computer terminal may further execute program codes of the following steps in the sentence identification method of the application program: according to the first comparison result and the second comparison result, adjusting parameters of the target encoder comprises: calculating a first loss value according to the first comparison result and the cross entropy loss function; calculating a second loss value according to the second comparison result and the cross entropy loss function; calculating a first parameter to be adjusted by the first loss value through a back propagation algorithm, and calculating a second parameter to be adjusted by the second loss value through the back propagation algorithm; and adjusting the parameters of the target encoder according to the first parameters and the second parameters.
The computer terminal may further execute program codes of the following steps in the sentence identification method of the application program: adjusting the parameters of the target encoder according to the first parameters and the second parameters comprises: determining a first weight value of a grammar error detection task and a second weight value of a grammar error correction task; and adjusting parameters of the target encoder according to the first parameter, the first weight value, the second parameter and the second weight value.
The computer terminal may further execute program codes of the following steps in the sentence identification method of the application program: the target encoder is a multi-layer Bi-LSTM encoder and the target decoder is a multi-layer Bi-LSTM decoder.
The computer terminal may further execute program codes of the following steps in the sentence identification method of the application program: and if the sentence to be processed is the sentence to be subjected to the syntax error detection, the processing model is a syntax error detection submodel, and if the sentence to be processed is the sentence to be subjected to the syntax error correction, the processing model is a multilayer Bi-LSTM decoder.
Alternatively, fig. 5 is a block diagram of a computer terminal according to an embodiment of the present invention. As shown in fig. 5, the computer terminal a may include: one or more processors (only one shown) and memory.
The memory may be configured to store software programs and modules, such as program instructions/modules corresponding to the sentence identification method and apparatus in the embodiments of the present invention, and the processor executes various functional applications and data processing by running the software programs and modules stored in the memory, so as to implement the sentence identification method. The memory may include high speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory may further include memory remotely located from the processor, and these remote memories may be connected to terminal a through a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The processor can call the information and application program stored in the memory through the transmission device to execute the following steps: obtaining a statement to be processed; inputting the statement to be processed into a target encoder, and processing to obtain semantic representation of the statement to be processed, wherein the target encoder is an encoder shared by a syntax error detection task and a syntax error correction task; inputting the semantic representation of the statement to be processed into a processing model to obtain a processing result, wherein the processing result comprises one of the following: syntax error detection results and syntax error correction results.
Optionally, the processor may further execute the program code of the following steps: the method further comprises the following steps: before a sentence to be processed is input to a target encoder and processed to obtain a semantic representation of the sentence to be processed, acquiring a plurality of labeled sentences, wherein the plurality of labeled sentences comprise: the first type of statement marked with syntax errors and the second type of statement corrected with syntax errors; inputting the first type of statement into a target encoder, processing to obtain semantic representation of the first type of statement, inputting the second type of statement into the target encoder, processing to obtain semantic representation of the second type of statement, inputting the semantic representation of the first type of statement into a syntax error detection sub-model to obtain a syntax error detection result of the first type of statement, and inputting the semantic representation of the second type of statement into a target decoder to obtain a syntax error correction result of the second type of statement; comparing the grammar error detection result of the first type of sentences with the marked grammar errors to obtain a first comparison result, and comparing the grammar error correction result of the second type of sentences with the corrected grammar errors to obtain a second comparison result; and adjusting parameters of the target encoder according to the first comparison result and the second comparison result.
Optionally, the processor may further execute the program code of the following steps: the method further comprises the following steps: adjusting parameters of the syntax error detection submodel according to the first comparison result; and adjusting the parameters of the target decoder according to the second comparison result.
Optionally, the processor may further execute the program code of the following steps: according to the first comparison result and the second comparison result, adjusting parameters of the target encoder comprises: calculating a first loss value according to the first comparison result and the cross entropy loss function; calculating a second loss value according to the second comparison result and the cross entropy loss function; calculating a first parameter to be adjusted by the first loss value through a back propagation algorithm, and calculating a second parameter to be adjusted by the second loss value through the back propagation algorithm; and adjusting the parameters of the target encoder according to the first parameters and the second parameters.
Optionally, the processor may further execute the program code of the following steps: adjusting the parameters of the target encoder according to the first parameters and the second parameters comprises: determining a first weight value of a grammar error detection task and a second weight value of a grammar error correction task; and adjusting parameters of the target encoder according to the first parameter, the first weight value, the second parameter and the second weight value.
Optionally, the processor may further execute the program code of the following steps: the target encoder is a multi-layer Bi-LSTM encoder and the target decoder is a multi-layer Bi-LSTM decoder.
Optionally, the processor may further execute the program code of the following steps: and if the sentence to be processed is the sentence to be subjected to the syntax error detection, the processing model is a syntax error detection submodel, and if the sentence to be processed is the sentence to be subjected to the syntax error correction, the processing model is a multilayer Bi-LSTM decoder.
By adopting the embodiment of the invention, a statement identification scheme is provided: the statement to be processed is obtained; the statement to be processed is input into a target encoder and processed to obtain a semantic representation of the statement, wherein the target encoder is an encoder shared by a syntax error detection task and a syntax error correction task; and the semantic representation of the statement is input into a processing model to obtain a processing result, wherein the processing result comprises one of the following: a syntax error detection result or a syntax error correction result. Through the encoder shared by the syntax error detection task and the syntax error correction task, heterogeneous data can obtain information from each other, which achieves the technical effect of improving system performance and solves the technical problem that system task-processing performance is poor because the labeled data of different tasks differ and are difficult to mix and use directly.
It can be understood by those skilled in the art that the structure shown in fig. 5 is only an illustration, and the computer terminal may also be a terminal device such as a smart phone (e.g., an Android phone, an iOS phone, etc.), a tablet computer, a palmtop computer, a Mobile Internet Device (MID), a PAD, and the like. Fig. 5 is a diagram illustrating a structure of the electronic device. For example, the computer terminal 10 may also include more or fewer components (e.g., network interfaces, display devices, etc.) than shown in FIG. 5, or have a different configuration than shown in FIG. 5.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by a program instructing hardware associated with the terminal device, where the program may be stored in a computer-readable storage medium, and the storage medium may include: flash disks, Read-Only memories (ROMs), Random Access Memories (RAMs), magnetic or optical disks, and the like.
Example 4
The embodiment of the invention also provides a storage medium. Optionally, in this embodiment, the storage medium may be configured to store program code for executing the sentence identification method provided in the first embodiment.
Optionally, in this embodiment, the storage medium may be located in any one of computer terminals in a computer terminal group in a computer network, or in any one of mobile terminals in a mobile terminal group.
Optionally, in this embodiment, the storage medium is configured to store program code for performing the following steps: obtaining a statement to be processed; inputting the statement to be processed into a target encoder, and processing to obtain semantic representation of the statement to be processed, wherein the target encoder is an encoder shared by a syntax error detection task and a syntax error correction task; inputting the semantic representation of the statement to be processed into a processing model to obtain a processing result, wherein the processing result comprises one of the following: syntax error detection results and syntax error correction results.
Optionally, in this embodiment, the storage medium is configured to store program code for performing the following steps: the method further comprises the following steps: before a sentence to be processed is input to a target encoder and processed to obtain a semantic representation of the sentence to be processed, acquiring a plurality of labeled sentences, wherein the plurality of labeled sentences comprise: the first type of statement marked with syntax errors and the second type of statement corrected with syntax errors; inputting the first type of statement into a target encoder, processing to obtain semantic representation of the first type of statement, inputting the second type of statement into the target encoder, processing to obtain semantic representation of the second type of statement, inputting the semantic representation of the first type of statement into a syntax error detection sub-model to obtain a syntax error detection result of the first type of statement, and inputting the semantic representation of the second type of statement into a target decoder to obtain a syntax error correction result of the second type of statement; comparing the grammar error detection result of the first type of sentences with the marked grammar errors to obtain a first comparison result, and comparing the grammar error correction result of the second type of sentences with the corrected grammar errors to obtain a second comparison result; and adjusting parameters of the target encoder according to the first comparison result and the second comparison result.
Optionally, in this embodiment, the storage medium is configured to store program code for performing the following steps: the method further comprises the following steps: adjusting parameters of the syntax error detection submodel according to the first comparison result; and adjusting the parameters of the target decoder according to the second comparison result.
Optionally, in this embodiment, the storage medium is configured to store program code for performing the following steps: adjusting the parameters of the target encoder according to the first comparison result and the second comparison result comprises: calculating a first loss value according to the first comparison result and a cross entropy loss function; calculating a second loss value according to the second comparison result and the cross entropy loss function; calculating, through a back propagation algorithm, a first parameter to be adjusted from the first loss value, and calculating, through the back propagation algorithm, a second parameter to be adjusted from the second loss value; and adjusting the parameters of the target encoder according to the first parameters and the second parameters.
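A sketch of this loss and back propagation step is given below (PyTorch; det_logits, det_labels, corr_logits, corr_labels and optimizer are assumed to come from the surrounding training loop and are illustrative names, not terms used in this disclosure).

import torch.nn as nn

cross_entropy = nn.CrossEntropyLoss()
# optimizer is assumed to have been created over the parameters of the
# encoder, the detection submodel and the decoder, e.g. with torch.optim.Adam.

optimizer.zero_grad()

# First loss value: detection result vs. labeled errors (per-token classes).
first_loss = cross_entropy(det_logits.reshape(-1, det_logits.size(-1)),
                           det_labels.reshape(-1))
# Second loss value: correction result vs. corrected statement (per-token vocab).
second_loss = cross_entropy(corr_logits.reshape(-1, corr_logits.size(-1)),
                            corr_labels.reshape(-1))

# Back propagation turns each loss value into gradients, i.e. the first and
# second parameters to be adjusted; because both forward passes go through
# the shared encoder, its gradients accumulate from both tasks.
first_loss.backward()
second_loss.backward()
optimizer.step()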
Optionally, in this embodiment, the storage medium is configured to store program code for performing the following steps: adjusting the parameters of the target encoder according to the first parameters and the second parameters comprises: determining a first weight value of a grammar error detection task and a second weight value of a grammar error correction task; and adjusting parameters of the target encoder according to the first parameter, the first weight value, the second parameter and the second weight value.
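One plausible realization of this weighting, continuing the snippet above (first_weight and second_weight are assumed hyperparameters; this disclosure does not prescribe their values):

first_weight = 1.0    # weight of the syntax error detection task
second_weight = 1.0   # weight of the syntax error correction task

# Instead of back-propagating the two losses separately, scale each task's
# contribution before computing the gradients used to adjust the shared encoder.
optimizer.zero_grad()
weighted_loss = first_weight * first_loss + second_weight * second_loss
weighted_loss.backward()
optimizer.step()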
Optionally, in this embodiment, the storage medium is configured to store program code for performing the following steps: the target encoder is a multi-layer Bi-LSTM encoder and the target decoder is a multi-layer Bi-LSTM decoder.
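For reference, a multi-layer Bi-LSTM encoder and decoder of the kind mentioned here could be declared as in the sketch below (PyTorch; the layer count, hidden size, detection head and per-position projection are illustrative assumptions, since these details are not fixed here). With these modules, the encoder in the earlier snippets would be an instance of BiLSTMEncoder, and detection_head and decoder would be instances of DetectionHead and BiLSTMDecoder.

import torch.nn as nn

class BiLSTMEncoder(nn.Module):
    # Shared multi-layer Bi-LSTM encoder: token ids -> semantic representation.
    def __init__(self, vocab_size, emb_dim=256, hidden_dim=256, num_layers=3):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, emb_dim)
        self.bilstm = nn.LSTM(emb_dim, hidden_dim, num_layers=num_layers,
                              bidirectional=True, batch_first=True)

    def forward(self, token_ids):
        # (batch, seq_len) -> (batch, seq_len, 2 * hidden_dim)
        return self.bilstm(self.embedding(token_ids))[0]

class BiLSTMDecoder(nn.Module):
    # Multi-layer Bi-LSTM decoder producing per-position vocabulary logits.
    def __init__(self, vocab_size, hidden_dim=256, num_layers=3):
        super().__init__()
        self.bilstm = nn.LSTM(2 * hidden_dim, hidden_dim, num_layers=num_layers,
                              bidirectional=True, batch_first=True)
        self.project = nn.Linear(2 * hidden_dim, vocab_size)

    def forward(self, semantic_repr):
        out, _ = self.bilstm(semantic_repr)
        return self.project(out)

class DetectionHead(nn.Module):
    # Syntax error detection submodel: labels each position, e.g. correct/erroneous.
    def __init__(self, hidden_dim=256, num_labels=2):
        super().__init__()
        self.classify = nn.Linear(2 * hidden_dim, num_labels)

    def forward(self, semantic_repr):
        return self.classify(semantic_repr)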
Optionally, in this embodiment, the storage medium is configured to store program code for performing the following steps: if the statement to be processed is a statement to be subjected to syntax error detection, the processing model is the syntax error detection submodel; if the statement to be processed is a statement to be subjected to syntax error correction, the processing model is the multi-layer Bi-LSTM decoder.
The serial numbers of the above embodiments of the present invention are merely for description and do not indicate the relative merits of the embodiments.
In the above embodiments of the present invention, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed technology may be implemented in other ways. The apparatus embodiments described above are merely illustrative; for example, the division of the units is merely a division of logical functions, and there may be other divisions in actual implementation: a plurality of units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the illustrated or discussed mutual coupling, direct coupling, or communication connection may be an indirect coupling or communication connection through some interfaces, units, or modules, and may be in an electrical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a USB flash disk (U-disk), a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic or optical disk, and various other media capable of storing program code.
The foregoing is only a preferred embodiment of the present invention, and it should be noted that those skilled in the art can make various modifications and improvements without departing from the principle of the present invention, and such modifications and improvements should also fall within the protection scope of the present invention.

Claims (14)

1. A sentence identification method, comprising:
obtaining a statement to be processed;
inputting the statement to be processed into a target encoder, and processing to obtain semantic representation of the statement to be processed, wherein the target encoder is an encoder shared by a syntax error detection task and a syntax error correction task;
inputting the semantic representation of the statement to be processed into a processing model to obtain a processing result, wherein the processing result comprises one of the following: syntax error detection results and syntax error correction results.
2. The method of claim 1, further comprising:
before the sentence to be processed is input to a target encoder and processed to obtain a semantic representation of the sentence to be processed, acquiring a plurality of labeled sentences, wherein the plurality of labeled sentences comprise: the first type of statement marked with syntax errors and the second type of statement corrected with syntax errors;
inputting the first type of statement into the target encoder, processing to obtain semantic representation of the first type of statement, inputting the second type of statement into the target encoder, processing to obtain semantic representation of the second type of statement,
inputting the semantic representation of the first type of statement into a syntax error detection submodel to obtain a syntax error detection result of the first type of statement, and inputting the semantic representation of the second type of statement into a target decoder to obtain a syntax error correction result of the second type of statement;
comparing the grammar error detection result of the first type of statement with the marked grammar error to obtain a first comparison result, and comparing the grammar error correction result of the second type of statement with the error-corrected grammar error to obtain a second comparison result;
and adjusting parameters of the target encoder according to the first comparison result and the second comparison result.
3. The method of claim 2, further comprising:
adjusting parameters of the syntax error detection submodel according to the first comparison result;
and adjusting the parameters of the target decoder according to the second comparison result.
4. The method of claim 2, wherein adjusting parameters of the target encoder according to the first comparison result and the second comparison result comprises:
calculating a first loss value according to the first comparison result and a cross entropy loss function;
calculating a second loss value according to the second comparison result and the cross entropy loss function;
calculating a first parameter to be adjusted by the first loss value through a back propagation algorithm, and calculating a second parameter to be adjusted by the second loss value through the back propagation algorithm;
and adjusting the parameters of the target encoder according to the first parameters and the second parameters.
5. The method of claim 4, wherein adjusting the parameters of the target encoder according to the first parameters and the second parameters comprises:
determining a first weight value of the syntax error detection task and a second weight value of the syntax error correction task;
adjusting parameters of the target encoder according to the first parameter, the first weight value, the second parameter and the second weight value.
6. The method of claim 2, wherein the target encoder is a multi-layer Bi-LSTM encoder and the target decoder is a multi-layer Bi-LSTM decoder.
7. The method of claim 6, wherein if the to-be-processed sentence is a sentence to be subjected to syntax error detection, the processing model is a syntax error detection submodel, and if the to-be-processed sentence is a sentence to be subjected to syntax error correction, the processing model is a multi-layer Bi-LSTM decoder.
8. A sentence identification apparatus, comprising:
the first acquisition unit is used for acquiring the statement to be processed;
the first processing unit is used for inputting the statement to be processed into a target encoder and processing the statement to be processed to obtain semantic representation of the statement to be processed, wherein the target encoder is an encoder shared by a syntax error detection task and a syntax error correction task;
a second processing unit, configured to input the semantic representation of the to-be-processed statement to a processing model, and obtain a processing result, where the processing result includes one of: syntax error detection results and syntax error correction results.
9. The apparatus of claim 8, further comprising:
a second obtaining unit, configured to obtain a plurality of labeled sentences before the to-be-processed sentences are input to a target encoder and processed to obtain a semantic representation of the to-be-processed sentences, where the plurality of labeled sentences include: the first type of statement marked with syntax errors and the second type of statement corrected with syntax errors;
a third processing unit, configured to input the first type of statement into the target encoder, process the first type of statement to obtain a semantic representation of the first type of statement, input the second type of statement into the target encoder, and process the second type of statement to obtain a semantic representation of the second type of statement,
a third obtaining unit, configured to input the semantic representation of the first type of statement to a syntax error detection submodel to obtain a syntax error detection result of the first type of statement, and input the semantic representation of the second type of statement to a target decoder to obtain a syntax error correction result of the second type of statement;
the comparison unit is used for comparing the grammar error detection result of the first type of statement with the marked grammar error to obtain a first comparison result, and comparing the grammar error correction result of the second type of statement with the error-corrected grammar error to obtain a second comparison result;
and the first adjusting unit is used for adjusting the parameters of the target encoder according to the first comparison result and the second comparison result.
10. The apparatus of claim 9, further comprising:
a second adjusting unit, configured to adjust parameters of the syntax error detection submodel according to the first comparison result;
and the third adjusting unit is used for adjusting the parameters of the target decoder according to the second comparison result.
11. The apparatus of claim 9, wherein the first adjusting unit comprises:
the first calculation module is used for calculating a first loss value according to the first comparison result and a cross entropy loss function and calculating a second loss value according to the second comparison result and the cross entropy loss function;
the second calculation module is used for calculating a first parameter to be adjusted according to the first loss value through a back propagation algorithm and calculating a second parameter to be adjusted according to the second loss value through the back propagation algorithm;
and the adjusting module is used for adjusting the parameters of the target encoder according to the first parameters and the second parameters.
12. The apparatus of claim 11, wherein the adjustment module comprises:
the determining submodule is used for determining a first weight value of the grammar error detection task and a second weight value of the grammar error correction task;
an adjusting submodule, configured to adjust a parameter of the target encoder according to the first parameter, the first weight value, the second parameter, and the second weight value.
13. A storage medium comprising a stored program, wherein the program, when executed, controls a device on which the storage medium is located to execute the sentence identification method of any of claims 1 to 7.
14. A processor, for running a program, wherein the program runs to perform the statement identification method of any one of claims 1 to 7.
CN201910390785.0A 2019-05-10 2019-05-10 Statement identification method and device, storage medium and processor Pending CN111914540A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910390785.0A CN111914540A (en) 2019-05-10 2019-05-10 Statement identification method and device, storage medium and processor

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910390785.0A CN111914540A (en) 2019-05-10 2019-05-10 Statement identification method and device, storage medium and processor

Publications (1)

Publication Number Publication Date
CN111914540A true CN111914540A (en) 2020-11-10

Family

ID=73242930

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910390785.0A Pending CN111914540A (en) 2019-05-10 2019-05-10 Statement identification method and device, storage medium and processor

Country Status (1)

Country Link
CN (1) CN111914540A (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1269923A (en) * 1997-02-07 2000-10-11 诺基亚流动电话有限公司 Information coding method and device using error correction and error detection
WO2013191662A1 (en) * 2012-06-22 2013-12-27 National University Of Singapore Method for correcting grammatical errors of an input sentence
WO2014025135A1 (en) * 2012-08-10 2014-02-13 에스케이텔레콤 주식회사 Method for detecting grammatical errors, error detecting apparatus for same, and computer-readable recording medium having the method recorded thereon
WO2019024050A1 (en) * 2017-08-03 2019-02-07 Lingochamp Information Technology (Shanghai) Co., Ltd. Deep context-based grammatical error correction using artificial neural networks
CN109243433A (en) * 2018-11-06 2019-01-18 北京百度网讯科技有限公司 Audio recognition method and device
CN109461438A (en) * 2018-12-19 2019-03-12 合肥讯飞数码科技有限公司 A kind of audio recognition method, device, equipment and storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
CHEN LI ET AL: "A Hybrid System for Chinese Grammatical Error Diagnosis and Correction", 《PROCEEDINGS OF THE 5TH WORKSHOP ON NATURAL LANGUAGE PROCESSING TECHNIQUES FOR EDUCATIONAL APPLICATIONS》, 19 July 2018 (2018-07-19) *
CHEN SHANSHAN (陈珊珊): "Research on Automatic Essay Scoring Models and Methods" (自动作文评分模型及方法研究), 《China Master's Theses Full-text Database (中国优秀硕士学位论文全文数据库)》, 15 February 2018 (2018-02-15) *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113822044A (en) * 2021-09-29 2021-12-21 深圳市木愚科技有限公司 Grammar error correction data generating method, device, computer equipment and storage medium

Similar Documents

Publication Publication Date Title
US10691887B2 (en) Techniques for automatic proofing of textual data
CN104318259A (en) Target picture identifying device and method for and computing device
CN111914571B (en) Statement segmentation method and device, storage medium, processor and terminal equipment
CN109597745B (en) Abnormal data processing method and device
CN111914540A (en) Statement identification method and device, storage medium and processor
CN113315571A (en) Monitoring method and device of silicon optical module
CN111340911A (en) Method and device for determining connecting line in k-line graph and storage medium
US20170192750A1 (en) Numeric conversion method and electronic device
CN111291561B (en) Text recognition method, device and system
US20180260303A1 (en) Method and device for determining usage log
CN110837562A (en) Case processing method, device and system
CN112749150B (en) Error labeling data identification method, device and medium
CN111625628B (en) Information processing method and device, storage medium and processor
CN110929866A (en) Training method, device and system of neural network model
CN112329424A (en) Service data processing method and device, storage medium and electronic equipment
CN110609701A (en) Method, apparatus and storage medium for providing service
CN110704289A (en) Method, device and storage medium for monitoring kol account
CN114781331A (en) Text generation method and device, storage medium and processor
CN110826582A (en) Image feature training method, device and system
CN112906512B (en) Method, device and storage medium for determining joints of human body
CN110881001A (en) Electronic red packet detection method, system and terminal equipment
CN110096255B (en) Rule degradation processing method, device and system and data processing method
CN112633955B (en) Advertisement conversion abnormity detection method and system and computer readable storage medium
CN111737550B (en) Search result processing method and device, storage medium and processor
CN112818127A (en) Method, device and medium for detecting corpus conflict in knowledge base

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination