CN111104881A - Image processing method and related device

Image processing method and related device

Info

Publication number
CN111104881A
CN111104881A
Authority
CN
China
Prior art keywords
image
answer
examinee
answering
area image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201911251353.8A
Other languages
Chinese (zh)
Other versions
CN111104881B (en
Inventor
何孟华
何春江
曾金舟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
iFlytek Co Ltd
Original Assignee
iFlytek Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by iFlytek Co Ltd filed Critical iFlytek Co Ltd
Priority to CN201911251353.8A priority Critical patent/CN111104881B/en
Publication of CN111104881A publication Critical patent/CN111104881A/en
Application granted granted Critical
Publication of CN111104881B publication Critical patent/CN111104881B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00: Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/40: Document-oriented image-based pattern recognition
    • G06V30/41: Analysis of document content
    • G06V30/413: Classification of content, e.g. text, photographs or tables
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/04: Architecture, e.g. interconnection topology
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/08: Learning methods
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00: Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/40: Document-oriented image-based pattern recognition
    • G: PHYSICS
    • G09: EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B: EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B7/00: Electrically-operated teaching apparatus or devices working with questions and answers

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Multimedia (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Business, Economics & Management (AREA)
  • Educational Administration (AREA)
  • Educational Technology (AREA)
  • Electrically Operated Instructional Devices (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The application discloses an image processing method and a related device. The method includes: using a matching algorithm to match an examinee answering image against an answer template image, the template image having been generated by a preset generation model from the standard answer content, the standard answer position and the error-correction question image, to obtain an examinee answering area image and a corresponding answer template area image; inputting the examinee answering area image and the corresponding answer template area image into a preset classification model to obtain a category label for the examinee answering area image; and determining the review result for the examinee answering area image from the first judgment score corresponding to that category label. Because the examinee answering image is first matched, by the matching algorithm, against an answer template image that encodes both the standard answer content and the standard answer position, and the matched regions are then classified by the preset classification model, the handwritten answer words, the error-correction symbols and the handwritten answering positions in the examinee answering image can all be reviewed together. This avoids misjudgment and improves the accuracy and practicability of automatic machine review.

Description

Image processing method and related device
Technical Field
The present application relates to the field of image technologies, and in particular, to an image processing method and a related apparatus.
Background
In everyday examinations, error-correction questions mainly test an examinee's comprehension of a passage and command of context and grammar, requiring the examinee to identify errors correctly and correct them. Specifically, the examinee completes the correction by adding to, deleting from or modifying the erroneous text in the question by hand.
With the rapid development of technology, error-correction questions have moved from traditional manual review to automatic machine review. For an examinee answering image, an image detection algorithm first detects each single-point examinee handwriting area image; an image recognition model then recognizes the handwriting area image to obtain recognition content; finally, a review model automatically scores the recognition content against the corresponding standard answer content.
However, the inventors found through research that reviewing an error-correction question actually involves both the handwritten answering content and the handwritten answering position, where the handwritten answering content comprises handwritten answer words and error-correction symbols. The single-point examinee handwriting area image used by the existing machine review method yields only the handwritten words: it can neither recognize the error-correction symbols nor determine the handwritten answering position. An answer is therefore liable to be judged correct merely because the handwritten words are correct, even when the error-correction symbol is wrong or the answering position is off, so the existing automatic machine review method is low in accuracy and practicability.
Disclosure of Invention
In view of this, the embodiments of the present application provide an image processing method and a related apparatus, which can jointly review the handwritten answer words, error-correction symbols and handwritten answering positions in an examinee answering image, thereby greatly improving the accuracy and practicability of review.
In a first aspect, an embodiment of the present application provides an image processing method, where the method includes:
based on the examinee answering image and the answer template image, obtaining an examinee answering area image and a corresponding answer template area image by using a matching algorithm; the answer template image is obtained by using a corresponding preset generation model based on standard answer content, a standard answer position and an error-correction question image;
obtaining a category label of the examinee answering area image by using a preset classification model based on the examinee answering area image and the corresponding answer template area image;
and determining the review result of the image of the answer area of the examinee based on the first judgment score corresponding to the category label.
Optionally, the obtaining step of the answer template image includes:
obtaining a standard answer region image by using the preset generation model based on the standard answer content and the first random vector;
and obtaining the answer template image based on the standard answer area image, the standard answer position and the error correction question image.
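The compositing described in this step can be sketched in a few lines; the array layout, helper name and toy dimensions below are illustrative assumptions, not an API fixed by the patent. The generated standard answer region is simply pasted into the error-correction question image at the standard answer position:

```python
import numpy as np

def compose_answer_template(question_img, answer_patch, answer_pos):
    """Paste a generated standard-answer region into the error-correction
    question image at the standard answer position (row, col)."""
    y, x = answer_pos
    h, w = answer_patch.shape[:2]
    template = question_img.copy()
    template[y:y + h, x:x + w] = answer_patch  # overwrite the slot at the answer position
    return template

# Toy grayscale images: an 8x8 question page and a 2x3 answer patch.
question = np.zeros((8, 8), dtype=np.uint8)
patch = np.full((2, 3), 255, dtype=np.uint8)
template = compose_answer_template(question, patch, (2, 4))
```

In a real pipeline the patch would be the output of the preset generation model and the position would come from the answer key, but the paste operation itself is this simple.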
Optionally, the obtaining of the preset generative model includes:
inputting the training answer content and a second random vector into a generation network to obtain a pseudo answer area image;
and pre-training a discrimination network and the generation network to obtain the preset generation model based on the pseudo answer area image, the training answer content, the real answer area image corresponding to the training answer content and the non-training answer content corresponding to the real answer area image.
Optionally, before the obtaining a standard answer region image by using the preset generation model based on the standard answer content and the first random vector, the method further includes:
obtaining a mapping rule from the pseudo answer region image to the second random vector by utilizing reverse learning of the preset generation model;
extracting the font style information of the examinees in the examinee answering image based on the mapping rule to obtain a target vector as the first random vector;
correspondingly, the standard answer area image comprises the examinee font style information, and the answer template image comprises the examinee font style information.
Optionally, the preset classification model is obtained by pre-training a classification network based on a training examinee answering area image and a training answer template area image, and a correct label, an error label or an invalid label of the training examinee answering area image relative to the training answer template area image; correspondingly, if the category label is a correct label, the first judgment score corresponding to the category label is 1; and if the category label is an error label or an invalid label, the first judgment score corresponding to the category label is 0.
Optionally, the method further includes:
matching the examinee answering image with the error correction image to obtain an examinee answering area image and a corresponding handwriting answering position;
identifying the image of the examinee answering area by using an identification algorithm to obtain handwritten answering contents;
obtaining a second judgment score of the examinee answering area image by using a judgment algorithm based on the handwritten answering content, the handwritten answering position, the standard answer content and the standard answer position;
correspondingly, the determining the review result of the image of the answer area of the examinee based on the first judgment score corresponding to the category tag specifically comprises:
and determining the evaluation result of the examinee answering area image based on the first judgment score and the corresponding second judgment score.
Optionally, the determining, based on the first decision score and the corresponding second decision score, a review result of the candidate answering area image includes:
if the first judgment score is the same as the second judgment score, directly determining the evaluation result of the image of the answer area of the examinee based on the first judgment score or the second judgment score;
and if the first judgment score differs from the second judgment score, determining the review result of the examinee answering area image based on the first judgment score, the confidence of the first judgment score, the second judgment score and the confidence of the second judgment score.
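One way to realize this decision rule is sketched below; the patent only requires that both scores and their confidences be considered, so the exact tie-break (higher confidence wins) is an illustrative assumption:

```python
def fuse_review(score1, conf1, score2, conf2):
    """Combine the classification-based first judgment score with the
    recognition-based second judgment score. Equal scores are taken as-is;
    otherwise the score with the higher confidence wins (one plausible
    fusion rule, chosen here for illustration)."""
    if score1 == score2:
        return score1
    return score1 if conf1 >= conf2 else score2

result = fuse_review(1, 0.9, 0, 0.6)  # scores disagree; the more confident score wins
```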
In a second aspect, an embodiment of the present application provides an apparatus for image processing, including:
the first obtaining unit is used for obtaining an examinee answering area image and a corresponding answer template area image by using a matching algorithm based on the examinee answering image and the answer template image; the answer template image is obtained by using a corresponding preset generation model based on standard answer content, a standard answer position and an error-correction question image;
a second obtaining unit, configured to obtain a category label of the examinee answering area image by using a preset classification model based on the examinee answering area image and the corresponding answer template area image;
and the determining unit is used for determining the review result of the examinee answering area image based on the first judgment score corresponding to the category label.
In a third aspect, an embodiment of the present application provides a terminal device, where the terminal device includes a processor and a memory:
the memory is used for storing program codes and transmitting the program codes to the processor;
the processor is configured to perform the method of image processing according to any of the above first aspects according to instructions in the program code.
In a fourth aspect, the present application provides a computer-readable storage medium for storing program code for executing the method of image processing according to any one of the first aspect.
Compared with the prior art, the method has the following advantages:
According to the technical solution of the embodiments of the application, a matching algorithm first matches the examinee answering image against an answer template image that a preset generation model produces from the standard answer content, the standard answer position and the error-correction question image, yielding an examinee answering area image and a corresponding answer template area image. The examinee answering area image and the corresponding answer template area image are then input into a preset classification model to obtain a category label for the examinee answering area image. Finally, the review result of the examinee answering area image is determined from the first judgment score corresponding to that category label. Because the preset generation model builds, on the basis of the error-correction question image, an answer template image that includes both the standard answer content and the standard answer position, and the examinee answering image is reviewed by first matching against this template and then classifying with the preset classification model, the handwritten answer words, error-correction symbols and handwritten answering positions in the examinee answering image can all be reviewed together. This effectively avoids the misjudgments of current automatic machine review and greatly improves its accuracy and practicability.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings needed to be used in the description of the embodiments of the present application will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments described in the present application, and it is obvious for those skilled in the art that other drawings can be obtained according to the drawings without creative efforts.
Fig. 1 is a schematic diagram of a system framework related to an application scenario in an embodiment of the present application;
fig. 2 is a schematic flowchart of a method for image processing according to an embodiment of the present disclosure;
fig. 3 is a schematic diagram of an English error-correction question image according to an embodiment of the present application;
FIG. 4 is a schematic diagram of an examinee answering image provided in an embodiment of the present application;
FIG. 5 is a diagram illustrating an answer template image according to an embodiment of the present disclosure;
FIG. 6 is a flowchart illustrating another image processing method according to an embodiment of the present application;
Fig. 7 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application.
Detailed Description
In order to make the technical solutions of the present application better understood, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Currently, automatic machine review of error-correction questions means that an image detection algorithm detects the examinee answering image to obtain single-point examinee handwriting area images, an image recognition model recognizes each handwriting area image to obtain recognition content, and a review model automatically scores the recognition content against the corresponding standard answer content. However, the inventors found that reviewing an error-correction question requires reviewing both the handwritten answering content and the handwritten answering position, where the handwritten answering content consists of handwritten answer words and error-correction symbols. In the existing automatic machine review method the recognition content covers only the handwritten answer words; error-correction symbols cannot be recognized and handwritten answering positions cannot be determined, so answers whose handwritten words are correct but whose error-correction symbol is wrong or whose answering position is off are easily judged correct. In short, the current automatic machine review method is low in accuracy and practicability.
To solve this problem, in the embodiments of the application, a matching algorithm first matches the examinee answering image against an answer template image that a preset generation model produces from the standard answer content, the standard answer position and the error-correction question image, yielding an examinee answering area image and a corresponding answer template area image. The examinee answering area image and the corresponding answer template area image are then input into a preset classification model to obtain a category label for the examinee answering area image. Finally, the review result of the examinee answering area image is determined from the first judgment score corresponding to that category label. In this way the handwritten answer words, error-correction symbols and handwritten answering positions in the examinee answering image can all be reviewed together, which effectively avoids the misjudgments of current automatic machine review and greatly improves its accuracy and practicability.
For example, the embodiments of the present application may be applied to the scenario shown in fig. 1, which includes a terminal device 101 and a processing device 102. The terminal device 101 sends an examinee answering image to the processing device 102; the processing device 102, which stores the corresponding answer template image, obtains the review result of the examinee answering area image by using the embodiments of the present application and sends it back to the terminal device 101 for display.
It is to be understood that, in the above application scenario, although the actions of the embodiments of the present application are described as being performed by the processing device 102, those actions may also be performed by the terminal device 101, or partly by the terminal device 101 and partly by the processing device 102. The present application does not limit the execution subject, as long as the actions disclosed in the embodiments of the present application are executed.
It is to be understood that the above scenario is only one example of a scenario provided in the embodiment of the present application, and the embodiment of the present application is not limited to this scenario.
The following describes in detail a specific implementation of the method for image processing and the related apparatus in the embodiments of the present application by way of embodiments, with reference to the accompanying drawings.
Exemplary method
Referring to fig. 2, a flowchart of a method of image processing in an embodiment of the present application is shown. In this embodiment, the method may include, for example, the steps of:
step 201: based on the examinee answering image and the answer template image, obtaining an examinee answering area image and a corresponding answer template area image by using a matching algorithm; the answer template image is obtained by utilizing a corresponding preset generation model based on standard answer content, a standard answer position and an error correction image. It should be noted that, because the current automatic machine review can only identify the handwritten answering words in the examinee answering image, and only compare and judge with the standard answer content, the wrong sign and handwritten answering position cannot be judged, so that the accuracy and practicability of the current automatic machine review are low. Therefore, in the embodiment of the application, not only standard answer content but also standard answer positions are considered, and an answer template image comprising the standard answer content and the standard answer positions is obtained by using the corresponding preset generation model in combination with the error correction question image; on the basis, the examinee answering image and the answer template image are calibrated and matched by using a matching algorithm, and an examinee answering area image comprising the handwritten answering content and the handwritten answering position and an answer template area image corresponding to the examinee answering area image are obtained. 
The handwritten answering content is produced by the examinee adding, deleting or modifying text by hand, so its error-correction type is addition, deletion or modification. If the error-correction type is addition or modification, the handwritten answering content comprises handwritten answer words and error-correction symbols, and the examinee answering area image comprises a printed-text area and a handwritten area; if the error-correction type is deletion, the handwritten answering content comprises only error-correction symbols, and the examinee answering area image comprises a printed-text area.
First, the standard answer content, together with a random vector (referred to as a first random vector), is input into the corresponding pre-trained preset generation model to obtain a standard answer region image matching the standard answer content. Then the region of the error-correction question image at the standard answer position is replaced with the standard answer region image, yielding an answer template image that includes both the standard answer content and the standard answer position. Accordingly, in an optional implementation of the embodiments of the present application, the step of obtaining the answer template image may include the following steps:
step A: and obtaining a standard answer region image by using the preset generation model based on the standard answer content and the first random vector.
For example, the standard answer content is passed through a coding layer and a fully-connected layer and activated with an activation function to obtain a low-dimensional vector; this vector is concatenated with a first random vector drawn from a normal distribution to obtain a spliced vector; finally, the spliced vector is fed through several deconvolution layers to obtain the standard answer region image.
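The pipeline just described (encode, activate, splice with a random vector, decode to an image) can be sketched in miniature. Everything below, from the character-code encoding to the single random dense projection standing in for the deconvolution layers, is an illustrative assumption rather than the patent's architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def generate_answer_region(answer_text, z, out_shape=(8, 16)):
    """Toy stand-in for the generation network: encode the answer content,
    squash it with an activation, concatenate a random vector, then project
    to an image-shaped array. The deconvolution layers are replaced by one
    dense projection with random placeholder weights."""
    # "coding layer + fully-connected layer": character codes as a fixed-size vector
    enc = np.array([ord(c) for c in answer_text.ljust(16)[:16]], dtype=float)
    enc = np.tanh(enc / 128.0)          # activation to a small-range vector
    feat = np.concatenate([enc, z])     # splice with the random vector
    W = rng.standard_normal((feat.size, out_shape[0] * out_shape[1]))
    img = np.tanh(feat @ W).reshape(out_shape)
    return img

z = rng.standard_normal(8)              # the "first random vector"
img = generate_answer_region("taken", z)
```

In the trained model the projection weights are learned adversarially; here they only demonstrate the data flow from text plus noise to an image tensor.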
And B: and obtaining the answer template image based on the standard answer area image, the standard answer position and the error correction question image.
Obtaining the preset generation model means training a generative adversarial network in advance, based on training answer content, a real answer region image corresponding to the training answer content, and non-training answer content (answer content that does not match the real answer region image), and then taking the generation network of the trained generative adversarial network as the preset generation model. The discrimination network in the generative adversarial network judges whether the answer region image produced by the generation network under training matches the input answer content. Specifically, the training answer content and a random vector (referred to as a second random vector) are first input into the generation network, which generates an answer region image as a pseudo answer region image. Then three training pairs, (pseudo answer region image, training answer content), (real answer region image, training answer content) and (real answer region image, non-training answer content), are input into the discrimination network, and the discrimination network and the generation network are trained until training finishes; the trained generation network is the preset generation model. Accordingly, in an optional implementation of the embodiments of the present application, the step of obtaining the preset generation model may include, for example, the following steps:
and C: and inputting the content of the training answer and a second random vector into a generation network to obtain a pseudo answer area image.
For example, the training answer content is passed through a coding layer and a fully-connected layer and activated with an activation function to obtain a low-dimensional vector; this vector is concatenated with a second random vector drawn from a normal distribution to obtain a spliced vector; finally, the spliced vector is fed through several deconvolution layers to obtain a pseudo answer region image.
Step D: and pre-training a discrimination network and the generation network to obtain the preset generation model based on the pseudo answer area image, the training answer content, the real answer area image corresponding to the training answer content and the non-training answer content corresponding to the real answer area image.
For example, using the standard training method for generative adversarial networks, the generation network and the discrimination network are first initialized and run forward; the parameters of each layer of both networks are then optimized with a back-propagation algorithm; finally, the generation network that meets expectations is output as the preset generation model.
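The three sample pairs and their discriminator targets can be written out directly; the boolean encoding of each pair below is an illustrative simplification of the (image, text) inputs:

```python
def discriminator_targets(pairs):
    """Target labels for the discrimination network: only a real answer
    region image paired with its own (matching) answer content is a
    positive example; the pseudo image and the mismatched pair are both
    negatives."""
    return [1 if (is_real and is_matching) else 0
            for is_real, is_matching in pairs]

pairs = [
    (False, True),   # (pseudo answer region image, training answer content)
    (True, True),    # (real answer region image, training answer content)
    (True, False),   # (real answer region image, non-training answer content)
]
labels = discriminator_targets(pairs)
```

Training the discriminator against these targets while the generator tries to make its pseudo images earn the positive label is exactly the adversarial objective described above.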
It should be noted that, as described above, the error-correction type of the handwritten answering content is addition, deletion or modification, and correspondingly so are the error-correction types of the standard answer content and of the training answer content. For steps C to D, when preset generation models are trained on training answer content of different error-correction types, the generation networks and discrimination networks used share the same structure but have different parameters, and the resulting preset generation models likewise share the same structure with different parameters. For step A, when standard answer region images are generated from standard answer content of different error-correction types, the preset generation models used share the same structure but have different parameters.
It should be noted that, from steps C and D, the random vector captures style factors, such as color style, stroke style and font style, and different random vectors capture different style factors. Since the examinee font style in an answering image is distinctive and differs between examinees, making the answer template image carry the examinee's font style information can improve the accuracy of subsequent automatic machine review. That is, for step A, standard answer region images carrying different examinees' font style information can be obtained by combining the standard answer content with different first random vectors. Specifically, the preset generation model is first inverted to learn a mapping rule from the pseudo answer region image to the second random vector; this mapping rule is then used to extract the examinee font style information from the examinee answering image, yielding a vector (referred to as a target vector) that serves as the first random vector. The standard answer region image obtained in step A then carries the examinee font style information, and so does the answer template image in step B. Accordingly, in an optional implementation of the embodiments of the present application, the following steps may, for example, precede step A:
step E: and obtaining a mapping rule from the pseudo answer region image to the second random vector by utilizing reverse learning of the preset generation model.
Step F: and extracting the font style information of the examinees in the examinee answering image based on the mapping rule to obtain a target vector as the first random vector.
As an example, the error-correction question image may be an English error-correction question image. For the standard answer content and standard answer positions shown in Table 1 below, an English error-correction question image is shown in fig. 3 and the corresponding examinee answering image in fig. 4; on this basis, the corresponding preset generation model produces the answer template image shown in fig. 5, which includes not only the standard answer content and the standard answer positions but also the examinee font style information of the examinee answering image. Of course, the error-correction question image is not limited to English; it may also be a Chinese error-correction question image, and so on.
TABLE 1 contents of standard answers and positions of standard answers
The matching algorithm in the embodiment of the present application is not limited to a rule-based or model-based algorithm, as long as calibration matching is performed on the examinee answering image and the answer template image to obtain the examinee answering area image, including the handwritten answering content and the handwritten answering position, and the answer template area image corresponding to the examinee answering area image. For example, the matching algorithm may be rule-based; specifically, the examinee answering image and the answer template image are aligned by horizontal projection. It should be noted that examinees usually write their answers below the printed text, but some examinees deviate from this and write above the printed text. Considering that the error type of the handwritten content may be an addition or a modification, and that the handwritten content includes both words and error correction symbols, the examinee answering area image covers not only the printed-text area itself but also the areas below and above the printed text.
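The horizontal-projection alignment mentioned above can be sketched as follows: each binary image is reduced to a row-wise ink profile, and the vertical offset maximizing the overlap of the two profiles aligns the examinee answering image with the answer template image. The image representation (lists of 0/1 rows) and the dot-product scoring are illustrative assumptions.

```python
# Horizontal-projection alignment sketch (illustrative): align two binary
# images vertically by matching their row-wise ink profiles.

def row_profile(image):
    """Horizontal projection: number of ink pixels in each row."""
    return [sum(row) for row in image]

def best_vertical_offset(profile_a, profile_b, max_shift=3):
    """Return the row shift of profile_b that best matches profile_a,
    scored by the dot product of the overlapping parts."""
    best_shift, best_score = 0, float("-inf")
    for shift in range(-max_shift, max_shift + 1):
        score = 0
        for i, value in enumerate(profile_a):
            j = i + shift
            if 0 <= j < len(profile_b):
                score += value * profile_b[j]
        if score > best_score:
            best_shift, best_score = shift, score
    return best_shift

template = [[0, 0], [1, 1], [1, 0], [0, 0]]   # hypothetical binary images
scanned  = [[1, 1], [1, 0], [0, 0], [0, 0]]   # same content, one row higher
offset = best_vertical_offset(row_profile(template), row_profile(scanned))
# offset → -1 (the scanned content sits one row above the template)
```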
Step 202: and obtaining the category label of the examinee answering area image by utilizing a preset classification model based on the examinee answering area image and the corresponding answer template area image.
It should be noted that, after the examinee answering area image and the corresponding answer template area image are obtained by matching in step 201, it is necessary to determine, according to the answer template area image, whether the examinee answering area image is correct, wrong, or invalid. In the embodiment of the present application, a classification network is pre-trained, based on a training examinee answering area image and a training answer template area image together with a correct label, an error label, or an invalid label of the training examinee answering area image relative to the training answer template area image, to obtain the preset classification model; inputting the examinee answering area image and the corresponding answer template area image into the preset classification model then yields the category label of the examinee answering area image, i.e., a correct label, an error label, or an invalid label. In this way, both the handwritten answering content and the handwritten answering position of the examinee answering image, where the handwritten answering content includes handwritten words and error correction symbols, are reviewed comprehensively, so that the misjudgment of current automatic machine review is effectively avoided: not only can the examinee answering area image be determined to be correct or wrong, but it can also be determined whether it is invalid.
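A minimal sketch of the three-way decision follows, using a trivial pixel-difference rule as a stand-in for the trained classification network; the blank-region test, the threshold, and the image representation are all illustrative assumptions, not the patent's actual model.

```python
# Three-way classification sketch (illustrative): label an examinee answering
# area image against its answer template area image as "correct", "error",
# or "invalid", using a pixel-difference rule as a stand-in for the trained
# preset classification model.

def classify_region(candidate, template, threshold=0.1):
    total = sum(len(row) for row in candidate)
    ink = sum(sum(row) for row in candidate)
    if ink == 0:
        return "invalid"                 # blank region: no answer written
    differing = sum(
        1
        for c_row, t_row in zip(candidate, template)
        for c, t in zip(c_row, t_row)
        if c != t
    )
    return "correct" if differing / total <= threshold else "error"

template = [[1, 1, 0], [0, 1, 0]]                          # hypothetical images
label = classify_region([[1, 1, 0], [0, 1, 0]], template)  # identical → "correct"
```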
In order to make use of both the global information and the local information of the examinee answering area image and the corresponding answer template area image, and to improve the accuracy and efficiency of the preset classification model, image segmentation processing may be performed on the examinee answering area image and the corresponding answer template area image before the category label of the examinee answering area image is obtained with the preset classification model. In the embodiment of the present application, the image segmentation processing may adopt either of the following two specific embodiments:
In the first embodiment, sliding-window image segmentation processing is performed on the examinee answering area image and the corresponding answer template area image with a window of width W1 and step length S1. For example, the width W1 may be set to 2 times the average character width, and the step length S1 may be set to the average character width; of course, the width W1 and the step length S1 may also take other suitable values.
In the second embodiment, the examinee answering area image and the corresponding answer template area image are subjected to image segmentation processing with a fixed width W2. For example, the width W2 may be set to 2 times the average character width; of course, the width W2 may also take other suitable values.
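The first segmentation embodiment can be sketched as follows; the window width W1 = 2 × the average character width and step S1 = the average character width follow the example above, while treating the image as a pixel-width span is an illustrative assumption.

```python
# Sliding-window segmentation sketch: cut an answering-area image of a given
# pixel width into overlapping windows of width W1 = 2 * char_width, sliding
# by step S1 = char_width, as in the first embodiment.

def sliding_windows(image_width, char_width):
    w1, s1 = 2 * char_width, char_width      # window width W1 and step S1
    windows = []
    start = 0
    while start < image_width:
        windows.append((start, min(start + w1, image_width)))
        if start + w1 >= image_width:        # last window reaches the edge
            break
        start += s1
    return windows

spans = sliding_windows(image_width=50, char_width=10)
# spans → [(0, 20), (10, 30), (20, 40), (30, 50)]
```

Each span would then be cropped from both the examinee answering area image and the answer template area image, so the classifier sees aligned local pieces.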
Step 203: and determining the review result of the image of the answer area of the examinee based on the first judgment score corresponding to the category label.
It can be understood that only when the category label of the examinee answering area image is the correct label do the handwritten answering content and the handwritten answering position included in the examinee answering area image completely match the standard answer content and the standard answer position included in the corresponding answer template image, in which case the first judgment score is 1; correspondingly, when the category label of the examinee answering area image is an error label or an invalid label, the first judgment score is 0. The first judgment score determines the review result of the examinee answering area image. Therefore, in an optional implementation manner of the embodiment of the present application, if the category label is a correct label, the first judgment score corresponding to the category label is 1; and if the category label is an error label or an invalid label, the first judgment score corresponding to the category label is 0.
According to the various implementations provided by this embodiment, a matching algorithm is first used to match the examinee answering image against the answer template image, which is obtained by inputting the standard answer content, the standard answer position, and the error correction image into the preset generation model, so as to obtain the examinee answering area image and the corresponding answer template area image; then the examinee answering area image and the corresponding answer template area image are input into the preset classification model to obtain the category label of the examinee answering area image; and finally, the review result of the examinee answering area image is determined based on the first judgment score corresponding to the category label. In this way, the answer template image, which includes the standard answer content and the standard answer position, is generated from the error correction image with the preset generation model, and the examinee answering image and the answer template image are reviewed by first matching with the matching algorithm and then classifying with the preset classification model. The handwritten answering words, the error correction symbols, and the handwritten answering positions in the examinee answering image are thus all reviewed comprehensively, the misjudgment existing in current automatic machine review is effectively avoided, and the accuracy and practicability of automatic machine review are greatly improved.
It should be noted that, because the accuracy of the preset classification model in the above method embodiment cannot reach 100%, when the preset classification model produces the category label of the examinee answering area image, the confidence of that category label, i.e., the confidence of the first judgment score corresponding to the category label, can also be obtained, and this confidence may be low. Therefore, in order to improve the accuracy of automatic machine review in that situation, on the basis of the above method embodiment, the embodiment of the present application additionally matches the examinee answering image with the error correction image to obtain the examinee answering area image and the corresponding handwritten answering position, recognizes the examinee answering area image to obtain the handwritten answering content, compares the handwritten answering content and the handwritten answering position with the standard answer content and the standard answer position to determine a second judgment score of the examinee answering area image, and comprehensively determines the review result of the examinee answering area image from the first judgment score and the second judgment score. A detailed implementation of another image processing method in the embodiment of the present application is described below with reference to fig. 6.
Referring to fig. 6, a flow chart of another method of image processing in the embodiment of the present application is shown. In this embodiment, the method may include, for example, the steps of:
step 601: based on the examinee answering image and the answer template image, obtaining an examinee answering area image and a corresponding answer template area image by using a matching algorithm; the answer template image is obtained by utilizing a corresponding preset generation model based on standard answer content, a standard answer position and an error correction image.
Step 602: and obtaining the category label of the examinee answering area image by utilizing a preset classification model based on the examinee answering area image and the corresponding answer template area image.
It is understood that steps 601 to 602 are the same as steps 201 to 202 in the above method embodiment, and the detailed description thereof refers to the detailed description of steps 201 to 202 in the above method embodiment, which is not repeated herein.
Step 603: and matching the examinee answering image with the error correction image to obtain the examinee answering area image and the corresponding handwriting answering position.
Step 604: and identifying the image of the examinee answering area by using an identification algorithm to obtain the handwritten answering content.
It should be noted that the embodiment of the present application does not limit the recognition algorithm, as long as the examinee answering area image is recognized to obtain the handwritten answering content, including the handwritten answering words and the error correction symbols. For example, the recognition algorithm may be an encoding-decoding algorithm, in which the encoding end encodes the examinee answering area image, for example using a VGG16 network followed by a bidirectional long short-term memory network, and the decoding end decodes the result to obtain the handwritten answering content, for example using a bidirectional long short-term memory network.
Step 605: and obtaining a second judgment score of the examinee answering area image by using a judgment algorithm based on the handwritten answering content, the handwritten answering position, the standard answer content and the standard answer position.
It can be understood that the second judgment score of the examinee answering area image is 1 only when the handwritten answering content is consistent with the standard answer content and the handwritten answering position is consistent with the standard answer position; when the handwritten answering content is inconsistent with the standard answer content, or the handwritten answering position is inconsistent with the standard answer position, the second judgment score of the examinee answering area image is 0.
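The comparison in step 605 can be sketched directly; representing the contents as strings and the positions as coordinate pairs is an illustrative assumption.

```python
# Second judgment score sketch (step 605): the score is 1 only when both the
# handwritten content matches the standard answer content and the handwritten
# position matches the standard answer position; otherwise it is 0.

def second_judgment_score(hw_content, hw_position, std_content, std_position):
    return 1 if hw_content == std_content and hw_position == std_position else 0

score = second_judgment_score("on -> in", (120, 48), "on -> in", (120, 48))  # → 1
```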
It should be noted that the embodiment of the present application does not limit the execution order of steps 601-602 and steps 603-605: steps 601-602 may be executed first and then steps 603-605; steps 603-605 may be executed first and then steps 601-602; or steps 601-602 and steps 603-605 may be executed simultaneously.
Step 606: and determining the review result of the examinee answering area image based on the first judgment score corresponding to the category label and the second judgment score corresponding to the category label.
It should be noted that the first judgment score and the corresponding second judgment score are both judgment scores of the examinee answering area image, and they may be either the same or different. When the first judgment score is the same as the corresponding second judgment score, both are credible; no other operation is needed, and the review result of the examinee answering area image can be determined directly based on either score. When the first judgment score is different from the corresponding second judgment score, the confidence of the first judgment score and the confidence of the second judgment score need to be considered to determine the review result comprehensively. Therefore, in an optional implementation manner of the embodiment of the present application, the above step 606 may include, for example, the following steps:
step G: and if the first judgment score is the same as the second judgment score, directly determining the evaluation result of the image of the answer area of the examinee based on the first judgment score or the second judgment score.
Step H: and if the first judgment score is different from the second judgment score, determining the evaluation result of the image of the answer area of the examinee based on the first judgment score, the confidence coefficient of the second judgment score and the confidence coefficient of the second judgment score.
It should be noted that, when the recognition algorithm recognizes the examinee answering area image to obtain the handwritten answering content in step 604, the confidence of the handwritten answering content, that is, the confidence of the second judgment score of the examinee answering area image, can also be obtained.
Specifically, step H may include, for example, the following steps:
step H1: and if the confidence coefficient of the first judgment score is greater than that of the second judgment score and the confidence coefficient of the first judgment score is greater than or equal to the preset confidence coefficient, determining the evaluation result of the image of the answer area of the examinee based on the first judgment score.
Step H2: and if the confidence coefficient of the second judgment score is greater than that of the first judgment score and the confidence coefficient of the second judgment score is greater than or equal to the preset confidence coefficient, determining the evaluation result of the image of the answer area of the examinee based on the second judgment score.
The preset confidence may be, for example, 0.85. It should be noted that when the confidence of the first judgment score is greater than the confidence of the second judgment score but less than the preset confidence, the reviewer may be prompted to perform manual verification review to ensure accuracy. Similarly, when the confidence of the second judgment score is greater than the confidence of the first judgment score but less than the preset confidence, the reviewer may likewise be prompted to perform manual verification review.
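Steps G, H, H1, and H2 together define a fusion rule over the two judgment scores, sketched below. The 0.85 preset confidence follows the example above, and the "manual review" return value is an illustrative assumption for the case where neither confidence reaches the threshold.

```python
# Dual-judgment fusion sketch (steps G, H, H1, H2): combine the first
# judgment score (from the classification model) and the second judgment
# score (from recognition + comparison), using confidences when they differ.

def fuse_scores(score1, conf1, score2, conf2, preset_conf=0.85):
    if score1 == score2:                        # step G: both are credible
        return score1
    if conf1 > conf2 and conf1 >= preset_conf:  # step H1: trust the first score
        return score1
    if conf2 > conf1 and conf2 >= preset_conf:  # step H2: trust the second score
        return score2
    return "manual review"                      # neither confidence suffices

result = fuse_scores(1, 0.90, 0, 0.60)   # confident first score wins → 1
```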
Through the various implementations provided by this embodiment, on the basis of the above method embodiment, the examinee answering image is matched with the error correction image to obtain the examinee answering area image and the corresponding handwritten answering position; the examinee answering area image is recognized with a recognition algorithm to obtain the handwritten answering content; a second judgment score of the examinee answering area image is obtained with a judgment algorithm based on the handwritten answering content, the handwritten answering position, the standard answer content, and the standard answer position; and the review result of the examinee answering area image is determined based on the first judgment score and the corresponding second judgment score. The handwritten answering words, the error correction symbols, and the handwritten answering positions in the examinee answering image are thus all reviewed comprehensively, the misjudgment existing in current automatic machine review is effectively avoided, and the accuracy and practicability of automatic machine review are greatly improved. Moreover, the review result of the examinee answering area image is determined comprehensively from the first judgment score and the second judgment score, so this dual-judgment mechanism further improves the accuracy of automatic machine review beyond the above method embodiment.
Exemplary devices
Referring to fig. 7, a schematic structural diagram of an image processing apparatus in an embodiment of the present application is shown. In this embodiment, the apparatus may specifically include:
a first obtaining unit 701, configured to obtain an answer region image of the test taker and a corresponding answer template region image by using a matching algorithm based on the answer image of the test taker and the answer template image; the answer template image is obtained by utilizing a corresponding preset generating model based on standard answer content, a standard answer position and an error correction image;
a second obtaining unit 702, configured to obtain a category label of the candidate answering area image by using a preset classification model based on the candidate answering area image and the corresponding answer template area image;
a determining unit 703, configured to determine, based on the first decision score corresponding to the category tag, a review result of the image of the candidate answering area.
In an optional implementation manner of the embodiment of the present application, the apparatus further includes an answer template image obtaining unit; the answer template image obtaining unit includes:
the first obtaining subunit is used for obtaining a standard answer area image by using the preset generation model based on the standard answer content and the first random vector;
and the second obtaining subunit is used for obtaining the answer template image based on the standard answer area image, the standard answer position and the error correction question image.
In an optional implementation manner of the embodiment of the present application, the apparatus further includes a preset generative model obtaining unit; the preset generative model obtaining unit includes:
the third obtaining subunit is used for inputting the training answer content and the second random vector into a generation network to obtain a pseudo answer area image;
and the fourth obtaining subunit is configured to pre-train a discrimination network and the generation network to obtain the preset generation model based on the pseudo answer region image, the training answer content, the true answer region image corresponding to the training answer content, and the non-training answer content corresponding to the true answer region image.
In an optional implementation manner of the embodiment of the present application, the apparatus further includes a mapping rule obtaining unit and a first random vector obtaining unit;
the mapping rule obtaining unit is configured to utilize the preset generation model to perform inverse learning to obtain a mapping rule from the pseudo answer region image to the second random vector;
the first random vector obtaining unit is used for extracting the examinee font style information in the examinee answering image based on the mapping rule to obtain a target vector as the first random vector;
correspondingly, the standard answer area image comprises the examinee font style information, and the answer template image comprises the examinee font style information.
In an optional implementation manner of the embodiment of the present application, the preset classification model is obtained by pre-training a classification network based on a training examinee answering area image and a training answer template area image, together with a correct label, an error label, or an invalid label of the training examinee answering area image relative to the training answer template area image. Correspondingly, if the category label is a correct label, the first judgment score corresponding to the category label is 1; and if the category label is an error label or an invalid label, the first judgment score corresponding to the category label is 0.
In an optional implementation manner of the embodiment of the present application, the apparatus further includes a third obtaining unit, a fourth obtaining unit, and a fifth obtaining unit;
the third obtaining unit is used for matching the examinee answering image with the error correction image to obtain the examinee answering area image and the corresponding handwriting answering position;
the fourth obtaining unit is used for identifying the examinee answering area image by using an identification algorithm to obtain handwritten answering contents;
the fifth obtaining unit is configured to obtain a second decision score of the examinee answering area image by using a decision algorithm based on the handwritten answering content, the handwritten answering position, the standard answer content, and the standard answer position;
correspondingly, the determining unit 703 is specifically configured to:
and determining the evaluation result of the examinee answering area image based on the first judgment score and the corresponding second judgment score.
In an optional implementation manner of the embodiment of the present application, the determining unit 703 includes:
a first determining subunit, configured to, if the first decision score is the same as the second decision score, directly determine, based on the first decision score or the second decision score, an evaluation result of the image of the candidate answering area;
and a second determining subunit, configured to determine, if the first decision score is different from the second decision score, the review result of the examinee answering area image based on the first decision score, the confidence of the first decision score, the second decision score, and the confidence of the second decision score.
According to the various implementations provided by this embodiment, a matching algorithm is first used to match the examinee answering image against the answer template image, which is obtained by inputting the standard answer content, the standard answer position, and the error correction image into the preset generation model, so as to obtain the examinee answering area image and the corresponding answer template area image; then the examinee answering area image and the corresponding answer template area image are input into the preset classification model to obtain the category label of the examinee answering area image; and finally, the review result of the examinee answering area image is determined based on the first judgment score corresponding to the category label. In this way, the answer template image, which includes the standard answer content and the standard answer position, is generated from the error correction image with the preset generation model, and the examinee answering image and the answer template image are reviewed by first matching with the matching algorithm and then classifying with the preset classification model. The handwritten answering words, the error correction symbols, and the handwritten answering positions in the examinee answering image are thus all reviewed comprehensively, the misjudgment existing in current automatic machine review is effectively avoided, and the accuracy and practicability of automatic machine review are greatly improved.
In addition, an embodiment of the present application further provides a terminal device, where the terminal device includes a processor and a memory:
the memory is used for storing program codes and transmitting the program codes to the processor;
the processor is configured to execute the method of image processing according to the method embodiments according to instructions in the program code.
The embodiment of the present application further provides a computer-readable storage medium, where the computer-readable storage medium is configured to store a program code, where the program code is configured to execute the method for image processing according to the foregoing method embodiment.
The embodiments in the present description are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other. The device disclosed by the embodiment corresponds to the method disclosed by the embodiment, so that the description is simple, and the relevant points can be referred to the method part for description.
Those of skill would further appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both, and that the various illustrative components and steps have been described above generally in terms of their functionality in order to clearly illustrate this interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
It is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. The terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
The foregoing is merely a preferred embodiment of the present application and is not intended to limit the present application in any way. Although the present application has been described with reference to the preferred embodiments, it is not intended to limit the present application. Those skilled in the art can now make numerous possible variations and modifications to the disclosed embodiments, or modify equivalent embodiments, using the methods and techniques disclosed above, without departing from the scope of the claimed embodiments. Therefore, any simple modification, equivalent change and modification made to the above embodiments according to the technical essence of the present application still fall within the protection scope of the technical solution of the present application without departing from the content of the technical solution of the present application.

Claims (10)

1. A method of image processing, comprising:
based on the examinee answering image and the answer template image, obtaining an examinee answering area image and a corresponding answer template area image by using a matching algorithm; the answer template image is obtained by utilizing a corresponding preset generating model based on standard answer content, a standard answer position and an error correction image;
obtaining a category label of the examinee answering area image by using a preset classification model based on the examinee answering area image and the corresponding answer template area image;
and determining the review result of the image of the answer area of the examinee based on the first judgment score corresponding to the category label.
2. The method of claim 1, wherein the obtaining of the answer template image comprises:
obtaining a standard answer region image by using the preset generation model based on the standard answer content and the first random vector;
and obtaining the answer template image based on the standard answer area image, the standard answer position and the error correction question image.
3. The method according to claim 2, wherein the obtaining of the predetermined generative model comprises:
inputting the training answer content and a second random vector into a generation network to obtain a pseudo answer area image;
and pre-training a discrimination network and the generation network to obtain the preset generation model based on the pseudo answer area image, the training answer content, the real answer area image corresponding to the training answer content and the non-training answer content corresponding to the real answer area image.
4. The method according to claim 3, wherein before the obtaining a standard answer region image using the preset generation model based on the standard answer content and the first random vector, further comprises:
obtaining a mapping rule from the pseudo answer region image to the second random vector by utilizing reverse learning of the preset generation model;
extracting the font style information of the examinees in the examinee answering image based on the mapping rule to obtain a target vector as the first random vector;
correspondingly, the standard answer area image comprises the examinee font style information, and the answer template image comprises the examinee font style information.
5. The method according to claim 1, wherein the preset classification model is obtained by pre-training a classification network based on a training examinee answering area image and a training answer template area image, and a correct label, an error label or an invalid label of the training examinee answering area image relative to the training answer template area image; correspondingly, if the category label is a correct label, the first judgment score corresponding to the category label is 1; and if the category label is an error label or an invalid label, the first judgment score corresponding to the category label is 0.
6. The method of claim 1, further comprising:
matching the examinee answering image with the error correction image to obtain the examinee answering area image and a corresponding handwritten answer position;
recognizing the examinee answering area image with a recognition algorithm to obtain handwritten answer content;
obtaining a second judgment score for the examinee answering area image with a judgment algorithm based on the handwritten answer content, the handwritten answer position, the standard answer content and the standard answer position;
correspondingly, the determining the review result of the examinee answering area image based on the first judgment score corresponding to the category label comprises:
determining the review result of the examinee answering area image based on the first judgment score and the corresponding second judgment score.
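The judgment algorithm of claim 6 is not pinned down at this level of the claims; one plausible reading, sketched below under the assumption that it compares the recognized content and position against the standard answer, is purely illustrative (all names and the tolerance parameter are invented for the example):

```python
def second_judgment_score(handwritten_content, handwritten_position,
                          standard_content, standard_position,
                          position_tolerance=5):
    """Toy judgment algorithm: score 1 only when the recognized content
    matches the standard answer and the handwriting sits close enough to
    the standard answer position (positions given as (x, y) pixels)."""
    content_ok = handwritten_content.strip() == standard_content.strip()
    dx = abs(handwritten_position[0] - standard_position[0])
    dy = abs(handwritten_position[1] - standard_position[1])
    position_ok = dx <= position_tolerance and dy <= position_tolerance
    return 1 if content_ok and position_ok else 0

print(second_judgment_score("42", (100, 200), "42", (102, 198)))  # 1
print(second_judgment_score("41", (100, 200), "42", (102, 198)))  # 0
```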
7. The method of claim 6, wherein determining the review result for the examinee answering area image based on the first judgment score and the corresponding second judgment score comprises:
if the first judgment score is the same as the second judgment score, determining the review result of the examinee answering area image directly from the first judgment score or the second judgment score;
if the first judgment score differs from the second judgment score, determining the review result of the examinee answering area image based on the first judgment score and its confidence together with the second judgment score and its confidence.
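One way to read the fusion rule of claim 7 is sketched below; the patent does not fix a tie-breaking formula at this level, so resolving disagreements by comparing confidences is an assumption made for illustration:

```python
def review_result(score1: int, conf1: float, score2: int, conf2: float) -> int:
    """Fuse the two judgment scores: when they agree, either one is the
    review result; when they disagree, trust the higher-confidence score."""
    if score1 == score2:
        return score1
    return score1 if conf1 >= conf2 else score2

print(review_result(1, 0.9, 1, 0.7))  # scores agree -> 1
print(review_result(1, 0.6, 0, 0.8))  # disagree, second more confident -> 0
```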
8. An image processing apparatus, comprising:
a first obtaining unit, configured to obtain an examinee answering area image and a corresponding answer template area image using a matching algorithm based on the examinee answering image and the answer template image, wherein the answer template image is obtained using a corresponding preset generation model based on standard answer content, a standard answer position and an error correction image;
a second obtaining unit, configured to obtain a category label of the examinee answering area image using a preset classification model based on the examinee answering area image and the corresponding answer template area image;
a determining unit, configured to determine a review result of the examinee answering area image based on a first judgment score corresponding to the category label.
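The apparatus of claim 8 decomposes into three units chained in sequence; a structural sketch with trivial stub callables (every function name and stub body here is a placeholder, not the patented implementation):

```python
from dataclasses import dataclass
from typing import Callable, Tuple

@dataclass
class ImageProcessingApparatus:
    # First obtaining unit: (answering image, template image) -> (answer area, template area).
    match: Callable[[object, object], Tuple[object, object]]
    # Second obtaining unit: (answer area, template area) -> category label.
    classify: Callable[[object, object], str]
    # Determining unit: category label -> review result (first judgment score).
    decide: Callable[[str], int]

    def review(self, answering_image, template_image):
        area, template_area = self.match(answering_image, template_image)
        label = self.classify(area, template_area)
        return self.decide(label)

# Wiring with trivial stubs to show the data flow through the three units:
apparatus = ImageProcessingApparatus(
    match=lambda a, t: (a, t),
    classify=lambda area, tmpl: "correct" if area == tmpl else "incorrect",
    decide=lambda label: 1 if label == "correct" else 0,
)
print(apparatus.review("img", "img"))  # 1
```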
9. A terminal device, comprising a processor and a memory, wherein:
the memory is configured to store program code and transmit the program code to the processor;
the processor is configured to perform the image processing method according to any one of claims 1 to 7 according to instructions in the program code.
10. A computer-readable storage medium, wherein the computer-readable storage medium is configured to store program code for executing the image processing method according to any one of claims 1 to 7.
CN201911251353.8A 2019-12-09 2019-12-09 Image processing method and related device Active CN111104881B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911251353.8A CN111104881B (en) 2019-12-09 2019-12-09 Image processing method and related device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911251353.8A CN111104881B (en) 2019-12-09 2019-12-09 Image processing method and related device

Publications (2)

Publication Number Publication Date
CN111104881A true CN111104881A (en) 2020-05-05
CN111104881B CN111104881B (en) 2023-12-01

Family

ID=70422421

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911251353.8A Active CN111104881B (en) 2019-12-09 2019-12-09 Image processing method and related device

Country Status (1)

Country Link
CN (1) CN111104881B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5594809A (en) * 1995-04-28 1997-01-14 Xerox Corporation Automatic training of character templates using a text line image, a text line transcription and a line image source model
WO2014174932A1 (en) * 2013-04-26 2014-10-30 オリンパス株式会社 Image processing device, program, and image processing method
CN107729936A (en) * 2017-10-12 2018-02-23 Automatic review method and system for error-correction questions
CN107967318A (en) * 2017-11-23 2018-04-27 Automatic scoring method and system for Chinese short-text subjective questions using LSTM neural networks
US20180268733A1 (en) * 2017-03-15 2018-09-20 International Business Machines Corporation System and method to teach and evaluate image grading performance using prior learned expert knowledge base
CN108932508A (en) * 2018-08-13 2018-12-04 Method and system for intelligent question recognition and correction
CN109740515A (en) * 2018-12-29 2019-05-10 Review method and device


Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
KAVEH TAGHIPOUR ET AL.: "A Neural Approach to Automated Essay Scoring" *
MOHAMED ABDELLATIF HUSSEIN ET AL.: "Automated language essay scoring systems: a literature review" *
ZHU Bo et al.: "Technological evolution of artificial intelligence in handwritten document recognition and analysis" *
ZHU Bo et al.: "Exploring the application of artificial intelligence in educational examination assessment" *

Also Published As

Publication number Publication date
CN111104881B (en) 2023-12-01

Similar Documents

Publication Publication Date Title
US10769487B2 (en) Method and device for extracting information from pie chart
CN109670504B (en) Handwritten answer recognition and correction method and device
US11790641B2 (en) Answer evaluation method, answer evaluation system, electronic device, and medium
WO2016041423A1 (en) Intelligent scoring method and system for text objective question
CN111144191B (en) Font identification method, font identification device, electronic equipment and storage medium
CN109284355B (en) Method and device for correcting oral arithmetic questions in test paper
CN109189895B (en) Question correcting method and device for oral calculation questions
CN106951832A Verification method and device based on handwritten digit recognition
CN116543404A (en) Table semantic information extraction method, system, equipment and medium based on cell coordinate optimization
JP7077483B2 (en) Problem correction methods, devices, electronic devices and storage media for mental arithmetic problems
CN113177435A (en) Test paper analysis method and device, storage medium and electronic equipment
CN114255159A (en) Handwritten text image generation method and device, electronic equipment and storage medium
CN111079641A (en) Answering content identification method, related device and readable storage medium
CN113361396B (en) Multi-mode knowledge distillation method and system
CN114885216A (en) Exercise pushing method and system, electronic equipment and storage medium
CN110852071A (en) Knowledge point detection method, device, equipment and readable storage medium
CN113505786A (en) Test question photographing and judging method and device and electronic equipment
CN112686263A (en) Character recognition method and device, electronic equipment and storage medium
CN111079489B (en) Content identification method and electronic equipment
WO2023024898A1 (en) Problem assistance method, problem assistance apparatus and problem assistance system
CN111104881B (en) Image processing method and related device
CN111832550B (en) Data set manufacturing method and device, electronic equipment and storage medium
CN115661836A (en) Automatic correction method, device and system and readable storage medium
CN109582971B (en) Correction method and correction system based on syntactic analysis
CN113850235B (en) Text processing method, device, equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant