CN112507879A - Evaluation method, evaluation device, electronic equipment and storage medium - Google Patents

Info

Publication number
CN112507879A
CN112507879A (application CN202011444199.9A)
Authority
CN
China
Prior art keywords
combined
graph
answer
component
answering
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011444199.9A
Other languages
Chinese (zh)
Inventor
章继东
何孟华
何春江
Current Assignee
iFlytek Co Ltd
Original Assignee
iFlytek Co Ltd
Priority date
Filing date
Publication date
Application filed by iFlytek Co Ltd filed Critical iFlytek Co Ltd
Priority to CN202011444199.9A
Publication of CN112507879A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00 - Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/40 - Document-oriented image-based pattern recognition
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/22 - Matching criteria, e.g. proximity measures
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/045 - Combinations of networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 - Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75 - Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/752 - Contour matching
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00 - Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/40 - Document-oriented image-based pattern recognition
    • G06V30/41 - Analysis of document content
    • G06V30/414 - Extracting the geometrical structure, e.g. layout tree; Block segmentation, e.g. bounding boxes for graphics or text

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Evolutionary Computation (AREA)
  • Data Mining & Analysis (AREA)
  • Software Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Evolutionary Biology (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The embodiment of the invention provides a review method, a review device, electronic equipment and a storage medium. The method comprises the following steps: determining an image to be reviewed; if a combined answering graph exists in the image to be reviewed, splitting the combined answering graph into components to obtain each answering component and component relation in the combined answering graph; matching each answer component and component relation in the combined answer graph of the answer image corresponding to the image to be reviewed with each answering component and component relation in the combined answering graph to obtain a combined graph matching result; and determining the review result of the image to be reviewed based on the combined graph matching result. The method, the device, the electronic equipment and the storage medium provided by the embodiment of the invention reduce the review workload, shorten the review time and improve the review efficiency.

Description

Evaluation method, evaluation device, electronic equipment and storage medium
Technical Field
The invention relates to the technical field of computers, in particular to a review method, a review device, electronic equipment and a storage medium.
Background
With the progress of Artificial Intelligence (AI), online education has developed rapidly, and AI-based review of student homework and test papers is gradually being applied in educational scenarios.
In the prior art, because a large number of complex graphs appear in student homework and test papers for subjects such as mathematics, physics and chemistry, these are still reviewed by manual marking, which increases the review workload of teachers and results in low review efficiency.
Disclosure of Invention
The embodiment of the invention provides a review method, a review device, electronic equipment and a storage medium, which are used for solving the problems of large workload, long review time and low review efficiency in prior-art methods for reviewing student homework or test papers.
The embodiment of the invention provides a review method, which comprises the following steps:
determining an image to be reviewed;
if a combined answering graph exists in the image to be reviewed, splitting the combined answering graph into components to obtain each answering component and component relation in the combined answering graph;
matching each answer component and component relation in the combined answer graph of the answer image corresponding to the image to be reviewed with each answering component and component relation in the combined answering graph to obtain a combined graph matching result;
and determining the review result of the image to be reviewed based on the combined graph matching result.
According to the review method of one embodiment of the present invention, the matching of each answer component and component relationship in the combined answer graph of the answer image corresponding to the image to be reviewed with each answering component and component relationship in the combined answering graph is performed to obtain a combined graph matching result, including:
matching each answer component of the combined answer graph with each answering component of the combined answering graph;
and if each answer component corresponds one to one to each answering component, matching the component relation of each answer component with the component relation of each answering component to obtain the combined graph matching result of the image to be reviewed.
According to the review method of one embodiment of the invention, matching the component relation of each answer component with the component relation of each answering component to obtain the combined graph matching result of the image to be reviewed comprises the following steps:
matching the component relation two-dimensional table of the combined answer graph with the component relation two-dimensional table of the combined answering graph to obtain the combined graph matching result of the image to be reviewed;
wherein the component relation two-dimensional table of the combined answer graph is determined based on the component relations of the answer components, and the component relation two-dimensional table of the combined answering graph is determined based on the component relations of the answering components.
According to the review method of one embodiment of the present invention, the matching of each answer component and component relationship in the combined answer graph of the answer image corresponding to the image to be reviewed with each answering component and component relationship in the combined answering graph is performed to obtain a combined graph matching result, and then the method further comprises:
if the combined graph matching result is a mismatch, rotating the combined answering graph and the combined answer graph by a preset angle multiple times to obtain a combined answering graph set and a combined answer graph set;
and performing overall pattern matching between the combined answering graphs in the combined answering graph set and the combined answer graphs in the combined answer graph set, and updating the combined graph matching result based on the overall pattern matching result.
According to the review method of an embodiment of the present invention, the component splitting is performed on the combined response graph to obtain each response component and component relationship in the combined response graph, and the method further includes:
preprocessing the combined answering graph and/or the combined answer graph so that the difference in graph size between the preprocessed combined answering graph and the combined answer graph is smaller than a preset threshold, wherein the preprocessing comprises rotation and/or stretching.
According to the review method of one embodiment of the invention, if the image to be reviewed contains answering text and/or a basic answering graph, determining the review result of the image to be reviewed based on the combined graph matching result comprises the following steps:
determining the review result of the image to be reviewed based on the combined graph matching result together with the answering text matching result and/or the basic graph matching result;
wherein the answering text matching result is determined by matching the answering text with the answer text of the answer image, and the basic graph matching result is determined by performing graph type recognition on the basic answering graph and matching the graph type of the basic answer graph of the answer image with the graph type of the basic answering graph.
According to the review method of one embodiment of the invention, the determining of the image to be reviewed comprises the following steps:
inputting the image to be reviewed to a region detection model to obtain at least one of a response text, a basic response graph and a combined response graph output by the region detection model;
the region detection model is obtained after training based on sample images, and the sample images comprise at least one of sample answering texts, sample basic answering graphs and sample combination answering graphs.
The embodiment of the present invention further provides a review device, including:
the determining unit is used for determining the image to be reviewed;
the splitting unit is used for splitting the combined answering graph into components to obtain each answering component and component relation in the combined answering graph if a combined answering graph exists in the image to be reviewed;
the matching unit is used for matching each answer component and component relation in the combined answer graph of the answer image corresponding to the image to be reviewed with each answering component and component relation in the combined answering graph to obtain a combined graph matching result;
and the review unit is used for determining the review result of the image to be reviewed based on the combined graph matching result.
The embodiment of the present invention further provides an electronic device, which includes a processor, a communication interface, a memory and a bus, wherein the processor, the communication interface and the memory communicate with one another through the bus, and the processor can call logic instructions in the memory to execute the steps of any of the review methods described above.
Embodiments of the present invention further provide a non-transitory computer readable storage medium, on which a computer program is stored, and the computer program, when executed by a processor, implements the steps of the review method as described in any of the above.
According to the review method, the review device, the electronic equipment and the storage medium provided by the embodiment of the invention, the combined answering graph in the image to be reviewed is split into components to obtain each answering component and component relation in the combined answering graph, and each answer component and component relation in the combined answer graph of the answer image corresponding to the image to be reviewed is matched with each answering component and component relation in the combined answering graph to obtain a combined graph matching result, based on which the review result of the image to be reviewed is determined. By structurally decomposing the complex combined answering graph through component splitting, automatic review of homework and test papers through graph comparison is realized, the subjectivity of manual review is avoided, the review workload is reduced, the review time is shortened, and the review efficiency is improved.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and those skilled in the art can also obtain other drawings according to the drawings without creative efforts.
FIG. 1 is a schematic flow chart of a review method according to an embodiment of the present invention;
FIG. 2 is a schematic flow chart illustrating the generation of the matching result of the combined graph according to the embodiment of the present invention;
FIG. 3 is a flowchart illustrating an overall pattern matching method according to an embodiment of the present invention;
FIG. 4 is a schematic structural diagram of an overall matching model according to an embodiment of the present invention;
fig. 5 is a schematic structural diagram of a region detection model according to an embodiment of the present invention;
FIG. 6 is a schematic diagram illustrating region detection of an image to be reviewed according to an embodiment of the present invention;
FIG. 7 is a flowchart illustrating a method for online handwriting recognition and modification according to an embodiment of the present invention;
FIG. 8 is a schematic structural diagram of a review device according to an embodiment of the present invention;
fig. 9 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In the prior art, abilities in subjects such as mathematics, physics and chemistry are evaluated through combinations of different question types, including multiple-choice questions, fill-in-the-blank questions and answering questions, and the answering content for these question types comprises characters, formulas and graphs. Due to the complexity of mathematical and chemical formulas and graphs, manual review is still used to this day, and automatic AI grading has not been realized, which is time-consuming, labor-intensive and inefficient.
In view of the deficiency of the prior art, fig. 1 is a schematic flow chart of a review method provided by an embodiment of the present invention, as shown in fig. 1, the method includes:
step 110, determining an image to be reviewed.
Specifically, the image to be reviewed is an image that records answering content, and may be a student's homework or test paper. The image to be reviewed can be acquired through a handwriting electronic screen, or obtained by scanning paper homework or a paper test paper.
For example, the image to be reviewed is a math test paper finished by a student on a handwriting electronic screen, and comprises answer characters, math formulas and geometric figures handwritten by the student. And the handwriting electronic screen acquires track point information of the student during writing and renders the track point information into a picture format, so that an image to be evaluated is obtained.
And step 120, if a combined answering graph exists in the image to be reviewed, splitting the combined answering graph into components to obtain each answering component and component relation in the combined answering graph.
Specifically, the image to be reviewed is identified to distinguish answer content contained in the image to be reviewed. The answering content comprises at least one of characters, formulas, basic answering graphs and combined answering graphs.
The basic answering graph is a geometric graph formed from vertices, straight line segments, curve segments, planes and curved surfaces. The combined answering graph is a geometric graph formed by combining a plurality of basic answering graphs. Both the basic answering graph and the combined answering graph can be plane graphs or three-dimensional graphs; the embodiment of the invention does not specifically limit their dimension types.
The definition of the combined answering figure can be determined according to answering subjects and/or review standards, for example, for elementary mathematics, triangles, parallelograms, squares, rectangles, rhombuses, circles, cones, cylinders, cuboids, cubes, and spheres can be determined as the basic answering figure, and the combination of a plurality of basic answering figures can be determined as the combined answering figure.
If a combined answering graph exists in the image to be reviewed, the combined answering graph is split into components. An answering component is a basic unit forming the combined answering graph and is defined according to the component splitting method. Component splitting methods include point splitting, surface splitting, graph splitting and the like. Point splitting splits based on vertices, and the answering components are defined as the straight line segments and curve segments forming the combined answering graph. Surface splitting splits based on surfaces, and the answering components are defined as the planes and curved surfaces forming the combined answering graph. Graph splitting splits based on basic graphs, and the answering components are defined as basic answering graphs.
After the combined answering graph is split into components, all answering components forming the combined answering graph and the component relations among the answering components are obtained. A component relation is the connection relation between answering components and is used for representing the positions of the answering components in the combined answering graph. The connection relation can be an up-down relation, a left-right relation, a front-back relation, and the like.
Taking the point splitting method as an example, the answering components are defined as straight line segments and curve segments, and the combined answering graph is split at its vertices to obtain each answering component of the combined answering graph and the component relations among the answering components. Here, a vertex is a connection point between line segments and does not include points in the interior of a line segment; that is, a point inside a line segment belongs to the line-segment answering component as a whole. The component relation between answering components can be recorded with numeric codes. For example, 0 represents no connection, 1 represents a left-endpoint-to-left-endpoint connection, 2 represents left-to-right, 3 represents right-to-left, and 4 represents right-to-right.
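The endpoint bookkeeping just described can be sketched in a few lines. This is a minimal illustration only: the patent does not specify a data format, so the segment representation (a pair of `(x, y)` endpoints, with the first point taken as the "left" endpoint) and the function name are assumptions.

```python
# Hedged sketch: encode the connection relation between two line segments
# using the numeric codes from the text: 0 no connection, 1 left-left,
# 2 left-right, 3 right-left, 4 right-right.
# A segment is ((x1, y1), (x2, y2)); index 0 is its "left" endpoint.

def relation_code(seg_a, seg_b, tol=1e-6):
    codes = {(0, 0): 1, (0, 1): 2, (1, 0): 3, (1, 1): 4}
    for i, p in enumerate(seg_a):
        for j, q in enumerate(seg_b):
            # Two endpoints coincide (within tolerance) -> shared vertex.
            if abs(p[0] - q[0]) < tol and abs(p[1] - q[1]) < tol:
                return codes[(i, j)]
    return 0  # no shared vertex
```

With this encoding, a segment whose right endpoint touches another segment's left endpoint yields code 3, matching the numbering scheme above.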
And step 130, matching each answer component and component relation in the combined answer graph of the answer image corresponding to the image to be reviewed with each answering component and component relation in the combined answering graph to obtain a combined graph matching result.
Specifically, the answer image is an image that records the standard answer and is used for judging content consistency against the image to be reviewed. The type and acquisition mode of the answer image are the same as those of the image to be reviewed. The answer image comprises at least one of answer characters, answer formulas, basic answer graphs and combined answer graphs, which correspond one to one in content type with the image to be reviewed. The definition of the basic answer graph is consistent with that of the basic answering graph, and the definition and splitting method of the answer components in the combined answer graph are consistent with those of the answering components in the combined answering graph.
Therefore, the combined answer graph in the answer image can be split in advance to obtain each answer component and component relation in the combined answer graph. And matching each answer component and component relationship with each answering component and component relationship to obtain a combined graph matching result. The combined graph matching result is used for measuring the matching degree and the matching relation between the combined answer graph and the combined answer graph.
For example, the combined pattern matching result may be the number or proportion of component matches between the combined answer pattern and the combined answer pattern. Component matching here can be understood as: if answer components corresponding to the answer components in the combined answer graph exist in the combined answer graph, and the component relation of the answer components is consistent with that of the corresponding answer components, the answer components are matched with the corresponding answer components.
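The "number or proportion of component matches" described above can be sketched as a multiset intersection over per-component signatures. The signature format (component type plus a tuple of relation codes) is an assumption for illustration; the patent does not prescribe one.

```python
# Hedged sketch: count matched components between the answer graph and the
# answering graph. Each component is summarized as a hashable signature,
# e.g. (type, relation-code tuple) -- the exact format is assumed here.
from collections import Counter

def component_match_count(answer_sigs, response_sigs):
    # Multiset intersection: a response component counts as matched only
    # while an identical signature is still available on the answer side.
    return sum((Counter(answer_sigs) & Counter(response_sigs)).values())

def match_ratio(answer_sigs, response_sigs):
    # Proportion of answer components that found a matching response component.
    return component_match_count(answer_sigs, response_sigs) / len(answer_sigs)
```

The count (or ratio) produced here is the "combined graph matching result" fed into the scoring step below.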
And step 140, determining the evaluation result of the image to be evaluated based on the combined graph matching result.
Specifically, the review result of the image to be reviewed is used to indicate the degree to which the image to be reviewed satisfies the review criterion. The scoring result can be represented by a score or a scoring level. For example, the review results can be classified into three levels of error, partial correct, and full correct.
For example, take the number of component matches between the combined answering graph and the combined answer graph as the combined graph matching result. If the combined graph matching result is zero, it can be determined that the combined answering graph in the image to be reviewed is wrong; if the combined graph matching result is a positive integer smaller than the number of answer components, it can be determined that the combined answering graph in the image to be reviewed is partially correct; if the combined graph matching result is equal to the number of answer components, it can be determined that the combined answering graph in the image to be reviewed is completely correct.
The review result of the image to be reviewed can also be determined by weighting and summing matching results, according to the proportion that the combined answering graph part occupies in the image to be reviewed. For example, if the review result is marked out of 10 points and the combined answering graph part accounts for 3 of those points, the score of the combined answering graph part is obtained by multiplying the combined graph matching ratio by 3 (that is, by 0.3 of the 10-point total).
In addition, the confidence of the combined graph matching result can be calculated to determine its credibility. For example, with a preset confidence threshold of 0.85, the combined graph matching result is credible when the confidence is greater than or equal to 0.85; when the confidence is smaller than 0.85, the result is not credible, and manual review can be combined at this point to improve review accuracy.
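The weighted scoring and confidence gating just described can be combined into one small sketch. The function and parameter names are illustrative, not from the patent; returning `None` stands in for routing the item to manual review.

```python
# Hedged sketch of the worked example above: a 10-point question where the
# combined graph accounts for 30% of the score, gated by a 0.85 confidence
# threshold. All names and the None-for-manual-review convention are assumed.

def combined_graph_score(match_ratio, confidence,
                         weight=0.3, full_marks=10, threshold=0.85):
    if confidence < threshold:
        return None  # result not credible: route to manual review
    # e.g. full match ratio 1.0 -> 0.3 * 10 = 3 points for this part
    return round(match_ratio * weight * full_marks, 2)
```

A fully matched graph with credible confidence thus earns the full 3 points of the combined-graph portion.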
According to the review method provided by the embodiment of the invention, the combined answering graph in the image to be reviewed is split into components to obtain each answering component and component relation in the combined answering graph, and each answer component and component relation in the combined answer graph of the answer image is matched with each answering component and component relation in the combined answering graph to obtain a combined graph matching result, based on which the review result of the image to be reviewed is determined.
Based on the foregoing embodiment, fig. 2 is a schematic flowchart of a process of generating a matching result of a combined graph according to an embodiment of the present invention, and as shown in fig. 2, step 130 includes:
step 1301, matching each answer component of the combined answer graph with each answering component of the combined answering graph.
Specifically, before component matching, the numbers of components in the combined answer graph and the combined answering graph may be matched. If the point splitting method is adopted, the numbers of vertices and components in the two graphs can be matched. If the numbers of vertices and components in the combined answer graph are consistent with those in the combined answering graph, component matching continues; if not, the combined answer graph and the combined answering graph are judged not to match, and the combined graph matching result is a mismatch.
When matching each answer component of the combined answer graph with each answering component of the combined answering graph, the correspondence between answer components and answering components can be determined with at least one of the one-to-one, one-to-many and many-to-many methods.
One-to-one matching matches each answering component in the combined answering graph against a single answer component in the combined answer graph to obtain the correspondence between components. One-to-many matching matches each answering component in the combined answering graph against all answer components in the combined answer graph. Many-to-many matching matches all answering components in the combined answering graph against all answer components in the combined answer graph.
The matching of correspondences between components may be processed with a neural network model, which is not specifically limited in the embodiment of the present invention. When a neural network model and the many-to-many method are adopted, a relationship graph among the answering components and/or answer components can be obtained.
In matching the response component and the answer component, the following component matching principles may be employed:
if one or more answering components in the combined answering graph have no corresponding answer component in the combined answer graph, or one or more answer components in the combined answer graph have no corresponding answering component in the combined answering graph, the combined graph matching result is a mismatch;
if each answering component in the combined answering graph can find an answer component in the combined answer graph that uniquely corresponds to it, each answer component corresponds one to one to each answering component;
if a plurality of answering components in the combined answering graph can respectively find a plurality of answer components in the combined answering graph, and the plurality of answering components are integrally matched with the plurality of answer components, the plurality of answering components correspond to the plurality of answer components one by one. For example, when the answering component z1 in the combined answering graph component has the same answer components a1 and a2 corresponding thereto in the combined answer graph, the answering component z2 has the same answer components a1 and a2 corresponding thereto in the combined answer graph, and z1 and z2 match a1 and a2 as a whole, z1 and z2 correspond to a1 and a2 one-to-one, that is, when the answering components z1 and z2 match the answer components a1 and a2, the matching order is not distinguished.
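The matching principles above amount to asking whether the candidate correspondences admit a bijection, including the case where a group of answering components matches a group of answer components as a whole (the z1/z2 versus a1/a2 example). A minimal backtracking sketch, with an assumed input format (each answering component mapped to its set of candidate answer components):

```python
# Hedged sketch: decide whether the candidate correspondences between
# answering components and answer components admit a one-to-one assignment.
# `candidates` maps each answering component to the set of answer components
# it could correspond to; the dict-of-sets format is assumed.

def has_one_to_one(candidates, answer_components):
    responses = list(candidates)
    if len(responses) != len(answer_components):
        return False  # a bijection is impossible

    def assign(i, used):
        if i == len(responses):
            return True  # every answering component got a distinct answer component
        for a in candidates[responses[i]]:
            if a not in used and assign(i + 1, used | {a}):
                return True
        return False  # backtrack

    return assign(0, frozenset())
```

For the worked example in the text, z1 and z2 each having candidates {a1, a2} still yields a valid one-to-one assignment, since the matching order within the group is not distinguished.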
And step 1302, if each answer component corresponds one to one to each answering component, matching the component relation of each answer component with the component relation of each answering component to obtain the combined graph matching result of the image to be reviewed.
Specifically, in addition to determining the corresponding relationship between each answer component and each answering component, the component relationship of each answer component and the component relationship of each answering component need to be matched, so as to determine whether the combined answer graph and the combined answering graph are matched.
According to the review method provided by the embodiment of the invention, each answer component of the combined answer graph is matched with each answering component of the combined answering graph, the corresponding relation between the answer components and the answering components is determined, and the matching accuracy between the combined answering graph and the combined answer graph is improved.
Based on any of the above embodiments, step 1302 includes:
matching the component relation two-dimensional table of the combined answering graph with the component relation two-dimensional table of the combined answer graph to obtain a combined graph matching result of the image to be evaluated;
wherein the component relation two-dimensional table of the combined answer graph is determined based on the component relation of each answer component, and the component relation two-dimensional table of the combined answering graph is determined based on the component relation of each answering component.
Specifically, each answering component of the combined answering graph may be numbered, and a component relation two-dimensional table of the combined answering graph may be established according to the component relationships of the answering components. The abscissa and ordinate of the component relation two-dimensional table are the numbers of the answering components, and the values in the table are the component relationships between the answering components. For example, suppose the combined answering graph A includes 3 answering components a1, a2 and a3, and the component relationships between answering components are no connection, left connection and right connection, represented by the numbers 0, 1 and 2 respectively; the relation between an answering component and itself can be defined as the number 4. A component relation two-dimensional table of the combined answering graph A can then be established, as shown in Table 1:
TABLE 1 component relation two-dimensional table
Component                 a1    a2    a3
Answering component a1     4     1     2
Answering component a2     1     4     0
Answering component a3     2     0     4
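The construction of Table 1 can be sketched in code. This is a minimal illustration, not the patent's implementation; the relation codes (0 = no connection, 1 = left connection, 2 = right connection, 4 = self-relation) are the example values from the text, and the function name and the pair-keyed `relations` dictionary are assumptions.

```python
def build_relation_table(components, relations):
    """Build a component relation two-dimensional table.

    components: numbered answering components, e.g. ["a1", "a2", "a3"]
    relations:  maps an unordered index pair (i, j) to a relation code
    """
    n = len(components)
    table = [[4] * n for _ in range(n)]      # diagonal: self-relation = 4
    for (i, j), code in relations.items():
        table[i][j] = code                    # the relation is symmetric,
        table[j][i] = code                    # so fill both cells
    return table

components = ["a1", "a2", "a3"]
relations = {(0, 1): 1, (0, 2): 2, (1, 2): 0}   # a1-a2 left, a1-a3 right, a2-a3 none
table = build_relation_table(components, relations)
# table reproduces Table 1: [[4, 1, 2], [1, 4, 0], [2, 0, 4]]
```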
The same method can be adopted to establish a component relation two-dimensional table of the combined answer graph. The order of the row and column elements in the table of either the combined answering graph or the combined answer graph is then adjusted according to the one-to-one correspondence between the answer components and the answering components, so that the component relationship at a given row and column in the table of the combined answering graph corresponds to the component relationship at the same row and column in the table of the combined answer graph.
Finally, the two component relation two-dimensional tables of the combined answering graph and the combined answer graph are matched, i.e., the values in the two tables are compared one by one; if the values at the intersection of the same row coordinate and the same column coordinate are equal, those values are determined to correspond. The number of groups of corresponding values indicates how many answering components in the combined answering graph have component relationships consistent with those of their corresponding answer components in the combined answer graph, and this number can serve as the combined graph matching result. If no corresponding values exist in the two tables, the combined graph matching result is a mismatch.
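The comparison step can be sketched as follows. This is an illustrative reading of the procedure above, under the assumption that the answer table is permuted by the one-to-one correspondence and that an answering component counts as matched when its whole relation row agrees; the function name and `mapping` structure are hypothetical.

```python
def match_relation_tables(answer_table, response_table, mapping):
    """Count answering components whose component relations all correspond.

    mapping[i] = index in answer_table of the answer component that
    corresponds to answering component i.  Returns 0 on a mismatch.
    """
    n = len(response_table)
    # permute the answer table so row/column k refers to answering component k
    permuted = [[answer_table[mapping[r]][mapping[c]] for c in range(n)]
                for r in range(n)]
    # count answering components whose entire relation row matches
    return sum(1 for r in range(n)
               if all(permuted[r][c] == response_table[r][c] for c in range(n)))

answer = [[4, 1, 2], [1, 4, 0], [2, 0, 4]]
# same relations, but the answering components were numbered in another order
response = [[4, 0, 1], [0, 4, 2], [1, 2, 4]]
matched = match_relation_tables(answer, response, {0: 1, 1: 2, 2: 0})
# all 3 answering components correspond, so matched == 3
```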
According to the evaluation method provided by the embodiment of the invention, the comparison of the component relation between the answer component and the answering component is simplified by establishing the component relation two-dimensional table, the evaluation workload is reduced, the evaluation time is shortened, and the evaluation efficiency is improved.
Based on any of the above embodiments, fig. 3 is a schematic flow chart of the overall pattern matching method according to the embodiment of the present invention, as shown in fig. 3, after step 130, the method further includes:
Step 131: if the combined graph matching result is a mismatch, rotating the combined answering graph and the combined answer graph multiple times by a preset angle respectively to obtain a combined answering graph set and a combined answer graph set.
Specifically, even when the combined graph matching result is a mismatch, the combined answering graph may still be a correct answer: the mismatch may result from a deformation of the combined answering graph and/or the combined answer graph, or from an unsatisfactory component splitting result.
The combined answering graph and the combined answer graph can each be rotated multiple times by a preset angle. The preset angle can be set to 5 degrees, the rotation direction can be either left or right, and a rotation range can also be set, for example 45 degrees to either side. The series of rotated combined answering graphs and the series of rotated combined answer graphs are used as the combined answering graph set and the combined answer graph set, respectively.
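Building the rotated graph sets can be sketched with plain coordinate rotation. The 5-degree step and the ±45-degree range are the example values from the text; representing a graph by its outline points, and the function names, are assumptions for illustration.

```python
import math

def rotation_angles(step=5, max_angle=45):
    """Rotation angles in degrees: both directions, excluding 0."""
    return [a for a in range(-max_angle, max_angle + 1, step) if a != 0]

def rotate_points(points, degrees):
    """Rotate a graph, represented by its outline points, about the origin."""
    rad = math.radians(degrees)
    cos_a, sin_a = math.cos(rad), math.sin(rad)
    return [(x * cos_a - y * sin_a, x * sin_a + y * cos_a) for x, y in points]

def build_rotated_set(points):
    """One rotated copy of the graph per preset angle."""
    return {deg: rotate_points(points, deg) for deg in rotation_angles()}

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
rotated_set = build_rotated_set(square)
# 18 rotated copies: -45, -40, ..., -5, 5, ..., 45 degrees
```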
Step 132: performing overall graph matching between the combined answering graphs in the combined answering graph set and the combined answer graphs in the combined answer graph set, and updating the combined graph matching result based on the overall graph matching result.
Specifically, the combined answering graph set and the combined answer graph set may be input into an overall matching model, which performs graph feature matching between the combined answering graphs in the combined answering graph set and the combined answer graphs in the combined answer graph set to obtain an overall graph matching result. The overall graph matching result measures the overall matching degree between the combined answering graph and the combined answer graph.
Before step 132 is executed, the overall matching model may be obtained by pre-training, specifically as follows: first, a large number of sample combined answering graphs and their corresponding sample combined answer graphs are collected, and the overall graph matching result corresponding to each sample combined answering graph is labeled manually to obtain an overall graph matching result label. Then, the sample combined answering graphs, sample combined answer graphs and overall graph matching result labels are input into an initial model for training, with the goal of improving its ability to recognize the overall similarity between combined answering graphs and combined answer graphs, thereby obtaining the overall matching model.
The overall matching model may use a Fully Convolutional Network (FCN), which is not specifically limited in this embodiment of the present invention.
Fig. 4 is a schematic structural diagram of an overall matching model provided in an embodiment of the present invention, and as shown in fig. 4, the overall matching model includes a feature extraction layer, a feature recognition layer, and a result output layer. Accordingly, step 132 includes:
inputting the combined answering graph set and the combined answer graph set into the feature extraction layer of the overall matching model to obtain the feature representation vectors of the combined answering graphs and the combined answer graphs output by the feature extraction layer;
inputting the feature representation vectors of the combined answering graphs and the combined answer graphs into the feature recognition layer of the overall matching model to obtain the graph similarity vector output by the feature recognition layer;
and inputting the graph similarity vector into the result output layer of the overall matching model to obtain the overall graph matching result output by the result output layer.
According to the evaluation method provided by the embodiment of the invention, the combined answering graph and the combined answer graph are each rotated multiple times by a preset angle and then matched as overall graphs, which improves the evaluation accuracy.
Based on any of the above embodiments, step 120 further includes:
preprocessing the combined answering graph and/or the combined answer graph so that the difference in graph size between the preprocessed combined answering graph and the combined answer graph is smaller than a preset threshold; the preprocessing comprises rotation and/or stretching.
Specifically, before component splitting is performed on the combined answering graph, the combined answering graph and/or the combined answer graph may be preprocessed, that is, rotated and/or stretched, changing the graph size (such as the length and width) of the combined answering graph and/or the combined answer graph so that the size difference between the two is as small as possible.
The size of the preset threshold can be set according to the actual situation. Making the size difference between the preprocessed combined answering graph and the combined answer graph smaller than the preset threshold keeps their outlines similar, reduces comparison errors and improves matching accuracy.
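The stretching step above can be sketched as computing per-axis stretch factors and checking the remaining size difference against the preset threshold. The factor computation and the max-of-dimensions difference measure are illustrative assumptions, not the patent's formula.

```python
def stretch_factors(resp_size, ans_size):
    """Horizontal and vertical stretch factors that map the combined
    answering graph's (width, height) onto the combined answer graph's."""
    (rw, rh), (aw, ah) = resp_size, ans_size
    return aw / rw, ah / rh

def size_difference(size_a, size_b):
    """A simple size-difference measure: the larger per-axis gap."""
    return max(abs(size_a[0] - size_b[0]), abs(size_a[1] - size_b[1]))

sx, sy = stretch_factors((200, 100), (180, 120))   # factors 0.9 and 1.2
resized = (200 * sx, 100 * sy)                      # (180.0, 120.0)
# after stretching, the difference falls below any reasonable preset threshold
```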
According to the review method provided by the embodiment of the invention, the combined answer graphs and/or the combined answer graphs are preprocessed before the components are disassembled, so that the size error among the graphs is reduced, and the review accuracy is improved.
Based on any of the above embodiments, if there is answer text and/or basic answer graphic in the image to be reviewed, step 140 accordingly includes:
determining the evaluation result of the image to be evaluated based on the combined graph matching result together with the answering text matching result and/or the basic graph matching result;
wherein the answering text matching result is determined by matching the answering text with the answer text of the answer image, and the basic graph matching result is determined by performing graph type recognition on the basic answering graph and matching its graph type with that of the basic answer graph of the answer image.
Specifically, if the image to be reviewed contains answering text and/or a basic answering graph in addition to the combined answering graph, the review result of the image to be reviewed needs to be determined according to the combined graph matching result together with the answering text matching result and/or the basic graph matching result.
The evaluation result of the image to be evaluated can be determined by weighted summation, combining the combined graph matching result, the answering text matching result and the basic graph matching result according to the respective score weights of the combined graph, the answering text and the basic graph in the scoring standard.
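The weighted summation can be sketched as follows. The weights are illustrative score weights standing in for a scoring standard, and the assumption that each matching result is normalized to [0, 1] is mine, not stated in the text.

```python
def review_score(combined, text, basic, weights=(0.5, 0.3, 0.2)):
    """Weighted sum of the three matching results.

    combined, text, basic: matching results as scores in [0, 1]
    weights:               score weights from the scoring standard (sum to 1)
    """
    w_combined, w_text, w_basic = weights
    return w_combined * combined + w_text * text + w_basic * basic

score = review_score(combined=1.0, text=1.0, basic=0.0)
# 0.5 * 1.0 + 0.3 * 1.0 + 0.2 * 0.0 = 0.8
```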
The answering text matching result may be determined by matching the answering text with the answer text of the answer image. For simple question types such as multiple-choice and fill-in-the-blank questions, the answering text and the answer text are matched according to a scoring rule: if they match, the score is 1, otherwise 0. The top three recognition results of the answering text, in descending order of recognition rate, can be selected as candidates and matched with the answer text one by one, improving the review accuracy. For complex question types such as free-response questions, semantic understanding is needed: the answering text is decomposed into several answering steps, the steps are compared one by one, and the final score is determined according to the correctness of each step.
The answering text includes characters and formulas. The answering text region in the image to be reviewed can be recognized with a deep learning model, such as an Encoder-Decoder model: the region is input into the model to obtain characters and formulas, where a formula can be recognized as a latex result. For example, 1/2 is recognized as the latex formula \frac{1}{2}, which contains 5 elements: "\frac", "{", "}", "1" and "2", where "\frac", "{" and "}" are virtual bodies and "1" and "2" are entities. The chemical equation AgNO3 + HCl = AgCl↓ + HNO3 is recognized as the latex formula AgNO_{3}+HCl=AgCl\downarrow+HNO_{3}, where "_", "{" and "}" are virtual bodies and the rest are entities. Before recognition with the Encoder-Decoder model, the answering text region can be preprocessed, for example by height adjustment, and then fed into the model. The Encoder end can use VGG (Visual Geometry Group network), or ResNet with BiLSTM (Bidirectional Long Short-Term Memory network) or BiGRU (Bidirectional Gated Recurrent Unit), to encode and extract character and formula features; the Decoder end can use Attention with BiLSTM or BiGRU to decode, a Beam Search algorithm can be selected during decoding, and the top three results in descending order of recognition rate are kept as the recognition results.
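The virtual-body/entity decomposition of a latex result can be sketched with a small tokenizer. This is my illustrative reading of the \frac{1}{2} example: the virtual-body set and single-character entity granularity are assumptions (the text does not enumerate the entities of the chemical formula), and the function names are hypothetical.

```python
import re

# assumed set of layout-only tokens ("virtual bodies"), per the examples above
VIRTUAL = {"\\frac", "{", "}", "_", "^"}

def tokenize(latex):
    """Split a latex string into commands (\frac), layout marks, and
    single characters; a simplification of real latex tokenization."""
    return re.findall(r"\\[A-Za-z]+|[{}_^]|[^\s{}_^\\]", latex)

def split_elements(latex):
    """Return (virtual bodies, entities) as sets of distinct elements."""
    elements = set(tokenize(latex))
    return elements & VIRTUAL, elements - VIRTUAL

virtual, entities = split_elements(r"\frac{1}{2}")
# virtual == {"\\frac", "{", "}"} and entities == {"1", "2"}: 5 elements in all
```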
The basic graph matching result can be determined by performing graph type recognition on the basic answering graph and matching it against the basic answer graph of the answer image. The basic answer graphs may be determined according to subject information. For mathematics, the basic answer graphs may include triangles, squares, rectangles, trapezoids, parallelograms, rhombuses, pentagons, hexagons, circles, ellipses, sectors, cylinders, cones, cubes, cuboids, triangular prisms, triangular pyramids, spheres, proportional functions, linear functions, inverse proportional functions, exponential functions, logarithmic functions, quadratic functions, sine functions, cosine functions, tangent functions, cotangent functions, and the like. For physics, the basic answer graphs may include electric bells, power supplies, lamps, switches, ammeters, voltmeters, motors, sliding rheostats, and the like. For chemistry, the basic answer graphs may include benzene rings, gas jars, Erlenmeyer flasks, round-bottom flasks, separatory funnels, and the like.
For the basic answering graph, the scoring rule is applied directly: if it can be matched, the score is 1, otherwise 0. The top three recognition results of the basic answering graph, in descending order of recognition rate, can be selected as candidates and matched with the basic answer graph one by one, improving the review accuracy.
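The top-three candidate scoring rule used for both answering text and basic answering graphs can be sketched directly; the function name is an assumption.

```python
def candidate_score(candidates, answer_key):
    """Score 1 if any of the top-3 recognition candidates matches the key.

    candidates: recognition results in descending order of recognition rate
    """
    return 1 if answer_key in candidates[:3] else 0

assert candidate_score(["triangle", "trapezoid", "rectangle"], "trapezoid") == 1
assert candidate_score(["circle", "ellipse", "sector"], "square") == 0
```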
According to the evaluation method provided by the embodiment of the invention, the evaluation result of the image to be evaluated is determined according to the combined image matching result, the answering text matching result and the basic image matching result, so that the evaluation accuracy and the comprehensiveness of the evaluation result are improved.
Based on any of the above embodiments, step 110 includes:
inputting the image to be evaluated into the area detection model to obtain at least one of the answering text, the basic answering graph and the combined answering graph output by the area detection model;
the area detection model is obtained after training based on sample images, and the sample images comprise at least one of sample answering texts, sample basic answering graphs and sample combination answering graphs.
Specifically, fig. 5 is a schematic structural diagram of a region detection model provided in an embodiment of the present invention, and as shown in fig. 5, the region detection model includes a region feature extraction layer, a region feature recognition layer, and a region result output layer.
Inputting the image to be evaluated into the region feature extraction layer of the region detection model to obtain the region representation vector of the image to be evaluated output by the region feature extraction layer; inputting the region representation vector into the region feature recognition layer of the region detection model to obtain the region feature vector output by the region feature recognition layer; and inputting the region feature vector into the region result output layer of the region detection model to obtain the region detection result output by the region result output layer, wherein the region detection result comprises at least one of an answering text, a basic answering graph and a combined answering graph.
Before the above steps are performed, the region detection model may be obtained by pre-training, specifically as follows: first, a plurality of sample images are collected, each comprising at least one of a sample answering text, a sample basic answering graph and a sample combined answering graph. The sample answering text, sample basic answering graph and sample combined answering graph in each sample image are labeled manually to obtain a sample answering text label, a sample basic answering graph label and a sample combined answering graph label, respectively. Then, the sample images are input into an initial model for training, with the goal of improving its ability to recognize answering text, basic answering graphs and combined answering graphs, thereby obtaining the region detection model.
The region detection model may be FCIS (Fully Convolutional Instance-aware Semantic Segmentation), DBNet, or Cascade R-CNN (Cascade Region-based Convolutional Neural Network), which is not specifically limited in this embodiment of the present invention.
Fig. 6 is a schematic diagram illustrating the region detection of the image to be reviewed according to the embodiment of the present invention, and as shown in fig. 6, the image to be reviewed is divided into a region 610 and a region 620 after being detected by the region detection model. Area 610 is the answering text and area 620 is the combined answering graphic.
According to the evaluation method provided by the embodiment of the invention, the region detection model is adopted to carry out region detection on the image to be evaluated, and the image to be evaluated is decomposed into the answering text, the basic answering graph and the combined answering graph, so that the comprehensiveness of the evaluation result is improved.
Based on any of the above embodiments, fig. 7 is a schematic flow chart of an online handwriting recognition and correction method for math and chemistry questions provided by the embodiment of the present invention; as shown in fig. 7, the method is performed based on seven models:
the first model renders the online handwriting points of the mathematical questions into pictures by adopting a rendering algorithm;
the second model adopts a detection algorithm to obtain a character formula area block and a graphic area block aiming at the picture;
the third model identifies the character formula area block to obtain a character formula identification result;
the fourth model identifies the graphic region block to obtain a basic graphic identification result and a combined graphic identification result;
comparing the character formula recognition result, the basic graph recognition result and the basic answer part of the standard answer by the fifth model to determine a character formula region score and a basic graph region score;
the sixth model splits the combined image recognition result and the standard answer combined answering part to obtain each answering component and component relation and each answer component and component relation;
and matching each answering component and each answer component by the seventh model to obtain component matching information.
In conjunction with the component matching information and the component relationships, a combined graph region score may be determined. The final score of the question is then determined according to the text formula region score, the basic graph region score and the combined graph region score.
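The seven-model pipeline above can be sketched end to end. Every model here is a trivial stand-in stub; all names, signatures, and the score weights are assumptions made for illustration only, not the patent's implementation.

```python
def review_online_handwriting(strokes, standard_answer, models,
                              weights=(0.3, 0.2, 0.5)):
    """Wire the seven models of Fig. 7 together and return a final score."""
    render, detect, rec_text, rec_graphs, compare_basic, split, match = models
    picture = render(strokes)                              # model 1: rendering
    text_blocks, graph_blocks = detect(picture)            # model 2: detection
    text_result = rec_text(text_blocks)                    # model 3: text/formula
    basic_result, combined_result = rec_graphs(graph_blocks)   # model 4: graphs
    text_score, basic_score = compare_basic(               # model 5: basic compare
        text_result, basic_result, standard_answer)
    resp_parts, ans_parts = split(combined_result, standard_answer)  # model 6
    combined_score = match(resp_parts, ans_parts)          # model 7: matching
    w_text, w_basic, w_combined = weights
    return (w_text * text_score + w_basic * basic_score
            + w_combined * combined_score)

# trivial stubs that wire the pipeline together for a smoke test
stub_models = (
    lambda strokes: "picture",
    lambda picture: ("text_blocks", "graph_blocks"),
    lambda blocks: "x=2",
    lambda blocks: ("triangle", "combined_graph"),
    lambda text, basic, answer: (1.0, 1.0),
    lambda combined, answer: ("resp_parts", "ans_parts"),
    lambda resp, ans: 1.0,
)
final_score = review_online_handwriting("strokes", "answer", stub_models)
# 0.3 * 1.0 + 0.2 * 1.0 + 0.5 * 1.0 = 1.0
```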
Based on any of the above embodiments, fig. 8 is a schematic structural diagram of a review device provided in an embodiment of the present invention, as shown in fig. 8, the review device includes:
a determination unit 810 for determining an image to be reviewed;
the splitting unit 820 is configured to, if a combined answering graph exists in the image to be evaluated, perform component splitting on the combined answering graph to obtain each answering component and component relation in the combined answering graph;
the matching unit 830 is configured to match each answer component and component relation in the combined answer graph of the answer image corresponding to the image to be evaluated with each answering component and component relation in the combined answering graph to obtain a combined graph matching result;
and the review unit 840 is used for determining the review result of the image to be reviewed based on the combined graph matching result.
Specifically, the determination unit 810 is configured to determine an image to be reviewed, and may be acquired through a handwriting electronic screen or a scanning device. The splitting unit 820 is configured to split the combined answering graph existing in the image to be reviewed, so as to obtain each answering component and component relationship in the combined answering graph. The matching unit 830 is configured to match each answer component and component relationship in the combined answer graph of the answer image with each answering component and component relationship in the combined answering graph, so as to obtain a combined graph matching result. The review unit 840 is configured to determine a review result of the image to be reviewed based on the combined graph matching result determined by the matching unit 830.
According to the evaluation device provided by the embodiment of the invention, the combined answering graph is split into components to obtain each answering component and component relation in the combined answering graph of the image to be evaluated; each answer component and component relation in the combined answer graph of the answer image is matched with each answering component and component relation in the combined answering graph to obtain a combined graph matching result, from which the evaluation result of the image to be evaluated is determined. By structurally decomposing the complex combined answering graph through component splitting, automatic review of homework or test papers is realized by means of graph comparison, which avoids the subjectivity of manual review, reduces the review workload, shortens the review time and improves the review efficiency.
Based on any of the above embodiments, the matching unit 830 includes:
the component matching subunit is used for matching each answer component of the combined answer graph with each answering component of the combined answering graph;
and the relationship matching subunit is used for matching the component relationship of each answer component with the component relationship of each answering component to obtain a combined graph matching result of the image to be evaluated if each answer component corresponds to each answering component one to one.
Based on any embodiment above, the relationship matching subunit includes:
the two-dimensional table matching module is used for matching the component relation two-dimensional table of the combined answering graph with the component relation two-dimensional table of the combined answer graph to obtain a combined graph matching result of the image to be evaluated;
wherein the component relation two-dimensional table of the combined answer graph is determined based on the component relation of each answer component, and the component relation two-dimensional table of the combined answering graph is determined based on the component relation of each answering component.
Based on any embodiment above, the apparatus further comprises:
the integral matching unit is used for respectively rotating the combined answering graph and the combined answer graph for multiple times at a preset angle to obtain a combined answering graph set and a combined answer graph set if the combined graph matching result is not matched;
and carrying out overall graph matching between the combined answering graphs in the combined answering graph set and the combined answer graphs in the combined answer graph set, and updating the combined graph matching result based on the overall graph matching result.
Based on any embodiment above, the apparatus further comprises:
the preprocessing unit is used for preprocessing the combined answering graph and/or the combined answer graph so that the difference in graph size between the preprocessed combined answering graph and the combined answer graph is smaller than a preset threshold; the preprocessing comprises rotation and/or stretching.
Based on any of the above embodiments, if the image to be reviewed further has answer text and/or basic answer graphics, the review unit 840 is further configured to:
determining the evaluation result of the image to be evaluated based on the combined graph matching result together with the answering text matching result and/or the basic graph matching result;
wherein the answering text matching result is determined by matching the answering text with the answer text of the answer image, and the basic graph matching result is determined by performing graph type recognition on the basic answering graph and matching its graph type with that of the basic answer graph of the answer image.
Based on any of the above embodiments, the determining unit 810 is configured to:
inputting the image to be evaluated into the area detection model to obtain at least one of the answering text, the basic answering graph and the combined answering graph output by the area detection model;
the area detection model is obtained after training based on sample images, and the sample images comprise at least one of sample answering texts, sample basic answering graphs and sample combination answering graphs.
Based on any of the above embodiments, fig. 9 is a schematic structural diagram of an electronic device provided in an embodiment of the present invention, and as shown in fig. 9, the electronic device may include: a Processor (Processor)910, a communication Interface (Communications Interface)920, a Memory (Memory)930, and a communication Bus (Communications Bus)940, wherein the Processor 910, the communication Interface 920, and the Memory 930 are configured to communicate with each other via the communication Bus 940. Processor 910 may invoke logical commands in memory 930 to perform the following method:
determining an image to be evaluated; if a combined answering graph exists in the image to be evaluated, performing component splitting on the combined answering graph to obtain each answering component and component relation in the combined answering graph; matching each answer component and component relation in the combined answer graph of the answer image corresponding to the image to be evaluated with each answering component and component relation in the combined answering graph to obtain a combined graph matching result; and determining the evaluation result of the image to be evaluated based on the combined graph matching result.
In addition, the logic commands in the memory 930 may be implemented in the form of software functional units and stored in a computer readable storage medium when the logic commands are sold or used as independent products. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes a plurality of commands for enabling a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
Embodiments of the present invention further provide a non-transitory computer-readable storage medium on which a computer program is stored; when executed by a processor, the computer program performs the method provided in the foregoing embodiments, the method including:
determining an image to be evaluated; if a combined answering graph exists in the image to be evaluated, performing component splitting on the combined answering graph to obtain each answering component and component relation in the combined answering graph; matching each answer component and component relation in the combined answer graph of the answer image corresponding to the image to be evaluated with each answering component and component relation in the combined answering graph to obtain a combined graph matching result; and determining the evaluation result of the image to be evaluated based on the combined graph matching result.
The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. One of ordinary skill in the art can understand and implement it without inventive effort.
Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware. With this understanding in mind, the above technical solutions may be embodied in the form of a software product, which can be stored in a computer-readable storage medium, such as ROM/RAM, magnetic disk, optical disk, etc., and includes commands for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the method according to the embodiments or some parts of the embodiments.
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (10)

1. A review method, comprising:
determining an image to be evaluated;
if the image to be evaluated has a combined answering graph, performing component splitting on the combined answering graph to obtain each answering component and component relation in the combined answering graph;
matching each answer component and component relation in the combined answer graph of the answer image corresponding to the image to be reviewed with each answering component and component relation in the combined answering graph to obtain a combined graph matching result;
and determining the evaluation result of the image to be evaluated based on the combined graph matching result.
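The flow of claim 1 can be sketched as follows. This is a minimal illustration, not the patented implementation: the `review` function, the representation of a combined graph as a tuple of (component list, set of pairwise relations), and the "correct"/"incorrect" labels are all assumptions made for the sketch.

```python
from collections import Counter

# Minimal sketch of claim 1: a combined graph is assumed to be already
# split into a component list plus a set of pairwise relations.
def review(answering_graph, answer_graph):
    """Return 'correct' only when the components match one-to-one
    and the pairwise component relations also match."""
    stu_comps, stu_rels = answering_graph
    ans_comps, ans_rels = answer_graph
    # Component-level match: same multiset of component types.
    if Counter(stu_comps) != Counter(ans_comps):
        return "incorrect"
    # Relation-level match: same set of pairwise relations.
    return "correct" if set(stu_rels) == set(ans_rels) else "incorrect"

answer = (["square", "circle"], {("square", "circle", "inscribed")})
student = (["square", "circle"], {("square", "circle", "inscribed")})
print(review(student, answer))  # correct
```

A real system would first extract the component list and relations from pixels; the sketch starts after that split has been done.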
2. The review method of claim 1, wherein the matching of each answer component and component relationship in the combined answer graph of the answer image corresponding to the image to be reviewed with each answering component and component relationship in the combined answering graph to obtain a combined graph matching result comprises:
matching each answer component of the combined answer graph with each answering component of the combined answering graph;
and if each answer component corresponds one-to-one to an answering component, matching the component relationships of the answer components with the component relationships of the answering components to obtain the combined graph matching result of the image to be reviewed.
3. The review method of claim 2, wherein the matching of the component relationships of the answer components with the component relationships of the answering components to obtain the combined graph matching result of the image to be reviewed comprises:
matching the two-dimensional component-relationship table of the combined answer graph with that of the combined answering graph to obtain the combined graph matching result of the image to be reviewed;
wherein the two-dimensional component-relationship table of the combined answer graph is determined based on the component relationships of the answer components, and the two-dimensional component-relationship table of the combined answering graph is determined based on the component relationships of the answering components.
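Claim 3's two-dimensional component-relationship table can be illustrated with a small sketch. The table layout (one row and column per component, a relation label per cell, symmetric relations) is an assumption made for illustration; the patent does not fix the table's encoding.

```python
# Sketch of claim 3: build an N x N relation table per graph, then
# compare the answer table against the answering table cell by cell.
def relation_table(components, relations):
    """Cell (i, j) holds the relation label between component i and
    component j, or '' when no relation is recorded."""
    index = {name: i for i, name in enumerate(components)}
    n = len(components)
    table = [["" for _ in range(n)] for _ in range(n)]
    for (a, b), rel in relations.items():
        table[index[a]][index[b]] = rel
        table[index[b]][index[a]] = rel  # relations assumed symmetric
    return table

def tables_match(answer_table, answering_table):
    return answer_table == answering_table

answer_tbl = relation_table(
    ["circle", "triangle"], {("circle", "triangle"): "inscribed"})
answering_tbl = relation_table(
    ["circle", "triangle"], {("circle", "triangle"): "inscribed"})
print(tables_match(answer_tbl, answering_tbl))  # True
```

The comparison assumes the one-to-one component correspondence of claim 2 has already fixed a common component ordering for both tables.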
4. The review method of any one of claims 1 to 3, wherein after the matching of each answer component and component relationship in the combined answer graph of the answer image corresponding to the image to be reviewed with each answering component and component relationship in the combined answering graph to obtain a combined graph matching result, the method further comprises:
if the combined graph matching result is a mismatch, rotating the combined answering graph and the combined answer graph multiple times by a preset angle, respectively, to obtain a combined answering graph set and a combined answer graph set;
and performing whole-pattern matching between the combined answering graphs in the combined answering graph set and the combined answer graphs in the combined answer graph set, and updating the combined graph matching result based on the whole-pattern matching result.
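Claim 4's rotation retry can be sketched on a toy representation: a pattern as a set of grid points and a preset rotation step of 90 degrees. Both the point-set representation and the 90-degree step are assumptions for illustration; the claim only requires repeated rotation by some preset angle followed by whole-pattern matching.

```python
# Sketch of claim 4: generate the rotation set of each pattern and
# declare a match when any rotated answering pattern equals any
# rotated answer pattern.
def rotations(points, step_deg=90):
    """Return the pattern rotated by 0, step, 2*step, ... degrees.
    For a 90-degree step on grid points, (x, y) -> (-y, x)."""
    out, cur = [], set(points)
    for _ in range(360 // step_deg):
        out.append(frozenset(cur))
        cur = {(-y, x) for (x, y) in cur}
    return out

def whole_pattern_match(answering, answer, step_deg=90):
    ans_set = set(rotations(answer, step_deg))
    return any(r in ans_set for r in rotations(answering, step_deg))

# An "L" shape drawn rotated still matches the answer's "L" shape:
answer = {(0, 0), (0, 1), (1, 0)}
answering = {(0, 0), (-1, 0), (0, 1)}
print(whole_pattern_match(answering, answer))  # True
```

Rotating both sets, as the claim does, makes the comparison invariant to the student having drawn the whole figure at a different orientation.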
5. The review method of any one of claims 1 to 3, wherein before the component splitting of the combined answering graph to obtain each answering component and component relationship in the combined answering graph, the method further comprises:
preprocessing the combined answering graph and/or the combined answer graph so that the difference in graph size between the preprocessed combined answering graph and the combined answer graph is smaller than a preset threshold, wherein the preprocessing comprises rotation and/or stretching.
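The stretching half of claim 5's preprocessing can be sketched as computing per-axis scale factors and verifying the residual size difference against the preset threshold. The function name, the (width, height) size representation, and the relative-difference criterion are assumptions for illustration.

```python
# Sketch of claim 5: stretch the answering graph so its size agrees
# with the answer graph's within a preset relative threshold.
def stretch_to_match(answering_size, answer_size, threshold=0.01):
    """Return (sx, sy) stretch factors for the answering graph, or
    None if the stretched size still misses the threshold."""
    (aw, ah), (bw, bh) = answering_size, answer_size
    sx, sy = bw / aw, bh / ah
    stretched = (aw * sx, ah * sy)
    ok = (abs(stretched[0] - bw) / bw < threshold
          and abs(stretched[1] - bh) / bh < threshold)
    return (sx, sy) if ok else None

print(stretch_to_match((100, 50), (200, 100)))  # (2.0, 2.0)
```

Normalizing sizes first keeps the later component and relation matching from failing merely because the student drew the figure at a different scale.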
6. The review method of any one of claims 1 to 3, wherein if answering text and/or a basic answering graph exists in the image to be reviewed, the determining of the review result of the image to be reviewed based on the combined graph matching result comprises:
determining the review result of the image to be reviewed based on the combined graph matching result together with the answering text matching result and/or the basic graph matching result;
wherein the answering text matching result is determined by matching the answering text with the answer text of the answer image, and the basic graph matching result is determined by performing graph type recognition on the basic answering graph and matching the recognized graph type against that of the basic answer graph of the answer image.
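One simple way to combine the partial results of claim 6 is a conjunction over whichever results exist. Treating each matching result as a boolean and requiring every present result to pass is an assumption for illustration; the claim only says the review result is determined "based on" these results.

```python
# Sketch of claim 6: combine the combined-graph result with the
# optional text and basic-graph results; absent parts are skipped.
def overall_result(combined_ok, text_ok=None, basic_ok=None):
    """Overall review passes only when every matching result that is
    present (combined graph, answering text, basic graph) passes."""
    parts = [r for r in (combined_ok, text_ok, basic_ok) if r is not None]
    return all(parts)

print(overall_result(True, text_ok=True))    # True
print(overall_result(True, basic_ok=False))  # False
```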
7. The review method of claim 6, wherein the determining of an image to be reviewed comprises:
inputting the image to be reviewed into a region detection model to obtain at least one of answering text, a basic answering graph, and a combined answering graph output by the region detection model;
wherein the region detection model is trained on sample images, each sample image comprising at least one of sample answering text, a sample basic answering graph, and a sample combined answering graph.
8. A review device, comprising:
a determining unit, configured to determine an image to be reviewed;
a splitting unit, configured to, if a combined answering graph exists in the image to be reviewed, split the combined answering graph into components to obtain each answering component and the component relationships in the combined answering graph;
a matching unit, configured to match each answer component and component relationship in the combined answer graph of the answer image corresponding to the image to be reviewed with each answering component and component relationship in the combined answering graph, to obtain a combined graph matching result;
and a review unit, configured to determine the review result of the image to be reviewed based on the combined graph matching result.
9. An electronic device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the steps of the review method of any one of claims 1 to 7 when executing the computer program.
10. A non-transitory computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the review method of any of claims 1 to 7.
CN202011444199.9A 2020-12-08 2020-12-08 Evaluation method, evaluation device, electronic equipment and storage medium Pending CN112507879A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011444199.9A CN112507879A (en) 2020-12-08 2020-12-08 Evaluation method, evaluation device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011444199.9A CN112507879A (en) 2020-12-08 2020-12-08 Evaluation method, evaluation device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN112507879A true CN112507879A (en) 2021-03-16

Family

ID=74971131

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011444199.9A Pending CN112507879A (en) 2020-12-08 2020-12-08 Evaluation method, evaluation device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112507879A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113688273A (en) * 2021-10-26 2021-11-23 杭州智会学科技有限公司 Graphic question answering and judging method and device

Citations (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102902947A (en) * 2011-07-27 2013-01-30 阿里巴巴集团控股有限公司 Image identification display method and device as well as user equipment
CN103310082A (en) * 2012-03-07 2013-09-18 爱意福瑞(北京)科技有限公司 Paper inspection method and device
CN103824482A (en) * 2014-03-18 2014-05-28 西华大学 Online examination system and method
CN103955889A (en) * 2013-12-31 2014-07-30 广东工业大学 Drawing-type-work reviewing method based on augmented reality technology
CN105069412A (en) * 2015-07-27 2015-11-18 中国地质大学(武汉) Digital scoring method
CN106033544A (en) * 2015-03-18 2016-10-19 成都理想境界科技有限公司 Test content area extraction method based on template matching
CN106874508A * 2017-02-28 2017-06-20 江苏中育优教科技发展有限公司 Test paper generation and review method based on gridded image processing
CN106934767A * 2017-02-28 2017-07-07 上海小闲网络科技有限公司 Test paper generation and scoring method and system
CN107729936A * 2017-10-12 2018-02-23 科大讯飞股份有限公司 Automatic review method and system for error-correction exercises
CN107783718A * 2017-11-20 2018-03-09 宁波宁大教育设备有限公司 Paper-based handwriting input method and device for online homework and examinations
CN107977637A * 2017-12-11 2018-05-01 上海启思教育科技服务有限公司 Intelligent review system for multiple question types
CN109271945A * 2018-09-27 2019-01-25 广东小天才科技有限公司 Method and system for online homework correction
CN109461503A * 2018-11-14 2019-03-12 科大讯飞股份有限公司 Cognitive assessment method, device, and equipment for an object, and readable storage medium
CN109740515A * 2018-12-29 2019-05-10 科大讯飞股份有限公司 Review method and device
KR102095407B1 (en) * 2019-09-05 2020-03-31 김강 System for scoring collection of questions
CN111008594A (en) * 2019-12-04 2020-04-14 科大讯飞股份有限公司 Error correction evaluation method, related equipment and readable storage medium
CN111340020A (en) * 2019-12-12 2020-06-26 科大讯飞股份有限公司 Formula identification method, device, equipment and storage medium
CN111680688A (en) * 2020-06-10 2020-09-18 创新奇智(成都)科技有限公司 Character recognition method and device, electronic equipment and storage medium
CN111753767A * 2020-06-29 2020-10-09 广东小天才科技有限公司 Method and device for automatically correcting homework, electronic equipment and storage medium
CN111767883A * 2020-07-07 2020-10-13 北京猿力未来科技有限公司 Question correction method and device

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
HEKMAN K A et al.: "Automated grading of first year student CAD work", 2013 ASEE Annual Conference & Exposition, pages 1-11 *
储节磊 et al.: "Research on comparison and recognition of vector graphics", Computer Applications and Software, vol. 27, no. 12, pages 250-252 *
徐文胜 et al.: "Research on an automatic review system for the AutoCAD proficiency examination", Journal of Engineering Graphics, no. 1, pages 155-159 *
杨万里: "Development of a drawing platform for graphics courses and implementation of its homework-correction function", China Masters' Theses Full-text Database, Engineering Science and Technology II, vol. 2006, no. 9, page 5 *
燕永军 et al.: "Feature analysis and knowledge representation of homework graphics", Computer Engineering and Design, vol. 29, no. 4, page 1 *

Similar Documents

Publication Publication Date Title
CN108399386B (en) Method and device for extracting information in pie chart
EP4044115A1 (en) Image processing method and apparatus based on artificial intelligence, and device and storage medium
CN111563502B (en) Image text recognition method and device, electronic equipment and computer storage medium
CN110009027B (en) Image comparison method and device, storage medium and electronic device
CN110929573A (en) Examination question checking method based on image detection and related equipment
US4773098A (en) Method of optical character recognition
CN111626297A (en) Character writing quality evaluation method and device, electronic equipment and recording medium
CN106021330A (en) A three-dimensional model retrieval method used for mixed contour line views
CN111401156B (en) Image identification method based on Gabor convolution neural network
CN113033711A Question correction method and device, electronic equipment and computer storage medium
CN111767883A Question correction method and device
CN109409388B (en) Dual-mode deep learning descriptor construction method based on graphic primitives
CN109766752B (en) Target matching and positioning method and system based on deep learning and computer
Lovett et al. Analogy with qualitative spatial representations can simulate solving Raven's Progressive Matrices
CN113762269A (en) Chinese character OCR recognition method, system, medium and application based on neural network
EP0131681A2 (en) Method of optical character recognition
Li et al. Braille recognition using deep learning
CN112507879A (en) Evaluation method, evaluation device, electronic equipment and storage medium
Xia et al. Texture characterization using shape co-occurrence patterns
Han et al. An interactive grading and learning system for chinese calligraphy
CN113033721A Question correction method and computer storage medium
CN116612478A (en) Off-line handwritten Chinese character scoring method, device and storage medium
CN115984875A (en) Stroke similarity evaluation method and system for hard-tipped pen regular script copy work
CN115019396A (en) Learning state monitoring method, device, equipment and medium
CN115346225A (en) Writing evaluation method, device and equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination