CN110490157B - Character evaluation method, character learning method, device, equipment and storage medium - Google Patents

Character evaluation method, character learning method, device, equipment and storage medium

Info

Publication number
CN110490157B
CN110490157B (application CN201910784175.9A)
Authority
CN
China
Prior art keywords
stroke
character
template
evaluated
matching
Prior art date
Legal status
Active
Application number
CN201910784175.9A
Other languages
Chinese (zh)
Other versions
CN110490157A (en)
Inventor
李瑾
刘庆升
王晓斐
吴玉胜
高群
王忍宝
Current Assignee
Anhui Toycloud Technology Co Ltd
Original Assignee
Anhui Toycloud Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Anhui Toycloud Technology Co Ltd filed Critical Anhui Toycloud Technology Co Ltd
Priority to CN201910784175.9A
Publication of CN110490157A
Application granted
Publication of CN110490157B
Legal status: Active
Anticipated expiration

Classifications

    • G06V 10/267 — Segmentation of patterns in the image field by performing operations on regions, e.g. growing, shrinking or watersheds
    • G06V 10/751 — Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching
    • G06V 30/153 — Segmentation of character regions using recognition of characters or words
    • G06V 30/347 — Digital ink: sampling; contour coding; stroke extraction
    • G06V 30/36 — Digital ink: matching; classification
    • G06V 30/10 — Character recognition

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Databases & Information Systems (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Character Discrimination (AREA)

Abstract

The application provides a character evaluation method, a character learning method, a device, equipment and a storage medium. The character evaluation method comprises the following steps: acquiring features of the stroke level of a template character; acquiring trajectory point data of each stroke of a character to be evaluated that corresponds to the template character, and determining features of the stroke level of the character to be evaluated according to the trajectory point data; performing stroke matching between the character to be evaluated and the template character according to the stroke-level features of the character to be evaluated and of the template character, so as to obtain a stroke matching result; and determining correct strokes and/or incorrect strokes among the strokes of the character to be evaluated according to the stroke matching result. The character evaluation method is suitable for character learning by a user and provides a good user experience.

Description

Character evaluation method, character learning method, device, equipment and storage medium
Technical Field
The present application relates to the field of intelligent education technologies, and in particular to a character evaluation method, a character learning method, and corresponding apparatuses, devices and storage media.
Background
With the intelligent development of education, more and more enterprises are developing various education and learning systems, and character learning systems (such as Chinese character learning systems) are one kind of them. An important part of a character learning system is evaluating the characters written by the user. At present, most schemes for evaluating Chinese characters written by a user are based on handwriting recognition; however, evaluation schemes based on handwriting recognition are not well suited to character learning.
Disclosure of Invention
In view of the above, the present application provides a character evaluation method, a character learning method, and corresponding apparatuses, devices and storage media, so as to solve the problem that character evaluation schemes in the prior art are not suitable for character learning. The technical solution is as follows:
A character evaluation method, comprising:
acquiring the characteristics of the stroke layer of the template characters;
acquiring track point data of each stroke of the character to be evaluated corresponding to the template character, and determining the characteristics of the stroke layer of the character to be evaluated according to the track point data;
according to the characteristics of the character to be evaluated and the stroke layer of the template character, performing stroke matching on the character to be evaluated and the template character to obtain a stroke matching result;
and determining the correct stroke and/or the wrong stroke in the strokes of the character to be evaluated according to the stroke matching result.
Optionally, the stroke matching result includes: at least one matching stroke group;
wherein each matching stroke group comprises: a plurality of matched stroke pairs consisting of one stroke to be evaluated and a plurality of template strokes, or a plurality of matched stroke pairs consisting of a plurality of strokes to be evaluated and one template stroke, or one matched stroke pair consisting of one stroke to be evaluated and one template stroke;
any matched stroke pair comprises a template stroke and a stroke to be evaluated, wherein the template stroke is the stroke of the template character, and the stroke to be evaluated is the stroke of the character to be evaluated.
Optionally, the performing stroke matching on the text to be evaluated and the template text according to the features of the stroke levels of the text to be evaluated and the template text includes:
determining a shortest matching path of the stroke sequence of the character to be evaluated and the stroke sequence of the template character according to the characteristics of the stroke layers of the character to be evaluated and the template character, wherein the shortest matching path can represent the stroke matching condition of the character to be evaluated and the template character;
and determining a matching stroke group of the character to be evaluated and the template character according to the shortest matching path of the stroke sequence of the character to be evaluated and the stroke sequence of the template character.
Optionally, the determining, according to the features of the stroke levels of the text to be evaluated and the template text, a shortest matching path between the stroke sequence of the text to be evaluated and the stroke sequence of the template text includes:
determining the distance between each stroke of the character to be evaluated and each stroke of the template character as a first distance according to the characteristics of the stroke layers of the character to be evaluated and the template text, and forming a first distance matrix by all the obtained first distances according to the stroke sequence of the character to be evaluated and the template character;
determining a first accumulated distance corresponding to each element in the first distance matrix to obtain a first accumulated distance matrix, wherein the first accumulated distance corresponding to any element is the length of the shortest path from a first element in the first distance matrix to the element, and the first element in the first distance matrix is a first distance between a first stroke of the text to be evaluated and a first stroke of the template text;
and determining a shortest path from a first element to a last element in the first cumulative distance matrix as a shortest matching path of the stroke sequence of the character to be evaluated and the stroke sequence of the template character, wherein the first element and the last element in the first cumulative distance matrix are a first cumulative distance between a first stroke of the character to be evaluated and the first stroke of the template character, and a first cumulative distance between the last stroke of the character to be evaluated and the last stroke of the template character, respectively.
Optionally, the determining, according to the stroke matching result, a correct stroke and/or an incorrect stroke in the strokes of the text to be evaluated includes:
acquiring the segment segmentation characteristics of each stroke in the at least one matched stroke group, wherein the segment segmentation characteristics are characteristics of a track point layer, and the segment segmentation characteristics of any stroke can embody the trend and the number of the strokes;
and determining the correct stroke and/or the wrong stroke in the strokes of the character to be evaluated according to the segment segmentation characteristics of each stroke in the at least one matched stroke group.
Optionally, obtaining a segment segmentation feature of any stroke includes:
acquiring a target segmentation point sequence according to the track point data of the stroke, wherein the segmentation points contained in the target segmentation point sequence are at least part of the track points contained in the stroke, and the segmentation points in the target segmentation point sequence are arranged according to the sequence of the acquisition time sequence of the track points;
and respectively determining the segment segmentation characteristics of each segmentation point in the target segmentation point sequence.
Optionally, the obtaining a target segmentation point sequence according to the trajectory point data of the stroke includes:
normalizing the stroke to an image with a preset aspect ratio to obtain a normalized stroke;
determining candidate segmentation points from the track points contained in the normalized stroke according to the direction angles of the track points contained in the normalized stroke, and forming a candidate segmentation point sequence by all the determined candidate segmentation points, wherein the direction angles of the track points are determined according to the track point data of the stroke;
and removing redundant segmentation points from the candidate segmentation point sequence according to the distance between adjacent candidate segmentation points in the candidate segmentation point sequence, wherein the segmentation point sequence obtained after the redundant segmentation points are removed is used as the target segmentation point sequence.
Optionally, the determining the segment segmentation characteristics of each segmentation point in the target segmentation point sequence respectively includes:
for each division point in other division points except the last division point in the target division point sequence, calculating the sum vector of the division point and the vector of the division point directly succeeding the division point, and forming the stroke division characteristics of the division point by the vector of the division point and the sum vector, wherein the vector of any division point is the vector starting from the division point and pointing to the division point directly succeeding the division point;
and for the last segmentation point in the target segmentation point sequence, forming the segment segmentation characteristic of the segmentation point by the vector of the segmentation point and the copy of the vector of the segmentation point.
Optionally, the determining, according to the segment segmentation feature of each stroke in the at least one matching stroke group, a correct stroke and/or an incorrect stroke in the strokes of the text to be evaluated includes:
for each matching stroke pair in each matching stroke group, determining the length of the shortest matching path of the segmentation point sequence of one stroke and the segmentation point sequence of the other stroke in the matching stroke pair according to the segment segmentation characteristics of the two strokes in the matching stroke pair, and taking the length of the shortest matching path corresponding to the matching stroke pair, wherein the shortest matching path can represent the matching condition of the segmentation point sequences of the two strokes in the matching stroke pair;
and determining the correct stroke and/or the wrong stroke in the strokes of the character to be evaluated according to the corresponding shortest matching path length of each matching stroke pair in each matching stroke group.
Optionally, the determining, according to the segment segmentation characteristics of the two strokes in the matched stroke pair, the length of the shortest matching path between the segmentation point sequence of one stroke in the matched stroke pair and the segmentation point sequence of the other stroke in the matched stroke pair includes:
determining the distance between each segmentation point of one stroke and each segmentation point of the other stroke in the matched stroke pair as a second distance according to the segment segmentation characteristics of the two strokes in the matched stroke pair, and forming a second distance matrix by all the determined second distances according to the sequence of the segmentation points of the two strokes;
determining a second accumulated distance corresponding to each element in the second distance matrix to obtain a second accumulated distance matrix, wherein the second accumulated distance corresponding to any element is the length of the shortest path from the first element to the element in the second distance matrix, and the first element in the second distance matrix is a second distance between the first dividing point of one stroke and the first dividing point of the other stroke in the matched stroke pair;
and determining the last element in the second accumulated distance matrix as the length of the shortest matching path of the segmentation point sequence of one stroke and the segmentation point sequence of the other stroke in the matched stroke pair, wherein the last element in the second accumulated distance is the second accumulated distance of the last segmentation point of one stroke and the last segmentation point of the other stroke in the matched stroke pair.
Optionally, the determining, according to the shortest matching path length corresponding to each matching stroke pair in each matching stroke group, a correct stroke and/or an incorrect stroke in the strokes of the text to be evaluated includes:
for each matching stroke group:
if the matched stroke group comprises a plurality of matched stroke pairs consisting of a stroke to be evaluated and a plurality of template strokes, determining the matched stroke pair with the shortest matching path length as a target stroke pair;
if the matched stroke group comprises a plurality of matched stroke pairs consisting of a plurality of strokes to be evaluated and a template stroke, determining the matched stroke pair with the shortest matching path length and the matched stroke pair with the smallest second distance as a target stroke pair, wherein the second distance of one matched stroke pair is determined according to the stroke level characteristics of two strokes in the matched stroke pair;
if the matched stroke group comprises a matched stroke pair consisting of a stroke to be evaluated and a template stroke, determining whether the matched stroke pair is a target stroke pair according to the second distance of the matched stroke pair, and if the matched stroke pair cannot be determined to be the target stroke pair according to the second distance of the matched stroke pair, further determining whether the matched stroke pair is the target stroke pair according to the shortest matching path length of the matched stroke pair and the direction trends of two strokes in the matched stroke pair;
and determining the strokes to be evaluated in the target stroke pair as correct strokes and determining other strokes as wrong strokes in the strokes of the characters to be evaluated.
A character learning method, comprising:
acquiring template characters selected by a user;
acquiring character learning information corresponding to template characters selected by a user from character learning information corresponding to a plurality of template characters acquired in advance and playing the character learning information, wherein the character learning information corresponding to any template character is used for guiding the user to learn the writing of the template character;
when an evaluation instruction is received, evaluating the characters to be evaluated written by the user according to the template characters by adopting the character evaluation method;
and visualizing the evaluation result of the character to be evaluated.
Optionally, the characteristics of the stroke layer of any template character are obtained according to the track point data of the template character; the character learning information corresponding to any template character comprises a dynamic tracing picture of the writing process of the template character;
the process of acquiring dynamic tracing chart and track point data in the writing process of a template character in advance comprises the following steps:
decomposing a writing process dynamic graph crawled from a network aiming at the template characters according to frames to obtain an ordered image frame sequence;
making a target base map and a target template according to the last image frame in the ordered image frame sequence, wherein the target base map is a base map used for tracing each image frame, and the target template is used for extracting the region needing to be traced in each image frame;
making a tracing red image of each image frame in the ordered image frame sequence according to the target base image and the target template, and determining track point data according to the difference between adjacent tracing red images;
and synthesizing all the produced tracing images into a dynamic tracing image of the writing process of the template character, and forming all the obtained trajectory point data into the trajectory point data of the template character.
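Illustratively, a minimal sketch (not taken from the patent; it omits the target base map and target template steps, and the function names are illustrative) of decomposing a crawled writing-process animation into an ordered frame sequence and deriving trajectory points from the differences between adjacent frames, using Pillow and NumPy:

```python
import numpy as np
from PIL import Image, ImageSequence

def frames_from_animation(path):
    """Ordered image frame sequence decomposed from a writing-process animation."""
    with Image.open(path) as im:
        return [frame.convert("L").copy() for frame in ImageSequence.Iterator(im)]

def trajectory_from_frames(frames, threshold=30):
    """Trajectory points of the template character: for each pair of adjacent frames, the
    pixels that changed are taken as the trajectory points newly written in that step."""
    points = []
    prev = np.asarray(frames[0], dtype=np.int16)
    for frame in frames[1:]:
        cur = np.asarray(frame, dtype=np.int16)
        changed = np.argwhere(np.abs(cur - prev) > threshold)    # (row, col) of newly drawn pixels
        points.extend((int(c), int(r)) for r, c in changed)      # stored as (x, y) pixel coordinates
        prev = cur
    return points
```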
A character evaluation apparatus, comprising: a template character characteristic acquisition module, a track point data acquisition module, a character characteristic acquisition module to be evaluated, a stroke matching module and an evaluation result determination module;
the template character characteristic acquisition module is used for acquiring the characteristics of the stroke layer of the template character;
the track point data acquisition module is used for acquiring track point data of each stroke of the character to be evaluated corresponding to the template character;
the character feature acquisition module to be evaluated is used for acquiring the features of the stroke layer of the character to be evaluated according to the track point data;
the stroke matching module is used for performing stroke matching on the characters to be evaluated and the template characters according to the characteristics of the stroke layers of the characters to be evaluated and the template characters to obtain a stroke matching result;
and the evaluation result determining module is used for determining the correct stroke and/or the wrong stroke in the strokes of the character to be evaluated according to the stroke matching result.
A character learning apparatus comprising: the system comprises a template character acquisition module, a character learning information acquisition module, a playing module, an instruction receiving module, the character evaluation device and a visualization module;
the template character acquisition module is used for acquiring template characters selected by a user;
the character learning information acquisition module is used for acquiring character learning information corresponding to the template characters selected by the user from character learning information corresponding to a plurality of template characters acquired in advance; the character learning information corresponding to any template character is used for guiding a user to learn the writing of the template character;
the playing module is used for playing the character learning information corresponding to the template character selected by the user, so as to guide the user to learn the writing of the template character;
the character evaluation device is used for evaluating the characters to be evaluated written by the user according to the template characters when the instruction receiving module receives an evaluation instruction;
and the visualization module is used for visualizing the evaluation result of the character to be evaluated.
A character evaluation device, comprising: a memory and a processor;
the memory is used for storing programs;
the processor is configured to execute the program to implement each step of the character evaluation method described in any one of the above.
A readable storage medium, on which a computer program is stored, wherein the computer program, when executed by a processor, implements the steps of the character evaluation method described in any one of the above.
A character learning device, comprising: a memory and a processor;
the memory is used for storing programs; the processor is configured to execute the program to implement each step of the character learning method described in any one of the above.
A readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the character learning method of any one of the above.
According to the above solution, the character evaluation method provided by the present application first acquires the stroke-level features of the template character and of the character to be evaluated, then performs stroke matching between the character to be evaluated and the template character according to these stroke-level features, and finally determines the correct strokes and/or incorrect strokes among the strokes of the character to be evaluated according to the stroke matching result. The method therefore takes the strokes of a character as its object of study, and determines how the strokes of the character to be evaluated were written by matching them against the strokes of the template character. In this way the user can know which strokes of the character he or she has written are correct and which are wrong, and can thus learn the character better. In other words, the character evaluation method provided by the present application is suitable for character learning by a user and offers a better user experience.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained from the provided drawings without creative effort.
Fig. 1 is a schematic flow chart of a text evaluation method according to an embodiment of the present application;
fig. 2 is a schematic diagram that divides a 360-degree direction area into 8 direction areas equally according to an embodiment of the present application;
Figs. 3a to 3c are schematic diagrams of an example of obtaining an 8-direction pattern diagram provided by an embodiment of the present application;
fig. 4 is a schematic flowchart illustrating a process of performing stroke matching on a to-be-evaluated character and a template character according to characteristics of stroke layers of the to-be-evaluated character and the template character according to an embodiment of the present application;
FIG. 5 is a diagram illustrating an example of a shortest matching path between a stroke sequence of a template Chinese character and a stroke sequence of a character to be evaluated according to an embodiment of the present application;
FIG. 6 is a flowchart illustrating a process of determining a correct stroke and/or an incorrect stroke in strokes of a text to be evaluated according to a stroke matching result according to an embodiment of the present application;
FIG. 7 is a schematic view of an angle of direction of a dividing point according to an embodiment of the present disclosure;
fig. 8a and 8b are diagrams illustrating the effect of segmenting the handwritten Chinese character by segments according to the embodiment of the present application;
fig. 9 is a schematic flowchart of a text learning method according to an embodiment of the present application;
fig. 10 is a schematic structural diagram of a text evaluation apparatus according to an embodiment of the present application;
fig. 11 is a schematic structural diagram of a character learning apparatus according to an embodiment of the present application;
fig. 12 is a schematic structural diagram of a text evaluation device according to an embodiment of the present application;
fig. 13 is a schematic structural diagram of a character learning apparatus according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In the process of implementing the present application, the inventors found that most character evaluation schemes in the prior art are based on handwriting recognition. Such schemes take the whole character as the object of study and cannot evaluate how the individual strokes and the stroke order were written, so they are not suitable for character learning and cannot meet users' functional requirements for a character learning system.
The inventors have conducted extensive research to solve the problems of the prior-art character evaluation schemes, and finally provide a character evaluation method suitable for character learning, in which the strokes of a character are taken as the object of study and correct strokes and incorrect strokes can be determined among the strokes of the character to be evaluated.
Referring to fig. 1, a flow chart of a text evaluation method provided in the embodiment of the present application is shown, where the method may include:
step S101: and acquiring the characteristics of the stroke layer of the template characters.
Optionally, the template character in this embodiment may be a Chinese character, and of course may also be a character of another country.
The characteristics of the stroke layer of the template characters are determined according to the track point data of each stroke of the template characters.
There are various ways to acquire the stroke-level features of the template character. In one possible implementation, considering that the stroke-level features of the template character do not change, the trajectory point data of each stroke of the template character may be acquired in advance, before evaluation, and the stroke-level features of the template character are determined from this trajectory point data and stored; when a character to be evaluated that corresponds to the template character needs to be evaluated, the pre-stored stroke-level features of the template character are read directly. In another possible implementation, the stroke-level features of the template character are not acquired in advance; instead, when the character to be evaluated needs to be evaluated, they are determined from the trajectory point data of each stroke of the template character that was acquired in advance.
Step S102: track point data of each stroke of the character to be evaluated corresponding to the template character is obtained, and characteristics of the stroke layer of the character to be evaluated are determined according to the track point data.
The trajectory point data of any stroke is the data of each trajectory point contained in the stroke. The data of any trajectory point can be a triple (x, y, status), where (x, y) are the coordinates of the trajectory point, x being the pixel abscissa and y the pixel ordinate of the trajectory point in the image, and status indicates whether the trajectory point is the last point of the stroke: if so, status is 1; if not, status is 0.
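Illustratively, the following minimal sketch (not part of the patent; the type and function names are chosen here for illustration) shows how such (x, y, status) triples might be represented and grouped into per-stroke trajectory sequences in Python:

```python
from typing import List, NamedTuple

class TrajectoryPoint(NamedTuple):
    x: int       # pixel abscissa of the trajectory point in the writing image
    y: int       # pixel ordinate of the trajectory point in the writing image
    status: int  # 1 if this is the last point of a stroke, otherwise 0

def split_into_strokes(points: List[TrajectoryPoint]) -> List[List[TrajectoryPoint]]:
    """Group a flat stream of trajectory points into strokes using the status flag."""
    strokes, current = [], []
    for p in points:
        current.append(p)
        if p.status == 1:      # end of the current stroke
            strokes.append(current)
            current = []
    if current:                # tolerate a stream that does not end with status == 1
        strokes.append(current)
    return strokes
```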
Optionally, the stroke-level features of the character to be evaluated may be the 8-direction features of its strokes; in that case, the stroke-level features of the template character are also the 8-direction features of its strokes. The process of determining the 8-direction features of a stroke from the trajectory point data of the stroke is described in the subsequent embodiments.
Step S103: and performing stroke matching on the characters to be evaluated and the template characters according to the characteristics of the stroke layers of the characters to be evaluated and the template characters to obtain a stroke matching result.
In the embodiment, strokes are used as research objects, and stroke matching is performed on the characters to be evaluated and the template characters according to the characteristics of the stroke level of the characters to be evaluated and the characteristics of the stroke level of the template characters.
Step S104: and determining the correct stroke and/or the wrong stroke in the strokes of the character to be evaluated according to the stroke matching result.
It should be noted that, according to the characteristics of the stroke layer of the text to be evaluated and the template text, stroke matching is performed on the text to be evaluated and the template text, and two situations may occur:
In the first case, matching fails and no matching stroke group is obtained. In this case it can be directly determined that there is no correct stroke among the strokes of the character to be evaluated, that is, all strokes of the character to be evaluated are written incorrectly.
In the second case, matching succeeds and at least one matching stroke group is obtained, which indicates that correct strokes may exist among the strokes of the character to be evaluated. In this case, the correct strokes and/or incorrect strokes among the strokes of the character to be evaluated are further determined according to the obtained at least one matching stroke group.
Wherein each matching stroke group comprises: the evaluation device comprises a plurality of matched stroke pairs consisting of a stroke to be evaluated and a plurality of template strokes, or a plurality of matched stroke pairs consisting of a plurality of strokes to be evaluated and a template stroke, or a matched stroke pair consisting of a stroke to be evaluated and a template stroke. It should be noted that the strokes to be evaluated refer to the strokes of the characters to be evaluated, and the template strokes refer to the strokes of the template characters.
It should be noted that, in the case where one stroke to be evaluated and a plurality of template strokes form a plurality of matched stroke pairs, the stroke to be evaluated forms one matched stroke pair with each of the template strokes. For example, if the stroke to be evaluated is x and the template strokes are m1, m2 and m3, then x, m1, m2 and m3 form three matched stroke pairs (x, m1), (x, m2) and (x, m3). In the case where a plurality of strokes to be evaluated and one template stroke form a plurality of matched stroke pairs, each stroke to be evaluated forms a matched stroke pair with the template stroke. For example, if the strokes to be evaluated are x1, x2 and x3 and the template stroke is m, then x1, x2, x3 and m form three matched stroke pairs (x1, m), (x2, m) and (x3, m).
The character evaluation method provided by the embodiment of the present application first acquires the stroke-level features of the template character and of the character to be evaluated, then performs stroke matching between the character to be evaluated and the template character according to these stroke-level features, and finally determines the correct strokes and/or incorrect strokes among the strokes of the character to be evaluated according to the stroke matching result. The method therefore takes the strokes of a character as its object of study and determines how the strokes of the character to be evaluated were written by matching them against the strokes of the template character. In this way the user can know which strokes of the character he or she has written are correct and which are wrong, and can thus learn the character better; that is, the character evaluation method provided by the embodiment of the present application is suitable for character learning by a user and offers a better user experience.
Next, "determining the features of the stroke level of the character to be evaluated according to the trajectory point data" in step S102 of the above embodiment is described.
Taking the case where the stroke-level features are the 8-direction features of the strokes as an example, an implementation of determining the 8-direction features of any stroke from the trajectory point data of the stroke is given below:
Step a1, normalizing the stroke.
Specifically, in order to reduce the differences caused by different writing screen sizes and sampling frequencies across devices, after a stroke is obtained it is linearly normalized to an image of a preset size, yielding a stroke image of the preset size. For example, the stroke can be normalized to a 64 × 64 square image to obtain a 64 × 64 stroke image.
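Illustratively, a minimal sketch (not taken from the patent; the function name and the 64 × 64 target size are assumptions for illustration) of linearly normalizing the trajectory points of a stroke into a fixed-size coordinate grid:

```python
def normalize_stroke(points, size=64):
    """Linearly map the (x, y) trajectory points of a stroke into a size x size grid."""
    xs = [x for x, y in points]
    ys = [y for x, y in points]
    min_x, max_x = min(xs), max(xs)
    min_y, max_y = min(ys), max(ys)
    span_x = max(max_x - min_x, 1)   # avoid division by zero for dot-like strokes
    span_y = max(max_y - min_y, 1)
    return [((x - min_x) * (size - 1) / span_x,
             (y - min_y) * (size - 1) / span_y) for x, y in points]
```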
Step a2, extracting the 8-direction feature vector of each trajectory point contained in the normalized stroke.
Suppose that, in the image coordinate system, a trajectory point P_j has coordinates (x_j, y_j). Its direction vector V_j points from P_j to the next trajectory point of the stroke, i.e.
V_j = (x_{j+1} − x_j, y_{j+1} − y_j),
and V_j is normalized to the unit vector
v_j = V_j / ‖V_j‖.
In addition, the 360-degree direction range is divided by the direction axes {D_1, D_2, D_3, D_4, D_5, D_6, D_7, D_8} into 8 equal direction areas, as shown in fig. 2, so that the unit direction vector of each trajectory point necessarily lies in some direction area and can therefore be decomposed onto the two direction axes bounding that area. Suppose the unit direction vector of trajectory point P_j lies in the direction area between D_1 and D_2; then the 8-direction feature of the trajectory point can be taken as the 8-dimensional vector whose components on D_1 and D_2 are the decomposition coefficients of the unit direction vector on these two axes, with the remaining six components being zero. Similarly, if the unit direction vector of trajectory point P_j lies in the direction area between D_5 and D_6, its 8-direction feature has non-zero components only on D_5 and D_6.
Step a3, generating the 8-direction pattern diagrams according to the 8-direction feature vectors of the trajectory points.
For each of the 8 dimensions, the components of that dimension of the 8-direction feature vectors of all trajectory points are arranged into a pattern diagram of the same size as the stroke image (positions that are not trajectory points are set to 0). In this way 8 pattern diagrams, namely the 8-direction pattern diagrams, are obtained.
Referring to figs. 3a to 3c, schematic diagrams of an example of obtaining the 8-direction pattern diagrams are shown. Fig. 3a is a 4 × 4 stroke image obtained by normalization, in which A, B and C are three trajectory points and D is a non-trajectory point; fig. 3b shows the 8-direction feature vectors of the four points A, B, C and D in fig. 3a; combining the components of the same dimension of the 8-direction feature vectors of the four points into a 4 × 4 pattern diagram yields the eight 4 × 4 pattern diagrams, i.e. the 8-direction pattern diagrams, shown in fig. 3c.
Step a4, acquiring 8-direction high-dimensional feature vectors of the strokes according to the 8-direction pattern diagram.
Each pattern diagram is divided evenly into N × N grids, the centre point of each grid is taken as a sampling point, and a Gaussian filter of size M × M (e.g. 16 × 16) is used for weighting, so that one weight value is obtained for each grid; thus each pattern diagram yields an N × N feature vector composed of the N × N weight values. After the N × N feature vector of a pattern diagram is obtained, a square operation is first applied to the feature vector, and the result of the square operation is then normalized by its norm to obtain the final feature vector of that pattern diagram. After the final feature vector of every pattern diagram has been obtained, all the feature vectors are concatenated, and an N × N × 8-dimensional high-dimensional feature vector, i.e. the 8-direction high-dimensional feature vector of the stroke, is obtained. For example, if the 8 pattern diagrams are of size 64 × 64, each pattern diagram can be divided into 8 × 8 grids, each pattern diagram then has 8 × 8 sampling points, and each stroke finally yields an 8 × 8 × 8-dimensional feature vector.
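Illustratively, the following sketch (not taken from the patent; the grid size, filter size, Gaussian sigma and function names are assumptions, and the decomposition onto the two bounding axes is done with a 2 × 2 linear solve) outlines how the 8-direction pattern diagrams and the stroke's high-dimensional feature vector described above might be computed with NumPy:

```python
import numpy as np

AXES = np.array([[np.cos(k * np.pi / 4), np.sin(k * np.pi / 4)] for k in range(8)])  # D1..D8

def direction_features(points):
    """8-direction feature vector of every trajectory point of one normalized stroke."""
    feats = np.zeros((len(points), 8))
    for j in range(len(points) - 1):
        v = np.array([points[j + 1][0] - points[j][0],
                      points[j + 1][1] - points[j][1]], dtype=float)
        norm = np.linalg.norm(v)
        if norm == 0:
            continue
        v /= norm
        proj = AXES @ v                           # closeness of v to each direction axis
        k = int(np.argmax(proj))                  # nearest axis
        nb = (k + 1) % 8 if proj[(k + 1) % 8] >= proj[(k - 1) % 8] else (k - 1) % 8
        A = np.column_stack([AXES[k], AXES[nb]])
        a, b = np.linalg.solve(A, v)              # v = a*D_k + b*D_nb (decomposition)
        feats[j, k], feats[j, nb] = a, b
    return feats

def stroke_feature(points, size=64, n=8, m=16, sigma=4.0):
    """N x N x 8 high-dimensional 8-direction feature of one normalized stroke.

    points: trajectory points already normalized into the [0, size - 1] range."""
    maps = np.zeros((8, size, size))
    for (x, y), f in zip(points, direction_features(points)):
        maps[:, int(round(y)), int(round(x))] = f          # the 8 direction pattern diagrams
    # Gaussian-weighted sampling at the centre of each of the n x n grid cells
    ax = np.arange(m) - (m - 1) / 2.0
    g = np.exp(-(ax[:, None] ** 2 + ax[None, :] ** 2) / (2 * sigma ** 2))
    step = size // n
    vec = []
    for d in range(8):
        padded = np.pad(maps[d], m // 2)
        for gy in range(n):
            for gx in range(n):
                cy = gy * step + step // 2 + m // 2
                cx = gx * step + step // 2 + m // 2
                patch = padded[cy - m // 2:cy + m // 2, cx - m // 2:cx + m // 2]
                vec.append(float((patch * g).sum()))
    vec = np.square(np.array(vec))                # square operation
    norm = np.linalg.norm(vec)
    return vec / norm if norm > 0 else vec        # norm normalization
```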
Next, step S103 in the above embodiment, namely "performing stroke matching between the character to be evaluated and the template character according to the stroke-level features of the character to be evaluated and of the template character to obtain a stroke matching result", is described.
Referring to fig. 4, a schematic diagram illustrating a process of performing stroke matching on a text to be evaluated and a template text according to characteristics of stroke levels of the text to be evaluated and the template text is shown, where the process may include:
step S401: and determining the shortest matching path of the stroke sequence of the character to be evaluated and the stroke sequence of the template character according to the characteristics of the stroke layers of the character to be evaluated and the template character.
The stroke sequence of the character to be evaluated is obtained by arranging all strokes contained in the character to be evaluated according to the writing sequence of the strokes, and the stroke sequence of the template character is obtained by arranging all strokes contained in the template character according to the writing sequence of the strokes.
Specifically, the process of determining the shortest matching path between the stroke sequence of the text to be evaluated and the stroke sequence of the template text according to the characteristics of the stroke layers of the text to be evaluated and the template text may include:
Step S4011, determining the distance between each stroke of the character to be evaluated and each stroke of the template character as a first distance according to the stroke-level features of the character to be evaluated and of the template character, and forming a first distance matrix from all the obtained first distances according to the stroke order of the character to be evaluated and of the template character.
Optionally, for any stroke a of the text to be evaluated and any stroke b of the template text, a euclidean distance between the 8 directional features of stroke a and the 8 directional features of stroke b may be calculated as a first distance between stroke a and stroke b.
Assuming that the template text comprises m strokes and the text to be evaluated comprises n strokes, m × n first distances can be obtained, and the m × n first distances form a first distance matrix according to the stroke order of the text to be evaluated and the template text as follows:
D = [d_{i,j}] (i = 1, …, m; j = 1, …, n), where d_{i,j} denotes the first distance between the i-th stroke of the template character and the j-th stroke of the character to be evaluated.
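Illustratively, a minimal sketch (not taken from the patent; names are illustrative) of building such a first distance matrix from per-stroke 8-direction features with the Euclidean distance mentioned above:

```python
import numpy as np

def first_distance_matrix(template_feats, eval_feats):
    """m x n matrix of Euclidean distances between per-stroke 8-direction features.

    template_feats: list of m feature vectors, one per template stroke (in stroke order).
    eval_feats:     list of n feature vectors, one per stroke to be evaluated (in stroke order).
    """
    m, n = len(template_feats), len(eval_feats)
    dist = np.zeros((m, n))
    for i in range(m):
        for j in range(n):
            dist[i, j] = np.linalg.norm(np.asarray(template_feats[i], dtype=float)
                                        - np.asarray(eval_feats[j], dtype=float))
    return dist
```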
Step S4012, determining a first cumulative distance corresponding to each element in the first distance matrix to obtain a first cumulative distance matrix.
The first cumulative distance corresponding to any element in the first distance matrix is the length of the shortest path from the first element in the first distance matrix to the element, and the first element in the first distance matrix is the first distance between the first stroke of the character to be evaluated and the first stroke of the template character.
Illustratively, suppose the character to be evaluated comprises two strokes and the template character comprises four strokes, so that a 4 × 2 first distance matrix is determined from the stroke-level features of the character to be evaluated and of the template character (the matrix itself is not reproduced here; the individual first distances are quoted below).
The first element in the first distance matrix is (1,1), and the first cumulative distance corresponding to this first element is 0.722661. The first cumulative distance corresponding to element (1,2) is the length of the shortest path from element (1,1) to (1,2), i.e. (1,1) → (1,2), which is 0.722661 + 1.29664 = 2.0193. The first cumulative distance corresponding to element (2,2) is the length of the shortest path from element (1,1) to element (2,2), i.e. (1,1) → (2,2), which is 0.722661 + 1.202821 = 1.925473. The first cumulative distance corresponding to element (3,1) is the length of the shortest path from element (1,1) to element (3,1), i.e. (1,1) → (2,1) → (3,1), which is 0.722661 + 1.393014 + 1.412841 = 3.528516. The remaining elements are handled similarly and are not described again here. It should be noted that, when searching for the shortest path from the first element (1,1) to another element (i, j) in the first distance matrix, the search actually proceeds backwards from (i, j), that is, the shortest path from element (i, j) to element (1,1) is searched for; after the shortest path from (i, j) to (1,1) is obtained, it is reversed to give the shortest path from (1,1) to (i, j). The first cumulative distance matrix of this example is obtained through the above process (its values are not reproduced here).
Step S4013, determining the shortest path from the first element to the last element in the first cumulative distance matrix as the shortest matching path between the stroke sequence of the character to be evaluated and the stroke sequence of the template character.
The last element in the first cumulative distance matrix is a first cumulative distance between the last stroke of the text to be evaluated and the last stroke of the template text, and the first element in the first cumulative distance matrix is a first cumulative distance between the first stroke of the text to be evaluated and the first stroke of the template text.
Specifically, the process of determining the shortest path from the first element to the last element in the first cumulative distance matrix includes: and starting from the last element in the first accumulated distance matrix, searching the shortest path from the last element to the first element, and reversing the shortest path from the last element to the first element to obtain the shortest path from the first element to the last element in the first accumulated distance matrix, wherein the shortest path is used as the shortest matching path between the stroke sequence of the character to be evaluated and the stroke sequence of the template character.
In addition, the way of searching for the shortest path from (1,1) to (m, n) in an m × n matrix is as follows: taking (m, n) as the current element, examine the three elements (m-1, n), (m, n-1) and (m-1, n-1) and determine the one with the smallest value as the target element; for example, if the value of (m-1, n) is the smallest, (m-1, n) becomes the target element and the new current element, and the next target element is then determined among (m-2, n), (m-1, n-1) and (m-2, n-1); if the value of (m-1, n-1) is the smallest there, (m-1, n-1) becomes the target element, and the search continues in the same way with (m-1, n-1) as the current element until (1,1) is reached. This yields the shortest path from (m, n) to (1,1), i.e. (m, n) → (m-1, n) → (m-1, n-1) → … → (1,1); reversing this path then gives the shortest path from (1,1) to (m, n), i.e. (1,1) → … → (m-1, n-1) → (m-1, n) → (m, n). That is, after each target element (i, j) is determined, the next target element is determined among (i-1, j), (i, j-1) and (i-1, j-1).
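Illustratively, a minimal sketch (not taken from the patent; function names are illustrative, and 0-based indices are used instead of the 1-based indices in the text) of computing a cumulative distance matrix and backtracking the shortest matching path in the dynamic-programming manner described above:

```python
import numpy as np

def cumulative_matrix(dist):
    """acc[i, j] = length of the shortest path from (0, 0) to (i, j), where each step moves one
    element down, right, or diagonally and the path length is the sum of the visited distances."""
    m, n = dist.shape
    acc = np.zeros((m, n))
    acc[0, 0] = dist[0, 0]
    for i in range(1, m):
        acc[i, 0] = acc[i - 1, 0] + dist[i, 0]
    for j in range(1, n):
        acc[0, j] = acc[0, j - 1] + dist[0, j]
    for i in range(1, m):
        for j in range(1, n):
            acc[i, j] = dist[i, j] + min(acc[i - 1, j], acc[i, j - 1], acc[i - 1, j - 1])
    return acc

def shortest_matching_path(acc):
    """Backtrack from the last element to (0, 0), then reverse, giving the shortest matching path."""
    i, j = acc.shape[0] - 1, acc.shape[1] - 1
    path = [(i, j)]
    while (i, j) != (0, 0):
        candidates = []
        if i > 0:
            candidates.append((acc[i - 1, j], (i - 1, j)))
        if j > 0:
            candidates.append((acc[i, j - 1], (i, j - 1)))
        if i > 0 and j > 0:
            candidates.append((acc[i - 1, j - 1], (i - 1, j - 1)))
        i, j = min(candidates, key=lambda c: c[0])[1]   # neighbour with the smallest cumulative value
        path.append((i, j))
    return list(reversed(path))
```

Each (i, j) on the returned path pairs the (i+1)-th template stroke with the (j+1)-th stroke to be evaluated; the same two routines can also be reused later for the segmentation point sequences of a matched stroke pair.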
Step S402: and determining a matching stroke group of the character to be evaluated and the template character according to the shortest matching path of the stroke sequence of the character to be evaluated and the stroke sequence of the template character.
Specifically, a matching stroke group of the text to be evaluated and the template text is determined according to the stroke pair corresponding to each element on the shortest matching path.
Illustratively, suppose the stroke sequence of the template Chinese character is {A1, A2, A3, A4, A5} and the stroke sequence of the character to be evaluated is {B1, B2, B3, B4, B5}, where the strokes in each sequence are arranged in the writing order of the strokes. The shortest matching path between the stroke sequence of the template Chinese character and the stroke sequence of the character to be evaluated is shown in fig. 5, from which the matched stroke pairs (A1, B1), (A2, B2), (A3, B3), (A3, B4), (A4, B5) and (A5, B5) can be obtained. Since (A3, B3) and (A3, B4) contain the same template stroke A3, these two matched stroke pairs are grouped into one matching stroke group; since (A4, B5) and (A5, B5) contain the same stroke to be evaluated B5, these two matched stroke pairs are also grouped into one matching stroke group. The matching stroke groups finally obtained are therefore {(A1, B1)}, {(A2, B2)}, {(A3, B3), (A3, B4)} and {(A4, B5), (A5, B5)}.
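Illustratively, a minimal sketch (not taken from the patent; the grouping rule is inferred from the example above, and names are illustrative) of grouping the matched stroke pairs on the shortest matching path into matching stroke groups:

```python
def group_matched_pairs(path):
    """Group (template_index, eval_index) pairs from the shortest matching path into matching
    stroke groups: consecutive pairs sharing a template stroke index or an evaluated stroke
    index are placed in the same group."""
    groups = []
    for pair in path:
        if groups and (pair[0] == groups[-1][-1][0] or pair[1] == groups[-1][-1][1]):
            groups[-1].append(pair)
        else:
            groups.append([pair])
    return groups

# The fig. 5 example, with 1-based stroke indices:
print(group_matched_pairs([(1, 1), (2, 2), (3, 3), (3, 4), (4, 5), (5, 5)]))
# -> [[(1, 1)], [(2, 2)], [(3, 3), (3, 4)], [(4, 5), (5, 5)]]
```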
Next, for "S104: and determining the correct stroke and/or the wrong stroke in the strokes of the character to be evaluated for introduction according to the stroke matching result.
Referring to fig. 6, a schematic flow chart of determining a correct stroke and/or an incorrect stroke in strokes of a text to be evaluated according to a stroke matching result is shown, which may include:
step S601: segment segmentation features of each stroke in at least one matching stroke group are obtained.
The stroke segmentation features are features of a track point layer, and the stroke segmentation features of any stroke can reflect the trend and the number of the strokes.
The segment segmentation features of the strokes in the at least one matching stroke group are all obtained in the same way. Taking one stroke as an example, the process of determining the segment segmentation features of the stroke is as follows:
step S6011, a target segmentation point sequence is obtained according to the trajectory point data of the strokes.
The target segmentation point sequence comprises at least part of the trajectory points contained in the stroke, and the segmentation points in the target segmentation point sequence are arranged according to the sequence of the acquisition time of the trajectory points.
Specifically, the process of obtaining the target segmentation point sequence according to the trajectory point data of the stroke may include:
Step b1, normalizing the stroke into an image with a preset aspect ratio to obtain the normalized stroke.
Step b2, determining candidate segmentation points from the trajectory points contained in the normalized stroke according to the direction angles of the trajectory points, and forming a candidate segmentation point sequence from all the determined candidate segmentation points.
Considering that the normalized stroke generally contains a relatively high density of trajectory points, the present embodiment compresses the trajectory points through step b2. Specifically, the implementation process of step b2 may include:
and step b21, taking a track point sequence formed by the track points contained in the normalized stroke in sequence as a current candidate segmentation point sequence.
Step b22, obtaining the first candidate division point and the last candidate division point from the current candidate division point sequence, determining the division point with the direction angle being the local maximum direction angle from the middle division point, and forming a new candidate division point sequence as the current candidate division point sequence by all the obtained candidate division points in sequence.
The intermediate division point refers to a candidate division point between the first candidate division point and the last candidate division point.
It should be noted that the direction angle of any candidate division point refers to the direction change angle of the division point, and if a division point has a direct predecessor division point and a direct successor division point, the direction angle of the division point is as shown in fig. 7, where β is the direction angle of the track point p, p _ pre is the direct predecessor division point of the division point p, and p _ next is the direct successor division point of the division point p.
Considering that the first segmentation point has no predecessor segmentation point and the last segmentation point has no successor segmentation point, when determining a new candidate segmentation point sequence, the two segmentation points are directly reserved.
If the direction angle of a division point is larger than the direction angle of a direct predecessor division point of the division point and is also larger than the direction angle of a direct successor division point of the division point, the direction angle of the division point is the local maximum direction angle.
And b23, judging whether the number of the candidate segmentation points in the current candidate segmentation point sequence is smaller than the preset number, if so, taking the current candidate segmentation point sequence as a final candidate segmentation point sequence, and if not, returning to the step b 22.
Step b3, removing redundant segmentation points from the candidate segmentation point sequence according to the distances between adjacent candidate segmentation points, and taking the segmentation point sequence obtained after removing the redundant segmentation points as the target segmentation point sequence.
It should be noted that segmentation points tend to pile up where the stroke direction turns sharply, i.e. redundant segmentation points appear there, and the purpose of step b3 is to remove these redundant segmentation points at sharp turns of the stroke.
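Illustratively, a minimal sketch (not taken from the patent; the preset count, the minimum-distance threshold, the handling of the first and last points' missing direction angles, and the function names are assumptions) of extracting a target segmentation point sequence from a normalized stroke following steps b21 to b23 and b3:

```python
import math

def direction_angle(prev_pt, pt, next_pt):
    """Direction change angle at pt between its incoming and outgoing segments (radians)."""
    a1 = math.atan2(pt[1] - prev_pt[1], pt[0] - prev_pt[0])
    a2 = math.atan2(next_pt[1] - pt[1], next_pt[0] - pt[0])
    d = abs(a2 - a1)
    return min(d, 2 * math.pi - d)

def target_segmentation_points(points, preset_count=16, min_dist=3.0):
    """Steps b21-b23: repeatedly keep only local-maximum-angle points (plus the first and last
    points) until fewer than preset_count remain; step b3: drop points that lie too close to
    the previously kept point (redundant points at sharp turns)."""
    seq = list(points)
    while len(seq) >= preset_count and len(seq) > 2:
        angles = [direction_angle(seq[i - 1], seq[i], seq[i + 1]) for i in range(1, len(seq) - 1)]
        kept = [seq[0]]
        kept += [seq[i] for i in range(1, len(seq) - 1)
                 if (i == 1 or angles[i - 1] > angles[i - 2])
                 and (i == len(seq) - 2 or angles[i - 1] > angles[i])]
        kept.append(seq[-1])
        if len(kept) == len(seq):        # no further compression possible
            break
        seq = kept
    result = [seq[0]]                    # step b3: distance-based removal of redundant points
    for p in seq[1:-1]:
        if math.dist(p, result[-1]) >= min_dist:
            result.append(p)
    if len(seq) > 1:
        result.append(seq[-1])           # the last segmentation point is always retained
    return result
```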
Step S6012, the segment segmentation characteristics of each segmentation point in the segmentation point sequence are respectively determined.
Referring to figs. 8a and 8b, fig. 8b is a diagram illustrating the effect of segmenting the handwritten Chinese character illustrated in fig. 8a.
Specifically, the process of determining the segment segmentation characteristics of each segmentation point in the segmentation point sequence respectively may include: for each division point in other division points except the last division point in the division point sequence, calculating the sum vector of the division point and the vector of the division point directly succeeding the division point, and forming the stroke division characteristic of the division point by the vector of the division point and the sum vector; and for the last segmentation point in the segmentation point sequence, forming the segment segmentation characteristic of the segmentation point by the vector of the segmentation point and the copy of the vector of the segmentation point. The vector of any division point is a vector pointing to a division point directly succeeding the division point from the division point.
Illustratively, for any segmentation point p that has a directly succeeding segmentation point, suppose the vector V1 of the segmentation point p has coordinates (x1, y1), and the sum vector V2 of the vector of the segmentation point p and the vector of the segmentation point directly succeeding p has coordinates (x2, y2); then the segment segmentation feature of the segmentation point p is (x1, y1, x2, y2). If the segmentation point p has no directly succeeding segmentation point, the segment segmentation feature of the segmentation point p is (x1, y1, x1, y1).
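Illustratively, a minimal sketch (not taken from the patent; the function name is illustrative, and treating the last point's vector as a copy of its predecessor's vector is an assumption, since the text leaves that vector undefined) of building the 4-dimensional segment segmentation feature of every segmentation point of one stroke:

```python
def segment_features(seg_points):
    """Segment segmentation features of one target segmentation point sequence.

    seg_points: ordered list of (x, y) segmentation points.  Each point's vector points to its
    direct successor; the feature of a point is (x1, y1, x2, y2), where (x1, y1) is the point's
    vector and (x2, y2) is the sum of that vector and the successor point's vector.  For the
    last point the feature is the point's vector and a copy of it; here the last point's vector
    is assumed to be a copy of its predecessor's vector (an assumption, not stated in the patent).
    """
    n = len(seg_points)
    if n < 2:
        return [(0.0, 0.0, 0.0, 0.0)] * n
    vectors = [(seg_points[i + 1][0] - seg_points[i][0],
                seg_points[i + 1][1] - seg_points[i][1]) for i in range(n - 1)]
    vectors.append(vectors[-1])                  # assumed vector for the last point
    feats = []
    for i in range(n):
        vx, vy = vectors[i]
        if i < n - 1:
            sx, sy = vectors[i + 1]
            feats.append((vx, vy, vx + sx, vy + sy))
        else:
            feats.append((vx, vy, vx, vy))       # last point: its vector and a copy of it
    return feats
```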
Step S602: and determining the correct stroke and/or the wrong stroke in the strokes of the character to be evaluated according to the segment segmentation characteristics of each stroke in at least one matched stroke group.
Specifically, the process of determining a correct stroke and/or an incorrect stroke in the strokes of the text to be evaluated according to the segment segmentation characteristics of each stroke in the at least one matching stroke group may include:
Step S6021: for each matched stroke pair in each matched stroke group, determining the length of the shortest matching path between the segmentation point sequence of one stroke and the segmentation point sequence of the other stroke in the matched stroke pair according to the segment segmentation characteristics of the two strokes in the matched stroke pair, and taking it as the length of the shortest matching path corresponding to the matched stroke pair.
The shortest matching path can represent the matching condition of the segmentation point sequences of the two strokes in the matching stroke pair.
Specifically, the implementation process of determining the length of the shortest matching path between the segmentation point sequence of one stroke and the segmentation point sequence of the other stroke in the matched stroke pair according to the segment segmentation characteristics of the two strokes in the matched stroke pair may include:
and c1, determining the distance between each division point of one stroke and each division point of the other stroke in the matched stroke pair as a second distance according to the segment division characteristics of the two strokes in the matched stroke pair, and forming a second distance matrix by all the determined second distances according to the sequence of the division points of the two strokes.
Optionally, for any dividing point f of one stroke and any dividing point g of another stroke in the matched stroke pair, the euclidean distance between the segment dividing feature of the dividing point f and the segment dividing feature of the dividing point g may be calculated as the second distance between the dividing point f and the dividing point g.
Assuming that the template stroke in the matched stroke pair includes p division points and the stroke to be evaluated includes q division points, p × q second distances are obtained, and these p × q second distances form the following second distance matrix, arranged according to the order of the division points of the template stroke and of the stroke to be evaluated in the matched stroke pair:
[ d(i, j) ] (i = 1, …, p; j = 1, …, q), where d(i, j) is the second distance between the i-th division point of the template stroke and the j-th division point of the stroke to be evaluated.
and c2, determining a second accumulated distance corresponding to each element in the second distance matrix to obtain a second accumulated distance matrix.
And the second accumulated distance corresponding to any element is the length of the shortest path from the first element in the second distance matrix to the element, and the first element in the second distance matrix is the second distance between the first division point of one stroke and the first division point of the other stroke in the matched stroke pair.
Illustratively, the template stroke X in a matching stroke pair includes three division points, the stroke Y to be evaluated in the matching stroke pair includes four division points, and the determined second distance matrix is:
[3 × 4 second distance matrix, shown as an image in the original; the entries referenced below are d(1, 1) = 1.424252, d(1, 2) = 0.886994, d(2, 3) = 0.621269 and d(3, 4) = 1.994787]
The first element in the second distance matrix is (1, 1), and the second cumulative distance corresponding to the first element (1, 1) is 1.424252. The second cumulative distance corresponding to the element (1, 2) is the length of the shortest path from the element (1, 1) to (1, 2), i.e. (1, 1) → (1, 2), which is 1.424252 + 0.886994 = 2.311246. The second cumulative distance corresponding to the element (3, 4) is the length of the shortest path from the element (1, 1) to the element (3, 4), i.e. (1, 1) → (1, 2) → (2, 3) → (3, 4), which is 1.424252 + 0.886994 + 0.621269 + 1.994787 = 4.927301. The other elements are similar and are not described again here. The following second cumulative distance matrix can be obtained through the above process:
[3 × 4 second cumulative distance matrix, shown as an image in the original; its entry at (3, 4) is 4.927301]
step c3, determining the last element in the second cumulative distance matrix as the length of the shortest matching path between the sequence of dividing points of one stroke and the sequence of dividing points of the other stroke in the matching stroke pair.
Wherein the last element in the second cumulative distance matrix is the second cumulative distance of the last segmentation point of one stroke and the last segmentation point of the other stroke in the matched pair of strokes.
Illustratively, the second cumulative distance 4.927301 at (3,4) in the above-mentioned 3 × 4 second cumulative distance matrix is the length of the shortest matching path between the sequence of the segmentation points of the template stroke X in the matching stroke pair and the sequence of the segmentation points of the stroke Y to be evaluated.
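As a minimal sketch of steps c1-c3 (using NumPy), the code below computes the pairwise second distances of the segment features, accumulates them, and returns the last element as the shortest matching path length. The assumption that the matching path may advance one step right, down, or diagonally is consistent with the worked example above, but the patent does not spell out the allowed moves; the names feats_a and feats_b simply denote the segment features of the two strokes.

```python
import numpy as np

def shortest_matching_path_length(feats_a, feats_b):
    """Shortest matching path length between two division-point sequences."""
    # c1: second distance matrix of pairwise Euclidean feature distances
    dist = np.linalg.norm(feats_a[:, None, :] - feats_b[None, :, :], axis=-1)
    p, q = dist.shape
    # c2: second cumulative distance matrix
    acc = np.full((p, q), np.inf)
    acc[0, 0] = dist[0, 0]
    for i in range(p):
        for j in range(q):
            if i == 0 and j == 0:
                continue
            best_prev = min(
                acc[i - 1, j] if i > 0 else np.inf,
                acc[i, j - 1] if j > 0 else np.inf,
                acc[i - 1, j - 1] if i > 0 and j > 0 else np.inf,
            )
            acc[i, j] = dist[i, j] + best_prev
    # c3: the last element is the length of the shortest matching path
    return float(acc[-1, -1])
```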
Step S6022: and determining the correct stroke and/or the wrong stroke in the strokes of the character to be evaluated according to the corresponding shortest matching path length of each matching stroke pair in each matching stroke group.
Specifically, the process of determining a correct stroke and/or an incorrect stroke in the strokes of the text to be evaluated according to the shortest matching path length corresponding to each matching stroke pair in each matching stroke group may include:
step d1, determining a target stroke pair from each of the matched stroke groups.
As previously mentioned, there may be three cases for a matching stroke group, and the following sub-cases give the process of determining the target stroke pair from the matching stroke group:
(1) the matched stroke group includes a plurality of matched stroke pairs consisting of a stroke to be evaluated and a plurality of template strokes:
and determining the matching stroke pair with the shortest matching path length as the target stroke pair.
(2) The matched stroke group includes a plurality of matched stroke pairs consisting of a plurality of strokes to be evaluated and a template stroke:
and determining the matching stroke pair with the shortest matching path length and the matching stroke pair with the smallest second distance as target stroke pairs.
Wherein the second distance of a matching stroke pair is determined based on stroke-level characteristics (e.g., 8-directional characteristics) of the two strokes.
(3) The matched stroke group includes a matched stroke pair consisting of a stroke to be evaluated and a template stroke:
whether the matching stroke pair is the target stroke pair is first determined according to the second distance of the matching stroke pair; if the matching stroke pair cannot be determined to be the target stroke pair according to the second distance alone, whether the matching stroke pair is the target stroke pair is further determined according to the shortest matching path length corresponding to the matching stroke pair and the direction trends of the two strokes in the matching stroke pair.
Specifically, if the second distance of the matching stroke pair is greater than a first distance threshold s_max, the matching stroke pair is determined not to be the target stroke pair; if the second distance of the matching stroke pair is less than a second distance threshold s_min, the matching stroke pair is determined to be the target stroke pair; if the second distance of the matching stroke pair is greater than or equal to the second distance threshold s_min and less than or equal to the first distance threshold s_max, whether the matching stroke pair is the target stroke pair is determined according to the shortest matching path length of the matching stroke pair and the direction trends of the two strokes in the matching stroke pair. Specifically, if the shortest matching path length of the matching stroke pair is greater than a third distance threshold and the direction trends of the two strokes in the matching stroke pair are different, the matching stroke pair is determined not to be the target stroke pair.
And d2, determining the strokes to be evaluated in the target stroke pair in the strokes of the character to be evaluated as correct strokes, and determining other strokes as wrong strokes.
That is, for each stroke of the character to be evaluated, if the stroke has the target stroke pair, the stroke is determined to be a correct stroke, otherwise, the stroke is determined to be an incorrect stroke, and thus, the correct stroke and the incorrect stroke in the strokes of the character to be evaluated can be obtained.
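Pulling steps d1 and d2 together, the following sketch picks target stroke pairs for each matching stroke group and then labels the strokes to be evaluated. The data container, the threshold names (s_max, s_min, path_thr), the reading of case (2) as keeping both the smallest-path-length pair and the smallest-second-distance pair, and the decision to treat the otherwise-undecided single-pair case as a target pair are all illustrative assumptions.

```python
from dataclasses import dataclass
from typing import List, Set, Tuple

@dataclass
class MatchedPair:                # illustrative container, not from the patent text
    eval_idx: int                 # index of the stroke to be evaluated
    template_idx: int             # index of the template stroke
    path_length: float            # shortest matching path length from step S6021
    second_distance: float        # stroke-level (second) distance of the pair
    same_trend: bool              # whether the two strokes share a direction trend

def pick_targets(pairs: List[MatchedPair], s_max: float, s_min: float,
                 path_thr: float) -> List[MatchedPair]:
    """Step d1 for a single matching stroke group (thresholds are inputs)."""
    if len(pairs) == 1:                               # case (3): one pair
        p = pairs[0]
        if p.second_distance > s_max:
            return []
        if p.second_distance < s_min:
            return [p]
        # undecided by the second distance alone: fall back to the shortest
        # matching path length and the direction trends; keeping the pair in
        # the remaining case is an assumption of this sketch
        return [] if (p.path_length > path_thr and not p.same_trend) else [p]
    if len({p.eval_idx for p in pairs}) == 1:         # case (1): one evaluated stroke
        return [min(pairs, key=lambda p: p.path_length)]
    # case (2): several evaluated strokes, one template stroke
    best_path = min(pairs, key=lambda p: p.path_length)
    best_dist = min(pairs, key=lambda p: p.second_distance)
    return [best_path] if best_path is best_dist else [best_path, best_dist]

def correct_and_wrong(groups: List[List[MatchedPair]], n_eval_strokes: int,
                      s_max: float, s_min: float, path_thr: float
                      ) -> Tuple[Set[int], Set[int]]:
    """Step d2: strokes with a target pair are correct, all others are wrong."""
    correct = {p.eval_idx
               for g in groups
               for p in pick_targets(g, s_max, s_min, path_thr)}
    return correct, set(range(n_eval_strokes)) - correct
```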
The character evaluation method provided by the embodiment of the application takes the strokes of characters as the object of study, and performs stroke matching by using the characteristics of the stroke layer and the track point layer of the character to be evaluated and of the template character, so as to determine the stroke writing condition of the character to be evaluated.
An embodiment of the present application further provides a text learning method, please refer to fig. 9, which shows a flow diagram of the text learning method, and the method may include:
step S901: and acquiring template characters selected by a user.
Step S902: and acquiring and playing the character learning information corresponding to the template characters selected by the user from the character learning information corresponding to the plurality of template characters acquired in advance.
The character learning information corresponding to any template character is used for guiding a user to learn the writing of the template character.
Optionally, the text learning information corresponding to any template text may include: a dynamic tracing image of the writing process of the template text; preferably, the text learning information may further include a voice broadcast file.
Specifically, there are various ways to acquire the text learning information corresponding to the template text selected by the user from the text learning information corresponding to the plurality of pre-acquired template texts:
In a possible implementation manner, a correspondence between the identifier of each template character (for example, its Unicode code) and its character learning information may be pre-stored. When the user selects a certain template character, the character learning information corresponding to the identifier of the selected template character is obtained according to that identifier and the pre-stored correspondence, and is used as the character learning information corresponding to the template character selected by the user.
In another possible implementation manner, for any template character in the plurality of template characters, the character learning information corresponding to the template character may be named by the identifier of the template character, and when a user selects a certain template character, the character learning information named by the identifier of the template character may be obtained according to the identifier of the template character, and used as the character learning information corresponding to the template character selected by the user.
Step S903: and when an evaluation instruction is received, evaluating the characters to be evaluated written by the user according to the template characters.
Specifically, when an evaluation instruction is received, the text to be evaluated is evaluated by using the text evaluation method provided in the above embodiment, and for a specific evaluation process, reference may be made to the description of the above embodiment, which is not described herein again.
Optionally, after the text learning information corresponding to the template text is played, information for indicating whether the user performs the handwriting test may be popped up, and when the user selects to perform the handwriting test, an evaluation instruction is received. When the handwriting test is performed, the user writes characters according to the previous learning condition, and then the character evaluation method provided by the embodiment is adopted to evaluate the characters written by the user.
In addition, it should be noted that, when evaluating the text to be evaluated, the stroke-level characteristics of the template text selected by the user need to be obtained, and these characteristics are determined according to the track point data of each stroke of the template text. In one possible implementation, the track point data of each stroke of each template text is stored in a pre-created resource file. When the track point data of each stroke of a template text is to be obtained, offset information can first be obtained according to the identifier of the template text, where the offset information is the number of bytes by which the starting position of the track point data of each stroke of the template text is offset from the starting position of the resource file. The position of the track point data of each stroke of the template text in the resource file is then determined according to the offset information, and the track point data of each stroke of the template text selected by the user is read from the resource file at the determined position.
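A small sketch of this offset-based read is shown below. The offset table, the per-stroke point counts, and the little-endian int16 (x, y) record layout are assumptions of the sketch; the text only states that each stroke's data starts at a byte offset measured from the start of the resource file.

```python
import struct

def load_template_strokes(resource_path, offsets, points_per_stroke):
    """Read the track point data of one template character from a resource file."""
    strokes = []
    with open(resource_path, "rb") as f:
        for offset, n_points in zip(offsets, points_per_stroke):
            f.seek(offset)                        # jump to this stroke's data
            raw = f.read(4 * n_points)            # two int16 values per track point
            values = struct.unpack("<" + "hh" * n_points, raw)
            strokes.append(list(zip(values[0::2], values[1::2])))
    return strokes
```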
Step S904: and visualizing the evaluation result of the character to be evaluated.
There are various ways to visualize the evaluation result of the text to be evaluated. In one possible implementation, the correct strokes and/or wrong strokes written by the user are displayed to the user directly. In another possible implementation, the user's writing is scored according to the correct strokes and/or wrong strokes, and the score is then displayed to the user. Of course, the correct strokes and/or wrong strokes written by the user and the writing score may also be displayed to the user at the same time, so that the user can learn about his or her own writing condition.
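One simple possibility for turning the stroke evaluation into a display score is the fraction of correct strokes on a 0-100 scale; the patent does not fix a scoring formula, so the following is purely illustrative.

```python
def writing_score(n_correct_strokes, n_total_strokes):
    """Illustrative score: percentage of strokes judged correct."""
    if n_total_strokes == 0:
        return 0
    return round(100.0 * n_correct_strokes / n_total_strokes)
```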
When the user selects a template text, the text learning method provided by the embodiment of the application plays the text learning information corresponding to that template text, so that the user can learn the writing process of the text. After learning, the text written by the user for the template text can be evaluated, and the evaluation result of the written text (for example, which strokes were written correctly and which were written incorrectly) lets the user know his or her own writing condition and better guides the user in learning to write the template text.
When the characters to be evaluated are evaluated, the characteristics of the stroke layer of the template characters are determined according to the track point data of the template characters. In a possible implementation manner, the track point data of the template text and the dynamic tracing image of the template text are obtained in advance at the same time, and specifically, the process of obtaining the dynamic tracing image and the track point data of any template text in advance may include:
and e1, decomposing the writing process dynamic graph crawled from the network aiming at the template characters according to frames to obtain an ordered image frame sequence.
Step e2, producing a target base map and a target template from the last image frame in the ordered image frame sequence.
The target base map is a base map for tracing each image frame, and the target template is used for extracting an area needing tracing in each image frame.
Optionally, the target base map may have a white background, a gray cross grid and a gray font; when tracing red, the gray area of the font is filled with red ink (i.e., the gray area of the font is changed into red). The target template has a white background and a pure black font; by performing an AND operation between the target template and each image frame, the area of the image frame that needs to be filled with red ink can be obtained.
Step e3, making a tracing red image of each image frame in the ordered image frame sequence according to the target base image and the target template, and determining the track point data according to the difference between the adjacent tracing red images.
It should be noted that, if the text learning information corresponding to the template text further includes a voice broadcast file, and the voice broadcast file is made according to the time stamps of the strokes, the time stamp of each stroke can be acquired together when acquiring the dynamic tracing graph and the track point data. The time stamp of each stroke is a binary group (startFrame, endFrame), where startFrame represents the starting frame of the stroke in the dynamic tracing graph and endFrame represents the ending frame of the stroke in the dynamic tracing graph.
Specifically, the specific process of step e3 may include:
step e 31: an unacquired image frame is acquired from the ordered sequence of image frames, and an image frame immediately succeeding the image frame is acquired.
Step e 32: two gray base images were acquired.
One of the gray base maps corresponds to one of the two acquired image frames, and the other gray base map corresponds to the other of the two acquired image frames.
Step e 33: and respectively performing AND operation on the two acquired image frames and a target template, determining areas needing to be subjected to tracing in the two image frames according to operation results, and tracing in red to obtain tracing images corresponding to the two image frames respectively.
Specifically, assuming that the target template is a white background and a black font, for each of two image frames, if at a certain pixel coordinate, the target template and the image frame both have a black pixel, the corresponding pixel position of the gray base image corresponding to the image frame is filled with red ink.
Step e 34: and determining different pixel blocks in the tracing picture respectively corresponding to the two image frames, and determining track point data according to the different pixel blocks.
Specifically, an AND operation is performed on the tracing images respectively corresponding to the two image frames, and the different pixel blocks in the two tracing images are determined according to the operation result. If there is no different pixel block, or if there are different pixel blocks but the number of pixels they contain is smaller than the preset number, the tracing images respectively corresponding to the two image frames are determined to be the same; otherwise, they are determined to be different. If the tracing images corresponding to the two image frames are different, the centers of the different pixel blocks are determined as track points, so that track point data is obtained, and whether the stroke has ended can also be determined in order to obtain the time stamp.
Step e 35: and judging whether the ordered image frame sequence has an image frame which is not acquired, if so, executing step e31, otherwise, executing step e 4.
And e4, synthesizing all the manufactured tracing graphs into a dynamic tracing graph of the writing process of the template characters, and forming all the obtained track point data into the track point data of the template characters.
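A rough sketch of steps e33-e34 for one pair of consecutive frames is given below using OpenCV and NumPy. The grayscale frames and template, the BGR base image, the threshold of 128 for "black", the min_pixels cutoff, and the use of connected-component centroids as the centers of the "different pixel blocks" are all assumptions of the sketch.

```python
import cv2
import numpy as np

def tracing_and_points(frame_a, frame_b, template, base, min_pixels=5):
    """Build the tracing images of two consecutive frames and extract new track points."""
    traced = []
    for frame in (frame_a, frame_b):
        # e33: pixels that are black in both the frame and the target template
        ink = (frame < 128) & (template < 128)
        red = base.copy()
        red[ink] = (0, 0, 255)                 # fill the stroke area with red ink
        traced.append(red)
    # e34: pixel blocks that differ between the two tracing images
    diff = np.any(traced[0] != traced[1], axis=-1).astype(np.uint8)
    n, _, stats, centroids = cv2.connectedComponentsWithStats(diff)
    points = [tuple(centroids[i]) for i in range(1, n)
              if stats[i, cv2.CC_STAT_AREA] >= min_pixels]
    return traced, points                      # no points means the frames are "the same"
```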
The following describes the text evaluation device provided in the embodiments of the present application, and the text evaluation device described below and the text evaluation method described above may be referred to in correspondence with each other.
Referring to fig. 10, a schematic structural diagram of a text evaluation apparatus 100 according to an embodiment of the present application is shown, where the text evaluation apparatus may include: the evaluation system comprises a template character feature acquisition module 1001, a track point data acquisition module 1002, a character feature acquisition module to be evaluated 1003, a stroke matching module 1004 and an evaluation result determination module 1005.
The template character feature obtaining module 1001 is configured to obtain features of a stroke layer of a template character.
The track point data obtaining module 1002 is configured to obtain track point data of each stroke of the character to be evaluated, where the character to be evaluated corresponds to the template character.
And the character feature acquisition module 1003 to be evaluated is configured to acquire features of the stroke layer of the character to be evaluated according to the track point data.
And the stroke matching module 1004 is used for performing stroke matching on the characters to be evaluated and the template characters according to the characteristics of the stroke layers of the characters to be evaluated and the template characters to obtain a stroke matching result.
And the evaluation result determining module 1005 is configured to determine, according to the stroke matching result, a correct stroke and/or an incorrect stroke in the strokes of the text to be evaluated.
The character evaluation device provided by the embodiment of the application takes strokes of characters as research objects, and determines the stroke writing condition of the characters to be evaluated by performing stroke matching on template characters and the characters to be evaluated.
In one possible scenario, the stroke matching result includes: at least one matching stroke group. Wherein each matching stroke group comprises: the evaluation device comprises a plurality of matched stroke pairs consisting of a stroke to be evaluated and a plurality of template strokes, or a plurality of matched stroke pairs consisting of a plurality of strokes to be evaluated and a template stroke, or a matched stroke pair consisting of a stroke to be evaluated and a template stroke. Any matched stroke pair comprises a template stroke and a stroke to be evaluated, wherein the template stroke is the stroke of the template character, and the stroke to be evaluated is the stroke of the character to be evaluated.
In one possible implementation, the stroke matching module 1004 may include: a shortest matching path determining module and a matching stroke group determining module.
And the shortest matching path determining module is used for determining the shortest matching path between the stroke sequence of the character to be evaluated and the stroke sequence of the template character according to the characteristics of the stroke layers of the character to be evaluated and the template character, wherein the shortest matching path can represent the stroke matching condition of the character to be evaluated and the template character.
And the matching stroke group determining module is used for determining the matching stroke group of the character to be evaluated and the template character according to the shortest matching path of the stroke sequence of the character to be evaluated and the stroke sequence of the template character.
In one possible implementation, the shortest matching path determining module includes: a first distance matrix determination submodule, a first cumulative distance matrix determination submodule and a shortest matching path determination submodule.
And the first distance matrix determining submodule is used for determining the distance between each stroke of the character to be evaluated and each stroke of the template character as a first distance according to the characteristics of stroke layers of the character to be evaluated and the template text, and forming a first distance matrix by all the obtained first distances according to the stroke sequence of the character to be evaluated and the template character.
And the first accumulated distance matrix determining submodule is used for determining a first accumulated distance corresponding to each element in the first distance matrix to obtain a first accumulated distance matrix. The first cumulative distance corresponding to any element is the length of the shortest path from the first element in the first distance matrix to the element, and the first element in the first distance matrix is the first distance between the first stroke of the character to be evaluated and the first stroke of the template character.
And the shortest matching path determining submodule is used for determining the shortest path from the first element to the last element in the first cumulative distance matrix, and the shortest path is used as the shortest matching path between the stroke sequence of the character to be evaluated and the stroke sequence of the template character. The first element and the last element in the first cumulative distance matrix are respectively a first cumulative distance between the first stroke of the character to be evaluated and the first stroke of the template character, and a first cumulative distance between the last stroke of the character to be evaluated and the last stroke of the template character.
In a possible implementation, the shortest matching path determining submodule is specifically configured to search, starting from a last element in the first cumulative distance matrix, for a shortest path from the last element to the first element; and reversing the shortest path from the last element to the first element to obtain the shortest path from the first element to the last element in the first cumulative distance matrix.
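A sketch of that backtracking is shown below; the allowed moves (up, left, diagonal) are an assumption consistent with how the cumulative distances are built, and acc stands for the first cumulative distance matrix.

```python
def backtrack_shortest_path(acc):
    """Trace the shortest path from the last element back to the first, then reverse it."""
    i, j = len(acc) - 1, len(acc[0]) - 1
    path = [(i, j)]
    while (i, j) != (0, 0):
        candidates = []
        if i > 0:
            candidates.append((acc[i - 1][j], (i - 1, j)))
        if j > 0:
            candidates.append((acc[i][j - 1], (i, j - 1)))
        if i > 0 and j > 0:
            candidates.append((acc[i - 1][j - 1], (i - 1, j - 1)))
        _, (i, j) = min(candidates)            # step to the cheapest predecessor
        path.append((i, j))
    path.reverse()                             # now runs first element -> last element
    return path
```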
In one possible implementation, the evaluation result determining module 1005 includes: the stroke segmentation device comprises a stroke segmentation characteristic acquisition module and a stroke determination module.
The stroke segmentation characteristic acquisition module is used for acquiring the stroke segmentation characteristics of each stroke in at least one matched stroke group, wherein the stroke segmentation characteristics are characteristics of a track point layer, and the stroke segmentation characteristics of any stroke can embody the trend and the stroke number of the stroke.
And the stroke determining module is used for determining the correct stroke and/or the wrong stroke in the strokes of the character to be evaluated according to the segment segmentation characteristics of each stroke in at least one matched stroke group.
In one possible implementation manner, the segment segmentation feature obtaining module includes: a segmentation point determining module and a stroke segmentation characteristic determining module.
And the segmentation point determining module is used for acquiring a target segmentation point sequence according to the trajectory point data of the stroke. The segmentation points contained in the target segmentation point sequence are at least part of the trajectory points contained in the stroke, and the segmentation points in the target segmentation point sequence are arranged according to the sequence of the acquisition time of the trajectory points;
and the stroke segmentation characteristic determination module is used for respectively determining the stroke segmentation characteristics of each segmentation point in the target segmentation point sequence.
In one possible implementation, the segmentation point determination sub-module includes: the stroke normalization submodule, the candidate segmentation point determination submodule and the segmentation point determination submodule.
And the stroke normalization submodule is used for normalizing the stroke into an image with a preset aspect ratio to obtain the normalized stroke.
And the candidate segmentation point determining submodule is used for determining candidate segmentation points from the track points contained in the normalized strokes according to the direction angles of the track points contained in the normalized strokes, and forming a candidate segmentation point sequence by all the determined candidate segmentation points. And the direction angle of the track point is determined according to the track point data of the stroke.
And the division point determining submodule is used for removing redundant division points from the candidate division point sequence according to the distance between adjacent candidate division points in the candidate division point sequence, and the division point sequence obtained after the redundant division points are removed is used as a target division point sequence.
In a possible implementation manner, the segment segmentation feature determination module is specifically configured to calculate, for each segmentation point of other segmentation points in the target segmentation point sequence except for the last segmentation point, a sum vector of a vector of the segmentation point and a vector of a segmentation point immediately succeeding the segmentation point, and combine the vector of the segmentation point and the sum vector into the segment segmentation feature of the segmentation point, where the vector of any segmentation point is a vector starting from the segmentation point and pointing to the segmentation point immediately succeeding the segmentation point; and for the last segmentation point in the target segmentation point sequence, forming the segment segmentation characteristic of the segmentation point by the vector of the segmentation point and the copy of the vector of the segmentation point.
In one possible implementation, the stroke determination module includes: a shortest matching path length determination module, a correct and/or wrong stroke determination submodule.
And the shortest matching path length determining module is used for determining, for each matching stroke pair in each matching stroke group, the length of the shortest matching path between the segmentation point sequence of one stroke and the segmentation point sequence of the other stroke in the matching stroke pair according to the segment segmentation characteristics of the two strokes in the matching stroke pair, as the length of the shortest matching path corresponding to the matching stroke pair. The shortest matching path can represent the matching condition of the segmentation points of the two strokes in the matching stroke pair.
And the correct and/or wrong stroke determining module is used for determining the correct stroke and/or wrong stroke in the strokes of the character to be evaluated according to the corresponding shortest matching path length of each matched stroke pair in each matched stroke group.
The shortest matching path length determining module comprises: a second distance matrix determination submodule, a second cumulative distance matrix determination submodule and a shortest matching path length determination submodule.
And the second distance matrix determining submodule is used for determining the distance between each segmentation point of one stroke and each segmentation point of the other stroke in the matched stroke pair as a second distance according to the segment segmentation characteristics of the two strokes in the matched stroke pair, and forming a second distance matrix by all the determined second distances according to the sequence of the segmentation points of the two strokes.
And the second accumulated distance matrix determining submodule is used for determining a second accumulated distance corresponding to each element in the second distance matrix to obtain a second accumulated distance matrix. And the second accumulated distance corresponding to any element is the length of the shortest path from the first element in the second distance matrix to the element, and the first element in the second distance matrix is the second distance between the first division point of one stroke and the first division point of the other stroke in the matched stroke pair.
And the shortest matching path length determining submodule is used for determining the last element in the second cumulative distance matrix as the length of the shortest matching path between the segmentation point sequence of one stroke and the segmentation point sequence of the other stroke in the matching stroke pair. Wherein the last element in the second cumulative distance matrix is the second cumulative distance of the last segmentation point of one stroke and the last segmentation point of the other stroke in the matched pair of strokes.
A correct and/or incorrect stroke determination module, specifically for, for each matching stroke group:
if the matched stroke group comprises a plurality of matched stroke pairs consisting of a stroke to be evaluated and a plurality of template strokes, determining the matched stroke pair with the shortest matching path length as a target stroke pair; if the matched stroke group comprises a plurality of matched stroke pairs consisting of a plurality of strokes to be evaluated and a template stroke, determining the matched stroke pair with the shortest matching path length and the matched stroke pair with the smallest second distance as a target stroke pair, wherein the second distance of one matched stroke pair is determined according to the stroke level characteristics of two strokes in the matched stroke pair; if the matched stroke group comprises a matched stroke pair consisting of a stroke to be evaluated and a template stroke, determining whether the matched stroke pair is a target stroke pair according to the second distance of the matched stroke pair, and if the matched stroke pair cannot be determined to be the target stroke pair according to the second distance of the matched stroke pair, further determining whether the matched stroke pair is the target stroke pair according to the shortest matching path length of the matched stroke pair and the direction trends of two strokes in the matched stroke pair; and determining the strokes to be evaluated in the target stroke pair as correct strokes and determining other strokes as wrong strokes in the strokes of the characters to be evaluated.
An embodiment of the present application further provides a text learning apparatus, please refer to fig. 11, which shows a schematic structural diagram of the text learning apparatus 110, and the text learning apparatus may include: a template text acquisition module 1101, a text learning information acquisition module 1102, a playback module 1103, an instruction receiving module 1104, the text evaluation apparatus 100 provided in the above embodiments, and a visualization module 1105.
The template text acquiring module 1101 is configured to acquire a template text selected by a user.
The text learning information obtaining module 1102 is configured to obtain text learning information corresponding to a template text selected by a user from text learning information corresponding to a plurality of template texts respectively obtained in advance, where the text learning information corresponding to any template text is used to guide the user to learn writing of the template text.
The playing module 1103 is configured to play the text learning information corresponding to the template text selected by the user, so as to guide the user to learn the writing of the template text.
The character evaluation device 100 is configured to evaluate a character to be evaluated written by a user according to the template character when the instruction receiving module receives the evaluation instruction.
And a visualization module 1105, configured to visualize the evaluation result of the text to be evaluated.
In one possible implementation manner, the features of the stroke layer of the template characters, which are obtained by the template character feature obtaining module 1001 in the character evaluation device 100, are obtained according to the track point data of the template characters; the character learning information corresponding to any template character includes a dynamic tracing chart of the writing process of the template character, and the character learning device may further include: and a template text information acquisition module.
The template text information acquisition module comprises: a decomposition module, a bottom plate and template making module, a tracing drawing making module, a track point determining module, a tracing drawing synthesizing module and a track point data acquiring module.
The decomposition module is used for decomposing the writing-process dynamic image crawled from the network for the template characters frame by frame to obtain an ordered image frame sequence.
The bottom plate and template making module is used for making a target base map and a target template according to the last image frame in the ordered image frame sequence, wherein the target base map is a base map used for tracing red in each image frame, and the target template is used for extracting the region needing to be traced red in each image frame.
The tracing drawing making module is used for making a tracing drawing of each image frame in the ordered image frame sequence according to the target base map and the target template.
The track point determining module is used for determining track point data according to the differences between adjacent tracing drawings.
The tracing drawing synthesizing module is used for synthesizing all the manufactured tracing drawings into a dynamic tracing drawing of the template character writing process.
The track point data acquiring module is used for forming all the obtained track point data into the track point data of the template characters.
An embodiment of the present application further provides a text evaluation device, please refer to fig. 12, which shows a schematic structural diagram of the text evaluation device, where the text evaluation device may include: at least one processor 1201, at least one communication interface 1202, at least one memory 1203, and at least one communication bus 1204;
in this embodiment, the number of the processor 1201, the communication interface 1202, the memory 1203 and the communication bus 1204 is at least one, and the processor 1201, the communication interface 1202 and the memory 1203 complete communication with each other through the communication bus 1204;
the processor 1201 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement the embodiments of the present invention, etc.;
the memory 1203 may include a high-speed RAM memory, and may also include a non-volatile memory or the like, such as at least one disk memory;
wherein the memory stores a program and the processor can call the program stored in the memory, the program for:
acquiring the characteristics of the stroke layer of the template characters;
acquiring track point data of each stroke of the character to be evaluated corresponding to the template character, and determining the characteristics of the stroke layer of the character to be evaluated according to the track point data;
according to the characteristics of the character to be evaluated and the stroke layer of the template character, performing stroke matching on the character to be evaluated and the template character to obtain a stroke matching result;
and determining the correct stroke and/or the wrong stroke in the strokes of the character to be evaluated according to the stroke matching result.
Alternatively, the detailed function and the extended function of the program may be as described above.
Embodiments of the present application further provide a readable storage medium, where a program suitable for being executed by a processor may be stored, where the program is configured to:
acquiring the characteristics of the stroke layer of the template characters;
acquiring track point data of each stroke of the character to be evaluated corresponding to the template character, and determining the characteristics of the stroke layer of the character to be evaluated according to the track point data;
according to the characteristics of the character to be evaluated and the stroke layer of the template character, performing stroke matching on the character to be evaluated and the template character to obtain a stroke matching result;
and determining the correct stroke and/or the wrong stroke in the strokes of the character to be evaluated according to the stroke matching result.
An embodiment of the present application further provides a text learning device, please refer to fig. 13, which shows a schematic structural diagram of the text learning device, where the text learning device may include: at least one processor 1301, at least one communication interface 1302, at least one memory 1303, and at least one communication bus 1304;
in this embodiment of the application, the number of the processor 1301, the communication interface 1302, the memory 1303, and the communication bus 1304 is at least one, and the processor 1301, the communication interface 1302, and the memory 1303 complete communication with each other through the communication bus 1304;
processor 1301 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement the embodiments of the present invention, or the like;
the memory 1303 may include a high-speed RAM memory, and may also include a non-volatile memory or the like, such as at least one magnetic disk memory;
wherein the memory stores a program and the processor can call the program stored in the memory, the program for:
acquiring template characters selected by a user; acquiring character learning information corresponding to template characters selected by a user from character learning information corresponding to a plurality of template characters acquired in advance and playing the character learning information, wherein the character learning information corresponding to any template character is used for guiding the user to learn the writing of the template character; when an evaluation instruction is received, the character evaluation method provided by the embodiment is adopted to evaluate the characters to be evaluated written by the user according to the template characters; and visualizing the evaluation result of the character to be evaluated.
Alternatively, the detailed function and the extended function of the program may be as described above.
Embodiments of the present application further provide a readable storage medium, where the readable storage medium may store a program adapted to be executed by a processor, where the program is configured to:
acquiring template characters selected by a user; acquiring character learning information corresponding to template characters selected by a user from character learning information corresponding to a plurality of template characters acquired in advance and playing the character learning information, wherein the character learning information corresponding to any template character is used for guiding the user to learn the writing of the template character; when an evaluation instruction is received, the character evaluation method provided by the embodiment is adopted to evaluate the characters to be evaluated written by the user according to the template characters; and visualizing the evaluation result of the character to be evaluated.
Finally, it should also be noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
The embodiments in the present description are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (16)

1. A method for evaluating a word, comprising:
acquiring the characteristics of the stroke layer of the template characters;
acquiring track point data of each stroke of the character to be evaluated corresponding to the template character, and determining the characteristics of the stroke layer of the character to be evaluated according to the track point data;
according to the characteristics of the stroke layer of the character to be evaluated and the template character, performing stroke matching on the character to be evaluated and the template character to obtain a shortest matching path, and further obtaining a stroke matching result;
determining correct strokes and/or wrong strokes in the strokes of the characters to be evaluated according to the stroke matching result;
the stroke matching of the characters to be evaluated and the template characters according to the characteristics of the stroke levels of the characters to be evaluated and the template characters to obtain the shortest matching path comprises the following steps:
determining the distance between each stroke of the character to be evaluated and each stroke of the template character as a first distance according to the characteristics of the stroke layers of the character to be evaluated and the template text, and forming a first distance matrix by all the obtained first distances according to the stroke sequence of the character to be evaluated and the template character;
determining a first accumulated distance corresponding to each element in the first distance matrix to obtain a first accumulated distance matrix, wherein the first accumulated distance corresponding to any element is the length of the shortest path from a first element in the first distance matrix to the element, and the first element in the first distance matrix is a first distance between a first stroke of the text to be evaluated and a first stroke of the template text;
and determining a shortest path from a first element to a last element in the first cumulative distance matrix as a shortest matching path of the stroke sequence of the character to be evaluated and the stroke sequence of the template character, wherein the first element and the last element in the first cumulative distance matrix are a first cumulative distance between a first stroke of the character to be evaluated and the first stroke of the template character, and a first cumulative distance between the last stroke of the character to be evaluated and the last stroke of the template character, respectively.
2. The text evaluation method of claim 1, wherein the stroke matching result comprises: at least one matching stroke group;
wherein each matching stroke group comprises: a plurality of matched stroke pairs consisting of one stroke to be evaluated and a plurality of template strokes, or a plurality of matched stroke pairs consisting of a plurality of strokes to be evaluated and one template stroke, or one matched stroke pair consisting of one stroke to be evaluated and one template stroke;
any matched stroke pair comprises a template stroke and a stroke to be evaluated, wherein the template stroke is the stroke of the template character, and the stroke to be evaluated is the stroke of the character to be evaluated.
3. The method for evaluating words according to claim 2, wherein the stroke matching of the words to be evaluated and the template words according to the features of the stroke level of the words to be evaluated and the template words to obtain the shortest matching path, and further obtaining the stroke matching result comprises:
determining a shortest matching path of the stroke sequence of the character to be evaluated and the stroke sequence of the template character according to the characteristics of the stroke layers of the character to be evaluated and the template character, wherein the shortest matching path can represent the stroke matching condition of the character to be evaluated and the template character;
and determining a matching stroke group of the character to be evaluated and the template character according to the shortest matching path of the stroke sequence of the character to be evaluated and the stroke sequence of the template character.
4. The text evaluation method of claim 1, wherein the determining a shortest path from a first element to a last element in the first cumulative distance matrix comprises:
searching for a shortest path from a last element to a first element in the first cumulative distance matrix starting from the last element;
and reversing the shortest path from the last element to the first element to obtain the shortest path from the first element to the last element in the first cumulative distance matrix.
5. The method for evaluating words according to claim 2, wherein said determining a correct stroke and/or an incorrect stroke among strokes of said word to be evaluated according to said stroke matching result comprises:
the method comprises the steps of obtaining segment segmentation characteristics of each stroke in at least one matched stroke group, wherein the segment segmentation characteristics are characteristics of a track point layer, and the segment segmentation characteristics of any stroke can reflect the trend and the number of the strokes;
and determining the correct stroke and/or the wrong stroke in the strokes of the character to be evaluated according to the segment segmentation characteristics of each stroke in the at least one matched stroke group.
6. The text evaluation method of claim 5, wherein obtaining the segmentation characteristics of any stroke comprises:
acquiring a target segmentation point sequence according to the track point data of the stroke, wherein the segmentation points contained in the target segmentation point sequence are at least part of the track points contained in the stroke, and the segmentation points in the target segmentation point sequence are arranged according to the sequence of the acquisition time of the track points;
and respectively determining the segment segmentation characteristics of each segmentation point in the target segmentation point sequence.
7. The text evaluation method of claim 6, wherein said obtaining a target segmentation point sequence based on trajectory point data of the stroke comprises:
normalizing the stroke to an image with a preset aspect ratio to obtain a normalized stroke;
determining candidate segmentation points from the track points contained in the normalized stroke according to the direction angles of the track points contained in the normalized stroke, and forming a candidate segmentation point sequence by all the determined candidate segmentation points, wherein the direction angles of the track points are determined according to the track point data of the stroke;
and removing redundant segmentation points from the candidate segmentation point sequence according to the distance between adjacent candidate segmentation points in the candidate segmentation point sequence, wherein the segmentation point sequence obtained after the redundant segmentation points are removed is used as the target segmentation point sequence.
8. The method for evaluating words according to claim 6, wherein said determining the segment segmentation characteristics of each segmentation point in the target segmentation point sequence respectively comprises:
for each division point in the target division point sequence other than the last division point, calculating the sum vector of the vector of the division point and the vector of the division point directly succeeding the division point, and forming the segment segmentation characteristic of the division point by the vector of the division point and the sum vector, wherein the vector of any division point is the vector starting from the division point and pointing to the division point directly succeeding the division point;
and for the last segmentation point in the target segmentation point sequence, forming the segment segmentation characteristic of the segmentation point by the vector of the segmentation point and the copy of the vector of the segmentation point.
9. The method of claim 5, wherein the determining the correct stroke and/or the incorrect stroke in the strokes of the text to be evaluated according to the segment splitting characteristic of each stroke in the at least one matching stroke group comprises:
for each matching stroke pair in each matching stroke group, determining the length of the shortest matching path of the segmentation point sequence of one stroke and the segmentation point sequence of the other stroke in the matching stroke pair according to the segment segmentation characteristics of the two strokes in the matching stroke pair, and taking the length of the shortest matching path corresponding to the matching stroke pair, wherein the shortest matching path can represent the matching condition of the segmentation point sequences of the two strokes in the matching stroke pair;
and determining the correct stroke and/or the wrong stroke in the strokes of the character to be evaluated according to the corresponding shortest matching path length of each matching stroke pair in each matching stroke group.
10. The text evaluation method of claim 9, wherein determining the length of the shortest matching path between the sequence of segmentation points of one stroke and the sequence of segmentation points of the other stroke in the matched pair of strokes based on the segment segmentation characteristics of the two strokes in the matched pair of strokes comprises:
determining the distance between each segmentation point of one stroke and each segmentation point of the other stroke in the matched stroke pair as a second distance according to the segment segmentation characteristics of the two strokes in the matched stroke pair, and forming a second distance matrix by all the determined second distances according to the sequence of the segmentation points of the two strokes;
determining a second accumulated distance corresponding to each element in the second distance matrix to obtain a second accumulated distance matrix, wherein the second accumulated distance corresponding to any element is the length of the shortest path from the first element to the element in the second distance matrix, and the first element in the second distance matrix is a second distance between the first dividing point of one stroke and the first dividing point of the other stroke in the matched stroke pair;
and determining the last element in the second accumulated distance matrix as the length of the shortest matching path between the segmentation point sequence of one stroke and the segmentation point sequence of the other stroke in the matched stroke pair, wherein the last element in the second accumulated distance matrix is the second accumulated distance of the last segmentation point of one stroke and the last segmentation point of the other stroke in the matched pair of strokes.
11. The character evaluation method of claim 9, wherein determining the correct strokes and/or incorrect strokes among the strokes of the character to be evaluated according to the shortest matching path length corresponding to each matching stroke pair in each matching stroke group comprises:
for each matching stroke group:
if the matching stroke group comprises a plurality of matching stroke pairs formed by one stroke to be evaluated and a plurality of template strokes, determining the matching stroke pair whose shortest matching path length is smallest as the target stroke pair;
if the matching stroke group comprises a plurality of matching stroke pairs formed by a plurality of strokes to be evaluated and one template stroke, determining the matching stroke pair whose shortest matching path length is smallest and whose second distance is smallest as the target stroke pair, wherein the second distance of a matching stroke pair is determined according to the stroke-level features of the two strokes in the pair;
if the matching stroke group comprises a single matching stroke pair formed by one stroke to be evaluated and one template stroke, determining whether the matching stroke pair is the target stroke pair according to the second distance of the pair, and if this cannot be determined from the second distance alone, further determining whether the pair is the target stroke pair according to its shortest matching path length and the direction trends of the two strokes in the pair;
and, among the strokes of the character to be evaluated, determining the strokes to be evaluated contained in target stroke pairs as correct strokes and the remaining strokes as incorrect strokes.
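One possible reading of the selection rules above is sketched below; the dictionary keys, the two thresholds and the tie-breaking used for the many-to-one cases are illustrative assumptions rather than anything the claim prescribes.

```python
def select_correct_strokes(groups, dist_thresh=0.3, path_thresh=1.0):
    """Illustrative selection of target stroke pairs (claim 11).

    `groups` is assumed to be a list of matching stroke groups; each group is a
    list of pair dicts with keys:
      'eval_idx'  - index of the stroke to be evaluated in the pair,
      'path_len'  - the pair's shortest matching path length,
      'pair_dist' - the per-pair distance the claim calls the second distance,
      'same_dir'  - whether the two strokes share the same direction trend.
    Key names and both thresholds are assumptions for illustration only.
    """
    correct = set()
    for pairs in groups:
        eval_ids = {p['eval_idx'] for p in pairs}
        if len(pairs) == 1:
            # one stroke to be evaluated vs. one template stroke:
            # try the pair distance first, then fall back to path length + direction
            p = pairs[0]
            if p['pair_dist'] <= dist_thresh or (p['path_len'] <= path_thresh and p['same_dir']):
                correct.add(p['eval_idx'])
        elif len(eval_ids) == 1:
            # one stroke to be evaluated vs. several template strokes:
            # keep the pair with the smallest shortest-matching-path length
            correct.add(min(pairs, key=lambda p: p['path_len'])['eval_idx'])
        else:
            # several strokes to be evaluated vs. one template stroke:
            # prefer the pair that is best on both path length and pair distance
            best = min(pairs, key=lambda p: (p['path_len'], p['pair_dist']))
            correct.add(best['eval_idx'])
    return correct  # evaluated strokes outside this set are treated as wrong
```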
12. A character learning method, comprising:
acquiring a template character selected by a user;
acquiring, from character learning information corresponding to a plurality of template characters acquired in advance, the character learning information corresponding to the template character selected by the user and playing it, wherein the character learning information corresponding to any template character is used for guiding the user in learning to write that template character;
when an evaluation instruction is received, evaluating the character to be evaluated written by the user against the template character by using the character evaluation method according to any one of claims 1 to 11;
and visualizing the evaluation result of the character to be evaluated.
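A minimal sketch of this learning flow, assuming hypothetical `ui`, `store` and `evaluator` objects that stand in for the device's display/input, the pre-collected learning material, and any evaluation method of claims 1 to 11:

```python
def learn_character(ui, store, evaluator):
    """Hypothetical end-to-end flow of the character learning method (claim 12)."""
    template = ui.get_selected_template()        # template character chosen by the user
    ui.play(store.learning_info(template))       # e.g. the dynamic tracing graph of its writing

    if ui.wait_for_evaluation_instruction():     # evaluation instruction received
        written = ui.collect_track_points()      # track point data of the strokes the user wrote
        result = evaluator.evaluate(written, template)  # correct / wrong strokes
        ui.show(result)                          # visualise the evaluation result
```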
13. The character learning method of claim 12, wherein the stroke-level features of any template character are obtained from the track point data of that template character, and the character learning information corresponding to any template character comprises a dynamic tracing graph of the writing process of that template character;
the process of acquiring, in advance, the dynamic tracing graph of the writing process of a template character and its track point data comprises:
decomposing, frame by frame, a writing-process animation crawled from the network for the template character to obtain an ordered image frame sequence;
making a target base map and a target template according to the last image frame in the ordered image frame sequence, wherein the target base map is the base map used for tracing in each image frame, and the target template is used for extracting the region to be traced in each image frame;
making a red-tracing image of each image frame in the ordered image frame sequence according to the target base map and the target template, and determining track point data according to the differences between adjacent red-tracing images;
and synthesizing all the red-tracing images thus made into the dynamic tracing graph of the writing process of the template character, and combining all the obtained track point data into the track point data of the template character.
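Using Pillow and NumPy, the preparation pipeline above might look roughly like the following; the red overlay colour, the binarisation threshold of 128 and the output frame duration are illustrative assumptions, and real crawled animations may need per-frame disposal handling that this sketch omits.

```python
import numpy as np
from PIL import Image, ImageSequence

def extract_tracing_data(gif_path, out_path="tracing.gif", red=(230, 0, 18)):
    """Sketch of the preparation step in claim 13."""
    frames = [f.convert("RGB") for f in ImageSequence.Iterator(Image.open(gif_path))]
    last = np.asarray(frames[-1])                       # basis for base map and template

    # target template: pixel region of the finished character (dark strokes on light paper)
    stroke_mask = last.mean(axis=-1) < 128

    traced_frames, prev_written, track_points = [], np.zeros_like(stroke_mask), []
    for frame in frames:
        arr = np.asarray(frame)
        written = (arr.mean(axis=-1) < 128) & stroke_mask  # part of the character already written
        out = last.copy()                                  # target base map
        out[written] = red                                 # red-tracing image of this frame
        traced_frames.append(Image.fromarray(out))

        new_pixels = written & ~prev_written               # difference between adjacent tracing images
        ys, xs = np.nonzero(new_pixels)
        track_points.append(list(zip(xs.tolist(), ys.tolist())))
        prev_written = written

    # synthesize the dynamic tracing graph of the writing process
    traced_frames[0].save(out_path, save_all=True,
                          append_images=traced_frames[1:], duration=80, loop=0)
    return track_points
```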
14. A character evaluation apparatus, comprising: a template character feature acquisition module, a track point data acquisition module, a to-be-evaluated character feature acquisition module, a stroke matching module and an evaluation result determination module;
the template character feature acquisition module is configured to acquire the stroke-level features of a template character;
the track point data acquisition module is configured to acquire the track point data of each stroke of a character to be evaluated corresponding to the template character;
the to-be-evaluated character feature acquisition module is configured to acquire the stroke-level features of the character to be evaluated according to the track point data;
the stroke matching module is configured to perform stroke matching between the character to be evaluated and the template character according to the stroke-level features of the character to be evaluated and of the template character to obtain a shortest matching path, and thereby obtain a stroke matching result;
the evaluation result determination module is configured to determine the correct strokes and/or incorrect strokes among the strokes of the character to be evaluated according to the stroke matching result;
when performing stroke matching between the character to be evaluated and the template character according to their stroke-level features to obtain the shortest matching path, the stroke matching module is specifically configured to:
determine, according to the stroke-level features of the character to be evaluated and of the template character, the distance between each stroke of the character to be evaluated and each stroke of the template character as a first distance, and form a first distance matrix from all the obtained first distances in the order of the strokes of the character to be evaluated and of the template character;
determine the first accumulated distance corresponding to each element of the first distance matrix to obtain a first accumulated distance matrix, wherein the first accumulated distance corresponding to any element is the length of the shortest path in the first distance matrix from the first element to that element, and the first element of the first distance matrix is the first distance between the first stroke of the character to be evaluated and the first stroke of the template character;
and determine the shortest path from the first element to the last element of the first accumulated distance matrix as the shortest matching path between the stroke sequence of the character to be evaluated and the stroke sequence of the template character, wherein the first element and the last element of the first accumulated distance matrix are, respectively, the first accumulated distance between the first stroke of the character to be evaluated and the first stroke of the template character, and the first accumulated distance between the last stroke of the character to be evaluated and the last stroke of the template character.
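The stroke matching performed by the stroke matching module mirrors the accumulation sketched after claim 10, with a backtracking step added to recover the matched stroke pairs along the shortest matching path; the Euclidean distance and the three permitted moves remain assumptions not fixed by the claims.

```python
import numpy as np

def stroke_matching_path(eval_feats, tmpl_feats):
    """Sketch of the first distance / first accumulated distance matrices and
    the shortest matching path of stroke sequences (claim 14)."""
    eval_feats = np.asarray(eval_feats, dtype=float)
    tmpl_feats = np.asarray(tmpl_feats, dtype=float)

    # first distance matrix: row i = i-th evaluated stroke, column j = j-th template stroke
    dist = np.linalg.norm(eval_feats[:, None, :] - tmpl_feats[None, :, :], axis=-1)

    m, n = dist.shape
    acc = np.full((m, n), np.inf)
    acc[0, 0] = dist[0, 0]
    for i in range(m):
        for j in range(n):
            if i == 0 and j == 0:
                continue
            acc[i, j] = dist[i, j] + min(
                acc[i - 1, j] if i > 0 else np.inf,
                acc[i, j - 1] if j > 0 else np.inf,
                acc[i - 1, j - 1] if i > 0 and j > 0 else np.inf,
            )

    # backtrack from the last element to the first to recover the matching path
    path, i, j = [(m - 1, n - 1)], m - 1, n - 1
    while (i, j) != (0, 0):
        candidates = [(a, b) for a, b in ((i - 1, j - 1), (i - 1, j), (i, j - 1))
                      if a >= 0 and b >= 0]
        i, j = min(candidates, key=lambda ab: acc[ab])
        path.append((i, j))
    path.reverse()
    return path  # each (i, j) pairs stroke i of the evaluated character with template stroke j
```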
15. A character evaluation device, comprising: a memory and a processor;
the memory is configured to store a program;
and the processor is configured to execute the program to implement the steps of the character evaluation method according to any one of claims 1 to 11.
16. A readable storage medium, on which a computer program is stored, wherein the computer program, when executed by a processor, implements the steps of the character evaluation method according to any one of claims 1 to 11.
CN201910784175.9A 2019-08-23 2019-08-23 Character evaluation method, character learning method, device, equipment and storage medium Active CN110490157B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910784175.9A CN110490157B (en) 2019-08-23 2019-08-23 Character evaluation method, character learning method, device, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN110490157A CN110490157A (en) 2019-11-22
CN110490157B true CN110490157B (en) 2022-04-29

Family

ID=68553235

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910784175.9A Active CN110490157B (en) 2019-08-23 2019-08-23 Character evaluation method, character learning method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN110490157B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113297892B (en) * 2020-11-27 2022-06-14 上海交通大学 Image optimization recognition system for shape-similar Chinese characters
CN117237954B (en) * 2023-11-14 2024-03-19 暗物智能科技(广州)有限公司 Text intelligent scoring method and system based on ordering learning

Citations (4)

Publication number Priority date Publication date Assignee Title
CN1652138A (en) * 2005-02-08 2005-08-10 华南理工大学 Method for identifying hand-writing characters
CN101369382A (en) * 2007-08-17 2009-02-18 英业达股份有限公司 Chinese character writing validation system and method
CN102375994A (en) * 2010-08-10 2012-03-14 广东因豪信息科技有限公司 Method and device for detecting and reducing correctness of order of strokes of written Chinese character
CN103390358A (en) * 2013-07-03 2013-11-13 广东小天才科技有限公司 Method and device for performing standardability judgment of character writing operation of electronic device

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
US8749531B2 (en) * 2010-03-31 2014-06-10 Blackberry Limited Method for receiving input on an electronic device and outputting characters based on sound stroke patterns

Similar Documents

Publication Title
CN110738207B (en) Character detection method for fusing character area edge information in character image
CN112348815B (en) Image processing method, image processing apparatus, and non-transitory storage medium
CN109614944B (en) Mathematical formula identification method, device, equipment and readable storage medium
CN112070658B (en) Deep learning-based Chinese character font style migration method
CN111027563A (en) Text detection method, device and recognition system
WO2014174932A1 (en) Image processing device, program, and image processing method
CN111626297A (en) Character writing quality evaluation method and device, electronic equipment and recording medium
CN110490157B (en) Character evaluation method, character learning method, device, equipment and storage medium
CN111639527A (en) English handwritten text recognition method and device, electronic equipment and storage medium
JP7450868B2 (en) Gesture stroke recognition in touch-based user interface input
CN114038004A (en) Certificate information extraction method, device, equipment and storage medium
CN109086336A (en) Paper date storage method, device and electronic equipment
CN110598703A (en) OCR (optical character recognition) method and device based on deep neural network
CN115082935A (en) Method, apparatus and storage medium for correcting document image
CN112380978B (en) Multi-face detection method, system and storage medium based on key point positioning
CN112883942A (en) Evaluation method and device for handwritten character, electronic equipment and computer storage medium
US4853885A (en) Method of compressing character or pictorial image data using curve approximation
CN115905591B (en) Visual question-answering method, system, equipment and readable storage medium
CN111753719A (en) Fingerprint identification method and device
CN114511853B (en) Character image writing track recovery effect discrimination method
JP5414631B2 (en) Character string search method, character string search device, and recording medium
CN113886615A (en) Hand-drawn image real-time retrieval method based on multi-granularity association learning
CN113591845A (en) Multi-topic identification method and device and computer equipment
WO2022125127A1 (en) Detection of image space suitable for overlaying media content
CN111652204A (en) Method and device for selecting target text area, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 230088 China (Anhui) pilot Free Trade Zone, Hefei, Anhui province 6 / F and 23 / F, scientific research building, building 2, zone a, China sound Valley, No. 3333 Xiyou Road, high tech Zone, Hefei

Applicant after: Anhui taoyun Technology Co.,Ltd.

Address before: 230088 9th floor, building 1, tianyuandike science and Technology Park, 66 Qianshui East Road, high tech Zone, Hefei City, Anhui Province

Applicant before: ANHUI TAOYUN TECHNOLOGY Co.,Ltd.

GR01 Patent grant