WO2022160619A1 - Handwriting recognition method and device, handwriting recognition system, and interactive tablet - Google Patents

Handwriting recognition method and device, handwriting recognition system, and interactive tablet

Info

Publication number
WO2022160619A1
Authority
WO
WIPO (PCT)
Prior art keywords
text
recognition result
point
track
handwriting
Prior art date
Application number
PCT/CN2021/107460
Other languages
English (en)
French (fr)
Inventor
卞甲慧
Original Assignee
京东方科技集团股份有限公司 (BOE Technology Group Co., Ltd.)
Priority date
Filing date
Publication date
Priority claimed from PCT/CN2021/074622 (WO2022160330A1)
Application filed by 京东方科技集团股份有限公司 (BOE Technology Group Co., Ltd.)
Priority to CN202180001926.0A (publication CN115413335A)
Priority to US17/789,592 (publication US20230343125A1)
Publication of WO2022160619A1

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00 - Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10 - Character recognition
    • G06V30/32 - Digital ink
    • G06V30/333 - Preprocessing; Feature extraction
    • G06V30/347 - Sampling; Contour coding; Stroke extraction
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00 - Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10 - Character recognition
    • G06V30/32 - Digital ink
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 - Interaction techniques based on graphical user interfaces [GUI]
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00 - Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10 - Character recognition
    • G06V30/32 - Digital ink
    • G06V30/36 - Matching; Classification

Definitions

  • the embodiments of the present disclosure relate to the field of computer technologies, and in particular, to a handwriting recognition method and device, a handwriting recognition system, and an interactive tablet.
  • In existing handwriting recognition methods, the handwriting track is recognized after the user writes, and the recognized text is stored in the form of a document. If the user wants to check the recognition result, the document has to be opened in the background for browsing; to confirm whether the recognition is wrong, the result has to be compared with the original handwriting track word by word; and if the recognition is wrong, the document has to be edited again. This kind of interaction is very inconvenient for the user, and the recognition efficiency is low.
  • Embodiments of the present disclosure provide a handwriting recognition method and device, which are used to solve the problem that, in existing handwriting recognition methods, the user cannot view the handwriting recognition result in real time.
  • an embodiment of the present disclosure provides a method for handwriting recognition, including:
  • detecting information of a plurality of track points corresponding to the user's handwriting track on the handwriting screen, where the information includes coordinates, and the plurality of track points include a start track point and a current track point; determining, according to a judgment condition, whether the current track point is an end track point; if the current track point satisfies the judgment condition, taking the current track point as the end track point, and taking the track points between the start track point and the end track point as the first track points to be recognized;
  • a text recognition model is used to recognize the first track point to be recognized, and a first text recognition result is obtained;
  • the first text recognition result is displayed in printed form in the first display area of the handwriting screen.
  • the information further includes a user writing state
  • the user writing state includes pen-down, pen-moving, or pen-lifting
  • the judgment condition is that no new track point is detected within a preset time period after the pen-lifting time of the current track point
  • the start track point is the track point following the last track point previously input to the text recognition model, or the first track point on the handwriting screen.
  • displaying the first text recognition result in printed form in the first display area of the handwriting screen includes:
  • the plurality of characters included in the first text recognition result are stored as character tracks, wherein each character is stored as a separate character track.
  • displaying the first text recognition result in printed form in the first display area of the handwriting screen includes:
  • Each character included in the first text recognition result is drawn separately.
  • after displaying the first text recognition result in printed form, the method includes:
  • a user's erasing operation on the target text in the first text recognition result is received, and the target text is erased.
  • the erasing operation includes selecting target text.
  • the erasing operation includes a first erasing gesture, and erasing the target text includes:
  • erasing the character track intersecting with the track of the first erasing gesture.
  • the erasing operation includes a second erasing gesture, and erasing the target text includes:
  • erasing the character track intersecting with the track of the second erasing gesture, and erasing the character tracks having the same label as that character track.
  • the label includes a first label or a second label, wherein the first label includes line information of the character track, and the second label includes time information, paragraph information or batch information of the character track.
  • displaying the first text recognition result in printed form in the first display area of the handwriting screen includes:
  • splitting the first track points to be recognized into lines, and drawing, line by line according to the line-splitting result, the plurality of characters included in the first text recognition result.
  • splitting the first track points to be recognized into lines includes:
  • the first track points to be recognized are divided into a plurality of characters according to the information of each track point, and it is determined whether two adjacent characters are on the same line, where the two adjacent characters include a first character written earlier and a second character written later.
  • determining whether two adjacent characters are on the same line includes:
  • according to the height and position coordinates of the second character and the height and position coordinates of the first character, it is judged whether the two adjacent characters are on the same line.
  • the handwriting recognition method further includes:
  • according to the user's erasing operation on the target text in the first text recognition result, the target text is erased, and the character tracks belonging to the same line as the target text are also erased.
  • the handwriting recognition method further includes:
  • detecting information of second track points to be recognized corresponding to the user's handwriting track on the handwriting screen, where the information includes coordinates, time, and the user writing state, and the user writing state includes pen-down, pen-moving, and pen-lifting;
  • the second track points to be recognized include a start track point and an end track point;
  • determining, according to the judgment condition, whether the current track point is the end track point of the second track points to be recognized; if the current track point satisfies the judgment condition, taking the current track point as the end track point of the second track points to be recognized, and taking the track points between the start track point and the end track point as the second track points to be recognized;
  • the second text recognition result is displayed in print in the second display area.
  • displaying the second text recognition result in print in the second display area includes:
  • the display information includes font size and coordinates
  • the second display area is determined, and the second text recognition result is displayed in printed form in the second display area; the font size in the second text recognition result is the same as the font size in the first text recognition result, and the text in the second text recognition result is aligned with the text in the first text recognition result.
  • displaying the second text recognition result in print in the second display area includes:
  • determining whether the text in the second text recognition result and the text in the first text recognition result are in the same line includes:
  • according to the position coordinates of the text in the second text recognition result and the position coordinates of the text in the first text recognition result, it is determined whether the text in the second text recognition result and the text in the first text recognition result are on the same line.
  • determining whether the text in the second text recognition result and the text in the first text recognition result are in the same line includes:
  • according to the position coordinates of the text in the second text recognition result and the position coordinates of the text in the first text recognition result, it is determined whether the first spacing, in the row direction, between the text in the second text recognition result and the text in the first text recognition result is less than a first threshold, and whether the second spacing, in the column direction, is less than a second threshold;
  • the first threshold is positively related to the width of the text in the second text recognition result.
  • the second threshold is positively correlated with the height of the text in the first text recognition result.
  • displaying the text in the second text recognition result and the text in the first text recognition result on the same line includes:
  • an embodiment of the present disclosure provides a handwriting recognition device, including:
  • a detection module, configured to detect information of a plurality of track points corresponding to the user's handwriting track on the handwriting screen, where the information includes coordinates, the plurality of track points include a start track point and a current track point, and the track points to be recognized include a start track point and an end track point;
  • a judgment module, configured to judge, according to a judgment condition, whether the current track point is the end track point; if the current track point satisfies the judgment condition, the current track point is used as the end track point, and the track points between the start track point and the end track point are used as the first track points to be recognized;
  • a recognition module configured to recognize the first track point to be recognized by using a text recognition model to obtain a first text recognition result
  • a display module configured to display the first text recognition result in printed form in the first display area of the handwriting screen.
  • the information further includes a user writing state
  • the user writing state includes pen-down, pen-moving, or pen-lifting
  • the judgment condition is that no new track point is detected within a preset time period after the pen-lifting time of the current track point
  • the start track point is the track point following the last track point previously input to the text recognition model, or the first track point on the handwriting screen.
  • an embodiment of the present disclosure provides an interactive tablet, including a touch module, a display module, a processor, a memory, and a program or instruction stored in the memory and executable on the processor; when the program or instruction is executed by the processor, the steps of the handwriting recognition method of the first aspect are implemented.
  • an embodiment of the present disclosure provides a readable storage medium, where a program or an instruction is stored on the readable storage medium; when the program or instruction is executed by a processor, the steps of the handwriting recognition method of the first aspect are implemented.
  • an embodiment of the present disclosure provides a handwriting recognition device, including:
  • a memory; and a processor coupled to the memory, the processor being configured to perform, based on instructions stored in the memory, one or more steps of the handwriting recognition method of the first aspect.
  • an embodiment of the present disclosure provides a handwriting recognition system, including the handwriting recognition device of the fifth aspect, wherein the handwriting recognition device includes:
  • a first processor, located on the server side, configured to recognize the track points to be recognized by using a text recognition model to obtain a text recognition result;
  • a second processor, located on the terminal side, configured to draw the characters included in the text recognition result one by one, and store each character as a character track.
  • the text recognition result of the handwriting track is displayed in real time in printed form, so that the user can conveniently view and correct the text recognition result in real time, which can effectively improve the recognition rate and enhance the interactivity with the user.
  • FIG. 1 is a schematic flowchart of a handwriting recognition method according to an embodiment of the disclosure
  • FIG. 2 is a comparison diagram of a handwriting trajectory displayed on a handwriting screen according to an embodiment of the present disclosure and a text recognition result;
  • FIG. 2A is a schematic diagram of a text erasing operation on a handwriting screen according to an embodiment of the present disclosure
  • FIG. 2B is a comparison diagram of a handwriting trajectory displayed on a handwriting screen according to an embodiment of the present disclosure and a text display result;
  • FIG. 2C is a schematic diagram of a reference line for text parallel judgment according to an embodiment of the present disclosure.
  • FIG. 2D is a schematic diagram of peer correction for uneven text according to an embodiment of the present disclosure.
  • FIG. 2E is a schematic diagram of the modification of rewriting characters after an erasing operation according to an embodiment of the present disclosure
  • FIG. 3 is a schematic structural diagram of a handwriting recognition device according to an embodiment of the disclosure.
  • FIG. 4 is a schematic diagram of a composition structure of a handwriting recognition device according to an embodiment of the present disclosure
  • FIG. 5 is a block diagram of a handwriting recognition device according to an embodiment of the present disclosure.
  • FIG. 6 is a block diagram illustrating a computer system for implementing embodiments of the present disclosure.
  • an embodiment of the present disclosure provides a method for recognizing handwriting, including steps 11-14.
  • Step 11 Detect information of multiple track points corresponding to the user's handwriting track on the handwriting screen, where the information includes coordinates, and the multiple track points include a start track point and a current track point.
  • the information further includes a user writing state, and the user writing state includes pen-down, pen-moving, or pen-lifting.
  • the handwriting screen can be a handwriting device such as an electronic conference whiteboard, and has a touch module and a display module.
  • the track point may include a track point of one character or multiple characters, and the character may be Chinese, English, numbers, or the like.
  • the upper left corner of the handwriting screen may be used as the origin, the X axis extending from left to right, and the Y axis extending from top to bottom.
  • the lower left corner of the handwriting screen can also be used as the origin, the X axis extending from left to right, and the Y axis extending from bottom to top.
  • the embodiment of the present disclosure does not limit the settings of the coordinate axes.
  • pen-down refers to the first track point of a stroke
  • pen-lifting refers to the last track point of a stroke
  • pen-moving refers to the intermediate track points of a stroke.
  • the information of each track point may be expressed as (x, y, t, flag), where x and y represent the coordinate position of the track point, t is the writing time of the track point, and flag indicates the user writing state (pen-down, pen-moving, or pen-lifting).
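  • For illustration only, a track point carrying the fields described above could be represented as follows (the class and field names are hypothetical and not part of the disclosure):

```python
from dataclasses import dataclass
from enum import Enum

class WritingState(Enum):
    PEN_DOWN = "pen_down"        # first track point of a stroke
    PEN_MOVING = "pen_moving"    # intermediate track point of a stroke
    PEN_LIFTING = "pen_lifting"  # last track point of a stroke

@dataclass
class TrackPoint:
    x: float            # X coordinate on the handwriting screen
    y: float            # Y coordinate on the handwriting screen
    t: float            # writing time of the track point
    flag: WritingState  # user writing state

# Example: a tiny two-point stroke written starting at t = 0.0
dot = [TrackPoint(120.0, 80.0, 0.00, WritingState.PEN_DOWN),
       TrackPoint(120.5, 80.2, 0.05, WritingState.PEN_LIFTING)]
```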
  • the current track point refers to the last track point handwritten by the user on the handwriting screen.
  • Step 12 Determine, according to the judgment condition, whether the current track point is the end track point; if the current track point satisfies the judgment condition, the current track point is used as the end track point, and the track points between the start track point and the end track point are used as the first track points to be recognized.
  • the judgment condition is that no new track point is detected within a preset time period after the pen-lifting time of the current track point, and the start track point is the track point following the last track point previously input to the text recognition model, or the first track point on the handwriting screen.
  • “the last track point” and “the next track point” both refer to the track points input to the text recognition model.
  • the track points written when the character recognition function is interrupted do not belong to the track points input to the text recognition model.
  • the first track point to be recognized includes a start track point and an end track point of the handwritten track.
  • the preset time period may be set on the assumption that the interval between writing successive characters does not exceed a threshold (for example, 500 ms or 2000 ms); if no new track point is detected within that time, writing is considered finished, otherwise the user is considered to be still writing a character.
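  • A minimal sketch of this judgment condition is shown below; the timeout value and function name are illustrative assumptions rather than fixed by the disclosure:

```python
PRESET_TIMEOUT_S = 0.5  # e.g. 500 ms; the text also mentions 2000 ms as a possible value

def is_end_track_point(pen_lift_time: float, now: float,
                       timeout: float = PRESET_TIMEOUT_S) -> bool:
    """Return True when no new track point has arrived within `timeout` seconds
    after the pen-lifting time of the current track point, i.e. the current
    point is treated as the end track point of the track points to be recognized."""
    return (now - pen_lift_time) >= timeout
```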
  • Step 13 Use a text recognition model to recognize the first track points to be recognized, and obtain a first text recognition result.
  • the first text recognition result may include one word or a text paragraph including multiple words.
  • Step 14 Display the first text recognition result in printed form in the first display area of the handwriting screen.
  • multiple characters included in the first text recognition result are stored as character tracks, wherein each character is stored as a character track.
  • each character in the first text recognition result can be separately drawn, modified, erased, and the like.
  • the multiple characters included in the first text recognition result are drawn one by one.
  • the first track point to be recognized may also be divided into lines; according to the result of division, the plurality of characters included in the first text recognition result are drawn line by line.
  • the text included in the first text recognition result may be associated with the same tag, and the tag may include line information of the text. Labels can be set for each track point, each text track or text.
  • the entire text included in the first text recognition result may also be stored as a text track and drawn as a whole. Text traces can be stored in the specified storage area. In this way, when operations such as modifying, erasing, etc. are performed on the first text recognition result, the overall processing can be realized.
  • the text included in the first text recognition result may be associated with the same tag, and the tag may include time information, paragraph information or batch information of the text.
  • In the printed form, standard fonts are used: a standard font is used for Chinese characters, and a standard font such as Times New Roman is used for English letters and numerals, so as to distinguish the displayed text from the handwritten tracks.
  • FIG. 2 is a comparison diagram of a handwriting trajectory displayed on a handwriting screen and a text recognition result according to an embodiment of the disclosure.
  • the text recognition result of the handwriting track is displayed in real time in printed form, so that the user can conveniently view and correct the text recognition result in real time, which can effectively improve the recognition rate and enhance the interactivity with the user.
  • the text recognition model receives the coordinates of many track points; these track points are arranged in order of writing time and are not yet divided into lines.
  • the text recognition model performs recognition on the track points of one line of text at a time, so all the track points first need to be split into lines (if the user has written multiple lines, the track points need to be split; if the user has written only one line, no splitting is required). That is, in the embodiments of the present disclosure, before the information of the first track points to be recognized is fed to the text recognition model, the first track points to be recognized may be split into lines. In some embodiments, a label including the line information of the text is set based on the line-splitting result.
  • a projection method may be used to split the first track points to be recognized into lines, that is, splitting the first track points to be recognized into lines includes:
  • Step 21 Obtain, for each Y-axis coordinate value of the first track points to be recognized, the number of X-axis coordinate values of the first track points to be recognized corresponding to that Y-axis coordinate value;
  • Step 22 Split the first track points to be recognized into lines according to the number of X-axis coordinate values at each position on the Y axis.
  • In other words, the counts of X-axis coordinate values of all the track points in the first track points to be recognized are projected onto the Y axis. For the rows occupied by text, the count is relatively large; if the handwriting track contains multiple lines, the count of X-axis coordinate values in the blank region between two adjacent lines is small or zero, i.e. a trough appears, and the trough positions are used as the line-splitting boundaries.
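  • The following sketch illustrates this projection-based line splitting under simplifying assumptions (integer-rounded Y rows and a fixed trough threshold); it is not the exact implementation of the disclosure:

```python
from collections import Counter

def split_into_lines(points, trough_threshold=0):
    """Project the X-coordinate counts of all track points onto the Y axis and
    split the points into lines at the troughs (blank rows between text lines).
    points: iterable of (x, y) tuples. Returns a list of lines (lists of points)."""
    counts = Counter(round(y) for _, y in points)
    if not counts:
        return []
    y_min, y_max = min(counts), max(counts)
    # Y rows with more points than the trough threshold belong to a text line.
    filled = [y for y in range(y_min, y_max + 1) if counts.get(y, 0) > trough_threshold]
    # Merge consecutive filled rows into bands; each band corresponds to one line.
    bands = []
    for y in filled:
        if bands and y - bands[-1][1] <= 1:
            bands[-1][1] = y
        else:
            bands.append([y, y])
    if not bands:
        return [list(points)]
    # Assign every point to the band that contains (or is closest to) its Y row.
    lines = [[] for _ in bands]
    for point in points:
        y = round(point[1])
        idx = min(range(len(bands)),
                  key=lambda i: 0 if bands[i][0] <= y <= bands[i][1]
                  else min(abs(y - bands[i][0]), abs(y - bands[i][1])))
        lines[idx].append(point)
    return lines
```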
  • In an embodiment of the present disclosure, before the first track points to be recognized are split into lines, the method may further include: performing tilt correction on the coordinates of the first track points to be recognized to obtain corrected coordinates.
  • Since the coordinate values of the track points to be recognized may range from 0 to several thousands or tens of thousands, the coordinates may also be normalized in order to simplify the calculation and eliminate the influence of the order of magnitude.
  • splitting the first track points to be recognized into lines may also include: dividing the first track points to be recognized into a plurality of characters, and determining whether two adjacent characters are on the same line, where the two adjacent characters include a first character written earlier and a second character written later.
  • Whether two adjacent characters are on the same line can be determined according to the height and position coordinates of the second character and the height and position coordinates of the first character. For example, it is determined whether a first difference, between the abscissa of the left edge of the second character and the abscissa of the right edge of the first character, is less than a first threshold, and whether a second difference, between the ordinate of the top edge of the second character and the ordinate of the top edge of the first character, is less than a second threshold; if the first difference is less than the first threshold and the second difference is less than the second threshold, it is determined that the second character and the first character are on the same line.
  • the first threshold is positively related to the width of the second character, for example, the first threshold is half of the width of the second character.
  • the second threshold is positively related to the height of the first character, for example, the second threshold is half of the height of the first character.
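  • A hedged sketch of this adjacent-character test, using the example thresholds above (half the second character's width and half the first character's height); using the top edges for the vertical comparison is an assumption drawn from the later column-direction description:

```python
from dataclasses import dataclass

@dataclass
class CharBox:
    """Axis-aligned bounding box of a character (screen coordinates, Y downward)."""
    left: float
    top: float
    right: float
    bottom: float

    @property
    def width(self) -> float:
        return self.right - self.left

    @property
    def height(self) -> float:
        return self.bottom - self.top

def on_same_line(first: CharBox, second: CharBox) -> bool:
    """`first` was written earlier, `second` later."""
    first_threshold = second.width / 2      # positively related to the second character's width
    second_threshold = first.height / 2     # positively related to the first character's height
    first_difference = second.left - first.right     # row-direction gap
    second_difference = abs(second.top - first.top)  # column-direction offset
    return first_difference < first_threshold and second_difference < second_threshold
```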
  • before a text recognition model is used to recognize the information of the first track points to be recognized to obtain the first text recognition result, the method further includes: normalizing the coordinates of the first track points to be recognized to the same numerical range, for example, to (0, 1).
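  • A minimal normalization sketch, assuming uniform scaling into (0, 1) that preserves the aspect ratio of the handwriting (the disclosure only requires mapping the coordinates into one numerical range):

```python
def normalize_coordinates(points):
    """Scale track-point coordinates into the (0, 1) range before recognition.
    points: list of (x, y) tuples."""
    xs = [x for x, _ in points]
    ys = [y for _, y in points]
    x_min, y_min = min(xs), min(ys)
    span = max(max(xs) - x_min, max(ys) - y_min) or 1.0  # avoid division by zero
    return [((x - x_min) / span, (y - y_min) / span) for x, y in points]
```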
  • a Seq2Seq network may be used to perform text recognition on the information of the first track point to be identified.
  • other text recognition models such as RNN networks, etc., may also be used.
  • a text recognition model is used to recognize the information of the first track point to be recognized, and after obtaining the first text recognition result, the method further includes: performing semantic correction on the first text recognition result.
  • the semantic correction may include: Chinese semantic correction and/or English semantic correction. According to the semantic information before and after, when there is a letter recognition error in an English word, the wrong letter is corrected.
  • displaying the first text recognition result in printed form in the first display area includes:
  • Step 31 remove the handwriting track corresponding to the first track point to be recognized on the handwriting screen
  • Step 32 Determine a first display area according to the coordinates of the first track point to be identified
  • Step 33 Display the first text recognition result in printed form in the first display area.
  • the handwriting track on the handwriting screen is erased in real time, and the text recognition result of the handwriting track is displayed in printed form, in real time, in the area where the handwriting track was located, which further facilitates viewing, checking, and correcting the text recognition result in real time.
  • determining the first display area according to the coordinates of the first track point to be identified includes:
  • Step 41 Obtain the minimum X-axis coordinate, the minimum Y-axis coordinate, the maximum X-axis coordinate and the maximum Y-axis coordinate among the first track points to be recognized, namely x_min, y_min, x_max, y_max;
  • Step 42 Determine the first rectangular frame according to the minimum X-axis coordinate, the minimum Y-axis coordinate, the maximum X-axis coordinate and the maximum Y-axis coordinate in the first track point to be identified, and use the first rectangular frame as the first display area.
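  • Steps 41-42 amount to taking the axis-aligned bounding rectangle of the track points, for example:

```python
def first_display_area(points):
    """Return the first rectangular frame (x_min, y_min, x_max, y_max) spanned by
    the first track points to be recognized. points: list of (x, y) tuples."""
    xs = [x for x, _ in points]
    ys = [y for _, y in points]
    return min(xs), min(ys), max(xs), max(ys)
```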
  • determining the first display area according to the coordinates of the first track point to be identified includes:
  • Step 51 Determine whether the handwritten trajectory corresponding to the first to-be-recognized trajectory point is inclined;
  • Step 52 If the handwritten track corresponding to the first track point to be recognized is inclined, perform tilt correction on the coordinates of the first track point to be recognized to obtain corrected coordinates.
  • Step 53 Determine the first display area according to the corrected coordinates.
  • the tilt correction of the coordinates of the first track points to be recognized may be performed as follows: first, a rectangular frame is determined according to the original coordinates of the first track points to be recognized; then, a point is selected as the rotation center of the rectangular frame and the rectangular frame is rotated, where the rotation center may be the center point of the rectangular frame or another point.
  • After the rotation, the first rectangular frame is determined as the first display area in the manner of steps 41 and 42; that is, the minimum X-axis coordinate, the minimum Y-axis coordinate, the maximum X-axis coordinate, and the maximum Y-axis coordinate in steps 41 and 42 are all corrected coordinates.
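  • A hedged sketch of the tilt correction: the track points are rotated around a chosen rotation center (here the center of their bounding rectangle) by the estimated tilt angle; the angle-estimation step itself is not shown:

```python
import math

def tilt_correct(points, tilt_angle_rad, center=None):
    """Rotate track points by -tilt_angle_rad around `center` to obtain corrected
    coordinates. `center` defaults to the center of the bounding rectangle."""
    if center is None:
        xs = [x for x, _ in points]
        ys = [y for _, y in points]
        center = ((min(xs) + max(xs)) / 2.0, (min(ys) + max(ys)) / 2.0)
    cx, cy = center
    cos_a, sin_a = math.cos(-tilt_angle_rad), math.sin(-tilt_angle_rad)
    return [(cx + (x - cx) * cos_a - (y - cy) * sin_a,
             cy + (x - cx) * sin_a + (y - cy) * cos_a)
            for x, y in points]
```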
  • displaying the first text recognition result in printed form in the first display area includes:
  • the first text recognition result is displayed in the first display area with the determined font size and word spacing.
  • the correspondence between the font size and the word spacing may be pre-stored, so that after the font size is determined, the word spacing is determined accordingly.
  • the word spacing can be determined according to the size of the first display area.
  • the method further includes:
  • Step 51 Receive the user's erasing operation on the target text in the first text recognition result, and erase the target text.
  • the erasing operation includes selecting target text. For example, after circling the target text, click the delete/erase button to erase the target text.
  • a corresponding erasing operation may be performed according to the user's erasing gesture. For example, after selecting the erase function, delete the text track that intersects the erase track. Alternatively, the user generates an eraser mark by an erasing gesture, such as a palm touching the display panel, and deletes the text track intersecting with the track of the hand movement.
  • the erasing gesture can be, for example, a flat oval gesture, a polyline, a zigzag shape, an inverted N, a cross, etc., as long as it does not affect writing. It should be understood that the correspondence between the erasing gesture and the erasing operation can be set according to actual needs. Users can erase part or all of the text recognition results. After the erase operation, the user can rewrite.
  • each character included in the first text recognition result is stored as a separate character track, so that only the character tracks intersecting with the track of the erasing gesture are deleted. That is, in some embodiments, according to the user's first erasing gesture, the character tracks intersecting with the track of the first erasing gesture are erased.
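  • Since each character is stored as its own track, erasing with the first gesture can be sketched as removing every character track that comes close to the gesture track; the hit radius is an illustrative assumption:

```python
def erase_first_gesture(character_tracks, erase_track, hit_radius=10.0):
    """character_tracks: list of character tracks, each a list of (x, y) points.
    erase_track: the (x, y) points of the first erasing gesture.
    Returns the character tracks that survive the erasure."""
    def intersects(track):
        return any((px - ex) ** 2 + (py - ey) ** 2 <= hit_radius ** 2
                   for px, py in track for ex, ey in erase_track)
    return [track for track in character_tracks if not intersects(track)]
```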
  • FIG. 2A is a schematic diagram of a text erasing operation on a handwriting screen according to an embodiment of the disclosure.
  • FIG. 2A(a) shows the text displayed on the handwriting screen.
  • FIG. 2A(b) shows a schematic diagram of erasing mode 1.
  • FIG. 2A(c) shows a schematic diagram of erasing mode 2.
  • FIG. 2A(d) shows the erasure result.
  • In erasing mode 1, when the user touches the screen, an eraser icon appears at the touch point; for example, the eraser is a transparent brush;
  • the erase track, i.e. the erasing gesture, is drawn on the drawing board; when the erasing gesture ends, for example when the user's finger leaves the screen, the character tracks intersecting with the track of the erasing gesture are erased from the drawing board, as shown in FIG. 2A(d); at the same time, those character tracks are also removed from the specified storage area.
  • In erasing mode 2, the trace of the user's finger touching the screen forms a dotted line, i.e. the erasing gesture, which intersects the text; when the erasing gesture ends, for example when the user's finger leaves the screen, a rectangular marquee and a delete button appear; after the delete button is clicked, the character tracks intersecting with the track are erased from the drawing board, as shown in FIG. 2A(d), and are also removed from the specified storage area.
  • In some embodiments, according to the user's second erasing gesture, the character track intersecting with the track of the second erasing gesture is erased, and the character tracks having the same label as that character track are also erased.
  • the label here is the first label, which includes the line information of the character track. That is, the character track intersecting with the track of the second erasing gesture can be erased, and the character tracks belonging to the same line as that character track can also be erased. Similarly, these erased character tracks are cleared from the drawing board and removed from the specified storage area.
  • the text included in the first text recognition result can be associated with the same label, so that erasing the whole first text recognition result can be conveniently achieved by deleting the character track intersecting with the track of the erasing gesture.
  • the label here is the second label, including time information, paragraph information or batch information of the text track. That is, the character track intersecting with the track of the second erasing gesture can be erased, and the handwritten character track belonging to the same time period, the same paragraph or the same batch as the character track can be erased.
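  • Under the second erasing gesture, each stored character track is assumed to carry a label (line, time period, paragraph or batch information); erasing a track that intersects the gesture then also erases every track sharing its label, as sketched below:

```python
def erase_second_gesture(labelled_tracks, erase_track, hit_radius=10.0):
    """labelled_tracks: list of (label, points) pairs, where points is a list of (x, y).
    Removes every track whose label matches a track hit by the erasing gesture."""
    def intersects(points):
        return any((px - ex) ** 2 + (py - ey) ** 2 <= hit_radius ** 2
                   for px, py in points for ex, ey in erase_track)
    hit_labels = {label for label, points in labelled_tracks if intersects(points)}
    return [(label, points) for label, points in labelled_tracks
            if label not in hit_labels]
```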
  • the handwriting recognition method further includes steps 52 to 55.
  • Step 52 Detect the information of the second track point to be recognized corresponding to the handwriting track of the user on the handwriting screen.
  • the second track point to be identified includes a start track point and an end track point.
  • After the erasing operation, the user can rewrite in the erased area or at other locations. That is, in step 52, the information of the second track points to be recognized corresponding to the handwriting track rewritten by the user is detected; the information includes coordinates, time, and the user writing state, where the user writing state includes pen-down, pen-moving, and pen-lifting; the second track points to be recognized include a start track point and an end track point.
  • Step 53 Determine whether the current trajectory point is the termination trajectory point of the second to-be-identified trajectory point according to the judgment condition, and if the current trajectory point satisfies the judgment condition, use the current trajectory point as the termination trajectory point of the second to-be-identified trajectory point , and take the trajectory point between the start trajectory point and the end trajectory point as the second to-be-identified trajectory point.
  • the judgment condition is that no new track point is detected within a preset time period after the pen-lifting time of the current track point, and the start track point is the track point following the last track point previously input to the text recognition model.
  • Step 54 Use a text recognition model to recognize the second track point to be recognized to obtain a second text recognition result.
  • the second track points to be recognized may also be processed by line splitting and/or normalization.
  • For the processing method, please refer to the description of the first track points to be recognized above, which will not be repeated here.
  • semantic correction can also be performed.
  • Step 55 Display the second text recognition result in printed form in the second display area.
  • The printed form uses standard fonts, for example, a standard font for Chinese characters and a standard font such as Times New Roman for English letters and numerals, so as to distinguish them from the handwritten tracks. Optionally, it is the same printed font as that used for the first text recognition result.
  • the handwriting track on the handwriting screen is erased in real time, and the recognition result of the handwriting track is displayed in print, in real time, in the area where the handwriting track was located, so as to facilitate the user in viewing, checking and correcting the recognition result in real time, and the interactivity with the user is improved.
  • the user is supported to perform multiple erasing and rewriting.
  • displaying the second text recognition result in printed form in the second display area includes:
  • Step 61 Obtain the minimum X-axis coordinate, the minimum Y-axis coordinate, the maximum X-axis coordinate and the maximum Y-axis coordinate in the second track point to be identified;
  • Step 62 Determine a second rectangular frame according to the minimum X-axis coordinate, the minimum Y-axis coordinate, the maximum X-axis coordinate and the maximum Y-axis coordinate in the second track point to be identified, and use the second rectangular frame as the second display area.
  • before the second text recognition result is displayed in printed form in the second display area, the method may further include:
  • Step 71 Judge whether the handwriting track corresponding to the second track point to be identified is tilted;
  • Step 72 If the handwriting track corresponding to the second track point to be identified is tilted, perform tilt correction on the coordinates of the second track point to be identified to obtain corrected coordinates.
  • Step 73 Determine the second display area according to the corrected coordinates.
  • the second display area is determined according to the above steps 61 and 62 .
  • displaying the second text recognition result in printed form in the second display area includes:
  • Step 81 Acquire display information of the first text recognition result, the display information includes font size and coordinates;
  • Step 82 Determine the second display area according to the display information, and display the second text recognition result in printed form in the second display area, where the font size in the second text recognition result is the same as the font size in the first text recognition result, and the text in the second text recognition result is aligned with the text in the first text recognition result.
  • In step 81, the display information of the first display area is acquired.
  • Regarding step 82, it should be understood that, due to the differences between the glyphs of different characters, "the font size is the same" means roughly the same and is not limited to strictly identical width and height; "text alignment" likewise means roughly aligned, not strictly aligned in both the row and column directions.
  • step 82 includes: step 821, judging whether the text in the second text recognition result and the text in the first text recognition result are on the same line; step 822, if the judgment result is yes, displaying the text in the second text recognition result and the text in the first text recognition result on the same line. It should be understood that only when the second display area is relatively close to the first display area (in both the row direction and the column direction) is it judged whether the text in the second text recognition result and the text in the first text recognition result are on the same line.
  • In step 821, according to the position coordinates of the text in the second text recognition result and the position coordinates of the text in the first text recognition result, it can be judged whether the first spacing, in the row direction, between the text in the second text recognition result and the text in the first text recognition result is less than the first threshold, and whether the second spacing, in the column direction, is less than the second threshold; when the first spacing is less than the first threshold and the second spacing is less than the second threshold, it is determined that the text in the second text recognition result and the text in the first text recognition result are on the same line.
  • In some embodiments, the first spacing in the row direction can be represented by the difference between the abscissa of the left edge of the text in the second text recognition result and the abscissa of the right edge of the text in the first text recognition result.
  • In other embodiments, the first spacing in the row direction can be represented by the difference between the abscissa of the right edge of the text in the second text recognition result and the abscissa of the left edge of the text in the first text recognition result.
  • the first threshold may also be determined according to the size and positional relationship between characters.
  • the first threshold may also be positively correlated with the width of the character in the second text recognition result, for example, may be half of the width of the character in the second text recognition result.
  • the second spacing in the column direction can be represented by the difference between the ordinate of the top edge of the text in the second text recognition result and the ordinate of the top edge of the text in the first text recognition result.
  • the second threshold is positively related to the height of the character in the first text recognition result, for example, may be half the height of the character in the first text recognition result.
  • In other words, the boundary information of the existing text, such as its upper-left and lower-right coordinates, determines the display area of the newly written text, including the starting position, text size and other information. This makes it possible to correct the display of later-written characters based on the previously displayed characters.
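  • A hedged sketch of this correction: the display parameters of the newly written text are derived from the boundary (upper-left / lower-right) information of the previously displayed text, so that the new text reuses its font height and top edge and continues on the same line; the word-spacing value and square-glyph assumption are illustrative only:

```python
def display_area_for_new_text(existing_box, new_char_count, word_spacing=4.0):
    """existing_box: (x_min, y_min, x_max, y_max) of the previously displayed text.
    Returns the starting position and character size for the newly recognised text
    so that it lines up with the existing text."""
    x_min, y_min, x_max, y_max = existing_box
    char_height = y_max - y_min          # reuse the existing font size (height)
    char_width = char_height             # simplifying assumption: roughly square glyphs
    start_x = x_max + word_spacing       # continue to the right of the existing text
    start_y = y_min                      # align the top edges (same line)
    total_width = new_char_count * (char_width + word_spacing)
    return {"start_x": start_x, "start_y": start_y,
            "char_height": char_height, "char_width": char_width,
            "width": total_width}
```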
  • FIG. 2B is a comparison diagram of a handwriting track displayed on a handwriting screen and a text display result according to an embodiment of the disclosure.
  • FIG. 2C is a schematic diagram of a reference line for text parallel determination according to an embodiment of the disclosure.
  • Figures 2C (a), (b) show the upper and lower reference lines, respectively.
  • the previously displayed characters on either side can be used as a reference to judge whether the characters written later and the characters displayed earlier are on the same line.
  • In this way, at least one of the following problems can be solved: because the writing position and font size are not the same each time, the recognized standard text may have inconsistent sizes, irregular positions, or overlapping characters; and after part of the text (i.e. the target text) in the first text recognition result has been erased, when the user writes again at the same position, there may be a positional deviation or size deviation between the new text and the existing text.
  • FIG. 2D shows a schematic diagram of peer correction for ragged text according to an embodiment of the present disclosure.
  • Figures 2D(a) and (b) show the text before and after peer correction, respectively.
  • FIG. 2E shows a schematic diagram of the correction of rewriting characters after an erasing operation according to an embodiment of the present disclosure.
  • FIGS. 2E(a), (b), and (c) respectively show the text before erasure, the text rewritten after erasure, and the text displayed after correction.
  • the characters after handwriting recognition can be displayed in a standard text style (such as print), and these texts can be arranged in a standard manner, so that the format of each character in the displayed text is neat.
  • some embodiments of the present disclosure also allow the text rewritten after modification to be displayed in a neat layout together with the previously displayed text, avoiding layout confusion between the text rewritten after erasure and the previously displayed text.
  • the same label can also be set for the text displayed on the same line, and the label reflects the line information where the text is located. In this way, even if the handwritten characters belong to different batches, as long as they are displayed on the same line, they can still be erased line by line according to the corresponding erasing gesture.
  • In some embodiments, according to the user's erasing operation on the target text in the first text recognition result, the target text can be erased, and the character tracks belonging to the same line as the target text can also be erased.
  • the correspondence between the font and the word spacing may be pre-stored, and after the font size is determined, the word spacing is also determined accordingly.
  • the word spacing can be determined according to the size of the second display area.
  • the display effect of the text recognition result can be improved.
  • an embodiment of the present disclosure further provides a handwriting recognition device 300, including:
  • the detection module 301 is configured to detect the information of a plurality of track points corresponding to the user's handwriting track on the handwriting screen; the information includes coordinates, time and the user writing state, and the user writing state includes pen-down, pen-moving and pen-lifting;
  • the multiple track points include a start track point and a current track point;
  • the judgment module 302 is configured to judge, according to the judgment condition, whether the current track point is the end track point; if the current track point satisfies the judgment condition, the current track point is used as the end track point, and the track points between the start track point and the end track point are used as the first track points to be recognized; the judgment condition is that no new track point is detected within a preset time period after the pen-lifting time of the current track point, and the start track point is the track point following the last track point previously input to the text recognition model, or the first track point on the handwriting screen;
  • An identification module 303 configured to identify the first track point to be identified by using a text identification model to obtain a first text identification result
  • the display module 304 is configured to display the first text recognition result in printed form in the first display area of the handwriting screen.
  • the text recognition result of the handwriting track is displayed in real time in printed form, so that the user can conveniently view and correct the text recognition result in real time, which can effectively improve the recognition rate and enhance the interactivity with the user.
  • the handwriting recognition device further includes at least one of the following:
  • a line-splitting module, configured to split the first track points to be recognized into lines
  • a normalization module, configured to normalize the coordinates of the first track points to be recognized to the same numerical range.
  • the line-splitting module is configured to obtain, for each Y-axis coordinate value of the first track points to be recognized, the number of X-axis coordinate values of the first track points to be recognized corresponding to that value, and to split the first track points to be recognized into lines according to the number of X-axis coordinate values at each position on the Y axis.
  • the handwriting recognition device further includes:
  • a semantic recognition module configured to perform semantic recognition on the first text recognition result, and perform semantic correction if the first text recognition result has a semantic error.
  • the display module is configured to remove the handwriting track corresponding to the first track point to be recognized on the handwriting screen; and determine according to the coordinates of the first track point to be recognized a first display area; displaying the first text recognition result in printed form in the first display area.
  • the display module is configured to acquire the minimum X-axis coordinate, the minimum Y-axis coordinate, the maximum X-axis coordinate and the maximum Y-axis coordinate among the first track points to be recognized; determine a first rectangular frame according to these coordinates; and use the first rectangular frame as the first display area.
  • the display module is further configured to determine whether the handwritten track corresponding to the first track point to be recognized is inclined; if the handwritten track corresponding to the first track point to be recognized is tilted, Perform tilt correction on the coordinates of the first track point to be identified to obtain corrected coordinates; and determine a first display area according to the corrected coordinates.
  • the display module is further configured to determine the font size of the first text recognition result according to the size of the first display area, determine the word spacing of the first text recognition result according to the determined font size, and display the first text recognition result in the first display area with the determined font size and word spacing.
  • the handwriting recognition device further includes:
  • an erasing module for receiving the user's erasing operation on the target text in the first text recognition result, and erasing the target text
  • the detection module is further configured to detect the information of the second track points to be recognized corresponding to the handwriting track rewritten by the user; the information includes coordinates, time and the user writing state, where the user writing state includes pen-down, pen-moving and pen-lifting; the second track points to be recognized include a start track point and an end track point;
  • the judgment module is further configured to judge, according to the judgment condition, whether the current track point is the end track point of the second track points to be recognized; if the current track point satisfies the judgment condition, the current track point is used as the end track point of the second track points to be recognized, and the track points between the start track point and the end track point are used as the second track points to be recognized; the judgment condition is that no new track point is detected within a preset time period after the pen-lifting time of the current track point, and the start track point is the track point following the last track point previously input to the text recognition model;
  • the recognition module is further configured to use a text recognition model to recognize the second track point to be recognized to obtain a second text recognition result;
  • the display module is further configured to display the second text recognition result in printed form in the second display area.
  • the display module is further configured to acquire the minimum X-axis coordinate, the minimum Y-axis coordinate, the maximum X-axis coordinate, and the maximum Y-axis coordinate in the second track point to be identified; according to The minimum X-axis coordinate, the minimum Y-axis coordinate, the maximum X-axis coordinate, and the maximum Y-axis coordinate in the second track point to be identified, determine a second rectangular frame, and use the second rectangular frame as the second display area .
  • the display module is further configured to acquire the display information of the first text recognition result, where the display information includes the font size and coordinates; determine a second display area according to the display information; and display the second text recognition result in printed form in the second display area, where the font size in the second text recognition result is the same as the font size in the first text recognition result, and the text in the second text recognition result is aligned with the text in the first text recognition result.
  • the above-mentioned functional modules may be integrated in one entity device, or may be set in multiple entity devices.
  • For example, the detection module that detects the information of the track points corresponding to the user's handwriting track on the handwriting screen, and the display module that displays the text recognition result in printed form, can be arranged on the handwriting screen, which in this case may be called the front end. The recognition module that uses the text recognition model to recognize the information of the track points to be recognized and obtain the text recognition result can be arranged on the server, which in this case may be called the back end. Referring to FIG. 4, the information of the track points detected by the handwriting screen is sent to the server, the server recognizes the information of the track points to obtain the text recognition result, and the text recognition result is sent back to the handwriting screen for display.
  • Embodiments of the present disclosure also provide an interactive tablet, including a touch module, a display module, a processor and a memory, and a program or instruction stored in the memory and executable on the processor; when the program or instruction is executed by the processor, the steps of the above handwriting recognition method are implemented.
  • An embodiment of the present disclosure further provides a handwriting recognition device, as shown in FIG. 5 .
  • the handwriting recognition device includes: a memory 510; and a processor 520 coupled to the memory 510, the processor 520 being configured to perform, based on instructions stored in the memory, one or more steps of the handwriting recognition method of any of the embodiments of the present disclosure.
  • An embodiment of the present disclosure further provides a handwriting recognition system, including the handwriting recognition device described in the preceding embodiments, wherein the handwriting recognition device includes: a first processor, located on the server side, configured to recognize the track points to be recognized by using a text recognition model to obtain a text recognition result; and a second processor, located on the terminal side, configured to draw the characters included in the text recognition result one by one and store each character as a character track.
  • Embodiments of the present disclosure further provide a readable storage medium, where a program or an instruction is stored on the readable storage medium; when the program or instruction is executed by a processor, each process of the above embodiments of the handwriting recognition method can be implemented and the same technical effects can be achieved; to avoid repetition, details are not repeated here.
  • the processor is the processor in the terminal described in the foregoing embodiment.
  • the readable storage medium includes a computer-readable storage medium, such as a computer read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), a magnetic disk or an optical disk, and the like.
  • FIG. 6 is a block diagram illustrating a computer system for implementing some embodiments of the present disclosure.
  • the computer system can be represented in the form of a general-purpose computing device, and the computer system can be used to implement the handwritten text recognition apparatus of the above-mentioned embodiment.
  • the computer system includes a memory 610, a processor 620, and a bus 600 that connects various system components.
  • Memory 610 may include, for example, system memory, non-volatile storage media, and the like.
  • the system memory stores, for example, an operating system, an application program, a boot loader (Boot Loader), and other programs.
  • System memory may include volatile storage media such as random access memory (RAM) and/or cache memory.
  • the non-volatile storage medium stores, for example, instructions for executing corresponding embodiments of the display method.
  • Non-volatile storage media include, but are not limited to, magnetic disk memory, optical memory, flash memory, and the like.
  • Processor 620 may be implemented as a general purpose processor, digital signal processor (DSP), application specific integrated circuit (ASIC), field programmable gate array (FPGA) or other programmable logic device, discrete hardware components such as discrete gates or transistors.
  • each device such as the judging device and the determining device can be implemented by a central processing unit (CPU) running instructions in a memory for executing the corresponding steps, or can be implemented by a dedicated circuit for executing the corresponding steps.
  • bus 600 may use any of a variety of bus structures.
  • bus structures include, but are not limited to, Industry Standard Architecture (ISA) bus, Micro Channel Architecture (MCA) bus, Peripheral Component Interconnect (PCI) bus.
  • the computer system may also include an input-output interface 630, a network interface 640, a storage interface 650, and the like.
  • the interfaces 630, 640, 650, the memory 610, and the processor 620 can be connected through the bus 600.
  • the input and output interface 630 may provide a connection interface for input and output devices such as a monitor, a mouse, and a keyboard.
  • Network interface 640 provides a connection interface for various networked devices.
  • the storage interface 650 provides a connection interface for external storage devices such as a floppy disk, a USB flash drive, and an SD card.

Abstract

The present disclosure provides a handwriting recognition method and apparatus. The handwriting recognition method includes: detecting information of a plurality of track points corresponding to a handwriting track of a user on a handwriting screen, the plurality of track points including a start track point and a current track point; if the current track point is an end track point, taking the track points between the start track point and the end track point as track points to be recognized, the current track point being the end track point when no new track point is detected within a preset duration after the pen-up moment of the current track point, and the start track point being the track point following the last track point previously input to the text recognition model, or the first track point on the handwriting screen; recognizing the information of the track points to be recognized by using a text recognition model to obtain a text recognition result; and displaying the text recognition result in printed form. In the embodiments of the present disclosure, the text recognition result of the handwriting track is displayed in real time, which makes it convenient for the user to view and correct the text recognition result in real time and can effectively improve the recognition rate.

Description

手写体识别方法及装置、手写体识别系统和交互平板
相关申请的交叉引用
本申请是以申请号为PCT/CN2021/074622、申请日为2021年2月1日的申请,以及申请号为PCT/CN2021/097349、申请日为2021年5月31日的申请为基础,并主张其优先权,上述申请的公开内容在此作为整体引入本申请中。
技术领域
本公开实施例涉及计算机技术领域,尤其涉及一种手写体识别方法及装置、手写体识别系统和交互平板。
背景技术
现有的手写体识别方法中,一般是在用户在电子白板上书写后,对手写轨迹进行识别,并把识别结果的文本以文档的形式进行存储。用户如果想查看识别结果,需在后台打开文档进行浏览,如果想确认识别是否有误,需要与原始手写轨迹逐字进行对照,若识别有误,需在文档里进行二次编辑,这种交互方式用户使用起来非常不方便,且识别效率较低。
发明内容
本公开实施例提供一种手写体识别方法及装置,用于解决现有的手写体识别方法用户对手写体识别结果无法实时查看的问题。
为了解决上述技术问题,本公开是这样实现的:
第一方面,本公开实施例提供了一种手写体识别方法,包括:
检测用户在手写屏上的手写轨迹对应的多个轨迹点的信息,所述信息包括坐标,所述多个轨迹点包括起始轨迹点和当前轨迹点;
根据判定条件判定当前轨迹点是否为终止轨迹点,若当前轨迹点满足所述判定条件,将所述当前轨迹点作为终止轨迹点,将起始轨迹点和终止轨迹点期间的轨迹点作为第一待识别轨迹点;
采用文本识别模型对所述第一待识别轨迹点进行识别,得到第一文本识别结果;
在所述手写屏的第一显示区域以印刷体形式显示所述第一文本识别结果。
在一些实施例中,所述信息还包括用户书写状态,所述用户书写状态包括起笔、运笔 或抬笔,所述判定条件为所述当前轨迹点的抬笔时刻之后的预设时长内未检测到新的轨迹点,所述起始轨迹点为上一次输入给文本识别模型的最后一个轨迹点的下一个轨迹点,或者,所述手写屏的第一个轨迹点。
在一些实施例中,在所述手写屏的第一显示区域以印刷体形式显示所述第一文本识别结果包括:
将所述第一文本识别结果中包括的多个文字存储为文字轨迹,其中,每个文字分别存储为一条文字轨迹。
在一些实施例中,在所述手写屏的第一显示区域以印刷体形式显示所述第一文本识别结果包括:
对所述第一文本识别结果中包括的每个文字分别进行绘制。
在一些实施例中,以印刷体形式显示所述第一文本识别结果之后包括:
接收用户对所述第一文本识别结果中的目标文本的擦除操作,擦除目标文本。
在一些实施例中,擦除操作包括选择目标文本。
在一些实施例中,擦除操作包括第一擦除手势,擦除目标文本包括:
擦除与所述第一擦除手势的轨迹相交的文字轨迹。
在一些实施例中,擦除操作包括第二擦除手势,擦除目标文本包括:
擦除与所述第二擦除手势的轨迹相交的文字轨迹,并擦除与该文字轨迹的标签相同的文字轨迹。
在一些实施例中,标签包括第一标签或第二标签,其中,第一标签包括文字轨迹的行信息,第二标签包括文字轨迹的时间信息、段落信息或批次信息。
在一些实施例中,在所述手写屏的第一显示区域以印刷体形式显示所述第一文本识别结果包括:
对所述第一待识别轨迹点进行分行;
根据分行的结果,对所述第一文本识别结果中包括的多个文字逐行分别进行绘制。
在一些实施例中,对所述第一待识别轨迹点进行分行包括:
根据所述第一待识别轨迹点中各轨迹点的信息,将所述第一待识别轨迹点划分为多个文字;
判断相邻的两个文字是否在同一行,其中相邻的两个文字包括在先书写的第一文字和在后书写的第二文字。
在一些实施例中,判断相邻的两个文字是否在同一行包括:
根据第二文字的高度及其位置坐标、第一文字的高度及其位置坐标,判断相邻的两个文字是否在同一行。
在一些实施例中,所述手写体识别方法还包括:
根据用户对所述第一文本识别结果中的目标文本的擦除操作,擦除目标文本,并擦除与目标文本属于同一行的文字轨迹。
在一些实施例中,所述手写体识别方法还包括:
检测用户在手写屏上的手写轨迹对应的第二待识别轨迹点的信息,所述信息包括坐标、时间和用户书写状态,所述用户书写状态包括:起笔、运笔和抬笔;所述第二待识别轨迹点包括起始轨迹点和终止轨迹点;
根据判定条件判定当前轨迹点是否为第二待识别轨迹点的终止轨迹点,若当前轨迹点满足所述判定条件,将所述当前轨迹点作为第二待识别轨迹点的终止轨迹点,将起始轨迹点和终止轨迹点期间的轨迹点作为第二待识别轨迹点;
采用文本识别模型对所述第二待识别轨迹点进行识别,得到第二文本识别结果;
在第二显示区域以印刷体形式显示所述第二文本识别结果。
在一些实施例中,在第二显示区域以印刷体形式显示所述第二文本识别结果包括:
获取所述第一显示区域的显示信息,所述显示信息包括字体的大小和坐标;
根据所述显示信息,确定所述第二显示区域,并在所述第二显示区域以印刷体形式显示所述第二文本识别结果,所述第二文本识别结果中的字体的大小与所述第一文本识别结果中的字体的大小相同,所述第二文本识别结果中的文字与所述第一文本识别结果中的文字对齐。
在一些实施例中,在第二显示区域以印刷体形式显示所述第二文本识别结果包括:
判断所述第二文本识别结果中的文字与所述第一文本识别结果中的文字是否在同一行;
若判断结果为是,则将所述第二文本识别结果中的该文字与所述第一文本识别结果中的该文字显示在同一行。
在一些实施例中,判断所述第二文本识别结果中的该文字与所述第一文本识别结果中的该文字是否在同一行包括:
根据所述第二文本识别结果中的该文字的位置坐标、所述第一文本识别结果中的该文字的位置坐标，判断所述第二文本识别结果中的该文字与所述第一文本识别结果中的该文字是否在同一行。
在一些实施例中,判断所述第二文本识别结果中的该文字与所述第一文本识别结果中的该文字是否在同一行包括:
根据所述第二文本识别结果中的该文字的位置坐标、所述第一文本识别结果中的该文字的位置坐标，判断所述第二文本识别结果中的该文字与所述第一文本识别结果中的该文字在文本的行方向上的第一间距是否小于第一阈值、在文本的列方向上的第二间距是否小于第二阈值；
在第一间距小于第一阈值、且第二间距小于第二阈值的情况下，判断所述第二文本识别结果中的文字与所述第一文本识别结果中的文字在同一行。
在一些实施例中,所述第一阈值与所述第二文本识别结果中的该文字的宽度正相关;和/或
所述第二阈值与所述第一文本识别结果中的该文字的高度正相关。
在一些实施例中,将所述第二文本识别结果中的该文字与所述第一文本识别结果中的该文字显示在同一行包括:
对显示在同一行的文字,设置相同的标签,所述标签反映文字所在的行信息。
第二方面,本公开实施例提供了一种手写体识别装置,包括:
检测模块,用于检测用户在手写屏上的手写轨迹对应的多个轨迹点的信息,所述信息包括坐标,所述多个轨迹点包括起始轨迹点和当前轨迹点;所述第二待识别轨迹点包括起始轨迹点和终止轨迹点;
判定模块,用于根据判定条件判定当前轨迹点是否为终止轨迹点,若当前轨迹点满足所述判定条件,将所述当前轨迹点作为终止轨迹点,将起始轨迹点和终止轨迹点期间的轨迹点作为第一待识别轨迹点;
识别模块,用于采用文本识别模型对所述第一待识别轨迹点进行识别,得到第一文本识别结果;
显示模块,用于在所述手写屏的第一显示区域以印刷体形式显示所述第一文本识别结果。
在一些实施例中,所述信息还包括用户书写状态,所述用户书写状态包括起笔、运笔或抬笔,所述判定条件为所述当前轨迹点的抬笔时刻之后的预设时长内未检测到新的轨迹点,所述起始轨迹点为上一次输入给文本识别模型的最后一个轨迹点的下一个轨迹点,或者,所述手写屏的第一个轨迹点。
第三方面,本公开实施例提供了一种交互平板,包括触摸模块,显示模块,处理器, 存储器及存储在所述存储器上并可在所述处理器上运行的程序或指令,所述程序或指令被所述处理器执行时实现上述第一方面的手写体识别方法的步骤。
第四方面,本公开实施例提供了一种可读存储介质,所述可读存储介质上存储程序或指令,所述程序或指令被处理器执行时实现上述第一方面的手写体识别方法的步骤。
第五方面,本公开实施例提供了一种手写体识别装置,包括:
存储器;和
耦接至所述存储器的处理器,所述处理器被配置为基于存储在所述存储器装置中的指令,执行上述第一方面的手写体识别方法的一个或多个步骤。
第六方面,本公开实施例提供了手写体识别系统,包括上述第五方面的手写体识别装置,其中,所述手写体识别装置包括:
第一处理器,位于服务器侧,被配置为采用文本识别模型对待识别轨迹点进行识别,得到文本识别结果;
第二处理器,位于终端侧,被配置为对所述文本识别结果中包括的文字逐一进行绘制,并将每个文字存储为一个文字轨迹。
本公开实施例中,对手写轨迹进行实时识别之后,以印刷体形式实时显示手写轨迹的文本识别结果,从而方便用户实时查看和纠正文本识别结果,能够有效提高识别率,并增强了与用户之间的互动性。
附图说明
通过阅读下文优选实施方式的详细描述,各种其他的优点和益处对于本领域普通技术人员将变得清楚明了。附图仅用于示出优选实施方式的目的,而并不认为是对本公开的限制。而且在整个附图中,用相同的参考符号表示相同的部件。在附图中:
图1为本公开一实施例的手写体识别方法的流程示意图;
图2为本公开实施例的手写屏上显示的手写轨迹与文本识别结果的对照图;
图2A为本公开实施例的手写屏上的文本擦除操作的示意图;
图2B为本公开实施例的手写屏上显示的手写轨迹与文本显示结果的对照图;
图2C为本公开实施例的文字同行判断的参考线的示意图;
图2D为本公开实施例的对参差不齐的文本的同行修正示意图;
图2E为本公开实施例的对擦除操作后重写文字的修正示意图;
图3为本公开实施例的手写体识别装置的结构示意图;
图4为本公开实施例的手写体识别装置的组成架构示意图;
图5为本公开实施例的手写体识别装置的框图;
图6是示出用于实现本公开实施例的计算机系统的框图。
具体实施方式
下面将结合本公开实施例中的附图,对本公开实施例中的技术方案进行清楚、完整地描述,显然,所描述的实施例是本公开一部分实施例,而不是全部的实施例。基于本公开中的实施例,本领域普通技术人员在没有作出创造性劳动前提下所获得的所有其他实施例,都属于本公开保护的范围。
请参考图1,本公开实施例提供一种手写体识别方法,包括:步骤11-14。
在步骤11:检测用户在手写屏上的手写轨迹对应的多个轨迹点的信息,所述信息包括坐标,所述多个轨迹点包括起始轨迹点和当前轨迹点。在一些实施例中,所述信息还包括用户书写状态,所述用户书写状态包括起笔、运笔或抬笔。
所述手写屏可以为电子会议白板等手写设备,具有触控模块和显示模块。
本公开实施例中,所述轨迹点可以包括一个字符或多个字符的轨迹点,所述字符可以为中文、英文或数字等。
本公开实施例中,可以把手写屏的左上角作为原点,从左向右延伸为X轴,从上向下延伸为Y轴。或者,也可以把手写屏的左下角为原点,从左向右延伸为X轴,从下向上延伸为Y轴。本公开实施例并不对坐标轴的设置进行限定。
本公开实施例中,起笔是指一个笔画的第一个轨迹点,抬笔是指一个笔画的最后一个轨迹点,运笔是指一个笔画的中间轨迹点。
本公开实施例中,可选的,每个轨迹点的表示方式可以如下:(x,y,t,flag),其中x和y表示每个轨迹点的坐标位置,t为该轨迹点的书写时间,flag表示用户书写状态(起笔、运笔或抬笔)。
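作为示意，单个轨迹点可以用如下 Python 数据结构表示（以下代码仅为便于理解的假设性草图，字段与上文的 (x,y,t,flag) 对应，并非本公开限定的实现方式）：

```python
from dataclasses import dataclass
from enum import Enum

class PenState(Enum):
    DOWN = "起笔"   # 一个笔画的第一个轨迹点
    MOVE = "运笔"   # 一个笔画的中间轨迹点
    UP = "抬笔"     # 一个笔画的最后一个轨迹点

@dataclass
class TrackPoint:
    x: float         # X 轴坐标
    y: float         # Y 轴坐标
    t: float         # 该轨迹点的书写时间（毫秒）
    flag: PenState   # 用户书写状态

# 示例：由两个轨迹点构成的一个笔画
stroke = [TrackPoint(10.0, 20.0, 0.0, PenState.DOWN),
          TrackPoint(12.0, 21.0, 16.0, PenState.UP)]
```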
所述当前轨迹点是指用户在手写屏上的手写的最后一个轨迹点。
在步骤12:根据判定条件判定当前轨迹点是否为终止轨迹点,若当前轨迹点满足所述判定条件,将所述当前轨迹点作为终止轨迹点,将起始轨迹点和终止轨迹点期间的轨迹点作为第一待识别轨迹点。
在一些实施例中,所述判定条件为所述当前轨迹点的抬笔时刻之后的预设时长内未检测到新的轨迹点,所述起始轨迹点为上一次输入给文本识别模型的最后一个轨迹点的下一 个轨迹点,或者,所述手写屏的第一个轨迹点。这里,“最后一个轨迹点”、“下一个轨迹点”都是指输入给文本识别模型的轨迹点。例如,在文字识别功能中断的情况下书写的轨迹点,则不属于输入给文本识别模型的轨迹点。
所述第一待识别轨迹点包含所述手写轨迹的起始轨迹点和终止轨迹点。
应当理解,在连续书写多个文字时,每个文字的书写间隔不超过阈值(例如500ms或2000ms),否则会认为书写的还是单个字。
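终止轨迹点的判定可以示意性地写成如下形式（假设时间以毫秒计，预设时长取500ms仅为举例，并非本公开限定的实现）：

```python
def is_end_point(current_point, next_point_time=None, preset_ms=500):
    """current_point：含 't'（抬笔时刻，毫秒）与 'flag'（'起笔'/'运笔'/'抬笔'）的字典。
    若当前轨迹点为抬笔点，且其抬笔时刻之后 preset_ms 内未检测到新的轨迹点
    （next_point_time 为 None 或晚于 preset_ms），则判定其为终止轨迹点。"""
    if current_point["flag"] != "抬笔":
        return False
    if next_point_time is None:
        return True   # 预设时长内没有检测到任何新的轨迹点
    return next_point_time - current_point["t"] > preset_ms

# 示例
print(is_end_point({"t": 1000.0, "flag": "抬笔"}))                          # True
print(is_end_point({"t": 1000.0, "flag": "抬笔"}, next_point_time=1200.0))  # False
```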
在步骤13:采用文本识别模型对所述第一待识别轨迹点进行识别,得到第一文本识别结果。第一文本识别结果可以包括一个文字或包括多个文字的文本段落。
在步骤14:在所述手写屏的第一显示区域以印刷体形式显示所述第一文本识别结果。
在一些实施例中,将所述第一文本识别结果中包括的多个文字存储为文字轨迹,其中,每个文字存储为一条文字轨迹。这样,可以对第一文本识别结果中的每个文字进行分别的绘制、修改、擦除等处理。例如,对所述第一文本识别结果中包括的多个文字逐一分别进行绘制。
应当理解,也可以对所述第一待识别轨迹点进行分行;根据分行的结果,对所述第一文本识别结果中包括的多个文字逐行分别进行绘制。这样,对第一文本识别结果中的文本进行修改、擦除等操作时,可实现按分行的处理。例如,所述第一文本识别结果中包括的文字可以与同一标签关联,该标签可以包括文字的行信息。可以为每个轨迹点、每个文字轨迹或文字设置标签。
当然,也可以将第一文本识别结果中包括的文字整体存储为一条文字轨迹,并作为一个整体进行绘制。文字轨迹可以存储在指定的存储区。这样,对第一文本识别结果进行修改、擦除等操作时,可实现整体处理。例如,所述第一文本识别结果中包括的文字可以与同一标签关联,该标签可以包括文字的时间信息、段落信息或批次信息。
所述印刷体例如中文采用宋体、楷体或黑体等标准文本,英文和数字采用Times New Roman等标准文本,以与手写轨迹进行区分。
请参考图2,图2为本公开实施例的手写屏上显示的手写轨迹与文本识别结果的对照图。
本公开实施例中,对手写轨迹进行实时识别之后,以印刷体形式实时显示手写轨迹的文本识别结果,从而方便用户实时查看和纠正文本识别结果,能够有效提高识别率,并增强了与用户之间的互动性。
由于文本识别模型接收到的是很多轨迹点坐标,这些轨迹点是按照书写时间顺序排 列的,没有进行分行。而文本识别模型是按照一行一行的文本轨迹点进行识别的,所以首先需要对所有轨迹点进行分行(若用户书写了多行,就需要对轨迹点进行分行,若用户只写了一行,则不需要分行)。即,本公开实例中,在采用文本识别模型对所述第一待识别轨迹点的信息进行识别之前,还可以对所述第一待识别轨迹点进行分行。在一些实施例中,根据分行的结果,设置包括文字的行信息的标签。
本公开实施例中,可以使用投影法对第一待识别轨迹点进行分行,即:对所述第一待识别轨迹点进行分行包括:
步骤21:获取所述第一待识别轨迹点的每一Y轴坐标值对应的所述第一待识别轨迹点的X轴坐标值的个数;
步骤22:根据Y轴上所述X轴坐标值的个数,对所述第一待识别轨迹点进行分行。
即把第一待识别轨迹点中的所有轨迹点的X轴坐标值的个数投影到Y轴上,若一行中字数比较多,即轨迹点较多时,在Y轴上的X轴坐标值的个数则会比较大,若手写轨迹出现分行,则前后两行中的空白位置处在Y轴上的X轴坐标值的个数就较小或者为0,即出现波谷,把波谷的值作为分行的依据。
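上述投影分行过程的一个最小化 Python 草图如下（假设轨迹点以 (x, y) 元组给出且已完成倾斜矫正；波谷直接取投影值为0的区段，实际实现中阈值可按需调整）：

```python
def split_rows_by_projection(points):
    """points: [(x, y), ...]。把轨迹点个数按 Y 轴坐标（取整）做投影，
    以投影值为 0 的连续区段（波谷）作为分行依据，返回每一行的 (y_start, y_end)。"""
    if not points:
        return []
    ys = [int(round(y)) for _, y in points]
    y_min, y_max = min(ys), max(ys)
    counts = [0] * (y_max - y_min + 1)
    for y in ys:
        counts[y - y_min] += 1        # 该 Y 值上 X 轴坐标值的个数
    rows, start = [], None
    for i, c in enumerate(counts + [0]):  # 末尾补 0 以闭合最后一行
        if c > 0 and start is None:
            start = i
        elif c == 0 and start is not None:
            rows.append((y_min + start, y_min + i - 1))
            start = None
    return rows
```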
然而,如果用户书写的第一待识别轨迹点对应的文字是倾斜的,此时,如果按照上述投影法对第一待识别轨迹点进行分行则会不准确,因而,可选的,本公开实施例中,对所述第一待识别轨迹点进行分行之前还可以包括:
判断所述第一待识别轨迹点对应的手写轨迹是否发生倾斜;
若所述第一待识别轨迹点对应的手写轨迹发生倾斜,对所述第一待识别轨迹点的坐标进行倾斜矫正,得到矫正后的坐标。
由于待识别轨迹点的坐标取值范围可能是0到几千或者几万,为了方便计算,消除数量级的影响,
在另一些实施例中,对所述第一待识别轨迹点进行分行也可以包括:根据所述第一待识别轨迹点中各轨迹点的信息,将所述第一待识别轨迹点划分为多个文字;判断相邻的两个文字是否在同一行,其中相邻的两个文字包括在先书写的第一文字和在后书写的第二文字。
可以根据第二文字的高度及其位置坐标、第一文字的高度及其位置坐标,判断相邻的两个文字是否在同一行。例如,判断第二文字的左侧边缘的横坐标与第一文字的右侧边缘的横坐标的第一差值是否小于第一阈值;判断第二文字的顶侧边缘的纵坐标与第一文字的顶侧边缘的纵坐标的第二差值是否小于第二阈值;在第一差值小于第一阈值、且第二差值 小于第二阈值的情况下,判断第二文字与第一文字在同一行。
所述第一阈值与第二文字的宽度正相关,例如所述第一阈值为第二文字的宽度的一半。所述第二阈值与第一文字的高度正相关,例如所述第二阈值为第一文字的高度的一半。
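相邻文字同行判断的一种示意写法如下（文字以包围框字典表示，阈值取法仅沿用上文举例，并非唯一实现）：

```python
def on_same_row(first_char, second_char):
    """first_char 为在先书写的文字、second_char 为在后书写的文字，
    均以包围框字典表示：{'left', 'right', 'top', 'width', 'height'}。
    第一阈值取在后书写文字宽度的一半，第二阈值取在先书写文字高度的一半（仅为举例）。"""
    first_threshold = second_char["width"] / 2
    second_threshold = first_char["height"] / 2
    row_gap = abs(second_char["left"] - first_char["right"])  # 行方向上的第一差值
    col_gap = abs(second_char["top"] - first_char["top"])     # 列方向上的第二差值
    return row_gap < first_threshold and col_gap < second_threshold
```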
本公开实施例中,可选的,采用文本识别模型对所述第一待识别轨迹点的信息进行识别,得到第一文本识别结果之前还包括:将第一待识别轨迹点的坐标归一到同一数值范围内,例如归一化到(0,1)内。
本公开实施例中,可以采用Seq2Seq网络对所述第一待识别轨迹点的信息进行文本识别。当然,在本公开的其他一些实施例中,也可以采用其他文本识别模型,例如RNN网络等。
本公开实施例中,可选的,采用文本识别模型对所述第一待识别轨迹点的信息进行识别,得到第一文本识别结果之后还包括:对所述第一文本识别结果进行语义矫正。例如,所述语义矫正可以包括:中文语义校正和/或英文语义矫正,根据前后的语义信息,当一个英文单字中有字母识别错误时,把错误的字母矫正过来。
由于用户在电子白板上书写时,自由度很高,有的用户喜欢先在电子白板的右半边书写,再写左半边,有的喜欢先在中间区域书写,再写上半部分或者下半部分,而现有的手写体识别方法是根据用户书写的时间先后关系进行识别的,把识别结果一次性写入文档中,文档里也是根据识别顺序进行存储的,没有考虑到原始手写轨迹排版的空间位置关系,当用户将识别结果与原始手写轨迹进行对照时,若原始手写轨迹书写的顺序比较随意,会导致用户需要根据识别结果去找原始手写轨迹的位置,十分不便。
为解决上述问题,本公开实施例中,可选的,在第一显示区域以印刷体形式显示所述第一文本识别结果包括:
步骤31:去除所述手写屏上的所述第一待识别轨迹点对应的手写轨迹;
步骤32:根据所述第一待识别轨迹点的坐标,确定第一显示区域;
步骤33:在所述第一显示区域以印刷体形式显示所述第一文本识别结果。
本公开实施例中,对手写轨迹进行识别之后,实时擦除手写屏上的手写轨迹,并在手写轨迹所在区域以印刷体形式实时显示手写轨迹的文本识别结果,从而进一步方便了用户实时查看和纠正文本识别结果。
本公开实施例中,可选的,根据所述第一待识别轨迹点的坐标,确定第一显示区域包括:
步骤41:获取所述第一待识别轨迹点中的最小X轴坐标、最小Y轴坐标、最大X轴 坐标和最大Y轴坐标,即x min,y min,x max,y max
步骤42:根据所述第一待识别轨迹点中的最小X轴坐标、最小Y轴坐标、最大X轴坐标和最大Y轴坐标,确定第一矩形框,将所述第一矩形框作为所述第一显示区域。
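步骤41和步骤42可以示意性地实现为如下草图（假设轨迹点以 (x, y) 元组给出）：

```python
def first_display_region(points):
    """points: [(x, y), ...]。取第一待识别轨迹点的最小/最大 X、Y 坐标，
    构成第一矩形框并作为第一显示区域返回。"""
    xs = [x for x, _ in points]
    ys = [y for _, y in points]
    return {"x": min(xs), "y": min(ys),
            "width": max(xs) - min(xs), "height": max(ys) - min(ys)}
```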
然而,如果用户书写的第一待识别轨迹点对应的文字是倾斜的,此时,如果按照第一待识别轨迹点的原始坐标确定第一显示区域,则第一显示区域为倾斜的矩形框,本公开实施例中,可选的,根据所述第一待识别轨迹点的坐标,确定第一显示区域包括:
步骤51:判断所述第一待识别轨迹点对应的手写轨迹是否发生倾斜;
步骤52:若所述第一待识别轨迹点对应的手写轨迹发生倾斜,对所述第一待识别轨迹点的坐标进行倾斜矫正,得到矫正后的坐标。
步骤53:根据矫正后的坐标确定第一显示区域。
本公开实施例中，可选的，对所述第一待识别轨迹点的坐标进行倾斜矫正的方法可以是：首先可以根据第一待识别轨迹点的原始坐标确定一个矩形框，然后确定一个点作为该矩形框的旋转中心对矩形框进行旋转，该旋转中心可以是矩形框的中心点或者其他点。
根据矫正后的坐标,按照步骤41和步骤42的方式确定第一矩形框作为第一显示区域。也就是说,步骤41和42中的最小X轴坐标、最小Y轴坐标、最大X轴坐标和最大Y轴坐标均为矫正后的坐标。
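倾斜矫正中的旋转过程可以示意为如下 Python 草图（倾斜角的估计方式本公开未作限定，此处假设角度已由外部给出，旋转中心取外接矩形的中心点）：

```python
import math

def deskew_points(points, angle_deg):
    """以轨迹外接矩形的中心为旋转中心，将所有轨迹点反向旋转 angle_deg 度，
    得到矫正后的坐标。points: [(x, y), ...]。"""
    xs = [x for x, _ in points]
    ys = [y for _, y in points]
    cx, cy = (min(xs) + max(xs)) / 2, (min(ys) + max(ys)) / 2
    theta = math.radians(-angle_deg)
    cos_t, sin_t = math.cos(theta), math.sin(theta)
    return [(cx + (x - cx) * cos_t - (y - cy) * sin_t,
             cy + (x - cx) * sin_t + (y - cy) * cos_t) for x, y in points]
```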
本公开实施例中,可选的,在所述第一显示区域以印刷体形式显示所述第一文本识别结果包括:
根据所述第一显示区域的大小,确定所述第一文本识别结果的字体的大小;
根据所述第一文本识别结果的字体的大小,确定所述第一文本识别结果的字间距;
在所述第一显示区域以确定的字体的大小和字间距显示所述第一文本识别结果。
本公开实施例中,可选的,可以预先存储字体和字间距的对应关系,当字体大小确定后,字间距也随之确定。当然,也不排除可以根据第一显示区域的大小来确定字间距的方式。
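根据显示区域确定字体大小和字间距的一个示意性草图如下（字体大小与字间距的对应关系及具体数值均为举例，并非本公开限定的实现）：

```python
FONT_TO_SPACING = {24: 4, 32: 6, 48: 8, 64: 10}  # 预先存储的字体大小->字间距对应关系（数值仅为示例）

def layout_params(region, char_count):
    """根据第一显示区域的大小确定字体大小（不超过区域高度，
    且能使 char_count 个文字排进区域宽度），再由字体大小查表确定字间距。"""
    candidates = [s for s in sorted(FONT_TO_SPACING)
                  if s <= region["height"]
                  and char_count * (s + FONT_TO_SPACING[s]) <= region["width"]]
    font_size = candidates[-1] if candidates else min(FONT_TO_SPACING)
    return font_size, FONT_TO_SPACING[font_size]

# 示例：在宽 500、高 60 的第一显示区域内排布 8 个文字
print(layout_params({"width": 500, "height": 60}, 8))  # (48, 8)
```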
本公开实施例中,可选的,在所述手写轨迹所在区域以印刷体形式显示所述第一文本识别结果之后还包括:
步骤51:接收用户对所述第一文本识别结果中的目标文本的擦除操作,擦除目标文本。
在一些实施例中,擦除操作包括选择目标文本。例如,圈选目标文本后,点击删除/擦除按钮,即可擦除目标文本。
在另一些实施例中,可以根据用户的擦除手势,执行相应的擦除操作。例如,在选择擦除功能后,删除与擦除轨迹相交的文字轨迹。或者,用户通过擦除手势,例如手掌接触显示面板,产生橡皮擦标志,删除与手移动的轨迹相交的文字轨迹。擦除手势例如可以是扁平的椭圆手势、多折线、Z字图形、反N、叉号等等,只要不对书写产生影响即可。应当理解,擦除手势与擦除操作的对应关系可以根据实际需要进行设置。用户可以对文本识别结果进行部分或全部擦除。在擦除操作后,用户可以重写。
对于部分擦除,如前所述,第一文本识别结果中包括的每个文字分别存储为一条文字轨迹,这样,可以仅删除与擦除手势的轨迹相交的文字轨迹。即,在一些实施例中,根据用户的第一擦除手势,擦除与所述第一擦除手势的轨迹相交的文字轨迹。
请参考图2A,图2A为本公开实施例的手写屏上的文本擦除操作的示意图。
图2A(a)为手写屏上显示的文本。图2A(b)示出擦除方式1的示意图。图2A(c)示出擦除方式2的示意图。图2A(d)示出擦除结果。
对于图2A(b)所示的擦除方式,在激活擦除开关后,在用户触摸手写屏上的画板时,接触点会出现橡皮擦图标,例如橡皮擦为颜色为透明画笔;橡皮擦在画板上移动过程中,在画板上绘制擦除轨迹,即擦除手势;当擦除手势结束后,例如用户手指离开屏幕,则根据擦除手势的轨迹,与轨迹相交的文字轨迹会被从画板擦除,如图2A(d)所示。同时,与轨迹相交的文字轨迹也会被从指定存储区中移除。
对于图2A(c)所示的擦除方式,在元素选区模式下,用户手指与屏幕触摸的轨迹可形成一条虚线,即擦除手势,与文字相交;当擦除手势结束后,例如用户手指离开屏幕,会出现矩形选框和删除按钮;点击删除按钮后,与轨迹相交的文字轨迹会被从画板擦除,如图2A(d)所示。同时,与轨迹相交的文字轨迹也会被从指定存储区中移除。
对于部分删除,在另一些实施例中,根据用户的第二擦除手势,擦除与所述第二擦除手势的轨迹相交的文字轨迹,并擦除与该文字轨迹的标签相同的文字轨迹。这里的标签为第一标签,包括文字轨迹的行信息。即,可以擦除与所述第二擦除手势的轨迹相交的文字轨迹,并擦除与该文字轨迹属于同一行的文字轨迹。类似地,这些被擦除的文字轨迹会被从画板清除,同时也会被从指定存储区中移除。
对于全部删除,如前所述,第一文本识别结果中包括的文字可以与同一标签关联,这样可以通过删除与擦除手势的轨迹相交的文字轨迹,方便地实现对第一文本识别结果中包括的所有文字的整体删除。这里的标签为第二标签,包括文字轨迹的时间信息、段落信息或批次信息。即,可以擦除与所述第二擦除手势的轨迹相交的文字轨迹,并擦除与该文字 轨迹属于同一时间段、同一段落或同一批次手写的文字轨迹。
在上述实施例中,通过将每个文字单独进行绘制和单独存储文字轨迹,可以方便地实现根据不同的擦除手势进行不同的擦除操作,例如,既可以仅删除与擦除手势的轨迹相交的文字轨迹,也可以删除与擦除手势的轨迹相交的文字轨迹及其同一行的文字轨迹,还可以删除与擦除手势的轨迹相交的文字轨迹及其同一时间段、同一段落或同一批次手写的文字轨迹。
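上述按手势擦除文字轨迹的逻辑可以示意性地写成如下草图（以“手势轨迹点落入文字包围框”近似判定相交，标签含义沿用上文，字段名均为假设）：

```python
def erase(character_tracks, gesture_polyline, erase_same_label=False):
    """character_tracks：每个文字一条轨迹，形如
    {"char": "北", "bbox": (x0, y0, x1, y1), "label": "row-1"}。
    第一擦除手势：仅擦除与手势轨迹相交的文字轨迹；
    第二擦除手势（erase_same_label=True）：再擦除与之标签相同的文字轨迹。"""
    def hit(track):
        x0, y0, x1, y1 = track["bbox"]
        return any(x0 <= x <= x1 and y0 <= y <= y1 for x, y in gesture_polyline)

    if not erase_same_label:
        return [t for t in character_tracks if not hit(t)]
    hit_labels = {t["label"] for t in character_tracks if hit(t)}
    return [t for t in character_tracks
            if not hit(t) and t["label"] not in hit_labels]
```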
另外,应当理解不管是否执行擦除操作,在以印刷体形式显示所述第一文本识别结果之后,可以继续书写文字。相应地,可以对继续书写的手写轨迹对应的待识别轨迹点,进行与前述第一待识别轨迹点类似的检测、判定、识别和显示等处理。即,手写体识别方法还包括步骤52-步骤S55。
步骤52:检测用户在手写屏上的手写轨迹对应的第二待识别轨迹点的信息,所述信息包括坐标、时间和用户书写状态,所述用户书写状态包括起笔、运笔和抬笔,所述第二待识别轨迹点包括起始轨迹点和终止轨迹点。
本公开实施例中,在步骤S52,用户可以在擦除区域进行重写,也可以在其他位置进行重写。即,在步骤S52,检测用户重写的手写轨迹对应的第二待识别轨迹点的信息,所述信息包括坐标、时间和用户书写状态,所述用户书写状态包括:起笔、运笔和抬笔;所述第二待识别轨迹点包括起始轨迹点和终止轨迹点。
当然,本公开实施例中,用户也可以不执行擦除操作,在其他位置进行书写。步骤53:根据判定条件判定当前轨迹点是否为第二待识别轨迹点的终止轨迹点,若当前轨迹点满足所述判定条件,将所述当前轨迹点作为第二待识别轨迹点的终止轨迹点,将起始轨迹点和终止轨迹点期间的轨迹点作为第二待识别轨迹点。
在一些实施例中,所述判定条件为所述当前轨迹点的抬笔时刻之后的预设时长内未检测到新的轨迹点,所述起始轨迹点为上一次输入给文本识别模型的最后一个轨迹点的下一个轨迹点。
步骤54:采用文本识别模型对所述第二待识别轨迹点进行识别,得到第二文本识别结果。
在对第二待识别轨迹点进行识别之前,还可以对第二待识别轨迹点进行分行和/或归一化等处理,处理方式参见上述第一待识别轨迹点,此处不再一一描述。同样的,在对第二待识别轨迹点进行识别之后,也可以进行语义矫正的处理。
步骤55:在第二显示区域以印刷体形式显示所述第二文本识别结果。
所述印刷体例如中文采用宋体、楷体或黑体等,英文和数字采用Times New Roman等,以与手写轨迹进行区分,可选的,与第一文本识别结果采用的印刷体相同。
本公开实施例中,对手写轨迹识别之后,实时擦除手写屏上的手写轨迹,并在手写轨迹所在区域以印刷体形式实时显示手写轨迹的识别结果,从而方便用户实时查看、核对以及纠正识别结果,提高了与用户之间的交互性。
本公开实施例中,支持用户进行多次擦除和重写。
本公开实施例中,可选的,在第二显示区域以印刷体形式显示所述第二文本识别结果包括:
步骤61:获取所述第二待识别轨迹点中的最小X轴坐标、最小Y轴坐标、最大X轴坐标和最大Y轴坐标;
步骤62:根据所述第二待识别轨迹点中的最小X轴坐标、最小Y轴坐标、最大X轴坐标和最大Y轴坐标,确定第二矩形框,将所述第二矩形框作为所述第二显示区域。
同样的,在所述第二显示区域以印刷体形式显示所述第二文本识别结果之前还可以包括:
步骤71：判断所述第二待识别轨迹点对应的手写轨迹是否发生倾斜；
步骤72：若所述第二待识别轨迹点对应的手写轨迹发生倾斜，对所述第二待识别轨迹点的坐标进行倾斜矫正，得到矫正后的坐标。
步骤73:根据矫正后的坐标,确定第二显示区域。
例如根据矫正后的坐标,按照上述步骤61和步骤62确定第二显示区域。
本公开实施例中,可选的,在第二显示区域以印刷体形式显示所述第二文本识别结果包括:
步骤81:获取所述第一文本识别结果的显示信息,所述显示信息包括字体的大小和坐标;步骤82:根据所述显示信息,确定所述第二显示区域,并在所述第二显示区域以印刷体形式显示所述第二文本识别结果。所述第二文本识别结果中的字体的大小与所述第一文本识别结果中的字体的大小相同,所述第二文本识别结果中的文字与所述第一文本识别结果中的文字对齐。
在步骤81,获取所述第一显示区域的显示信息。在步骤82,应当理解:由于不同的文字的字形差异,“字体的大小相同”表示大致相同,并不限定为严格的宽度和高度都相同;“文字对齐”也表示大致对齐,并不限定为在行、列方向都严格对齐。
在一些实施例中，步骤82包括：步骤821，判断所述第二文本识别结果中的文字与所述第一文本识别结果中的文字是否在同一行；步骤822，若判断结果为是，则将所述第二文本识别结果中的该文字与所述第一文本识别结果中的文字显示在同一行。应当理解，可以仅在第二显示区域与第一显示区域之间的距离（包括行方向、列方向的距离）较近的情况下，才判断所述第二文本识别结果中的文字与所述第一文本识别结果中的文字是否在同一行。
步骤821，可以根据所述第二文本识别结果中的该文字的位置坐标、所述第一文本识别结果中的该文字的位置坐标，判断所述第二文本识别结果中的该文字与所述第一文本识别结果中的该文字在文本的行方向上的第一间距是否小于第一阈值、在文本的列方向上的第二间距是否小于第二阈值；在第一间距小于第一阈值、且第二间距小于第二阈值的情况下，判断所述第二文本识别结果中的文字与所述第一文本识别结果中的文字在同一行。
对于第二文本识别结果中的该文字在第一文本识别结果中的该文字的右侧的情况下,在文本的行方向上的第一间距,可以用第二文本识别结果中的该文字的左侧边缘的横坐标与第一文本识别结果中的该文字的右侧边缘的横坐标的差值来表征。
类似地,对于第二文本识别结果中的该文字在第一文本识别结果中的该文字的左侧的情况下,在文本的行方向上的第一间距,可以用第二文本识别结果中的该文字的右侧边缘的横坐标与第一文本识别结果中的该文字的左侧边缘的横坐标的差值来表征。
第一阈值也可以根据各文字之间的大小和位置关系确定。第一阈值也可以与第二文本识别结果中的该文字的宽度正相关,例如,可以为第二文本识别结果中的该文字的宽度的一半。
在文本的列方向上的第二间距，可以用第二文本识别结果中的该文字的顶侧边缘的纵坐标与第一文本识别结果中的该文字的顶侧边缘的纵坐标的差值来表征。
第二阈值与第一文本识别结果中的该文字的高度正相关,例如,可以为第一文本识别结果中的该文字的高度的一半。
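步骤821、822中的同行判断与对齐可以示意性地合并为如下草图（阈值取法沿用上文举例：第一阈值取新文字宽度的一半，第二阈值取已显示文字高度的一半；字段名均为假设）：

```python
def align_with_existing(new_char, existing_chars):
    """new_char 与 existing_chars 中的元素均为
    {'left', 'right', 'top', 'width', 'height', 'label'} 形式的包围框字典。
    若新文字与某个已显示文字在行方向上的第一间距小于第一阈值、
    在列方向上的第二间距小于第二阈值，则判定同行：
    对齐顶边坐标并设置相同的行标签。"""
    for ref in existing_chars:
        row_gap = min(abs(new_char["left"] - ref["right"]),
                      abs(ref["left"] - new_char["right"]))   # 新文字可能在左侧或右侧
        col_gap = abs(new_char["top"] - ref["top"])
        if row_gap < new_char["width"] / 2 and col_gap < ref["height"] / 2:
            new_char["top"] = ref["top"]      # 与在先显示的文字对齐
            new_char["label"] = ref["label"]  # 设置相同的行标签
            break
    return new_char
```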
在上述实施例中,当画板上已经有一部分文字,即在第一显示区域以印刷体显示第一文本识别结果后,在书写新文字的时候,可以根据已有文字的边界信息,例如文字的左上、右下等信息,确定新书写文字的显示区域,包括起始位置、文字大小等信息。由此,可以根据在先显示的文字,修正在后书写文字的显示。
请参考图2B,图2B为本公开实施例的手写屏上显示的手写轨迹与文本显示结果的对照图。
可以看出,在图2B的(c)中,文本显示结果呈现出:根据先显示的文字“北”,对图2B的(a)和(b)中手写轨迹1和2中后书写的文字“京”的修正显示。
另外,对于文字的同行判断,可以设置参考线,如图2C所示。图2C为本公开实施例的文字同行判断的参考线的示意图。图2C(a)、(b)分别示出上参考线和下参考线。
当在后书写的文字在行方向上左右两侧都存在在先的显示的文字的情况下,可以任一侧的在先显示文字为参照,判断在后书写的文字与在先显示的文字是否同行。
通过上述实施例,可以解决以下问题中的至少一个:由于每次书写的位置和字体大小并不统一,则会导致出现所识别出的标准文本的文字大小不一、位置摆放不整齐等问题,或者出现文字重叠的现象;在擦除第一文本识别结果中的部分文字(即目标文本)后,再次在相同位置书写时,也会出现新的文字与已有文字的位置偏差或大小偏差等问题。
图2D示出本公开实施例的对参差不齐的文本的同行修正示意图。图2D(a)、(b)分别示出同行修正前后的文本。
图2E示出本公开实施例的对擦除操作后重写文字的修正示意图。图2E(a)、(b)、(c)分别示出擦除前文本、擦除后重写文本、修正后显示的文本。可以看出,通过本公开的一些实施例,可以对手写识别后的文字进行标准文本的样式(如印刷体)显示,并对这些文本进行标准排列,使得显示的文本中各个文字的格式整齐。另外,本公开的一些实施例还能够支持修改后重新书写的文本也与在先书写显示的文本保持排版整齐的显示,而不会出现擦除后重写的文字与在先书写显示的文本排版错乱的情况。
在一些实施例中,还可以对显示在同一行的文字,设置相同的标签,所述标签反映文字所在的行信息。这样,即使属于不同批次手写的文字,只要显示在同一行,根据对应的擦除手势,仍然可以被逐行擦除。
类似地,对于第一文本识别结果中包括的文字分别逐行绘制的情形,可以根据用户对所述第一文本识别结果中的目标文本的擦除操作,擦除目标文本,并擦除与目标文本属于同一行的文字轨迹。
本公开实施例中,可选的,可以预先存储字体和字间距的对应关系,当字体大小确定后,字间距也随之确定。当然,也不排除可以根据第二显示区域的大小来确定字间距的方式。
根据上述显示方法,可以提高文本识别结果的显示效果。
请参考图3,本公开实施例还提供一种手写体识别装置300,包括:
检测模块301,用于检测用户在手写屏上的手写轨迹对应的多个轨迹点的信息,所述 信息包括坐标、时间和用户书写状态,所述用户书写状态包括:起笔、运笔和抬笔;所述多个轨迹点包括起始轨迹点和当前轨迹点;
判定模块302,用于根据判定条件判定当前轨迹点是否为终止轨迹点,若当前轨迹点满足所述判定条件,将所述当前轨迹点作为终止轨迹点,将起始轨迹点和终止轨迹点期间的轨迹点作为第一待识别轨迹点,所述判定条件为所述当前轨迹点的抬笔时刻之后的预设时长内未检测到新的轨迹点,所述起始轨迹点为上一次输入给文本识别模型的最后一个轨迹点的下一个轨迹点,或者,所述手写屏的第一个轨迹点;
识别模块303,用于采用文本识别模型对所述第一待识别轨迹点进行识别,得到第一文本识别结果;
显示模块304,用于在所述手写屏的第一显示区域以印刷体形式显示所述第一文本识别结果。
本公开实施例中,对手写轨迹进行实时识别之后,以印刷体形式实时显示手写轨迹的文本识别结果,从而方便用户实时查看和纠正文本识别结果,能够有效提高识别率,并增强了与用户之间的互动性。
本公开实施例中,可选的,所述手写体识别装置还包括以下至少一项:
分行模块,用于对所述第一待识别轨迹点进行分行;
归一化模块,用于将所述第一待识别轨迹点的坐标归一到同一数值范围内。
本公开实施例中,可选的,所述分行模块,用于获取所述第一待识别轨迹点的每一Y轴坐标值对应的所述第一待识别轨迹点的X轴坐标值的个数;根据Y轴上所述X轴坐标值的个数,对所述第一待识别轨迹点进行分行。
本公开实施例中,可选的,所述手写体识别装置还包括:
语义识别模块,用于对所述第一文本识别结果进行语义识别,若所述第一文本识别结果具有语义错误,进行语义矫正。
本公开实施例中,可选的,所述显示模块,用于去除所述手写屏上的所述第一待识别轨迹点对应的手写轨迹;根据所述第一待识别轨迹点的坐标,确定第一显示区域;在所述第一显示区域以印刷体形式显示所述第一文本识别结果。
本公开实施例中,可选的,所述显示模块,用于获取所述第一待识别轨迹点中的最小X轴坐标、最小Y轴坐标、最大X轴坐标和最大Y轴坐标;
根据所述第一待识别轨迹点中的最小X轴坐标、最小Y轴坐标、最大X轴坐标和最大Y轴坐标,确定第一矩形框,将所述第一矩形框作为所述第一显示区域。
本公开实施例中,可选的,所述显示模块还用于判断所述第一待识别轨迹点对应的手写轨迹是否发生倾斜;若所述第一待识别轨迹点对应的手写轨迹发生倾斜,对所述第一待识别轨迹点的坐标进行倾斜矫正,得到矫正后的坐标;根据矫正后的坐标确定第一显示区域。
本公开实施例中,可选的,所述显示模块还用于根据所述第一显示区域的大小,确定所述第一文本识别结果的字体的大小;根据所述第一文本识别结果的字体的大小,确定所述第一文本识别结果的字间距;在所述第一显示区域以确定的字体的大小和字间距显示所述第一文本识别结果。
本公开实施例中,可选的,所述手写体识别装置还包括:
擦除模块,用于接收用户对所述第一文本识别结果中的目标文本的擦除操作,擦除目标文本;
其中,所述检测模块,还用于检测用户重写的手写轨迹对应的第二待识别轨迹点的信息,所述信息包括坐标、时间和用户书写状态,所述用户书写状态包括:起笔、运笔和抬笔;所述第二待识别轨迹点包括起始轨迹点和终止轨迹点;
所述判定模块,还用于根据判定条件判定当前轨迹点是否为第二待识别轨迹点的终止轨迹点,若当前轨迹点满足所述判定条件,将所述当前轨迹点作为第二待识别轨迹点的终止轨迹点,将起始轨迹点和终止轨迹点期间的轨迹点作为第二待识别轨迹点,所述判定条件为所述当前轨迹点的抬笔时刻之后的预设时长内未检测到新的轨迹点,所述起始轨迹点为上一次输入给文本识别模型的最后一个轨迹点的下一个轨迹点;
所述识别模块,还用于采用文本识别模型对所述第二待识别轨迹点进行识别,得到第二文本识别结果;
所述显示模块,还用于在第二显示区域以印刷体形式显示所述第二文本识别结果。
本公开实施例中,可选的,所述显示模块,还用于获取所述第二待识别轨迹点中的最小X轴坐标、最小Y轴坐标、最大X轴坐标和最大Y轴坐标;根据所述第二待识别轨迹点中的最小X轴坐标、最小Y轴坐标、最大X轴坐标和最大Y轴坐标,确定第二矩形框,将所述第二矩形框作为所述第二显示区域。
本公开实施例中,可选的,所述显示模块,还用于获取所述第一文本识别结果的显示信息,所述显示信息包括字体的大小和坐标;根据所述显示信息,确定所述第二显示区域,并在所述第二显示区域以印刷体形式显示所述第二文本识别结果,所述第二文本识别结果中的字体的大小与所述第一文本识别结果中的字体的大小相同,所述第二文本识别结果中 的文字与所述第一文本识别结果中的文字对齐。
本公开实施例中,上述各功能模块,可以集成在一个实体设备中,也可以设置在多个实体设备中,例如,上述用于检测用户在手写屏上的手写轨迹对应的轨迹点的信息的检测模块、用于以印刷体形式显示文本识别结果的显示模块,可以设置在手写屏上,手写屏此时可以称为前端,而用于采用文本识别模型对待识别轨迹点的信息进行识别,得到文本识别结果的识别模块,可以设置在服务器上,服务器此时也可以称为后端,请参见图4,手写屏检测到的轨迹点的信息发送给服务器,服务器对轨迹点的信息进行识别,得到文本识别结果,并发送给手写屏,由手写屏进行显示。
本公开实施例还提供一种交互平板,包括触摸模块,显示模块,处理器和存储器,以及,存储在存储器并可在所述处理器上运行的程序或指令,该程序或指令被处理器执行时实现上述手写体识别方法实施例的各个过程,且能达到相同的技术效果,为避免重复,这里不再赘述。
本公开实施例还提供一种手写体识别装置，如图5所示。手写体识别装置包括：存储器510；和耦接至所述存储器510的处理器520，所述处理器被配置为基于存储在所述存储器装置中的指令，执行本公开中任意一些实施例所述的手写体识别方法的一个或多个步骤。
本公开实施例还提供一种手写体识别系统,包括前述实施例所述的手写体识别装置,其中,所述手写体识别装置包括:第一处理器,位于服务器侧,被配置为采用文本识别模型对待识别轨迹点进行识别,得到文本识别结果;第二处理器,位于终端侧,被配置为对所述文本识别结果中包括的文字逐一进行绘制,并将每个文字存储为一个文字轨迹。
本公开实施例还提供一种可读存储介质,所述可读存储介质上存储有程序或指令,该程序或指令被处理器执行时实现上述手写体识别方法实施例的各个过程,且能达到相同的技术效果,为避免重复,这里不再赘述。
其中,所述处理器为上述实施例中所述的终端中的处理器。所述可读存储介质,包括计算机可读存储介质,如计算机只读存储器(Read-Only Memory,ROM)、随机存取存储器(Random Access Memory,RAM)、磁碟或者光盘等。
图6是示出用于实现本公开一些实施例的计算机系统的框图。
如图6所示,计算机系统可以通用计算设备的形式表现,该计算机系统可以用来实现上述实施例的手写文本识别装置。计算机系统包括存储器610、处理器620和连接不同系统组件的总线600。
存储器610例如可以包括系统存储器、非易失性存储介质等。系统存储器例如存储有操作系统、应用程序、引导装载程序(Boot Loader)以及其他程序等。系统存储器可以包括易失性存储介质,例如随机存取存储器(RAM)和/或高速缓存存储器。非易失性存储介质例如存储有执行显示方法的对应实施例的指令。非易失性存储介质包括但不限于磁盘存储器、光学存储器、闪存等。
处理器620可以用通用处理器、数字信号处理器(DSP)、应用专用集成电路(ASIC)、现场可编程门阵列(FPGA)或其它可编程逻辑设备、分立门或晶体管等分立硬件组件方式来实现。相应地,诸如判断设备和确定设备的每个设备,可以通过中央处理器(CPU)运行存储器中执行相应步骤的指令来实现,也可以通过执行相应步骤的专用电路来实现。
总线600可以使用多种总线结构中的任意总线结构。例如,总线结构包括但不限于工业标准体系结构(ISA)总线、微通道体系结构(MCA)总线、外围组件互连(PCI)总线。
计算机系统还可以包括输入输出接口630、网络接口640、存储接口650等。这些接口630、640、650以及存储器610和处理器620之间可以通过总线600连接。输入输出接口630可以为显示器、鼠标、键盘等输入输出设备提供连接接口。网络接口640为各种联网设备提供连接接口。存储接口650为软盘、U盘、SD卡等外部存储设备提供连接接口。
需要说明的是,在本文中,术语“包括”、“包含”或者其任何其他变体意在涵盖非排他性的包含,从而使得包括一系列要素的过程、方法、物品或者装置不仅包括那些要素,而且还包括没有明确列出的其他要素,或者是还包括为这种过程、方法、物品或者装置所固有的要素。在没有更多限制的情况下,由语句“包括一个……”限定的要素,并不排除在包括该要素的过程、方法、物品或者装置中还存在另外的相同要素。此外,需要指出的是,本公开实施方式中的方法和装置的范围不限按示出或讨论的顺序来执行功能,还可包括根据所涉及的功能按基本同时的方式或按相反的顺序来执行功能,例如,可以按不同于所描述的次序来执行所描述的方法,并且还可以添加、省去、或组合各种步骤。另外,参照某些示例所描述的特征可在其他示例中被组合。
通过以上的实施方式的描述,本领域的技术人员可以清楚地了解到上述实施例方法可借助软件加必需的通用硬件平台的方式来实现,当然也可以通过硬件,但很多情况下前者是更佳的实施方式。基于这样的理解,本公开的技术方案本质上或者说对现有技术做出贡献的部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质(如ROM/RAM、磁碟、光盘)中,包括若干指令用以使得一台终端(可以是手机,计算机, 服务器,空调器,或者网络设备等)执行本公开各个实施例所述的方法。
上面结合附图对本公开的实施例进行了描述,但是本公开并不局限于上述的具体实施方式,上述的具体实施方式仅仅是示意性的,而不是限制性的,本领域的普通技术人员在本公开的启示下,在不脱离本公开宗旨和权利要求所保护的范围情况下,还可做出很多形式,均属于本公开的保护之内。

Claims (26)

  1. 一种手写体识别方法,其特征在于,包括:
    检测用户在手写屏上的手写轨迹对应的多个轨迹点的信息,所述信息包括坐标,所述多个轨迹点包括起始轨迹点和当前轨迹点;
    根据判定条件判定当前轨迹点是否为终止轨迹点,若当前轨迹点满足所述判定条件,将所述当前轨迹点作为终止轨迹点,将起始轨迹点和终止轨迹点期间的轨迹点作为第一待识别轨迹点;
    采用文本识别模型对所述第一待识别轨迹点进行识别,得到第一文本识别结果;
    在所述手写屏的第一显示区域以印刷体形式显示所述第一文本识别结果。
  2. 如权利要求1所述的手写体识别方法,其特征在于,所述信息还包括用户书写状态,所述用户书写状态包括起笔、运笔或抬笔,所述判定条件为所述当前轨迹点的抬笔时刻之后的预设时长内未检测到新的轨迹点,所述起始轨迹点为上一次输入给文本识别模型的最后一个轨迹点的下一个轨迹点,或者,所述手写屏的第一个轨迹点。
  3. 如权利要求1或2所述的手写体识别方法,其特征在于,在所述手写屏的第一显示区域以印刷体形式显示所述第一文本识别结果包括:
    将所述第一文本识别结果中包括的多个文字存储为文字轨迹,其中,每个文字分别存储为一条文字轨迹。
  4. 如权利要求3所述的手写体识别方法,其特征在于,在所述手写屏的第一显示区域以印刷体形式显示所述第一文本识别结果包括:
    对所述第一文本识别结果中包括的每个文字分别进行绘制。
  5. 如权利要求1至4中任一项所述的手写体识别方法,其特征在于,以印刷体形式显示所述第一文本识别结果之后包括:
    接收用户对所述第一文本识别结果中的目标文本的擦除操作,擦除目标文本。
  6. 如权利要求5所述的手写体识别方法,其特征在于,擦除操作包括选择目标文本。
  7. 如权利要求5所述的手写体识别方法,其特征在于,擦除操作包括第一擦除手势,擦除目标文本包括:
    擦除与所述第一擦除手势的轨迹相交的文字轨迹。
  8. 如权利要求5所述的手写体识别方法,其特征在于,擦除操作包括第二擦除手势,擦除目标文本包括:
    擦除与所述第二擦除手势的轨迹相交的文字轨迹,并擦除与该文字轨迹的标签相同的文字轨迹。
  9. 如权利要求8所述的手写体识别方法,其特征在于,标签包括第一标签或第二标签,其中,第一标签包括文字轨迹的行信息,第二标签包括文字轨迹的时间信息、段落信息或批次信息。
  10. 如权利要求1或2所述的手写体识别方法,其特征在于,在所述手写屏的第一显示区域以印刷体形式显示所述第一文本识别结果包括:
    对所述第一待识别轨迹点进行分行;
    根据分行的结果,对所述第一文本识别结果中包括的多个文字逐行分别进行绘制。
  11. 如权利要求10所述的手写体识别方法,其特征在于,对所述第一待识别轨迹点进行分行包括:
    根据所述第一待识别轨迹点中各轨迹点的信息,将所述第一待识别轨迹点划分为多个文字;
    判断相邻的两个文字是否在同一行,其中相邻的两个文字包括在先书写的第一文字和在后书写的第二文字。
  12. 如权利要求11所述的手写体识别方法,其特征在于,判断相邻的两个文字是否在同一行包括:
    根据第二文字的高度及其位置坐标、第一文字的高度及其位置坐标,判断相邻的两个文字是否在同一行。
  13. 如权利要求10所述的手写体识别方法,其特征在于,还包括:
    根据用户对所述第一文本识别结果中的目标文本的擦除操作,擦除目标文本,并擦除与目标文本属于同一行的文字轨迹。
  14. 如权利要求1或2所述的手写体识别方法,其特征在于,还包括:
    检测用户在手写屏上的手写轨迹对应的第二待识别轨迹点的信息,所述信息包括坐标,所述第二待识别轨迹点包括起始轨迹点和终止轨迹点;
    根据判定条件判定当前轨迹点是否为第二待识别轨迹点的终止轨迹点,若当前轨迹点满足所述判定条件,将所述当前轨迹点作为第二待识别轨迹点的终止轨迹点,将起始轨迹点和终止轨迹点期间的轨迹点作为第二待识别轨迹点;
    采用文本识别模型对所述第二待识别轨迹点进行识别,得到第二文本识别结果;
    在第二显示区域以印刷体形式显示所述第二文本识别结果。
  15. 如权利要求14所述的手写体识别方法,其特征在于,在第二显示区域以印刷体形式显示所述第二文本识别结果包括:
    获取所述第一显示区域的显示信息,所述显示信息包括字体的大小、和坐标;
    根据所述显示信息,确定所述第二显示区域,并在所述第二显示区域以印刷体形式显示所述第二文本识别结果,所述第二文本识别结果中的字体的大小与所述第一文本识别结果中的字体的大小相同,所述第二文本识别结果中的文字与所述第一文本识别结果中的文字对齐。
  16. 如权利要求15所述的手写体识别方法,其特征在于,在第二显示区域以印刷体形式显示所述第二文本识别结果包括:
    判断所述第二文本识别结果中的文字与所述第一文本识别结果中的文字是否在同一行;
    若判断结果为是,则将所述第二文本识别结果中的该文字与所述第一文本识别结果中的该文字显示在同一行。
  17. 如权利要求16所述的手写体识别方法,其特征在于,判断所述第二文本识别结果中的该文字与所述第一文本识别结果中的该文字是否在同一行包括:
    根据所述第二文本识别结果中的该文字的位置坐标、所述第一文本识别结果中的该文字的位置坐标，判断所述第二文本识别结果中的该文字与所述第一文本识别结果中的该文字是否在同一行。
  18. 如权利要求17所述的手写体识别方法,其特征在于,判断所述第二文本识别结果中的该文字与所述第一文本识别结果中的该文字是否在同一行包括:
    根据所述第二文本识别结果中的该文字的位置坐标、所述第一文本识别结果中的该文字的位置坐标，判断所述第二文本识别结果中的该文字与所述第一文本识别结果中的该文字在文本的行方向上的第一间距是否小于第一阈值、在文本的列方向上的第二间距是否小于第二阈值；
    在第一间距小于第一阈值、且第二间距小于第二阈值的情况下，判断所述第二文本识别结果中的文字与所述第一文本识别结果中的文字在同一行。
  19. 如权利要求18所述的手写体识别方法,其特征在于:
    所述第一阈值与所述第二文本识别结果中的该文字的宽度正相关;和/或
    所述第二阈值与所述第一文本识别结果中的该文字的高度正相关。
  20. 如权利要求16至19中任一项所述的手写体识别方法,其特征在于,将所述第二 文本识别结果中的该文字与所述第一文本识别结果中的该文字显示在同一行包括:
    对显示在同一行的文字,设置相同的标签,所述标签反映文字所在的行信息。
  21. 一种手写体识别装置,其特征在于,包括:
    检测模块,用于检测用户在手写屏上的手写轨迹对应的多个轨迹点的信息,所述信息包括坐标,所述多个轨迹点包括起始轨迹点和当前轨迹点;
    判定模块,用于根据判定条件判定当前轨迹点是否为终止轨迹点,若当前轨迹点满足所述判定条件,将所述当前轨迹点作为终止轨迹点,将起始轨迹点和终止轨迹点期间的轨迹点作为第一待识别轨迹点;
    识别模块,用于采用文本识别模型对所述第一待识别轨迹点进行识别,得到第一文本识别结果;
    显示模块,用于在所述手写屏的第一显示区域以印刷体形式显示所述第一文本识别结果。
  22. 如权利要求21所述的手写体识别装置,其特征在于,所述信息还包括用户书写状态,所述用户书写状态包括起笔、运笔或抬笔,所述判定条件为所述当前轨迹点的抬笔时刻之后的预设时长内未检测到新的轨迹点,所述起始轨迹点为上一次输入给文本识别模型的最后一个轨迹点的下一个轨迹点,或者,所述手写屏的第一个轨迹点。
  23. 一种交互平板,其特征在于,包括触摸模块,显示模块,处理器,存储器及存储在所述存储器上并可在所述处理器上运行的程序或指令,所述程序或指令被所述处理器执行时实现如权利要求1-20任一项所述的手写体识别方法的步骤。
  24. 一种可读存储介质,其特征在于,所述可读存储介质上存储程序或指令,所述程序或指令被处理器执行时实现如权利要求1-20任一项所述的手写体识别方法的步骤。
  25. 一种手写体识别装置,包括:
    存储器;和
    耦接至所述存储器的处理器,所述处理器被配置为基于存储在所述存储器装置中的指令,执行权利要求1至20任一项所述的手写体识别方法的一个或多个步骤。
  26. 一种手写体识别系统,包括如权利要求25所述的手写体识别装置,其中,所述手写体识别装置包括:
    第一处理器,位于服务器侧,被配置为采用文本识别模型对待识别轨迹点进行识别,得到文本识别结果;
    第二处理器,位于终端侧,被配置为对所述文本识别结果中包括的文字逐一进行绘制, 并将每个文字存储为一个文字轨迹。
PCT/CN2021/107460 2021-02-01 2021-07-20 手写体识别方法及装置、手写体识别系统和交互平板 WO2022160619A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202180001926.0A CN115413335A (zh) 2021-02-01 2021-07-20 手写体识别方法及装置、手写体识别系统和交互平板
US17/789,592 US20230343125A1 (en) 2021-02-01 2021-07-20 Handwriting Recognition Method and Apparatus, Handwriting Recognition System and Interactive Display

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
CNPCT/CN2021/074622 2021-02-01
PCT/CN2021/074622 WO2022160330A1 (zh) 2021-02-01 2021-02-01 手写体识别方法及装置
CN2021097349 2021-05-31
CNPCT/CN2021/097349 2021-05-31

Publications (1)

Publication Number Publication Date
WO2022160619A1 true WO2022160619A1 (zh) 2022-08-04

Family

ID=82652921

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/107460 WO2022160619A1 (zh) 2021-02-01 2021-07-20 手写体识别方法及装置、手写体识别系统和交互平板

Country Status (3)

Country Link
US (1) US20230343125A1 (zh)
CN (1) CN115413335A (zh)
WO (1) WO2022160619A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117193565B (zh) * 2023-11-02 2024-03-01 广州众远智慧科技有限公司 触控屏检测方法、装置、电子设备和存储介质

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060013484A1 (en) * 2004-07-15 2006-01-19 Hitachi, Ltd. Character recognition method, method of processing correction history of character data, and character recognition system
CN101893988A (zh) * 2010-06-09 2010-11-24 华为终端有限公司 一种手写输入的移动通信终端及其输入方法
WO2019127162A1 (zh) * 2017-12-27 2019-07-04 深圳市柔宇科技有限公司 手写输入装置及其控制方法
CN110045840A (zh) * 2019-04-15 2019-07-23 广州视源电子科技股份有限公司 一种书写轨迹关联的方法、装置、终端设备和存储介质
CN111626238A (zh) * 2020-05-29 2020-09-04 京东方科技集团股份有限公司 文本识别方法、电子设备及存储介质
CN111931710A (zh) * 2020-09-17 2020-11-13 开立生物医疗科技(武汉)有限公司 一种联机手写文字识别方法、装置、电子设备及存储介质

Family Cites Families (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2845149B2 (ja) * 1994-12-28 1999-01-13 日本電気株式会社 手書文字入力装置および手書文字入力方法
JP3744997B2 (ja) * 1996-01-12 2006-02-15 キヤノン株式会社 文字認識装置及びその方法
JP4050055B2 (ja) * 2002-01-10 2008-02-20 株式会社リコー 手書き文字一括変換装置、手書き文字一括変換方法およびプログラム
JP4092371B2 (ja) * 2005-02-15 2008-05-28 有限会社Kiteイメージ・テクノロジーズ 手書き文字認識方法、手書き文字認識システム、手書き文字認識プログラム及び記録媒体
KR20130128681A (ko) * 2012-05-17 2013-11-27 삼성전자주식회사 서체 보정을 수행하도록 하기 위한 방법 및 그 전자 장치
ITRM20130022A1 (it) * 2013-01-11 2014-07-12 Natural Intelligent Technologies S R L Procedimento e apparato di riconoscimento di scrittura a mano
US11282410B2 (en) * 2015-11-20 2022-03-22 Fluidity Software, Inc. Computerized system and method for enabling a real time shared work space for solving, recording, playing back, and assessing a student's stem problem solving skills
US10248880B1 (en) * 2016-06-06 2019-04-02 Boston Inventions, LLC Method of processing and recognizing hand-written characters
NO20161728A1 (en) * 2016-11-01 2018-05-02 Bja Holding As Written text transformer
JP6859667B2 (ja) * 2016-11-10 2021-04-14 株式会社リコー 情報処理装置、情報処理プログラム、情報処理システム及び情報処理方法
US11311105B2 (en) * 2019-04-22 2022-04-26 Forever Gifts, Inc. Smart vanity mirror speaker system
CN110427601B (zh) * 2019-07-16 2021-05-18 广州视源电子科技股份有限公司 表格处理方法、装置、智能交互平板及存储介质
CN111381754B (zh) * 2020-04-30 2021-10-22 京东方科技集团股份有限公司 笔迹处理方法、设备及介质
US12033411B2 (en) * 2020-05-11 2024-07-09 Apple Inc. Stroke based control of handwriting input
KR20220088166A (ko) * 2020-12-18 2022-06-27 삼성전자주식회사 복수의 사용자 환경에서 필기 입력 인식 방법 및 장치
US12008692B2 (en) * 2022-06-03 2024-06-11 Google Llc Systems and methods for digital ink generation and editing

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060013484A1 (en) * 2004-07-15 2006-01-19 Hitachi, Ltd. Character recognition method, method of processing correction history of character data, and character recognition system
CN101893988A (zh) * 2010-06-09 2010-11-24 华为终端有限公司 一种手写输入的移动通信终端及其输入方法
WO2019127162A1 (zh) * 2017-12-27 2019-07-04 深圳市柔宇科技有限公司 手写输入装置及其控制方法
CN110045840A (zh) * 2019-04-15 2019-07-23 广州视源电子科技股份有限公司 一种书写轨迹关联的方法、装置、终端设备和存储介质
CN111626238A (zh) * 2020-05-29 2020-09-04 京东方科技集团股份有限公司 文本识别方法、电子设备及存储介质
CN111931710A (zh) * 2020-09-17 2020-11-13 开立生物医疗科技(武汉)有限公司 一种联机手写文字识别方法、装置、电子设备及存储介质

Also Published As

Publication number Publication date
CN115413335A (zh) 2022-11-29
US20230343125A1 (en) 2023-10-26

Similar Documents

Publication Publication Date Title
CN109284059B (zh) 笔迹绘制方法、装置、交互智能平板和存储介质
JP4694606B2 (ja) ジェスチャ判定方法
US9811193B2 (en) Text entry for electronic devices
US10664695B2 (en) System and method for managing digital ink typesetting
US7848573B2 (en) Scaled text replacement of ink
JP7046806B2 (ja) ジェスチャを用いたノートテイキングのための装置および方法
US7256773B2 (en) Detection of a dwell gesture by examining parameters associated with pen motion
JP5423525B2 (ja) 手書き入力装置、手書き入力方法及び手書き入力プログラム
WO2019140987A1 (zh) 表格控制方法、装置、设备及存储介质
US20160098186A1 (en) Electronic device and method for processing handwritten document
KR102075433B1 (ko) 필기 입력 장치 및 그 제어 방법
WO2022160619A1 (zh) 手写体识别方法及装置、手写体识别系统和交互平板
US9811238B2 (en) Methods and systems for interacting with a digital marking surface
WO2022160330A1 (zh) 手写体识别方法及装置
JP2018067298A (ja) 手書き内容編集装置および手書き内容編集方法
WO2017041588A1 (zh) 擦除框的范围确定方法和系统
JP6373664B2 (ja) 電子機器、方法及びプログラム
WO2020093329A1 (zh) 一种终端设备的数据输入方法、终端设备及存储介质
WO2023000613A1 (zh) 一种显示装置及其图表显示的方法
WO2023070334A1 (zh) 手写输入显示方法及装置、计算机可读存储介质
US20240176482A1 (en) Gesture Based Space Adjustment for Editing
US20150067592A1 (en) Methods and Systems for Interacting with a Digital Marking Surface
JP2994176B2 (ja) 罫線入力装置
CN117931041A (zh) 书写区域的控制方法、装置、书写设备及存储介质
KR101680777B1 (ko) 오타 문자 수정 방법

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21922216

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 15.11.2023)

122 Ep: pct application non-entry in european phase

Ref document number: 21922216

Country of ref document: EP

Kind code of ref document: A1