CN117930995A - Text drawing method and device - Google Patents


Info

Publication number
CN117930995A
CN117930995A CN202410338536.8A
Authority
CN
China
Prior art keywords: stroke, point, points, determined, drawn
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202410338536.8A
Other languages
Chinese (zh)
Other versions
CN117930995B (en)
Inventor
陈国藩
蔡林甫
宋伟鑫
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Hikvision Digital Technology Co Ltd
Original Assignee
Hangzhou Hikvision Digital Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Hikvision Digital Technology Co Ltd filed Critical Hangzhou Hikvision Digital Technology Co Ltd
Priority to CN202410338536.8A priority Critical patent/CN117930995B/en
Publication of CN117930995A publication Critical patent/CN117930995A/en
Application granted granted Critical
Publication of CN117930995B publication Critical patent/CN117930995B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/02Input arrangements using manually operated switches, e.g. using keyboards or dials
    • G06F3/023Arrangements for converting discrete items of information into a coded form, e.g. arrangements for interpreting keyboard generated codes as alphanumeric codes, operand codes or instruction codes
    • G06F3/0233Character input methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/04845Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range for image manipulation, e.g. dragging, rotation, expansion or change of colour
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F3/04883Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures for inputting data by handwriting, e.g. gesture or text
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/002D [Two Dimensional] image generation
    • G06T11/60Editing figures and text; Combining figures or text

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

Embodiments of the present application provide a text drawing method and a text drawing apparatus, relating to the technical field of data processing. The method comprises: in response to each screen touch event generated while no new pen-lift action is recognized, obtaining the screen touch information corresponding to the current event and performing the following steps: determining positions of first stroke points based on the screen touch information; drawing the first stroke points at the determined positions; recognizing, based on the relative positions of adjacent stroke points, the overall stroke type of the text stroke formed by second stroke points, where the second stroke points comprise: all stroke points drawn after the previous pen-lift action, if a previous pen-lift action exists, or all drawn stroke points, if no previous pen-lift action exists; and determining adjustment points based on a target font style, the overall stroke type, and the second stroke points, and drawing the determined adjustment points. By applying the scheme provided by the embodiments of the present application, text matching the user's intended effect can be drawn.

Description

Text drawing method and device
Technical Field
The present application relates to the field of data processing technologies, and in particular, to a method and an apparatus for drawing text.
Background
Applications such as drawing boards and input methods installed on terminals with touch screens generally support handwritten input of text, so users can write text in a drawing area of the application interface using a finger, a stylus, or the like. The application draws the text along the sliding track of the finger or stylus in the drawing area.
However, it is generally difficult for users to write in the drawing area with a finger or stylus as they would with a brush or pen on paper, which makes it difficult for the application to draw text with the user's intended effect.
Disclosure of Invention
Embodiments of the present application aim to provide a text drawing method and a text drawing apparatus for drawing text with the user's intended effect. The specific technical solutions are as follows:
In a first aspect, an embodiment of the present application provides a text drawing method, where the method includes:
In response to each screen touch event generated while no new pen-lift action is recognized, obtaining the screen touch information corresponding to the current event, and performing the following steps:
Determining positions of first stroke points based on the screen touch information;
Drawing the first stroke points at the determined positions;
Recognizing, based on the relative positions of adjacent stroke points, the overall stroke type of the text stroke formed by second stroke points, where the second stroke points comprise: all stroke points drawn after the previous pen-lift action, if a previous pen-lift action exists, or all drawn stroke points, if no previous pen-lift action exists;
And determining adjustment points based on a target font style, the overall stroke type, and the second stroke points, and drawing the determined adjustment points.
In a second aspect, an embodiment of the present application provides a text drawing apparatus, including:
A touch information obtaining module, configured to, in response to each screen touch event generated while no new pen-lift action is recognized, obtain the screen touch information corresponding to the current event and trigger the following modules;
The stroke point position determining module is used for determining the position of the first stroke point based on the screen touch information;
A stroke point drawing module for drawing a first stroke point based on the determined position;
A stroke type recognition module, configured to recognize, based on the relative positions of adjacent stroke points, the overall stroke type of the text stroke formed by second stroke points, where the second stroke points comprise: all stroke points drawn after the previous pen-lift action, if a previous pen-lift action exists, or all drawn stroke points, if no previous pen-lift action exists;
And an adjustment point determining module, configured to determine adjustment points based on the target font style, the overall stroke type, and the second stroke points, and to draw the determined adjustment points.
In a third aspect, an embodiment of the present application provides an electronic device, including:
A memory for storing a computer program;
And a processor, configured to implement the method according to the first aspect when executing the program stored in the memory.
In a fourth aspect, embodiments of the present application provide a computer readable storage medium having a computer program stored therein, which when executed by a processor, implements the method of the first aspect.
In a fifth aspect, embodiments of the present application provide a computer program product comprising instructions which, when run on a computer, cause the computer to perform the method of the first aspect.
In the text drawing scheme provided by the embodiments of the present application, a screen touch event is triggered by the user's touch action on the screen. In the application scenario of the embodiments, that touch action is in fact the user's writing action, so the screen touch information, which describes the touch action, actually reflects the user's writing track on the screen. Therefore, after stroke points are drawn according to the screen touch information, the drawn text formed by those stroke points, that is, text drawn from the user's writing track, largely retains the user's original handwriting style. On this basis, the drawn text is adjusted by drawing adjustment points determined from the overall stroke type of the drawn stroke; this adjustment fine-tunes and normalizes the drawn stroke according to the characteristics of its stroke type while preserving the user's original handwriting style. In addition, stroke point drawing and stroke adjustment are performed for every screen touch event generated while no new pen-lift action is recognized; that is, drawing and adjustment proceed continuously before the user lifts the pen, achieving the effect of adjusting while drawing.
Taken together, the adjusted drawn text both retains the user's original handwriting style and fine-tunes and normalizes the drawn strokes according to the characteristics of their stroke types. This reduces defects such as shaky strokes caused by users being unaccustomed to writing in the drawing area of an application, so the application can draw text with the user's intended effect. Moreover, because strokes are adjusted gradually and repeatedly, the adjusted effect is already presented by the time the user finishes drawing a stroke; the user does not perceive an abrupt change in the stroke after finishing it, the sense of abruptness during stroke adjustment is reduced, and user experience is improved.
Of course, it is not necessary for any one product or method of practicing the application to achieve all of the advantages set forth above at the same time.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings required in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present application, and those skilled in the art may derive other drawings from them.
FIG. 1 is a schematic diagram of a text rendering scene according to an embodiment of the present application;
fig. 2 is a flow chart of a first text drawing method according to an embodiment of the present application;
FIG. 3 is a schematic diagram of stroke points according to an embodiment of the present application;
FIG. 4 is a schematic diagram of a first text adjustment process according to an embodiment of the present application;
FIG. 5 is a schematic diagram of a second text adjustment process according to an embodiment of the present application;
FIG. 6 is a schematic diagram of a third text adjustment process according to an embodiment of the present application;
FIG. 7 is a schematic diagram of a fourth text adjustment process according to an embodiment of the present application;
FIG. 8 is a flowchart of a second text drawing method according to an embodiment of the present application;
fig. 9 is a schematic structural diagram of a text drawing device according to an embodiment of the present application;
Fig. 10 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present application. Based on the embodiments of the present application, all other embodiments obtained by a person skilled in the art fall within the protection scope of the present application.
Firstly, an application scenario of the scheme provided by the embodiment of the application is explained.
The application scenario of the scheme provided by the embodiments of the present application is as follows: an application needs to draw text written by the user in a drawing area of the application interface.
The application scenario is described below with specific examples.
First scenario: a user types using a handwriting input method application.
In some cases, some users are more familiar with typing using handwriting input methods than pinyin input methods.
In other cases, the user knows the writing method of a certain word, but does not know the pronunciation of the word, and at this time, the user also chooses to type by adopting a handwriting input method.
When typing with the handwriting input method, the user writes text directly in the handwriting area of the input method interface. The input method draws the text written by the user, performs character recognition on the drawn text, then determines and displays several candidate characters corresponding to the recognition result, and the user selects the intended character from the displayed candidates.
Second scenario: the user uses at least one of a drawing board application and a picture editor application for design work, where the design work may be at least one of signature design and artistic text design.
The operating system of an electronic device typically provides only system text in a number of default fonts, which can hardly meet the requirements of signature design and artistic text design.
Therefore, the user generally writes the signature or artistic text to be designed in the handwriting area of the drawing board or picture editor interface, and the application needs to draw the written text so that the user can later modify or store it.
It can be seen that, in the above scenario, the user writes text in the drawing area in the application interface, and the application needs to draw the text written by the user.
However, it is generally difficult for users to write in the drawing area with a finger or stylus as they would with a brush or pen, making it difficult for the application to draw text with the user's intended effect.
For example, referring to fig. 1, a schematic diagram of a text rendering scene is provided in an embodiment of the present application.
Fig. 1 shows a scenario in which a user types with some handwriting input method application.
It can be seen that the handwriting input method application finally draws the Chinese character for "wood" according to the user's writing in the drawing area, and the text portions shown by dashed boxes 1-3 do not match the user's intended effect.
As common knowledge, when people write on paper with a brush or pen, the ends of left-falling and right-falling strokes generally taper to a tip, so people expect the ends of left-falling and right-falling strokes drawn by an application to taper likewise. In addition, because people have mastered writing on paper with brushes and pens, the stroke lines of written text are smooth, so people expect the stroke lines drawn by an application to be smooth as well.
In fig. 1, the end of the left-falling stroke marked by dashed box 1 is relatively blunt and lacks a tapered tip; the stroke marked by dashed box 2 shakes noticeably in the vertical direction and is not smooth enough; similarly, the end of the stroke marked by dashed box 3 is also relatively blunt and lacks a tapered tip.
It can be seen that the text portions shown in dashed boxes 1-3 do not match the user's intended effect. That is, the "wood" character drawn by the handwriting input method application in fig. 1 is not a character that meets the user's intended effect.
In view of the above, the embodiment of the application provides a text drawing method for drawing text with a user expected effect.
The execution subject of the scheme provided by the embodiment of the application is further described.
The execution subject of the scheme provided by the embodiments of the present application is: any electronic device installed with at least one of a drawing board application and an input method application and having at least data processing and storage functions. For example, the electronic device may be a terminal device such as a mobile phone, a tablet computer, or a portable computer.
Specifically, the execution subject of the scheme provided by the embodiments of the present application is: the process created when the electronic device runs the application.
The text drawing scheme provided by the embodiment of the application is described in detail below.
Referring to fig. 2, a flowchart of a first text drawing method according to an embodiment of the present application is shown, where the method includes the following steps S201 to S205.
It can be seen that, in the scheme of the embodiment shown in fig. 2, after step S201 is executed to obtain the screen touch information for each screen touch event generated while no new pen-lift action is recognized, the stroke drawing, stroke type recognition, and stroke adjustment processes shown in steps S202-S205 are all executed. That is, strokes are drawn, recognized, and adjusted in real time for every screen touch event triggered before the user lifts the pen; in other words, strokes are recognized and adjusted while they are being drawn.
Step S201: in response to each screen touch event generated while no new pen-lift action is recognized, obtain the screen touch information corresponding to the current event.
For the screen touch information corresponding to each screen touch event in step S201, steps S202 to S205 are executed.
First, the pen-lifting action will be described.
The above-described pen-lifting action is an action in which the user pauses writing so that a finger or a stylus temporarily leaves the screen.
Text consists of strokes, and people write text by writing each stroke; after finishing one stroke, people usually lift the pen before writing the next. That is, there is typically a pen-lift action between strokes.
Specifically, whether a new pen-lift action has occurred can be determined from the time interval between the current screen touch event and the previously generated screen touch event. For example, if the time interval is greater than a preset duration, a new pen-lift action is considered to have occurred; otherwise, no pen-lift action is considered to have occurred.
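The interval heuristic just described can be sketched as follows. The helper name and the 0.3 s threshold are assumptions for illustration only, since the embodiment leaves the preset duration open:

```python
LIFT_THRESHOLD_S = 0.3  # assumed preset duration; not specified by the embodiment

def is_new_pen_lift(prev_event_time_s, curr_event_time_s):
    """Treat a gap longer than the threshold between consecutive screen
    touch events as the user lifting the pen between strokes."""
    if prev_event_time_s is None:  # very first event: no prior pen lift
        return False
    return curr_event_time_s - prev_event_time_s > LIFT_THRESHOLD_S
```

With this sketch, events 0.1 s apart belong to the same stroke, while a 0.5 s pause starts a new one.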
The screen touch event will be described in detail below.
A screen touch event may also be called a touch screen event or a touch event, and may include: tapping the screen, sliding on the screen, and pressing a region of the screen.
After a user opens an application such as a handwriting input method or a drawing board, text can be written in the writing area of the application with a finger or stylus. The concrete form of writing may be at least one of tapping the screen, sliding on the screen, and pressing the screen, thereby generating screen touch events.
The screen touch event generated under the condition that no new pen lifting action is recognized in the step can be divided into the following cases:
In the first case, no pen-lifting action has been previously performed:
After the user opens the application, the finger or stylus touches the screen for the first time and writing in the writing area has just begun; that is, the user has performed no writing action before and no pen-lift action has occurred. For example, to write the character "wood", the user first writes a horizontal stroke; this is the first stroke the user writes, no writing action was performed before it, and no pen-lift action has been generated. In this case, the screen touch events generated while the user writes this stroke are screen touch events generated while no new pen-lift action is recognized.
In the second case, there has been a pen-lifting action before:
The user has written part of the text in the writing area of the application, produced a pen-lift action, and then continued writing without producing a new pen-lift action. For example, the user wants to write the character "wood", has already written the horizontal stroke, and is now drawing the vertical stroke; the vertical stroke is not yet finished, and no new pen-lift action has been generated. In this case, the screen touch events generated while the user writes the vertical stroke are also screen touch events generated while no new pen-lift action is recognized.
The manner in which the screen touch information is obtained is described below.
When the user taps, presses, or slides a finger or stylus on the screen of the electronic device, the finger or stylus contacts the screen, and the screen's original electric field distribution changes under the influence of the human body's electric field. The sensing unit under the screen detects this change at a certain touch sampling frequency and, once a change is detected, transmits it as an electrical signal to the CPU of the electronic device. The operating system deployed on the CPU can thereby determine that a screen touch event has occurred and call back the screen touch event to the process created when the electronic device runs the application.
In this way, the process can respond to the screen touch event called back by the operating system and judge whether the event was generated while no new pen-lift action is recognized. If so, it can request the screen touch information from the operating system; the operating system then obtains the electric field distribution change information of the screen from the received electrical signals, determines the screen touch information from that change information, and calls the screen touch information back to the process.
The screen touch information may include at least one of a touch point position, a touch point pressure, and a touch point instantaneous speed, which is not limited in the embodiment of the present application.
It should be noted that the number of touch point positions included in the screen touch information corresponding to different screen touch events may be different.
For example, if the screen touch event is a click event, the number of touch point positions included in the screen touch information may be 1; if the screen touch event is a sliding event, the number of touch point positions included in the screen touch information may be plural.
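As a rough illustration of the per-event data described above, the screen touch information could be modeled as follows. The class and field names are hypothetical, not taken from the embodiment; only the listed contents (touch point positions, pressures, instantaneous speeds) come from the text:

```python
from dataclasses import dataclass, field

@dataclass
class ScreenTouchInfo:
    """Hypothetical container for one screen touch event's information:
    touch point positions, pressures, and instantaneous speeds."""
    positions: list = field(default_factory=list)  # [(x, y), ...]
    pressures: list = field(default_factory=list)  # one value per touch point
    speeds: list = field(default_factory=list)     # px/s per touch point

# A tap event carries a single touch point position,
# while a slide event carries several sampled positions.
tap = ScreenTouchInfo(positions=[(120, 340)])
slide = ScreenTouchInfo(positions=[(10, 10), (14, 12), (19, 15)])
```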
Step S202: based on the screen touch information, a location of the first stroke point is determined.
The stroke points are first described.
A stroke point is the basic drawing unit used by the application when drawing text in the drawing area, that is, the minimum display unit of the drawn text. Briefly, the stroke points drawn by the application combine into text strokes, and the strokes combine into text.
A stroke point may be a single pixel, or a solid shape comprising multiple pixels, such as a circle or a polygon.
A stroke point may be a solid color, e.g., black or blue; or it may carry a texture or pattern, in which case the stroke point can be understood as a texture map.
In one case, the stroke points may also be divided into skeleton stroke points and fill stroke points.
The positions of skeleton stroke points can be determined directly from the screen touch information; they reflect the user's writing track on the screen as a whole, so that the subsequently drawn text strokes are consistent with that track. The positions of fill stroke points can be determined from the positions of the skeleton stroke points; they fill the gaps between skeleton stroke points to improve the smoothness of the subsequently drawn text strokes.
Referring to fig. 3, a schematic diagram of stroke points is provided in an embodiment of the present application.
Fig. 3 shows the effect of locally enlarging the stroke points; after two local enlargements, it can be seen that the stroke points include not only skeleton stroke points but also fill stroke points.
The first stroke points are the stroke points determined from the screen touch information corresponding to the currently generated screen touch event, and may include first skeleton stroke points and first fill stroke points.
The manner in which the location of the first stroke point is determined is described below.
Specifically, the positions of the first skeleton stroke point and the first fill stroke point may be obtained in the following manner, respectively.
For a first skeleton stroke point:
The position of the touch point may be determined based on the screen touch information, and then the position of the first skeleton stroke point may be determined from the position of the touch point.
Specifically, the positions of touch points included in the screen touch information may be obtained, and then the position of each touch point is determined as the position of one first skeleton stroke point.
Since the touch point positions are determined by the user's touch operations on the screen, they accurately reflect the user's touch track on the screen; determining the skeleton stroke point positions from the touch point positions therefore allows the overall track of the skeleton stroke points to accurately reflect the user's writing track on the screen.
For a first fill stroke point:
In one embodiment, the instantaneous speed of the touch point may be determined from the screen touch information; the stroke width is then determined from the instantaneous speed, and the positions of the first fill stroke points are determined from the positions of the first skeleton stroke points and the stroke width.
The instantaneous speed may be included in the screen touch information, or may be determined based on touch points included in the screen touch information.
Because the touch points are sampled at a fixed frequency, the distance between two adjacent touch points reflects the instantaneous speed at those points. The specific manner of determining the instantaneous speed is not described in detail here.
The stroke width may be inversely related to the instantaneous speed: the faster the instantaneous speed, the smaller the stroke width; the slower the instantaneous speed, the larger the stroke width.
Specifically, the stroke width corresponding to the instantaneous speed may be determined according to a preset mapping relationship between the instantaneous speed and the stroke width, which is not limited in the embodiment of the present application.
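Under the fixed-frequency sampling assumption above, the instantaneous speed and an inverse speed-to-width mapping can be sketched as below. The sampling rate, the width bounds, and the linear mapping itself are assumed values chosen for illustration; the embodiment only requires that width decrease as speed increases and leaves the mapping as a preset relationship:

```python
import math

SAMPLE_HZ = 120.0                    # assumed touch sampling frequency
MIN_WIDTH, MAX_WIDTH = 2.0, 12.0     # assumed stroke-width bounds, in pixels
SPEED_AT_MIN_WIDTH = 2000.0          # assumed speed (px/s) giving the thinnest stroke

def instantaneous_speed(p_prev, p_curr, sample_hz=SAMPLE_HZ):
    """Distance between two consecutively sampled touch points, times the
    sampling frequency, approximates the instantaneous speed."""
    return math.dist(p_prev, p_curr) * sample_hz

def stroke_width(speed):
    """Inverse mapping: the faster the pen moves, the thinner the stroke."""
    t = min(speed / SPEED_AT_MIN_WIDTH, 1.0)
    return MAX_WIDTH - t * (MAX_WIDTH - MIN_WIDTH)
```

A slow movement of 1 px per sample then yields a wide stroke, while a fast 20 px per sample movement yields the minimum width.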
After the positions of the first skeleton stroke points and the stroke widths are obtained, the distance between each pair of adjacent first skeleton stroke points can be determined, and a region to be filled between the adjacent first skeleton stroke points is determined according to the distance and the stroke width, wherein the region to be filled can be understood as the line segment between a pair of adjacent first skeleton stroke points; then, starting from the starting point of the region to be filled, the positions of the first filling stroke points can be determined in the region to be filled in sequence according to a preset filling interval.
The width of the region to be filled is the distance between the adjacent first skeleton stroke points, and the height of the region to be filled is the stroke width.
In this way, when the first stroke points are subsequently drawn, each first filling stroke point is drawn at its determined position in the region to be filled between adjacent first skeleton stroke points, so that the stroke formed by the first skeleton stroke points and the first filling stroke points reaches a certain width.
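The interval-based filling described above can be sketched as follows (names are assumptions; the segment between two adjacent skeleton points is walked from its starting point at the preset fill interval):

```python
import math

def fill_points(p1, p2, fill_interval):
    """Determine filling stroke point positions along the line segment
    between two adjacent skeleton stroke points, starting from p1 and
    stepping by the preset fill interval."""
    dist = math.hypot(p2[0] - p1[0], p2[1] - p1[1])
    if dist == 0:
        return []
    ux, uy = (p2[0] - p1[0]) / dist, (p2[1] - p1[1]) / dist  # unit direction
    points, d = [], fill_interval
    while d < dist:
        points.append((p1[0] + ux * d, p1[1] + uy * d))
        d += fill_interval
    return points
```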
It can be seen that, before the position of a filling stroke point is determined, the stroke width is first determined based on the instantaneous speed of the touch point, and the position of the filling stroke point is then determined based on the stroke width. In this way, the influence of the instantaneous speed of the touch point on the stroke width is taken into account, so that when the stroke points are subsequently drawn, the text strokes they form better match the thickness variation characteristics of strokes written in real life, which helps the drawn text conform to the user's expectations.
In another embodiment, the target number of stroke points to be filled between adjacent skeleton stroke points may be determined directly according to the interval between each pair of adjacent first skeleton stroke points and the width of a stroke point, and then the positions of the target number of first filling stroke points may be determined uniformly between each pair of adjacent first skeleton stroke points.
If the stroke point is 1 pixel point, the width of the stroke point can be 1 pixel; if the stroke point is a solid shape or map including a plurality of pixels, the width of the stroke point may be the width of the solid shape or map, for example, may be 2 pixels, 3 pixels, or the like.
For example, if the interval between adjacent first skeleton stroke points is 30 pixels and the stroke point width is 3 pixels, the target number of first filling stroke points between the adjacent first skeleton stroke points is 30/3 = 10.
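The count-based variant of this embodiment might be sketched as follows (names are illustrative; the 30-pixel interval / 3-pixel width example above yields 10 uniformly spread points):

```python
import math

def uniform_fill_points(p1, p2, point_width):
    """Determine the target number of filling points as the interval
    between the two skeleton points divided by the stroke point width,
    then spread that many points uniformly between them."""
    dist = math.hypot(p2[0] - p1[0], p2[1] - p1[1])
    n = int(dist // point_width)  # e.g. 30 px interval / 3 px width -> 10
    return [(p1[0] + (p2[0] - p1[0]) * k / (n + 1),
             p1[1] + (p2[1] - p1[1]) * k / (n + 1))
            for k in range(1, n + 1)]
```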
Step S203: the first stroke point is drawn based on the determined position.
Specifically, a first stroke point may be drawn at the determined location.
Step S204: based on the relative positional relationship between adjacent stroke points, the overall stroke type of the text stroke formed by the second stroke point is identified.
As can be seen from the foregoing description, the screen touch information obtained in step S201 can be divided into two cases:
In the first case, no pen lifting action is performed before, so that a screen touch event and screen touch information corresponding to the screen touch event are obtained; in the second case, the pen-lifting action is performed before, but no new pen-lifting action is recognized, and the screen touch event and the screen touch information corresponding to the screen touch event are obtained.
The stroke points included in the second stroke point in the above two cases are described below.
In the first case, where there is no previously completed stroke, the second stroke point includes all the stroke points that have been drawn, that is, all the stroke points drawn from the time the user started writing to the current time; in other words, in this case, the second stroke point consists of the stroke points in the first stroke that the user writes.
In the second case, where there is a previously completed stroke, the second stroke point includes all the stroke points drawn after the previous stroke, that is, all the stroke points drawn from the end of the user's previous stroke to the current time; in other words, the second stroke point consists of all the stroke points in the stroke currently being written after the user finished the previous stroke, and does not include the stroke points in the previous stroke. For example, if the previous stroke written by the user is a horizontal stroke and a pen-lifting action occurs after the horizontal stroke is finished, then in this step the second stroke point consists of the stroke points drawn after that pen-lifting action and does not include the stroke points in the horizontal stroke.
Where the stroke points include skeleton stroke points and filling stroke points, the adjacent stroke points may be 2 adjacent skeleton stroke points, 2 adjacent filling stroke points, or an adjacent skeleton stroke point and filling stroke point.
The relative positional relationship between adjacent stroke points reflects the writing trend corresponding to the stroke points.
For example, if the stroke point P2 is to the right of the stroke point P1, the stroke points P1, P2 may be considered to reflect a left-to-right writing trend.
Specifically, the overall stroke type of the text stroke formed by the second stroke point can be identified based on the vector direction corresponding to the first stroke point and the vector direction corresponding to the third stroke point.
The second stroke point is all stroke points in the current written stroke after the user finishes writing the previous stroke, the first stroke point is part of the stroke points in the current written stroke of the user, and the third stroke point is the stroke points except the first stroke point in the second stroke point, namely the stroke points except the first stroke point in the current written stroke.
For example, suppose the user is writing a vertical stroke. First, the electronic device detects a screen touch event A as the user writes and draws stroke point a according to the screen touch event A; then, because the vertical stroke is not finished, the electronic device detects a screen touch event B as the user continues writing and draws stroke point b according to the screen touch event B, at which time a pen-lifting action is detected and the vertical stroke is completed. The stroke points in this stroke are as follows: the first stroke point is stroke point b, drawn according to the currently obtained screen touch event B; the second stroke point is all the stroke points in the stroke currently written by the user, namely stroke point a + stroke point b; the third stroke point is stroke point a, the stroke point other than the first stroke point b among the second stroke point, that is, the stroke points other than the first stroke point in the currently written stroke.
The vector direction corresponding to a stroke point is: the direction of the vector pointing from the stroke point's forward adjacent stroke point to the stroke point itself.
The vector direction corresponding to a stroke point may be determined according to the position of the stroke point; specifically, it may be obtained by subtracting the coordinates of the stroke point's forward adjacent stroke point from the coordinates of the stroke point, which is not described in detail herein.
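This coordinate subtraction can be sketched as a direction angle (the angle convention is an assumption, chosen so that a downward vertical stroke lands near 270°, consistent with the determination ranges used later):

```python
import math

def direction_angle(prev_point, point):
    """Vector direction of a stroke point: its coordinates minus those of
    its forward adjacent stroke point, returned as an angle in degrees
    in [0, 360). Screen y grows downward, so dy is negated to make a
    downward stroke map near 270 degrees."""
    dx = point[0] - prev_point[0]
    dy = point[1] - prev_point[1]
    return math.degrees(math.atan2(-dy, dx)) % 360.0
```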
After the first stroke point is drawn, a vector direction corresponding to the first stroke point can be obtained according to the position of the first stroke point.
It should be noted that, in the scheme provided by the embodiment of the present application, each time a screen touch event is generated while no new pen-lifting action is recognized, a stroke point is drawn according to the screen touch information corresponding to that event; to distinguish it from the stroke points drawn according to previously generated screen touch events, the newly drawn stroke point is referred to as a first stroke point.
That is, the first stroke point is a name used to distinguish it from the stroke points drawn according to previously generated screen touch events; it is a relative concept that refers to the stroke point drawn according to the most recently generated screen touch event. Similarly, the third stroke point is named relative to the stroke point drawn according to the most recently generated screen touch event, and is also a relative concept, referring to the stroke points drawn according to previously generated screen touch events.
For example, stroke point a is drawn according to the newly generated screen touch event A; at this time, stroke point a is referred to as a first stroke point. Then, stroke point b is drawn according to the newly generated screen touch event B; now stroke point b is referred to as a first stroke point, and stroke point a, originally referred to as a first stroke point, is referred to as a third stroke point relative to the first stroke point b. Subsequently, stroke point c is drawn according to the newly generated screen touch event C; at this time, stroke point c is referred to as a first stroke point, and stroke points a and b, originally referred to as first stroke points, are referred to as third stroke points relative to the first stroke point c.
It can be seen that, before a new stroke point is drawn according to a new screen touch event, every third stroke point was itself once referred to as a first stroke point, and its vector direction was obtained at that time. In other words, the vector directions corresponding to the third stroke points have already been obtained.
After the vector directions corresponding to the first stroke point and the third stroke point are obtained, the overall stroke type of the text stroke formed by the second stroke point can be identified in the following manner:
In one embodiment, the direction distribution information of the vector direction corresponding to the second stroke point can be obtained based on the vector direction corresponding to the first stroke point and the vector direction corresponding to the third stroke point, and the overall stroke type of the text stroke formed by the second stroke point is identified based on the direction distribution information.
The direction distribution information may have various forms.
In the first expression, the direction distribution information may be the number of vector directions at each direction angle among the vector directions corresponding to the second stroke point.
For example, the number of vector directions corresponding to a direction angle of 5 ° is 2000, the number of vector directions corresponding to a direction angle of 6 ° is 1500, and the number of vector directions corresponding to a direction angle of-5 ° is 500.
In the second expression, the direction distribution information may be a ratio of the number of vector directions corresponding to each direction angle to the total number of vector directions among the vector directions corresponding to the second stroke point.
For example, the number of vector directions corresponding to the second stroke point is 4000, wherein the ratio of the number of vector directions at the direction angle of 5° to the total number of vector directions is 50%, the ratio at the direction angle of 6° is 37.5%, and the ratio at the direction angle of -5° is 12.5%.
After the direction distribution information is obtained, the ratio of the number of vector directions in the direction determination range of each stroke type to the number of all vector directions can be determined based on the direction distribution information, and the whole stroke type of the character strokes formed by the second stroke point can be determined according to the ratio.
The above range of direction determination for each stroke type can be understood as: the expected vector direction for the stroke points included in each stroke type may be preset empirically by the staff. For example, the direction determination range corresponding to the stroke bar may be [ -10 °,10 ° ], indicating that the expected vector direction corresponding to the stroke point included in the stroke bar should be located at [ -10 °,10 ° ]; the direction determination range for the stroke vertical may be [260 °,280 ° ], indicating that the expected vector direction for the stroke points included in the stroke vertical should be located at [260 °,280 ° ].
For example, suppose the ratio of the number of vector directions at the direction angle of 5° to the total number of vector directions is 50%, the ratio at 6° is 37.5%, and the ratio at -5° is 12.5%. Since 5°, 6° and -5° all fall within the direction determination range [-10°, 10°] corresponding to the horizontal stroke, the ratio of the number of vector directions within the horizontal stroke's determination range to the total number of vector directions is 100%. The horizontal stroke is therefore the stroke type with the highest ratio, that is, the overall stroke type of the text stroke formed by the second stroke point is horizontal.
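The ratio-based identification can be sketched as follows (function names and the exact ranges are illustrative, with horizontal near 0° and vertical near 270° as in the text):

```python
def in_range(angle, lo, hi):
    """True if the angle (degrees) lies in [lo, hi]; ranges straddling 0,
    such as [-10, 10], are handled by also testing angle - 360."""
    a = angle % 360.0
    return lo <= a <= hi or lo <= a - 360.0 <= hi

def classify_stroke(angles, ranges):
    """Return the stroke type whose direction-determination range contains
    the highest ratio of the second stroke points' direction angles."""
    best_type, best_ratio = None, 0.0
    for stroke_type, (lo, hi) in ranges.items():
        ratio = sum(in_range(a, lo, hi) for a in angles) / len(angles)
        if ratio > best_ratio:
            best_type, best_ratio = stroke_type, ratio
    return best_type, best_ratio
```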
In some cases, the overall stroke type of the text stroke formed by the second stroke point may be determined according to the above-mentioned ratio together with the drawing order of the second stroke point.
For example, suppose the ratio of the number of vector directions at the direction angle of 5° to the total number of vector directions is 50%, and the ratio at the direction angle of 275° is also 50%. Since 5° falls within the direction determination range [-10°, 10°] corresponding to the horizontal stroke, the ratio of vector directions within the horizontal stroke's determination range to the total number is 50%; since 275° falls within the direction determination range [260°, 280°] corresponding to the vertical stroke, the ratio of vector directions within the vertical stroke's determination range to the total number is also 50%. On this basis, if it is determined from the drawing order of the second stroke point that the vector directions within the horizontal determination range were drawn first and those within the vertical determination range were drawn later, it can be determined that the user drew a horizontal segment first and then a vertical segment, so the overall stroke type of the text stroke formed by the second stroke point can be determined to be a horizontal fold.
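One possible sketch of this order-aware decision (the rule below, treating a stroke whose horizontal-range angles all precede its vertical-range angles as a horizontal fold, is an assumption for illustration):

```python
def classify_with_order(angles, h_range=(-10, 10), v_range=(260, 280)):
    """If the angles within the horizontal range were all drawn before the
    angles within the vertical range, classify the stroke as a horizontal
    fold; otherwise fall back to whichever single range captures more."""
    def hit(a, lo, hi):
        a %= 360.0
        return lo <= a <= hi or lo <= a - 360.0 <= hi
    h_idx = [i for i, a in enumerate(angles) if hit(a, *h_range)]
    v_idx = [i for i, a in enumerate(angles) if hit(a, *v_range)]
    if h_idx and v_idx and max(h_idx) < min(v_idx):
        return "horizontal fold"  # horizontal segment drawn first, then vertical
    return "horizontal" if len(h_idx) >= len(v_idx) else "vertical"
```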
Because the vector direction corresponding to the stroke point reflects the writing direction of the stroke point, the vector direction corresponding to the first stroke point and the vector direction corresponding to the third stroke point can reflect the writing direction of the second stroke point, so that the overall stroke type is determined according to the vector direction corresponding to the first stroke point and the vector direction corresponding to the third stroke point, namely, the overall stroke type is determined according to the writing direction of the second stroke point, and the accuracy of the determined overall stroke type is improved.
In another embodiment, the vector directions corresponding to the stroke points in each sub-stroke formed by the second stroke point may be stored in different direction objects, in which case, the sub-stroke types of the sub-strokes corresponding to the latest direction object may be determined, and the overall stroke types of the text strokes formed by the second stroke point are identified according to the creation sequence of each direction object created after the last pen lifting action and the stroke types of the sub-strokes corresponding to each direction object. The specific embodiment is shown in step S809 in the example shown in fig. 8, which will not be described in detail here.
It should be noted that this step is performed when no new pen-lifting action is recognized, that is, while the user is writing without lifting the pen; in other words, before a pen-lifting action is recognized from the user's writing, the overall stroke type of the text stroke formed by the second stroke point drawn so far is determined.
In this way, since the user is still writing and does not pick up a pen, the stroke points can be drawn continuously according to the writing track of the user. It will be appreciated that, as the stroke points are drawn, the second stroke points that have been drawn without identifying a new pen-lifting action are updated, and thus the text strokes formed by the identified second stroke points are updated.
Step S205: an adjustment point is determined based on the target font style, the overall stroke type, and the second stroke point, and the determined adjustment point is drawn.
The target font style may be a preset font style or a font style selected by the user when writing characters by using the application program.
The target font style may specifically be one of a regular script style, a Song typeface style, a Yan script style, and a lean body (slender gold) style, which is not limited in the embodiment of the present application.
The above adjustment points are understood to be points for adjusting and normalizing the drawn strokes, and may be points located in the drawn strokes or points located outside the drawn strokes, and by drawing the adjustment points, the drawn strokes may be changed.
After determining the overall stroke type of the drawn stroke via step S204, a setpoint may be determined and drawn for adjusting the drawn stroke according to the target font style and overall stroke type.
The manner in which the adjustment point is determined will first be described.
Specifically, the adjustment location and adjustment manner of the drawn stroke may be determined based on the target font style and the overall stroke type, and then the adjustment point may be determined in the adjustment manner based on the stroke point located at the adjustment location in the second stroke point.
The adjustment part can be at least one of the front end part of the stroke, the main part of the stroke, the bending part of the stroke and the tail end part of the stroke, and the adjustment part is determined by the target font style and the whole stroke type.
Specifically, the adjustment part may be determined according to a preset target font style and a corresponding relationship between the overall stroke type and the adjustment part.
For example, if the target font style is regular script and the overall stroke type is horizontal, based on the correspondence, the adjustment part may be determined to be the stroke trunk, the stroke front section, and the stroke tail end.
For another example, if the target font style is lean body and the overall stroke type is vertical, based on the correspondence, the adjustment part can be determined to be the stroke trunk and the stroke tail end.
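The preset correspondence could be held in a simple lookup table; the entries below only restate the two examples above and are otherwise hypothetical:

```python
# Hypothetical correspondence: (target font style, overall stroke type)
# -> stroke parts to adjust, restating the two examples in the text.
ADJUSTMENT_PARTS = {
    ("regular script", "horizontal"): ["front end", "trunk", "tail end"],
    ("lean body", "vertical"): ["trunk", "tail end"],
}

def adjustment_parts(font_style, stroke_type):
    """Look up which parts of the drawn stroke should be adjusted."""
    return ADJUSTMENT_PARTS.get((font_style, stroke_type), [])
```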
In addition, the adjustment modes of the adjustment parts are different, and the modes of determining the adjustment points according to the adjustment modes are also different, and are described below.
1. Front end position of stroke
The adjustment method for the front end part of the stroke can comprise the following steps: a supplemental point is added at the front end position.
The supplement points are the adjustment points that are located outside the drawn stroke.
In this adjustment mode, the supplement points can be determined according to the front-end supplement region and the width of the front-end portion.
The front-end supplementing area is an area for adding at the front end of the stroke and is determined by the target font style and the whole stroke type.
For example, if the target font style is regular script and the overall stroke type is horizontal, the front-end supplementary region may be a region having a shape similar to a "silkworm head".
Specifically, the front-end supplement region may be scaled based on the width of the front-end portion, and after the scaled front-end supplement region is moved to the front-end portion, the points in the moved front-end supplement region may be determined as the supplement points.
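A sketch of the scale-then-move step (the template region, given as points relative to its own origin, and the names are assumptions; the same logic would apply to the tail-end region described later):

```python
def place_supplement_region(template_points, template_width, target_width, anchor):
    """Scale a template supplement region (e.g. a 'silkworm head') so its
    width matches the stroke's front-end width, then translate it to the
    front-end anchor point. The returned points are the supplement points."""
    scale = target_width / template_width
    return [(anchor[0] + x * scale, anchor[1] + y * scale)
            for x, y in template_points]
```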
2. Trunk part of stroke
The adjustment mode for the trunk part of the stroke can comprise: the stroke point of the trunk part is adjusted to change the thickness degree of the trunk part.
In the adjustment mode, the length of the trunk part of the drawn stroke can be determined, and the expected width of the trunk part is determined according to the corresponding relation between the preset stroke length and the width of the trunk part; and then determining a point to be deleted required to be deleted for reaching the expected width from the second stroke points or determining a point to be supplemented required to be supplemented for reaching the expected width from the outside of the second stroke points based on the expected width, wherein the point to be deleted and the point to be supplemented are the determined adjusting points.
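This trunk adjustment decision can be sketched as follows, with a hypothetical length-to-width table standing in for the preset correspondence:

```python
def expected_trunk_width(stroke_length, width_table):
    """width_table: (min_length, expected_width) pairs sorted by min_length.
    Returns the expected width for the largest threshold the stroke length
    reaches (a stand-in for the preset length-to-width correspondence)."""
    expected = width_table[0][1]
    for min_len, width in width_table:
        if stroke_length >= min_len:
            expected = width
    return expected

def trunk_action(current_width, expected_width):
    """Points are deleted to thin an over-wide trunk and supplemented to
    thicken an under-wide one; those points are the adjustment points."""
    if current_width > expected_width:
        return "delete"
    if current_width < expected_width:
        return "supplement"
    return "keep"
```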
3. Bending part of stroke
The adjustment mode for the bending part of the stroke can comprise: the stroke point of the bending part is adjusted to increase the sharpness of the bending part.
In this adjustment mode, the expected sharpness of the bending portion can be determined according to a preset correspondence between stroke types and the sharpness of the bending portion. Then, based on the expected sharpness, the points to be deleted in order to reach the expected sharpness are determined from the second stroke point, or the points to be supplemented in order to reach the expected sharpness are determined from outside the second stroke point; the points to be deleted and the points to be supplemented are the determined adjustment points.
4. Tail end part of stroke
The adjustment method for the tail end part of the stroke can comprise the following steps: and adjusting the stroke point of the tail end part to increase the thickness convergence of the tail end part.
In the adjustment mode, the length of the tail end part of the drawn stroke can be determined, and then the expected thickness of the tail end part is determined according to the corresponding relation between the preset length of the tail end part and the thickness of the tail end part. And then determining a point to be deleted required to be deleted for reaching the expected thickness from the second stroke points based on the expected thickness, or determining a point to be supplemented required to be supplemented for reaching the expected thickness from the outside of the second stroke points, wherein the point to be deleted and the point to be supplemented are the determined adjusting points.
The adjustment method for the tail end part of the stroke can further comprise: and adding a supplement point at the tail end part.
The supplement points are the adjustment points that are located outside the drawn stroke.
In this adjustment mode, the replenishment point can be determined based on the widths of the trailing replenishment area and the trailing portion.
The tail end supplementing area is an area for adding at the tail end of the stroke and is determined by the target font style and the whole stroke type.
For example, if the target font style is regular script and the overall stroke type is horizontal, the tail-end supplemental region may be a region shaped like a "wild goose tail".
Specifically, the tail replenishment area may be scaled based on the width of the tail portion, and after the scaled tail replenishment area is moved to the tail portion, a point in the moved tail replenishment area may be determined as the replenishment point.
Therefore, the adjustment positions of drawn strokes and the adjustment modes corresponding to the adjustment positions of various types are different along with the difference of the types of the whole strokes and the styles of the target fonts, so that the adjustment points can be flexibly determined based on the adjustment positions rich in types and the adjustment modes corresponding to the adjustment positions, and the styles of drawn characters can be flexibly adjusted to various target fonts through drawing the adjustment points, the flexibility of the drawn characters is improved, and the adjusted characters tend to various target font effects expected by users.
From the above, the adjustment position and adjustment mode of the drawn stroke are determined based on the target font style and the overall stroke type, and the adjustment points are determined based on the adjustment mode and the stroke points at the adjustment position. That is, when the adjustment points are determined, the target font style, the overall stroke type, and the drawn stroke points are all considered, so the adjustment points for adjusting and normalizing the drawn stroke can be determined comprehensively and reasonably; after the drawn stroke is adjusted by drawing the adjustment points, the adjusted drawn text can conform to the user's expectations.
The way of drawing the adjustment points is described.
As can be seen from the foregoing description, the adjustment points may include points to be supplemented and points to be deleted.
Wherein, for the point to be complemented, the point to be complemented can be directly drawn; for the point to be deleted, it may be deleted or its transparency may be adjusted to be transparent.
It should be noted that this step is performed when no new pen-lifting action is recognized, that is, while the user is writing without lifting the pen; in other words, before a pen-lifting action is recognized from the user's writing, the overall stroke type of the text stroke formed by the second stroke point drawn so far is determined, and the drawn stroke is adjusted according to the determined overall stroke type.
In this way, since the user is still writing and does not pick up a pen, the stroke points can be drawn continuously according to the writing track of the user. It will be appreciated that, in the event that no new pen-lifting action is identified, the second drawn pen-point will be updated continuously as the pen-point is drawn, and the drawn pen-strokes will be updated continuously, so that the drawn pen-strokes may be adjusted in real time according to the determined overall pen-stroke type.
The text adjustment process provided by the embodiment of the application is specifically described below through fig. 4 to 6.
The screen touch events A-F in fig. 4-6 are all screen touch events generated when no new pen-lifting action is detected, and the drawn stroke type in fig. 4-6 refers to the overall stroke type of the text stroke formed by the second stroke point.
Fig. 4 is a schematic diagram of a first text adjustment process according to an embodiment of the present application.
In fig. 4, the target font style is regular script. First, a stroke point is drawn according to a screen touch event A, and the type of the drawn stroke is determined to be horizontal, so the adjustment position and the adjustment points can be determined: specifically, the adjustment position of the drawn horizontal stroke is determined to be the front end of the stroke, so that, according to the adjustment mode corresponding to the determined adjustment position, the adjustment points are determined to be the points located in the region marked by the dashed box in the figure, and the adjusted text stroke is obtained by drawing the adjustment points. Because the user has not lifted the pen, a screen touch event B is obtained as the user continues writing, and stroke points continue to be drawn according to the screen touch event B. At this time, the type of the drawn stroke is still determined to be horizontal, and the adjustment position and the adjustment points can be determined again: specifically, the adjustment position of the drawn horizontal stroke is determined to be the stroke trunk and the stroke tail end, so that, according to the adjustment mode corresponding to the determined adjustment position, the adjustment points are determined to be the points located in the region marked by the dashed box in the figure, and the adjusted text stroke is obtained by drawing the adjustment points.
Fig. 5 is a schematic diagram of a second text adjustment process according to an embodiment of the present application.
In fig. 5, the target font style is regular script. First, a stroke point is drawn according to a screen touch event C, and the type of the drawn stroke is determined to be vertical, so the adjustment position and the adjustment points can be determined: specifically, the adjustment position of the drawn vertical stroke is determined to be the front end of the stroke, so that, according to the adjustment mode corresponding to the determined adjustment position, the adjustment points are determined to be the points located in the region marked by the dashed box in the figure, and the adjusted text stroke is obtained by drawing the adjustment points. Because the user has not lifted the pen, a screen touch event D is obtained as the user continues writing, and stroke points continue to be drawn according to the screen touch event D. At this time, the type of the drawn stroke is still determined to be vertical, and the adjustment position and the adjustment points can be determined again: the adjustment position of the drawn vertical stroke is determined to be the stroke trunk and the stroke tail end, so that, according to the adjustment mode corresponding to the determined adjustment position, the adjustment points are determined to be the points located in the region marked by the dashed box in the figure, and the adjusted text stroke is obtained after the adjustment points are drawn.
In some cases, when no new pen-lifting action is recognized, the drawn second stroke point is updated continuously along with the continuous drawing of the stroke point, so that the overall stroke type of the text stroke formed by the recognized second stroke point may also change, and the drawn stroke can be adjusted in real time according to the changed overall stroke type.
For example, a horizontal fold stroke is composed of a horizontal segment and a vertical segment. While the user writes the horizontal fold, stroke points are first drawn according to the horizontal track written by the user, and the overall stroke type of the text stroke formed by the drawn second stroke point is determined to be horizontal, so the drawn stroke is adjusted using the adjustment mode corresponding to the horizontal stroke; as the user continues writing, stroke points are drawn according to the vertical track written by the user, and the overall stroke type of the text stroke formed by the drawn second stroke point is determined to be a horizontal fold, so the drawn stroke is adjusted again using the adjustment mode corresponding to the horizontal fold.
Referring to fig. 6, a schematic diagram of a third text adjustment process according to an embodiment of the present application is provided.
In fig. 6, the target font style is regular script. It can be seen that, after stroke points are drawn according to a screen touch event E, the type of the drawn stroke is determined to be a horizontal stroke, and the adjustment positions and adjustment points can be determined. Specifically, the adjustment positions of the drawn horizontal stroke can be determined to be the front end, the stroke body and the tail end of the stroke, so that, according to the adjustment mode corresponding to the determined adjustment positions, the adjustment points can be determined to be the points located in the dashed-box area marked in the figure, and the adjusted text stroke is obtained by drawing the adjustment points.
Since the user has not lifted the pen and continues writing, a screen touch event F is then obtained, and stroke points continue to be drawn according to the screen touch event F. At this moment the type of the drawn stroke is determined to be a horizontal-fold, i.e. the type of the drawn stroke is determined to have changed from a horizontal stroke to a horizontal-fold, and the adjustment positions and adjustment points can be determined again. Specifically, the adjustment positions of the horizontal-fold stroke can be determined to be the bend and the tail end of the stroke, so that, according to the adjustment mode corresponding to the determined adjustment positions, the adjustment points can be determined to be the points located in the dashed-box area marked in the figure, and the adjusted text stroke is obtained by drawing the adjustment points.
It can be seen that the scheme provided by the embodiment of the application can draw strokes in real time according to the writing actions before the user lifts the pen, continuously determine in real time the overall stroke type of the drawn stroke formed by the second stroke points, and perform real-time overall adjustment on the stroke formed by the second stroke points according to the overall stroke type.
From the above, by applying the text drawing scheme provided by the embodiment of the application, in response to each screen touch event generated while no new pen-lifting action is recognized, the screen touch information corresponding to the current event can be obtained and the following steps executed: determine the positions of stroke points based on the screen touch information and draw the first stroke points based on the determined positions; then identify the overall stroke type of the text stroke formed by the second stroke points based on the relative position relationships between adjacent stroke points; further determine adjustment points based on the target font style, the overall stroke type and the second stroke points, and draw the determined adjustment points, thereby adjusting the drawn text.
In the application scenario of the embodiment of the application, the touch action is actually a writing action of the user, so that the screen touch information is used as the description information of the touch action, and actually reflects the writing track of the user on the screen. Therefore, after the stroke points are drawn according to the screen touch information, the drawn characters formed by the stroke points, namely the characters drawn based on the writing track of the user, can retain the original font style of the user to a greater extent. On the basis, the drawn text is adjusted by drawing the adjusting points based on the whole stroke type of the drawn stroke, and the adjusting process is a process of fine adjustment and standardization of the drawn stroke by combining the characteristics of the stroke type of the drawn stroke based on the original font style of the user.
In summary, the adjusted drawn text not only retains the user's original font style, but also applies fine adjustment and standardization to the drawn strokes according to the characteristics of their stroke types. This reduces jitter in the text drawn by the application caused by the user being unaccustomed to writing in the application's drawing area, so the application can draw text that matches the user's expectations.
In addition, the target font style is also considered when determining the above-mentioned adjustment points, so that the drawn text is adjusted by drawing the adjustment points, that is, the font style of the drawn text is brought toward the target font style based on the original font style of the user himself. Because the adjusted characters are close to the target style, the user can write the characters with the target style without extra exercise, and the user experience is improved.
Moreover, according to the embodiment of the application, stroke-point drawing, stroke recognition and stroke adjustment are performed for each screen touch event generated while no new pen-lifting action is recognized. Stroke points can therefore be drawn in real time according to the writing actions before the user lifts the pen, the overall stroke type of the stroke formed by the drawn second stroke points is continuously determined in real time, and the stroke formed by the second stroke points is adjusted as a whole in real time according to that type. In this way, drawing and adjustment happen at the same time, and the stroke is adjusted gradually, multiple times, before the user lifts the pen, so the adjusted result is already presented when the pen is lifted, instead of performing stroke-type recognition and stroke adjustment only once after the pen is lifted. This prevents the user from perceiving an abrupt change in the stroke after lifting the pen, reduces the abruptness of stroke adjustment, and improves the user experience.
In one embodiment of the present application, the following steps S1-S3 may be employed to draw the determined adjustment point.
Step S1: based on the determined position of the adjustment point, obtain the expected drawing parameters of the determined adjustment point.
The above-described expected drawing parameters may include at least one of a size, a position, and a transparency of the adjustment point.
Step S2: and obtaining the target drawing parameters of the determined adjusting points in each animation transition frame based on the expected drawing parameters, the preset drawing frequency and the number of the animation transition frames.
The above drawing frequency may also be referred to as the life cycle of the adjustment point, and reflects the number of times the adjustment point is drawn within 1 second.
Specifically, the parameter variation step can be determined according to the expected drawing parameter and the preset drawing frequency; then the drawing parameter corresponding to each drawing of the adjustment point is determined according to the parameter variation step; finally the target drawing parameter of the determined adjustment point in each animation transition frame is determined according to the determined drawing parameters.
The parameter variation step may be a ratio between an expected drawing parameter and a preset drawing frequency.
For example, if the expected rendering parameter is 600 pixels and the preset rendering frequency is 30 times/second, the ratio 600/30=20 between the expected rendering parameter and the preset rendering frequency may be used as the parameter variation step.
The drawing parameter corresponding to each drawing of the adjustment point may be determined as follows: the drawing parameter for the first drawing of the adjustment point is one parameter variation step, and the drawing parameter for each subsequent drawing is the drawing parameter of the previous drawing plus the parameter variation step.
For example, if the expected drawing parameter is 600 pixels and the preset drawing frequency is 30 times/second, the ratio 600/30 = 20 between the expected drawing parameter and the preset drawing frequency may be used as the parameter variation step; then the drawing parameter for the 1st drawing of the adjustment point is determined to be 20, that for the 2nd drawing is 40, that for the 3rd drawing is 60, and so on.
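As a minimal sketch of steps S1-S2 above (the function name and plain-list representation are assumptions, not from the patent), the parameter variation step and the per-draw parameter sequence can be computed as:

```python
def per_draw_parameters(expected_param: float, draw_frequency: int) -> list[float]:
    """Return the drawing parameter used at each of the draw_frequency draws:
    the k-th draw (1-indexed) uses k * step, where step is the ratio between
    the expected drawing parameter and the preset drawing frequency."""
    step = expected_param / draw_frequency   # parameter variation step
    return [step * k for k in range(1, draw_frequency + 1)]

params = per_draw_parameters(600, 30)        # step = 600 / 30 = 20
print(params[:3], params[-1])                # [20.0, 40.0, 60.0] 600.0
```

The last draw reaches exactly the expected drawing parameter, matching the 600-pixel example above.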
The target drawing parameters of the adjusting points in each animation transition frame are determined according to the determined drawing parameters, and the following situations can be classified:
Case 1: the number of animated transition frames is equal to the number of determined drawing parameters.
Each of the determined drawing parameters may be assigned to each of the animation transition frames as a target drawing parameter of each of the animation transition frames.
For example, the expected drawing parameter is 600 pixels and the preset drawing frequency is 30 times/second, so the ratio 600/30 = 20 between them can be used as the parameter variation step; the drawing parameters determined for the successive drawings are then [20, 40, …, 600]. If the number of animation transition frames is 30 (i.e. the frame rate is 30 frames/second), denoted [f0, f1, …, f29], the number of drawing parameters equals the number of animation transition frames, so each drawing parameter can be allocated to 1 animation transition frame to obtain the target drawing parameter of each frame. That is, the target drawing parameter of f0 is determined to be 20, that of f1 to be 40, that of f2 to be 60, and so on.
In this case, the number of animation transition frames equals the number of drawing parameters, i.e. the frame rate of the animation transition frames equals the drawing frequency. Since the drawing frequency reflects the number of times the adjustment point is drawn in 1 second, and the frame rate is the number of video frames played in 1 second, when the two are equal the drawing parameters and the animation transition frames correspond one to one: each animation transition frame corresponds to a different target drawing parameter, and the target drawing parameters approach the expected drawing parameters frame by frame. Therefore, after the adjustment point in each animation transition frame is drawn according to its target drawing parameter, for the continuously played animation transition frames, the adjustment point changes once per frame toward the pattern corresponding to the expected drawing parameters, presenting the effect of the adjustment point changing continuously toward that pattern.
Case 2: the number of animated transition frames is greater than the number of determined drawing parameters.
The determined drawing parameters can be complemented so that the number of the complemented drawing parameters is equal to the number of the animation transition frames, and each complemented drawing parameter is distributed to each animation transition frame to serve as a target drawing parameter of each animation transition frame.
The method for complementing the determined drawing parameters may be various, and is described below:
In the first embodiment, if the number of animation transition frames is an integer multiple of the number of determined drawing parameters, the drawing parameters may be copied for each determined drawing parameter, and the copied parameters are inserted into the drawing parameters to obtain the completed drawing parameters.
If the number of the animation transition frames is 2 times of the number of the determined drawing parameters, 1 part of each drawing parameter is copied, and 1 parameter obtained by copying is inserted into the drawing parameters to obtain drawing parameters with the original number of 2 times; if the number of the animation transition frames is 3 times of the number of the determined drawing parameters, each drawing parameter is copied for 2 times, the copied 2 parameters are inserted into the drawing parameters to obtain drawing parameters with the number of the original 3 times, and the rest times are the same and are not repeated here.
For example, the expected drawing parameter is 600 pixels and the preset drawing frequency is 30 times/second, so the ratio 600/30 = 20 can be used as the parameter variation step, and the drawing parameters determined for the successive drawings are [20, 40, …, 600]. If the number of animation transition frames is 60 (i.e. the frame rate is 60 frames/second), denoted [f0, f1, …, f59], then, since the number of animation transition frames is 2 times the number of drawing parameters, each drawing parameter can be duplicated once, and after inserting each duplicate next to its drawing parameter the completed drawing parameters [20, 20, 40, 40, …, 600, 600] are obtained. It can be seen that the number of completed drawing parameters is 60, the same as the number of animation transition frames, so each completed drawing parameter can be allocated to 1 animation transition frame to obtain the target drawing parameter of each frame. That is, the target drawing parameters of the 2 animation transition frames f0 and f1 are both determined to be 20, those of f2 and f3 to be 40, those of f4 and f5 to be 60, and so on.
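The duplication-based completion of the first embodiment can be sketched as follows, assuming the drawing parameters are held in a plain list (all names are illustrative):

```python
def complete_by_duplication(params: list[float], frame_count: int) -> list[float]:
    """Repeat each drawing parameter so the completed list length equals the
    number of animation transition frames; the frame count must be an integer
    multiple of the parameter count, as the first embodiment requires."""
    factor, remainder = divmod(frame_count, len(params))
    if remainder != 0:
        raise ValueError("frame count must be an integer multiple of the parameter count")
    # e.g. factor 2 turns [20, 40] into [20, 20, 40, 40]
    return [p for p in params for _ in range(factor)]

params = [20 * k for k in range(1, 31)]          # [20, 40, ..., 600]
completed = complete_by_duplication(params, 60)  # 60 animation transition frames
print(completed[:4])                             # [20, 20, 40, 40]
```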
In this case, the number of animation transition frames is 2 times the number of drawing parameters, i.e. greater than it, which means the frame rate of the animation transition frames is 2 times the drawing frequency. Since the drawing frequency reflects the number of times the adjustment point is drawn in 1 second, and the frame rate is the number of video frames played in 1 second, when the frame rate is greater than the drawing frequency the drawing parameters and the animation transition frames no longer correspond one to one; instead, every 2 animation transition frames correspond to one same drawing parameter. The adjustment point in the 1st of the 2 frames corresponding to a given drawing parameter is drawn according to that parameter, and the adjustment point in the 2nd frame can be kept unchanged. Therefore, after the adjustment points in the animation transition frames are drawn according to their corresponding target drawing parameters, for the continuously played animation transition frames, the adjustment point changes once every 2 frames toward the pattern corresponding to the expected drawing parameters, still presenting the effect of the adjustment point changing continuously toward that pattern.
In the second embodiment, interpolation processing may be performed on the determined adjacent drawing parameters, and the interpolated parameter obtained by interpolation may be inserted between the adjacent drawing parameters to obtain the completed drawing parameters.
The present embodiment can be further divided into the following two cases:
Under the condition that the number of the animation transition frames is an integer multiple of the number of the determined drawing parameters, interpolation processing can be carried out on each pair of adjacent drawing parameters, the obtained interpolation parameters are inserted between the adjacent drawing parameters, the determined last 1 drawing parameters are duplicated, and the duplicated parameters are inserted into the last 1 drawing parameters to obtain the completed drawing parameters.
If the number of the animation transition frames is 2 times of the number of the determined drawing parameters, interpolation processing is carried out on each pair of adjacent drawing parameters to obtain 1 interpolation parameter, the obtained 1 interpolation parameter is inserted between the adjacent drawing parameters, the last 1 drawing parameters are copied for 1 part, and the obtained 1 copying parameters are inserted into the last 1 drawing parameters, so that the drawing parameters with the original 2 times are obtained; if the number of the animation transition frames is 3 times of the number of the determined drawing parameters, interpolation processing is carried out on each pair of adjacent drawing parameters to obtain 2 interpolation parameters, the obtained 2 interpolation parameters are inserted between the adjacent drawing parameters, the last 1 drawing parameters are copied for 2 copies, the obtained 2 copying parameters are inserted into the last 1 drawing parameters to obtain drawing parameters of which the number is 3 times, and the rest times are the same and are not repeated here.
And under the condition that the number of the animation transition frames is not an integer multiple of the number of the determined drawing parameters, selecting adjacent drawing parameters to be interpolated from each pair of adjacent drawing parameters, and inserting the interpolation parameters obtained by interpolation between the selected adjacent drawing parameters to obtain the completed drawing parameters.
In this case, the number of interpolation parameters obtained by interpolation is equal to the difference between the number of animation transition frames and the number of drawing parameters, so that after the parameters obtained by interpolation are inserted between the selected adjacent drawing parameters, the number of drawing parameters after complementation can be equal to the number of animation transition frames.
It should be noted that, the embodiment of the present application does not limit the manner of selecting the adjacent drawing parameters to be interpolated from the adjacent drawing parameters, and only needs to ensure that the number of parameters obtained after interpolation is equal to the difference between the number of animation transition frames and the number of drawing parameters.
For example, if the number of animation transition frames is 50 and the number of determined drawing parameters is 30, 20 pairs of adjacent drawing parameters can be selected from the determined 30 adjacent drawing parameters, each selected adjacent drawing parameter is interpolated to obtain 1 interpolation parameter, so that 20 interpolation parameters can be obtained in total, and after the interpolation parameters are inserted between the selected adjacent drawing parameters, the number of the drawing parameters after the completion is 50 and is equal to the number of the animation transition frames.
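A sketch of the interpolation-based completion of the second embodiment: midpoints are inserted between the first `frame_count - len(params)` adjacent pairs, which matches the 50-frame/30-parameter example above. The front-to-back selection of pairs is an assumption, since the embodiment leaves the selection manner open:

```python
def complete_by_interpolation(params: list[float], frame_count: int) -> list[float]:
    """Complete `params` to `frame_count` entries by inserting the midpoint
    between adjacent pairs, from the front, until enough parameters exist.
    Assumes frame_count - len(params) does not exceed the number of pairs."""
    need = frame_count - len(params)           # how many interpolated parameters
    out: list[float] = []
    for i, p in enumerate(params):
        out.append(p)
        if need > 0 and i + 1 < len(params):
            out.append((p + params[i + 1]) / 2)  # interpolated parameter
            need -= 1
    return out

params = [20 * k for k in range(1, 31)]        # 30 drawing parameters
completed = complete_by_interpolation(params, 50)
print(len(completed))                          # 50, equal to the frame count
```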
Step S3: and according to the frame refreshing rate, adjusting the drawing parameters of the determined adjusting points into the target drawing parameters corresponding to each animation transition frame by frame, and drawing each animation transition frame.
Each of the above animation transition frames may be referred to as a frame of a fade animation, and the adjustment point may be referred to as a point object.
Specifically, the data of the fade animation may be attached to the point object in the form of a member variable, where the data of the fade animation includes: the number of animation transition frames, the current frame number, the current drawing parameter and the expected drawing parameter. On each frame refresh, all point objects that have not finished drawing are traversed, the current frame number in the animation data is incremented by one, the current drawing parameter is calculated, and an output event containing the current drawing parameter is then output.
Therefore, the target drawing parameters of each animation transition frame can be determined according to the drawing frequency, the drawing parameters of the determined adjusting points are adjusted to the target drawing parameters corresponding to each animation transition frame by frame according to the frame refreshing rate, and each animation transition frame is drawn, so that when each animation transition frame is drawn, the gradual change animation effect when the adjusting points are drawn can be realized, the process of adjusting the drawn characters through the drawing adjusting points can be more smoothly transited and is not abrupt, and the user experience is improved.
In addition, after the life cycle of the point object and the number of animation transition frames are set, the parameters of the point object can change freely and smoothly after drawing through the drawing of the animation transition frames, i.e. the drawn text can be adjusted more smoothly and efficiently.
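A possible shape for attaching the fade-animation data to a point object as a member variable and advancing it on each frame refresh; all class and field names are assumed, and a linear progression toward the expected drawing parameter is assumed for simplicity:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class FadeAnimation:
    total_frames: int        # number of animation transition frames
    expected_param: float    # expected drawing parameter
    current_frame: int = 0   # current frame number
    current_param: float = 0.0

@dataclass
class PointObject:
    x: float
    y: float
    anim: Optional[FadeAnimation] = None  # fade-animation data as a member variable

def on_frame_refresh(points: list) -> list:
    """Traverse the point objects whose animation has not finished, increment the
    current frame number, recompute the current drawing parameter, and collect
    an output event containing that parameter."""
    events = []
    for p in points:
        a = p.anim
        if a is None or a.current_frame >= a.total_frames:
            continue  # skip points whose fade animation is done
        a.current_frame += 1
        a.current_param = a.expected_param * a.current_frame / a.total_frames
        events.append((p, a.current_param))
    return events
```

With `total_frames=30` and `expected_param=600.0`, the first refresh yields a current parameter of 20, matching the 20-per-step examples earlier.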
In one embodiment of the present application, before the step S204, the method may further include the following steps:
Smoothing the sub-strokes formed by the second stroke points.
The sub-strokes are as follows: the sub-strokes included in the overall text stroke formed by the second stroke point, i.e., the individual stroke structures used to make up a complete text stroke.
For example, the complete stroke of a horizontal-fold includes 2 stroke structures, a horizontal and a vertical, which are the sub-strokes of the horizontal-fold; as another example, the complete stroke of a vertical hook includes 3 stroke structures, a vertical, a horizontal and a hook, which are the sub-strokes of the vertical hook.
The manner of determining the sub-strokes formed by the second stroke point is detailed in the following step S810 in fig. 8, which is not described in detail here.
Specifically, the width of each stroke part can be adjusted by adopting modes such as average value smoothing and the like, so that the width change of each stroke part of each sub-stroke is smoother, and the sub-strokes are integrally reduced in jitter and smoother.
Therefore, before the integral stroke type of the character stroke formed by the second stroke point is identified, the sub-strokes formed by the second stroke point are subjected to smoothing processing, so that the width change of each stroke part of the sub-strokes after the smoothing processing is smoother, the shake of the sub-strokes is reduced, the sub-strokes are smoother, and the characters obtained after the processing are in line with the expectations of users.
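A simple mean-value width smoother of the kind described above might look like this; the window size and function name are assumptions, and widths are treated as a plain list, one value per stroke point along the sub-stroke:

```python
def smooth_widths(widths: list[float], window: int = 3) -> list[float]:
    """Mean-value smoothing: replace each stroke point's width with the average
    of the widths inside a centered window, shrinking the window at the ends,
    so the width change along the sub-stroke is less jittery."""
    half = window // 2
    smoothed = []
    for i in range(len(widths)):
        lo, hi = max(0, i - half), min(len(widths), i + half + 1)
        smoothed.append(sum(widths[lo:hi]) / (hi - lo))
    return smoothed

print(smooth_widths([10, 30, 10]))  # [20.0, 16.666..., 20.0] — jitter reduced
```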
The following describes a specific text adjustment procedure provided by the present application.
Referring to fig. 7, a schematic diagram of a text adjustment process according to an embodiment of the present application is provided.
The following describes the text drawing process in steps.
First, touch events of an operating system callback are received.
The above-mentioned touch event is also referred to as a screen touch event generated in the case that a new pen-lifting action is not recognized.
After the user opens the text drawing application program, at least one of a finger and a stylus can be used to write text on the screen; the writing is perceived by the operating system as a series of touch events, which are called back to the process executing the application program.
In a second step, touch events can be converted to algorithmic input events.
The touch event may include a variety of data, and the required data is extracted so that the extracted data may be converted into an input event of the algorithm.
The required data may be at least one of a position of the touch point and an instantaneous speed.
And thirdly, converting the input event into skeleton points, and generating filling points among the skeleton points.
The skeleton points, i.e., the skeleton stroke points described above, and the fill points, i.e., the fill stroke points described above, may be collectively referred to as point objects.
The above-mentioned manner of determining the positions of the skeleton points and the filling points is described in the above-mentioned step S202, and is not described here again.
And fourthly, performing width smoothing treatment on the lines formed by the skeleton points and the filling points.
Specifically, all the generated point objects can be input into a smoothing processing module of a data smoothing processor, the data smoothing processor groups the point objects according to the sub-strokes to which the point objects belong, smoothing is performed on object parameters of each group of point objects by adopting methods such as average value smoothing and the like to obtain updated object parameters of each group of point objects, and then the updated object parameters of each group of point objects are returned through an interface, so that each point object can be adjusted according to the updated object parameters, and the width smoothing effect of the line is realized.
The object parameter may be at least one of a size, a position, a transparency, and a rotation angle of the point object.
Fifth, recognizing strokes through vector calculation.
And sixthly, adjusting the line shape according to the recognized stroke type.
In the two steps, recognizing strokes through vector calculation, namely recognizing the type of the whole stroke according to the direction distribution information of the vector direction corresponding to the drawn stroke point; the specific implementation manner of adjusting the line shape according to the recognized stroke type, that is, determining the adjustment point according to the overall stroke type and drawing the determined adjustment point is described in the foregoing steps S204-S205, which are not repeated here.
In addition, it should be noted that the stroke adjustment in the third, fourth and sixth steps may be performed in real time, that is, the calculation result is drawn in real time, so that, as the stroke points are drawn, the stroke types of the drawn strokes formed by the drawn stroke points may be continuously determined in real time, and the drawn strokes may be adjusted according to the determined stroke types.
Based on the embodiment shown in fig. 2, when the overall stroke type of the text stroke formed by the second stroke point is identified based on the vector direction corresponding to the first stroke point and the vector direction corresponding to the third stroke point, it can be judged whether the vector direction corresponding to each first stroke point is located in the direction judgment range of the target stroke type, different direction objects are adopted to save the vector directions according to the judgment result, and finally the overall stroke type of the text stroke formed by the second stroke point is identified according to the distribution of the vector directions recorded in the direction objects. In view of the above, the embodiment of the application provides a second text drawing method.
Referring to fig. 8, a flow chart of a second text drawing method according to an embodiment of the present application is shown, where the method includes the following steps S801 to S810.
Similar to the solution provided in the embodiment shown in fig. 2, after step S801 is executed to obtain each screen touch event generated while no new pen-lifting action is recognized and the screen touch information is obtained, the stroke drawing, stroke type recognition and stroke adjustment processes shown in steps S802-S810 are all executed. That is, stroke drawing, stroke recognition and stroke adjustment are performed in real time for each screen touch event triggered before the user lifts the pen; in other words, strokes are recognized and adjusted while they are being drawn.
Step S801: and responding to the screen touch event generated under the condition that a new pen lifting action is not recognized every time, and obtaining the screen touch information corresponding to the current event.
For the screen touch information corresponding to each screen touch event in step S801, steps S802 to S810 are executed.
Step S802: based on the screen touch information, a location of the first stroke point is determined.
Step S803: the first stroke point is drawn based on the determined position.
The implementation of the steps S801 to S803 is described in the steps S201 to S203 in the embodiment shown in fig. 2, and will not be described here again.
Step S804: if a direction object has been created, judging whether the vector direction corresponding to each first stroke point is located in the direction determination range of the target stroke type.
The meaning of the direction determination range of the stroke type is already described in step S204 in the embodiment shown in fig. 2, and will not be described here again.
The above-described direction object will be described first.
The direction object is a data structure, and is used for recording a vector direction corresponding to a relatively independent simple stroke type, namely: vector directions within a direction determination range corresponding to the stroke type.
The relatively independent simple stroke types can be understood as the respective stroke structures included in a complex complete stroke, that is, as the respective sub-strokes included in a complex complete stroke.
In one case, different directional objects may be understood as being used to record the vector directions corresponding to the different sub-strokes.
As is apparent from the foregoing description, the first stroke point and the third stroke point are both relative concepts: each third stroke point was itself previously determined as a first stroke point, and its corresponding vector direction was obtained at that time. In other words, the vector direction corresponding to the third stroke point has already been obtained and stored in a created direction object; that is, the created direction objects record the vector directions corresponding to the third stroke points.
The target stroke type is described below.
The target stroke types are: the stroke type corresponding to the created latest first direction object.
That is, the stroke type corresponding to the vector direction recorded by the created most recent first direction object.
For example, in the process of writing a horizontal-fold stroke, the user writes 2 sub-strokes, a horizontal and a vertical, in succession, so the application program draws the 2 sub-strokes. The vector directions corresponding to the stroke points of the horizontal part are recorded in the first created direction object, and the vector directions corresponding to the stroke points of the vertical part are recorded in the second created direction object; the second created direction object is the most recently created first direction object, so the vertical stroke type corresponding to this first direction object is the target stroke type.
Step S805: whether the number of the target stroke points is greater than the preset number is determined, if yes, step S806 is performed, otherwise step S807 is performed.
Wherein, the target stroke points are: first stroke points whose corresponding vector directions are not within the direction determination range.
Step S806: and creating a second direction object, and recording the vector direction corresponding to the first stroke point to the second direction object.
If the number of target stroke points is greater than the preset number, that is, many of the first stroke points do not fall within the direction determination range corresponding to the last text stroke drawn, this indicates that the text stroke formed by the first stroke points does not belong to the stroke type corresponding to the first direction object. Since each direction object is used for storing one type of stroke, step S806 may be executed: a second direction object is newly created, and the vector directions corresponding to the first stroke points are recorded to the second direction object.
For example, the last text stroke drawn is horizontal, and the latest created first direction object is a direction object recording the stroke points corresponding to the horizontal stroke type, that is, the stroke type corresponding to the first direction object is horizontal. First stroke points are then drawn according to newly generated screen touch events, and the number of target stroke points whose corresponding vector directions are not within the direction determination range of the horizontal stroke type is determined to be greater than the preset number. It can therefore be determined that the text stroke formed by the first stroke points does not belong to the horizontal stroke type corresponding to the first direction object, so a second direction object may be newly created, and the vector directions corresponding to the first stroke points recorded to the second direction object.
Step S807: and recording the vector direction corresponding to the first stroke point to the first direction object.
If the number of target stroke points is not greater than the preset number, that is, few of the first stroke points fall outside the direction determination range corresponding to the last text stroke drawn, this indicates that the text stroke formed by the first stroke points belongs to the stroke type corresponding to the first direction object. Since each direction object is used for storing one type of stroke, step S807 may be executed: the vector directions corresponding to the first stroke points are recorded to the first direction object.
For example, if the stroke type corresponding to the first direction object is horizontal, first stroke points are drawn according to newly generated screen touch events, and the number of target stroke points whose corresponding vector directions are not within the direction determination range of the horizontal stroke type is determined to be not greater than the preset number. It can therefore be determined that the text stroke formed by the first stroke points still belongs to the horizontal stroke type corresponding to the first direction object, so the vector directions corresponding to the first stroke points can be recorded directly to the first direction object.
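Steps S805-S807 can be sketched as follows. This is a hedged illustration: the threshold value, the dict-based representation of a direction object, and the `range_for` helper for choosing a new object's direction determination range are all assumptions, not details from the embodiment.

```python
PRESET_NUMBER = 2  # assumed preset number of target stroke points

def record_directions(first_point_dirs, direction_objects, range_for):
    # first_point_dirs: vector directions (angles) of the newly drawn first
    # stroke points; direction_objects: list of {"range": (lo, hi),
    # "dirs": [...]}, ordered by creation; range_for: hypothetical helper
    # that picks a direction determination range for a new sub-stroke.
    latest = direction_objects[-1]           # latest first direction object
    lo, hi = latest["range"]
    # Step S805: count target stroke points outside the determination range.
    target_count = sum(1 for a in first_point_dirs if not (lo <= a <= hi))
    if target_count > PRESET_NUMBER:
        # Step S806: a new sub-stroke has begun -> second direction object.
        direction_objects.append({"range": range_for(first_point_dirs[0]),
                                  "dirs": list(first_point_dirs)})
    else:
        # Step S807: same sub-stroke -> record to the first direction object.
        latest["dirs"].extend(first_point_dirs)
    return direction_objects
```

Feeding mostly-horizontal angles extends the existing object, while a batch of near-vertical angles exceeds the threshold and triggers creation of a second direction object.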
Step S808: and counting the distribution of vector directions recorded in the newly created direction object to obtain direction distribution information corresponding to the newly created direction object.
The statistical manner of the vector direction distribution and the meaning of the direction distribution information are already described in step S204 in the embodiment shown in fig. 2, and are not described here again.
Step S809: and identifying the integral stroke type of the character stroke formed by the second stroke point based on the direction distribution information corresponding to the target direction object.
Wherein the target direction objects comprise: all the direction objects created after the previous pen-lifting action, if there is a previous pen-lifting action, or all the direction objects created, if there is no previous pen-lifting action.
It can be seen that the target direction object is substantially all direction objects for recording the second stroke point, so that the direction distribution information corresponding to the target direction object reflects the direction distribution information corresponding to the second stroke point, and further, the overall stroke type of the text stroke formed by the second stroke point can be identified according to the direction distribution information.
Specifically, the stroke type corresponding to the latest created direction object can be determined based on the direction distribution information corresponding to the latest created direction object, and the whole stroke type of the text stroke formed by the second stroke point is identified according to the creation sequence of each target direction object and the stroke type corresponding to each target direction object.
Wherein, the stroke type that the direction object corresponds to is: the stroke points corresponding to the vector direction recorded in the direction object form the type of stroke.
The manner of determining the stroke type corresponding to the newly created direction object according to the direction distribution information may be obtained based on the manner of determining the stroke type according to the direction distribution information described in the foregoing step S204, which is not described herein.
Also, as can be seen from the foregoing description, the first stroke point and the third stroke point are both relative concepts: the third stroke point was at one time determined to be the first stroke point, the vector direction corresponding to it could be obtained at that time and stored in the created target direction object, and the stroke type corresponding to that target direction object could likewise be determined at that time.
Specifically, based on the creation sequence of each target direction object, the drawing sequence of the strokes corresponding to each target direction object can be determined, and further, based on the drawing sequence and the mapping relation between the pre-stored stroke order and the stroke types, the whole stroke types of the text strokes formed by the second stroke points can be identified.
For example, the creation order of 2 target direction objects O1 and O2 is O1 → O2, and the stroke types of the strokes corresponding to the 2 target direction objects are horizontal and vertical respectively, that is, the drawing order of the strokes corresponding to the target direction objects is horizontal → vertical. Based on the above correspondence, it can then be determined that the overall stroke type corresponding to the stroke order horizontal → vertical is a horizontal fold.
For another example, the creation order of 3 target direction objects O1-O3 is O1 → O2 → O3, and the stroke types of the strokes corresponding to the 3 target direction objects are vertical, horizontal and hook respectively, that is, the drawing order of the strokes corresponding to the target direction objects is vertical → horizontal → hook. Based on the above correspondence, it can then be determined that the overall stroke type corresponding to the stroke order vertical → horizontal → hook is a vertical fold hook.
Therefore, the stroke types corresponding to the direction objects represent the drawn sub-stroke types, and the creation sequence of the direction objects represents the drawing sequence of the drawn strokes, so that the whole stroke types of the character strokes formed by the second stroke point can be conveniently and accurately determined based on the stroke types and the drawing sequence.
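The lookup described above can be sketched as a pre-stored mapping from drawing order to overall stroke type. Only the orders appearing in the examples are filled in, and the English type names are stand-ins; a real mapping would contain many more entries.

```python
# Hypothetical pre-stored mapping from stroke drawing order (the creation
# order of the target direction objects) to an overall stroke type.
STROKE_ORDER_TO_TYPE = {
    ("horizontal",): "horizontal",
    ("vertical",): "vertical",
    ("horizontal", "vertical"): "horizontal fold",
}

def overall_stroke_type(target_direction_objects):
    # Each target direction object carries the stroke type of the sub-stroke
    # it records; the objects are ordered by creation (= drawing) order.
    order = tuple(obj["stroke_type"] for obj in target_direction_objects)
    return STROKE_ORDER_TO_TYPE.get(order, "unknown")
```

Because tuples preserve order, horizontal → vertical and vertical → horizontal resolve to different keys, which is exactly what distinguishes compound stroke types.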
Step S810: adjustment points are determined based on the target font style, the overall stroke type, and the second stroke points, and the determined adjustment points are drawn.
The implementation of this step may be the same as that of step S205 in the embodiment shown in fig. 2, and will not be described here again.
In one case, if the overall stroke type indicates that the text stroke contains a plurality of sub-strokes, sub-stroke demarcation points among the second stroke points can be determined based on the stroke points corresponding to the vector directions recorded in the target direction objects; the second stroke points are divided, based on the determined demarcation points, into stroke points belonging to different sub-strokes; and the adjustment points of each sub-stroke are determined based on the target font style, the sub-stroke type of that sub-stroke, and the stroke points of that sub-stroke.
Wherein, the stroke type of the sub-stroke is determined based on the vector direction corresponding to the stroke point of the sub-stroke.
The overall stroke type characterizes that the text stroke contains a plurality of sub-strokes, i.e., the overall stroke type is a complex stroke type.
Specifically, the complex stroke type may be preset by a worker, and may be, for example, at least one of a horizontal fold, a horizontal fold hook, and a vertical fold hook.
The sub-stroke demarcation point can be understood as the demarcation point between adjacent sub-strokes included in the whole stroke, and can be determined according to the rear end position of the forward adjacent sub-stroke or the front end position of the backward adjacent sub-stroke.
For example, if the overall stroke is a horizontal fold, comprising 2 adjacent sub-strokes, horizontal and vertical, the sub-stroke demarcation point may be determined according to the stroke points contained in the rear end position of the horizontal sub-stroke, or according to the stroke points contained in the front end position of the vertical sub-stroke.
The manner in which the above-described sub-stroke demarcation points are determined is described below.
For convenience of description, the vector directions of the stroke points obtained according to a given screen touch event will be referred to as the vector directions of that screen touch event.
In the first embodiment, for 2 target direction objects adjacent to each other in the creation order, a first vector direction corresponding to the last screen touch event in the vector directions recorded in the forward adjacent direction objects may be determined, and the sub-stroke demarcation point may be determined according to the position of the stroke point corresponding to the first vector direction.
Wherein, the first vector direction is: the vector direction of a target stroke point drawn according to the last screen touch event, that is, among the vector directions of the last screen touch event, a vector direction that is not within the direction determination range of the stroke type corresponding to the forward adjacent direction object.
For example, an average of the determined positions of the stroke points may be determined as the positions of the sub-stroke demarcation points described above.
As can be seen from the foregoing embodiments, for the vector directions of the stroke points drawn according to a single screen touch event, a new second direction object is created only when the number of target stroke points contained therein is greater than the preset number, in which case the stroke points drawn that time are all recorded to the second direction object; when the number of target stroke points contained therein is not greater than the preset number, the stroke points drawn that time are still recorded to the existing first direction object. In the latter case, the first direction object will include target stroke points among the drawn stroke points that are not within the direction determination range corresponding to the first direction object, and these are likely to be the sub-stroke demarcation points between the two sub-strokes.
For example, the stroke type corresponding to the forward adjacent direction object is horizontal, and the corresponding direction determination range is the direction determination range of the horizontal stroke type. If a screen touch event reflects that the user finishes drawing the horizontal part and begins drawing the vertical part, the stroke points corresponding to the beginning of the vertical part can reflect the sub-stroke demarcation point. Among the stroke points corresponding to this screen touch event, the target stroke points not within the horizontal direction determination range are the stroke points corresponding to the beginning of the vertical part; their number is small, not greater than the preset number, so the vector directions of the stroke points corresponding to this screen touch event are all recorded to the forward adjacent direction object. In this case, the first vector direction in the forward adjacent direction object is the vector direction of the target stroke point corresponding to the beginning of the vertical part, that is, the vector direction corresponding to the sub-stroke demarcation point. Accordingly, the sub-stroke demarcation point may be determined based on the position of the stroke point corresponding to the first vector direction.
It can be seen that this case corresponds to determining the sub-stroke demarcation point based on the rear end position of the forward adjacent sub-stroke.
In a second embodiment, for 2 target direction objects adjacent to each other in the creation order, a second vector direction corresponding to the first screen touch event in the vector directions recorded by the backward adjacent direction objects may be determined, and the sub-stroke demarcation point may be determined according to the position of the stroke point corresponding to the second vector direction.
Wherein, the second vector direction is: the vector direction of a non-target stroke point drawn according to the first screen touch event, that is, among the vector directions of the first screen touch event, a vector direction that is within the direction determination range of the stroke type corresponding to the forward adjacent direction object.
For example, an average of the determined positions of the stroke points may be determined as the positions of the sub-stroke demarcation points described above.
This embodiment is similar to the previous embodiments and is specifically described below by way of example.
For example, the stroke type corresponding to the forward adjacent direction object is horizontal, and the corresponding direction determination range is the direction determination range of the horizontal stroke type. If a screen touch event reflects that the user finishes drawing the end of the horizontal part and then draws the vertical part, the stroke points corresponding to the end of the horizontal part can also reflect the sub-stroke demarcation point. Among the stroke points corresponding to this screen touch event, the target stroke points not within the horizontal direction determination range are the stroke points corresponding to the vertical part; their number is large, greater than the preset number, so the vector directions of the stroke points corresponding to this screen touch event are all recorded to a newly created direction object, that is, the backward adjacent direction object. In this case, the second vector direction in the backward adjacent direction object is the vector direction corresponding to the stroke point at the end of the horizontal part, that is, the vector direction corresponding to the sub-stroke demarcation point. Accordingly, the sub-stroke demarcation point may be determined based on the position of the stroke point corresponding to the second vector direction.
It can be seen that this case corresponds to determining the sub-stroke demarcation point based on the front end position of the backward adjacent sub-stroke.
In a third embodiment, the first and second embodiments may be combined, and the sub-stroke demarcation point may be determined based on a position of a stroke point corresponding to a first vector direction in the forward adjacent direction object and a position of a stroke point corresponding to a second vector direction in the backward adjacent direction object. The specific embodiments may be obtained by combining the first embodiment and the second embodiment, and will not be described herein.
For example, an average of the positions of the stroke points corresponding to the first vector direction and the second vector direction is used as the position of the sub-stroke demarcation point.
In this case, determining the sub-stroke demarcation point is equivalent to determining it based on the rear end position of the forward adjacent sub-stroke and the front end position of the backward adjacent sub-stroke.
In the fourth embodiment, since the vector directions corresponding to the respective stroke points can in some cases be stored in the target direction objects in drawing order, the order of the vector directions recorded in a target direction object reflects the drawing order of the stroke points. On this basis, for 2 target direction objects adjacent in the creation order, the stroke point corresponding to the last vector direction recorded in the forward target direction object corresponds to the last stroke point of the earlier-written of the adjacent sub-strokes, and can be used as the sub-stroke demarcation point; the stroke point corresponding to the first vector direction recorded in the backward target direction object corresponds to the first stroke point of the later-written sub-stroke, and can also be used as the sub-stroke demarcation point.
In this case, it is equivalent to determining the last stroke point of the forward adjacent stroke or the first stroke point of the backward adjacent stroke as the sub-stroke demarcation point.
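As a small sketch of the third embodiment's averaging, the demarcation point can be computed as the mean of the positions of the stroke points corresponding to the first vector direction (rear end of the forward sub-stroke) and the second vector direction (front end of the backward sub-stroke). The function name and coordinates are invented for illustration.

```python
def demarcation_point(forward_tail_points, backward_head_points):
    # forward_tail_points: positions of stroke points matching the first
    # vector direction in the forward adjacent direction object;
    # backward_head_points: positions matching the second vector direction
    # in the backward adjacent direction object.
    pts = list(forward_tail_points) + list(backward_head_points)
    n = len(pts)
    return (sum(p[0] for p in pts) / n, sum(p[1] for p in pts) / n)
```

Passing only one of the two lists reduces this to the first or second embodiment (rear-end-only or front-end-only determination), respectively.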
Thus, through the above embodiments, the sub-stroke demarcation point can be determined, and the second stroke points can then be divided, according to the demarcation point and the drawing order of the second stroke points, into stroke points belonging to different sub-strokes.
The manner of determining the adjustment point of each sub-stroke based on the target font style, the sub-stroke type of each sub-stroke, and the stroke point of each sub-stroke is the same as that of determining the adjustment point in the embodiment shown in fig. 2, and the difference is only that the drawn stroke is replaced by a sub-stroke, which is not described herein.
Therefore, for complex stroke types comprising a plurality of sub-strokes, the demarcation points of the stroke can be quickly determined according to the vector directions recorded in the target direction objects, the second stroke points divided into sub-strokes according to the demarcation points, adjustment points determined separately for each sub-stroke, and the sub-strokes adjusted by drawing the adjustment points. Complex strokes are thereby treated differently from simple ones, enabling finer-grained adjustment of complex strokes and improving adjustment efficiency and effect.
If a direction object has been created, it may be determined whether the vector direction corresponding to each first stroke point is within the direction determination range of the target stroke type, and steps S805-S807 may be performed, that is, determining whether to record the vector direction corresponding to the first stroke point to the created first direction object or to record the vector direction corresponding to the first stroke point to the newly created second direction object according to the number of target stroke points.
Then, if no direction object has been created, since there is no direction object for saving the vector direction at this time, step S806 may be directly performed, that is, creating a second direction object, and recording the vector direction corresponding to the first stroke point to the second direction object.
As can be seen from the above, in this embodiment, determining whether the number of first stroke points whose corresponding vector directions are not within the direction determination range is greater than the preset number is equivalent to determining whether the text stroke formed by the first stroke points is of the same type as the last text stroke drawn. If not greater, the text stroke formed by the first stroke points is the same as the last text stroke drawn, so the vector directions corresponding to the first stroke points can be recorded to the latest created first direction object; if greater, the text stroke formed by the first stroke points differs from the last text stroke drawn, so a second direction object can be newly created and the vector directions corresponding to the first stroke points recorded to it. In this way, the vector directions corresponding to different sub-strokes are clearly stored in different direction objects, and the overall stroke type of the text stroke formed by the second stroke points can be determined quickly and accurately according to the distribution of the vector directions recorded in each direction object.
The text drawing method provided by the embodiment of the present application will be described in detail below by way of specific examples shown in stages 1 to 19.
Stage 1: the user opens the application, the finger or stylus first touches the screen, writing in the writing area is started, the electronic device detects the screen touch event E1, and determines that E1 is the screen touch event generated without recognizing a new pen-lifting action.
Stage 2: the electronic device draws a stroke point P1 according to the screen touch information corresponding to the screen touch event E1, where P1 is referred to as a first stroke point, and P1 is also all drawn stroke points, i.e., P1 is also a second stroke point.
Stage 3: the electronic device judges that the direction object is not created, creates the direction object O1, and records the vector direction corresponding to the first stroke point P1 to the direction object O1.
Stage 4: the electronic device determines that the stroke type corresponding to O1 is horizontal based on the direction distribution information corresponding to the newly created direction object O1.
Stage 5: the electronic equipment judges that the target direction object is: all the direction objects that have been created, i.e. direction object O1.
Stage 6: the electronic equipment recognizes the overall stroke type of the text stroke formed by the second stroke point P1 as a horizontal stroke according to the creation sequence of the target direction object O1 and the stroke type corresponding to O1, and adjusts the stroke according to the determined overall stroke type.
Stage 7: since the user is not lifting the pen, but continues to write, the electronic device detects a screen touch event E2 and determines that E2 is a screen touch event that occurs without recognizing a new lifting action.
Stage 8: the electronic device draws a stroke point P2 according to the screen touch information corresponding to the screen touch event E2, where P2 is referred to as a first stroke point, P1 drawn previously is referred to as a third stroke point, and p1+p2 is all the drawn stroke points, that is, p1+p2 is a second stroke point.
Stage 9: the electronic device judges that a direction object has been created and determines that direction object O1 is the latest created first direction object, whose direction determination range is that of the horizontal stroke type. It judges that, among the vector directions corresponding to P2, the number of target stroke points not within the horizontal direction determination range is not greater than the preset number, so it determines that the stroke formed by the first stroke point P2 still belongs to the horizontal type, and records the vector direction corresponding to P2 to the first direction object O1.
Stage 10: the electronic device determines that the stroke type corresponding to O1 is horizontal based on the direction distribution information corresponding to the newly created direction object O1.
Stage 11: the electronic equipment judges that the target direction object is: all the direction objects that have been created, i.e. direction object O1.
Stage 12: the electronic equipment recognizes that the whole stroke type of the text stroke formed by the second stroke point P1+P2 is still horizontal according to the creation sequence of the target direction object O1 and the stroke type corresponding to O1, and adjusts the stroke according to the determined whole stroke type.
Stage 13: since the user is not lifting the pen, but continues to write, the electronic device detects a screen touch event E3, and determines that E3 is a screen touch event generated if a new lifting action is not recognized.
Stage 14: the electronic device draws a stroke point P3 according to the screen touch information corresponding to the screen touch event E3, where P3 is referred to as a first stroke point, P1 and P2 drawn previously are referred to as third stroke points, and p1+p2+p3 is all drawn stroke points, that is, p1+p2+p3 is a second stroke point.
Stage 15: the electronic device judges that a direction object has been created and determines that direction object O1 is the latest created first direction object, whose direction determination range is that of the horizontal stroke type. It judges that the number of target stroke points in the first stroke point P3 not within the horizontal direction determination range is greater than the preset number, so it determines that the stroke formed by the first stroke point P3 does not belong to the horizontal type; it therefore newly creates a second direction object O2 and records the vector direction corresponding to the first stroke point P3 to O2.
Stage 16: the electronic device determines that the stroke type corresponding to O2 is vertical based on the direction distribution information corresponding to the newly created direction object O2.
Stage 17: the electronic equipment judges that the target direction object is: all the direction objects created, namely, the direction object O1 and the direction object O2.
Stage 18: the electronic device recognizes the overall stroke type of the text stroke formed by the second stroke points P1+P2+P3 as a horizontal fold according to the creation order of the target direction objects O1 and O2 and the stroke types corresponding to O1 and O2, and adjusts the stroke according to the determined overall stroke type.
Stage 19: and (5) lifting the pen by the user, wherein the electronic equipment does not detect a screen touch event within a preset time interval, and determining that the stroke adjustment is completed, and ending the process.
Corresponding to the character drawing method, the embodiment of the application also provides a character drawing device.
Referring to fig. 9, a schematic structural diagram of a text drawing device according to an embodiment of the present application is provided, where the device includes the following modules:
The touch information obtaining module 901 is configured to, in response to each screen touch event generated under the condition that a new pen-lifting action is not recognized, obtain the screen touch information corresponding to the current event and trigger the following modules;
A stroke point position determining module 902, configured to determine a position of a first stroke point based on the screen touch information;
a stroke point drawing module 903 for drawing a first stroke point based on the determined position;
The stroke type recognition module 904 is configured to recognize the overall stroke type of the text stroke formed by the second stroke points based on the relative positional relationship between adjacent stroke points, wherein the second stroke points include: all the stroke points drawn after the previous pen-lifting action, if there is a previous pen-lifting action, or all the stroke points drawn, if there is no previous pen-lifting action;
The adjustment point determining module 905 is configured to determine adjustment points based on the target font style, the overall stroke type, and the second stroke points, and draw the determined adjustment points.
From the above, by applying the text drawing scheme provided by the embodiment of the application, in response to each screen touch event generated under the condition that no new pen-lifting action is recognized, the screen touch information corresponding to the current event can be obtained and the following steps executed: the position of a first stroke point is determined based on the screen touch information, and the first stroke point is drawn based on the determined position; the overall stroke type of the text stroke formed by the second stroke points is then identified based on the relative positional relationship between adjacent stroke points; adjustment points are further determined based on the target font style, the overall stroke type, and the second stroke points, and the determined adjustment points are drawn, thereby realizing adjustment of the drawn text by drawing the adjustment points.
In the application scenario of the embodiment of the application, the touch action is actually a writing action of the user, so that the screen touch information is used as the description information of the touch action, and actually reflects the writing track of the user on the screen. Therefore, after the stroke points are drawn according to the screen touch information, the drawn characters formed by the stroke points, namely the characters drawn based on the writing track of the user, can retain the original font style of the user to a greater extent. On the basis, the drawn text is adjusted by drawing the adjusting points based on the whole stroke type of the drawn stroke, and the adjusting process is a process of fine adjustment and standardization of the drawn stroke by combining the characteristics of the stroke type of the drawn stroke based on the original font style of the user.
In a comprehensive view, the adjusted drawn characters not only keep the original font style of the user, but also combine the characteristics of the stroke types of the drawn strokes to perform fine adjustment and standardization processing on the drawn strokes, so that the situations that the drawn characters of the application program appear in a shaking manner due to the fact that the user is not used to write the characters in the drawing area in the application program can be reduced, and the application program can draw the characters with the expected effect of the user.
In addition, the target font style is also considered when determining the above-mentioned adjustment points, so that the drawn text is adjusted by drawing the adjustment points, that is, the font style of the drawn text is brought toward the target font style based on the original font style of the user himself. Because the adjusted characters are close to the target style, the user can write the characters with the target style without extra exercise, and the user experience is improved.
Moreover, in the embodiments of this application, stroke point drawing, stroke recognition, and stroke adjustment are performed for every screen touch event generated while no new pen-lift action has been recognized. Stroke points can therefore be drawn in real time according to the user's writing actions before the pen is lifted, the overall stroke type of the stroke formed by the drawn second stroke points is continuously determined in real time, and the stroke formed by the second stroke points is adjusted as a whole in real time according to that type. This achieves the effect of adjusting while drawing: the stroke is adjusted gradually, multiple times, before the user lifts the pen, so the adjusted result is already presented when the pen is lifted, rather than performing stroke type recognition and adjustment once after the pen-lift. This prevents the user from perceiving an abrupt change in the stroke after lifting the pen, reduces the abruptness of stroke adjustment, and improves the user experience.
In one embodiment of the present application, the stroke type recognition module 904 includes:
A stroke type recognition sub-module, configured to identify the overall stroke type of the text stroke formed by the second stroke points based on the vector directions corresponding to the first stroke points and the vector directions corresponding to third stroke points, where the third stroke points are the stroke points in the second stroke points other than the first stroke points, and the vector direction corresponding to a stroke point is the direction of the vector from the stroke point's preceding adjacent stroke point to the stroke point itself.
Because the vector direction corresponding to a stroke point reflects the writing direction at that point, the vector directions corresponding to the first stroke points and to the third stroke points together reflect the writing direction of the second stroke points. Determining the overall stroke type from these vector directions is therefore equivalent to determining it from the writing direction of the second stroke points, which improves the accuracy of the determined overall stroke type.
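As an illustration, the vector direction corresponding to a stroke point can be computed as the angle of the vector from its preceding adjacent stroke point to the point itself. The following Python sketch is only one plausible reading of the embodiment; the function name and the degree-based angle convention are assumptions, not values from the patent:

```python
import math

def vector_directions(points):
    """For each stroke point after the first, compute the direction
    (degrees, 0 = +x axis) of the vector from its preceding adjacent
    stroke point to the point itself."""
    directions = []
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        directions.append(math.degrees(math.atan2(y1 - y0, x1 - x0)))
    return directions
```

For a stroke drawn left to right, every direction is roughly 0 degrees, which a direction judgment range for a horizontal stroke type would accept.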
In one embodiment of the present application, the stroke type recognition sub-module includes:
A judging unit, configured to: if a direction object has been created, judge whether the vector direction corresponding to each first stroke point falls within the direction judgment range of a target stroke type, where the created direction object records the vector directions corresponding to the third stroke points, and the target stroke type is the stroke type determined from the vector directions recorded in the most recently created first direction object; if the number of target stroke points is greater than a preset number, create a second direction object and record the vector directions corresponding to the first stroke points to the second direction object; otherwise record the vector directions corresponding to the first stroke points to the first direction object, where the target stroke points are the first stroke points whose corresponding vector directions do not fall within the direction judgment range;
A direction distribution information obtaining unit, configured to count the distribution of the vector directions recorded in the most recently created direction object, and obtain the direction distribution information corresponding to that direction object;
A stroke type identifying unit, configured to identify the overall stroke type of the text stroke formed by the second stroke points based on the direction distribution information corresponding to the target direction objects, where the target direction objects include: all direction objects created after the previous pen-lift action, if a previous pen-lift action exists; or all direction objects created, if no previous pen-lift action exists.
As can be seen from the above, in this embodiment, judging whether the number of first stroke points whose corresponding vector directions fall outside the direction judgment range exceeds the preset number is equivalent to judging whether the text stroke formed by the first stroke points belongs to the same sub-stroke as the one last drawn. If it does, the vector directions corresponding to the first stroke points can be recorded to the most recently created first direction object; if not, the stroke formed by the first stroke points differs from the last drawn sub-stroke, so a second direction object is newly created and the vector directions corresponding to the first stroke points are recorded to it. In this way, the vector directions corresponding to different sub-strokes are cleanly stored in different direction objects, and the overall stroke type of the text stroke formed by the second stroke points can be determined quickly and accurately from the distribution of the vector directions recorded in those direction objects.
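The direction-object bookkeeping described above can be sketched roughly as follows. The tolerance band and the preset count are illustrative assumptions, and the patent's actual per-stroke-type direction judgment ranges are not reproduced here:

```python
def group_directions(directions, tolerance=30.0, preset_count=3):
    """Group successive vector directions (degrees) into 'direction
    objects'. A new object is created once more than `preset_count`
    consecutive directions fall outside the tolerance band around the
    current object's first recorded direction. Thresholds are
    illustrative, not values disclosed by the patent."""
    objects = []   # each object is a list of recorded directions
    outliers = []  # directions pending outside the current range
    for d in directions:
        if not objects:
            objects.append([d])
            continue
        ref = objects[-1][0]
        if abs(d - ref) <= tolerance:
            objects[-1].append(d)   # same sub-stroke: record to it
        else:
            outliers.append(d)
            if len(outliers) > preset_count:
                objects.append(outliers)  # new sub-stroke begins
                outliers = []
    return objects
```

Writing a horizontal run followed by a sustained vertical run yields two direction objects, one per sub-stroke.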
In one embodiment of the present application, the stroke type identifying unit is specifically configured to determine, based on the direction distribution information corresponding to the most recently created direction object, the stroke type corresponding to that direction object, where the stroke type corresponding to a direction object is the stroke type of the stroke formed by the stroke points whose vector directions are recorded in that direction object; and to identify the overall stroke type of the text stroke formed by the second stroke points according to the creation order of the target direction objects and the stroke types corresponding to them.
Since the stroke type corresponding to each direction object represents the type of a drawn sub-stroke, and the creation order of the direction objects represents the drawing order of those sub-strokes, the overall stroke type of the text stroke formed by the second stroke points can be determined conveniently and accurately from these stroke types and their drawing order.
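A minimal sketch of this two-stage classification, under assumed angle bands and stroke-type names (the patent discloses no concrete values):

```python
def classify_substroke(directions):
    """Map the mean vector direction (degrees) of one direction object
    to a coarse sub-stroke type; the angle bands are assumptions."""
    mean = sum(directions) / len(directions)
    if -30 <= mean <= 30:
        return "horizontal"
    if 60 <= mean <= 120:
        return "vertical"   # screen y typically grows downward
    return "other"

def overall_stroke_type(objects):
    """Combine the per-object types, in creation order, into an
    overall stroke type; the compound names are likewise assumptions."""
    types = [classify_substroke(o) for o in objects]
    if len(types) == 1:
        return types[0]
    if types == ["horizontal", "vertical"]:
        return "horizontal-turn"
    return "compound"
```

A single near-0-degree object classifies as a horizontal stroke; a horizontal object followed by a vertical one classifies as a horizontal-turn compound stroke.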
In one embodiment of the present application, the adjustment point determining module 905 is specifically configured to: if the overall stroke type indicates that the text stroke comprises multiple sub-strokes, determine sub-stroke demarcation points among the second stroke points based on the stroke points whose vector directions are recorded in each target direction object; divide the second stroke points into stroke points belonging to different sub-strokes based on the determined demarcation points; and determine the adjustment points of each sub-stroke based on the target font style, the stroke type of that sub-stroke, and its stroke points, where the stroke type of each sub-stroke is determined from the vector directions corresponding to its stroke points.
Thus, for complex stroke types comprising multiple sub-strokes, the demarcation points of the stroke can be quickly determined from the vector directions recorded in each target direction object, the second stroke points can be divided into sub-strokes at those demarcation points, and adjustment points can be determined separately for each sub-stroke. Adjusting each sub-stroke by drawing its adjustment points treats complex strokes in a differentiated way, enabling more targeted adjustment of complex strokes and improving both adjustment efficiency and adjustment effect.
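Splitting the second stroke points into sub-strokes at the determined demarcation points could look like the sketch below; representing demarcation points as indices into the point list is an assumption for illustration:

```python
def split_substrokes(points, boundary_indices):
    """Split drawn stroke points into sub-strokes at the given
    demarcation indices; each boundary index starts a new sub-stroke."""
    subs, start = [], 0
    for b in boundary_indices:
        subs.append(points[start:b])
        start = b
    subs.append(points[start:])
    return subs
```

Each resulting sub-stroke can then be classified and adjusted independently, as the embodiment describes.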
In one embodiment of the present application, the adjustment point determining module 905 is specifically configured to determine the adjustment portion and adjustment mode of the drawn stroke based on the target font style and the overall stroke type, and to determine the adjustment points according to the adjustment mode based on the stroke points located at the adjustment portion among the second stroke points.
In one embodiment of the present application, the adjustment portion includes the front-end portion of the stroke, and the corresponding adjustment mode includes adding supplementary points at the front-end portion; and/or
the adjustment portion includes the trunk portion of the stroke, and the corresponding adjustment mode includes adjusting the stroke points of the trunk portion to change its thickness; and/or
the adjustment portion includes the bend portion of the stroke, and the corresponding adjustment mode includes adjusting the stroke points of the bend portion to increase its sharpness; and/or
the adjustment portion includes the tail-end portion of the stroke, and the corresponding adjustment mode includes adjusting the stroke points of the tail-end portion to increase its degree of thickness convergence, or adding supplementary points at the tail-end portion.
Because the adjustment portions of a drawn stroke, and the adjustment modes corresponding to each type of portion, vary with the overall stroke type and the target font style, adjustment points can be flexibly determined from this rich set of adjustment portions and corresponding modes. By drawing the adjustment points, the style of the drawn text can be flexibly adjusted toward various target fonts, improving the flexibility of text drawing and making the adjusted text approach whichever target font effect the user expects.
As described above, the adjustment portion and adjustment mode of the drawn stroke are determined from the target font style and the overall stroke type, and the adjustment points are determined from the adjustment mode and the stroke points at the adjustment portion. In other words, when determining the adjustment points, the target font style, the overall stroke type, and the drawn stroke points are considered together, so adjustment points that fine-tune and normalize the drawn stroke can be determined comprehensively and reasonably, and after the drawn stroke is adjusted by drawing these points, the adjusted drawn text conforms to the user's expectations.
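One way to organize the portion/mode selection is a lookup table keyed by (font style, overall stroke type). Every entry below is a made-up example for illustration, not data disclosed by the patent:

```python
# Illustrative lookup of adjustment portions and modes; all keys and
# entries are assumptions for demonstration purposes.
ADJUSTMENTS = {
    ("regular-script", "horizontal"): [
        ("front", "add supplementary points"),
        ("trunk", "vary thickness"),
        ("tail", "increase thickness convergence"),
    ],
    ("regular-script", "vertical"): [
        ("front", "add supplementary points"),
        ("tail", "add supplementary points"),
    ],
}

def adjustment_plan(font_style, stroke_type):
    """Return the (portion, mode) pairs for the drawn stroke, or an
    empty plan when the combination is unknown."""
    return ADJUSTMENTS.get((font_style, stroke_type), [])
```

The adjustment points themselves would then be derived from the stroke points lying in each listed portion, following the listed mode.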
In one embodiment of the present application, the adjustment point determining module 905 is specifically configured to: determine the adjustment points based on the target font style, the overall stroke type, and the drawn stroke points; obtain the expected drawing parameters of the determined adjustment points based on their positions; obtain the target drawing parameters of the determined adjustment points in each animation transition frame based on the expected drawing parameters, a preset drawing frequency, and the number of animation transition frames; and, according to the frame refresh rate, adjust the drawing parameters of the determined adjustment points frame by frame to the target drawing parameters corresponding to each animation transition frame, drawing each animation transition frame.
In this way, the target drawing parameters of each animation transition frame are determined according to the drawing frequency, and the drawing parameters of the determined adjustment points are adjusted frame by frame, at the frame refresh rate, to the target drawing parameters of each animation transition frame as those frames are drawn. Drawing the animation transition frames thus produces a gradual animation effect when the adjustment points are drawn, so the process of adjusting the drawn text transitions smoothly rather than abruptly, improving the user experience.
In addition, once the lifecycle of a point object and the number of animation transition frames are set, the parameters of the point object can be changed freely and smoothly after drawing by rendering the animation transition frames; that is, the drawn text can be adjusted more smoothly and efficiently.
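The frame-by-frame parameter adjustment amounts to interpolating each drawing parameter from its current value to the expected target across the animation transition frames. A linear-interpolation sketch (the patent does not mandate linear easing; that choice is an assumption):

```python
def transition_parameters(current, target, num_frames):
    """Linearly interpolate one drawing parameter (e.g. a point's
    radius or opacity) from its current value to the expected target
    value over `num_frames` animation transition frames."""
    step = (target - current) / num_frames
    return [current + step * (i + 1) for i in range(num_frames)]
```

At a 60 Hz refresh rate, four transition frames spread the change over roughly 67 ms, which reads as a smooth fade rather than a jump.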
In one embodiment of the present application, the stroke point position determining module 902 is specifically configured to: determine the position and instantaneous speed of the touch point based on the screen touch information; determine the positions of skeleton stroke points based on the position of the touch point; determine a stroke width based on the instantaneous speed; and determine the positions of fill stroke points based on the skeleton stroke point positions and the stroke width.
Since the position of the touch point is determined from the user's touch operation on the screen, it accurately reflects the user's touch trajectory; and since the skeleton stroke point positions are determined from the touch point positions, the overall trajectory of the determined skeleton stroke points accurately reflects the user's writing trajectory on the screen.
In addition, before the positions of the fill stroke points are determined, a stroke width is determined from the instantaneous speed of the touch point, and the fill stroke point positions are then determined from that stroke width. Because the influence of the instantaneous touch speed on stroke width is taken into account, the text strokes formed by the stroke points, once drawn, better match the thickness variation of strokes written in real life, which helps the drawn text meet the user's expectations.
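A plausible sketch of both steps: mapping instantaneous speed to a stroke width (faster writing gives a thinner stroke, matching real handwriting) and placing fill stroke points on either side of the skeleton. The speed-to-width formula and all constants are assumptions, not values from the patent:

```python
import math

def stroke_width(speed, w_max=8.0, w_min=2.0, k=0.05):
    """Map instantaneous touch speed to a stroke width: faster
    writing yields a thinner stroke, clamped to [w_min, w_max]."""
    return max(w_min, w_max - k * speed)

def fill_points(p0, p1, width):
    """Place two fill stroke points perpendicular to the skeleton
    segment p0->p1, at half the stroke width on either side of p1."""
    dx, dy = p1[0] - p0[0], p1[1] - p0[1]
    norm = math.hypot(dx, dy) or 1.0
    nx, ny = -dy / norm, dx / norm    # unit normal to the segment
    h = width / 2.0
    return [(p1[0] + nx * h, p1[1] + ny * h),
            (p1[0] - nx * h, p1[1] - ny * h)]
```

Drawing the fill points around each skeleton point fleshes the one-pixel trajectory out into a stroke of the computed width.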
In one embodiment of the present application, a smoothing module is configured to smooth the sub-strokes formed by the second stroke points before the stroke type recognition module 904 is triggered.
Thus, before the overall stroke type of the text stroke formed by the second stroke points is identified, the sub-strokes formed by those points are smoothed, so that the width of each part of the smoothed sub-strokes changes more gently, jitter is reduced, the sub-strokes are smoother, and the resulting text conforms to the user's expectations.
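Smoothing could be as simple as a moving average over neighbouring stroke points. This sketch keeps the endpoints fixed so the sub-stroke's extent does not shrink; the window size and endpoint handling are assumptions, as the patent does not specify the smoothing algorithm:

```python
def smooth_points(points, window=3):
    """Smooth a sub-stroke with a simple moving average over `window`
    neighbouring stroke points; the first and last points are kept
    fixed so the sub-stroke keeps its original extent."""
    if len(points) < 3:
        return list(points)
    out = [points[0]]
    half = window // 2
    for i in range(1, len(points) - 1):
        lo, hi = max(0, i - half), min(len(points), i + half + 1)
        xs = [p[0] for p in points[lo:hi]]
        ys = [p[1] for p in points[lo:hi]]
        out.append((sum(xs) / len(xs), sum(ys) / len(ys)))
    out.append(points[-1])
    return out
```

A jittery middle point is pulled toward the line through its neighbours, which is exactly the jitter-reduction effect the embodiment describes.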
In the technical solution of this application, operations involving users' personal information, such as collection, storage, use, processing, transmission, provision, and disclosure, are all performed with the users' authorization.
The embodiment of the application also provides an electronic device, as shown in fig. 10, including:
a memory 1001 for storing a computer program;
a processor 1002, configured to implement the above text drawing method when executing the program stored in the memory 1001.
The electronic device may further comprise a communication bus and/or a communication interface, through which the processor 1002, the communication interface, and the memory 1001 communicate with one another.
The communication bus of the above electronic device may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The communication bus may be divided into an address bus, a data bus, a control bus, and so on. For ease of illustration, only one thick line is shown in the figure, but this does not mean there is only one bus or only one type of bus.
The communication interface is used for communication between the electronic device and other devices.
The memory may include Random Access Memory (RAM) or Non-Volatile Memory (NVM), such as at least one disk memory. Optionally, the memory may also be at least one storage device located remotely from the aforementioned processor.
The processor may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), etc.; it may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, or discrete hardware components.
In yet another embodiment of the present application, a computer readable storage medium is provided, in which a computer program is stored, which when executed by a processor, implements any of the above-mentioned text rendering methods.
In yet another embodiment of the present application, a computer program product containing instructions that, when run on a computer, cause the computer to perform any of the text rendering methods of the above embodiments is also provided.
The above embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, they may be realized in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the flows or functions according to the embodiments of this application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another, for example by wired (e.g., coaxial cable, optical fiber, Digital Subscriber Line (DSL)) or wireless (e.g., infrared, radio, microwave) means. The computer-readable storage medium may be any available medium accessible to a computer, or a data storage device such as a server or data center integrating one or more available media. The available media may be magnetic media (e.g., floppy disk, hard disk, magnetic tape), optical media (e.g., DVD), or Solid State Disks (SSD), etc.
It is noted that relational terms such as first and second are used solely to distinguish one entity or action from another, and do not necessarily require or imply any actual relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to it. Without further limitation, an element preceded by the phrase "comprising a(n)…" does not exclude the presence of additional identical elements in the process, method, article, or apparatus that comprises the element.
In this specification, the embodiments are described in a progressive manner; identical or similar parts of the embodiments may be referred to one another, and each embodiment focuses on its differences from the others. In particular, the apparatus, electronic device, and storage medium embodiments are described relatively briefly because they are substantially similar to the method embodiments; for relevant details, refer to the description of the method embodiments.
The foregoing description is only of the preferred embodiments of the present application and is not intended to limit the scope of the present application. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application are included in the protection scope of the present application.

Claims (10)

1. A text drawing method, characterized by comprising:
In response to each screen touch event generated while no new pen-lift action has been recognized, obtaining screen touch information corresponding to the current event, and performing the following steps:
Determining the position of a first stroke point based on the screen touch information;
Drawing a first stroke point based on the determined position;
Identifying, based on the relative positional relationship between adjacent stroke points, the overall stroke type of the text stroke formed by second stroke points, wherein the second stroke points comprise: all stroke points drawn after a previous pen-lift action, if a previous pen-lift action exists; or all stroke points drawn, if no previous pen-lift action exists;
Determining adjustment points based on a target font style, the overall stroke type, and the second stroke points, and drawing the determined adjustment points.
2. The method of claim 1, wherein identifying the overall stroke type of the text stroke formed by the second stroke point based on the relative positional relationship between adjacent stroke points comprises:
Identifying the overall stroke type of the text stroke formed by the second stroke points based on the vector directions corresponding to the first stroke points and the vector directions corresponding to third stroke points, wherein the third stroke points are the stroke points in the second stroke points other than the first stroke points, and the vector direction corresponding to a stroke point is the direction of the vector from the stroke point's preceding adjacent stroke point to the stroke point itself.
3. The method of claim 2, wherein identifying the overall stroke type of the text stroke formed by the second stroke point based on the vector direction corresponding to the first stroke point and the vector direction corresponding to the third stroke point comprises:
if a direction object has been created, judging whether the vector direction corresponding to each first stroke point falls within a direction judgment range of a target stroke type, wherein the created direction object records the vector directions corresponding to the third stroke points, and the target stroke type is a stroke type determined based on the vector directions recorded in the most recently created first direction object;
if the number of target stroke points is greater than a preset number, creating a second direction object, and recording the vector directions corresponding to the first stroke points to the second direction object, wherein the target stroke points are the first stroke points whose corresponding vector directions do not fall within the direction judgment range;
otherwise, recording the vector direction corresponding to the first stroke point to the first direction object;
Counting the distribution of vector directions recorded in the newly created direction object to obtain direction distribution information corresponding to the newly created direction object;
Based on direction distribution information corresponding to a target direction object, identifying the overall stroke type of the text stroke formed by the second stroke point, wherein the target direction object comprises: if there is a previous pen-up action, all the direction objects created after the previous pen-up action, or if there is no previous pen-up action, all the direction objects created.
4. The method of claim 3, wherein identifying the overall stroke type of the text stroke formed by the second stroke point based on the direction distribution information corresponding to the target direction object comprises:
Determining a stroke type corresponding to the most recently created direction object based on the direction distribution information corresponding to that direction object, wherein the stroke type corresponding to a direction object is the stroke type of the stroke formed by the stroke points whose vector directions are recorded in that direction object;
and identifying the whole stroke type of the text stroke formed by the second stroke point according to the creation sequence of the target direction objects and the stroke type corresponding to the target direction objects.
5. The method of claim 3, wherein the determining adjustment points based on the target font style, the overall stroke type, and the second stroke points comprises:
If the overall stroke type represents that the text stroke comprises a plurality of sub-strokes, determining sub-stroke demarcation points in a second stroke point based on the stroke points corresponding to the vector directions recorded in each target direction object;
Dividing the second stroke point into stroke points belonging to different sub-strokes based on the determined demarcation point;
And determining an adjusting point of each sub-stroke based on the target font style, the sub-stroke type of each sub-stroke and the stroke point of each sub-stroke, wherein the stroke type of each sub-stroke is determined based on the vector direction corresponding to the stroke point of each sub-stroke.
6. The method according to any one of claims 1 to 5, wherein,
The determining an adjustment point based on the target font style, the overall stroke type, and the second stroke point, comprising:
Determining an adjustment portion and an adjustment mode of the drawn stroke based on the target font style and the overall stroke type; and determining adjustment points according to the adjustment mode based on the stroke points located at the adjustment portion among the second stroke points; wherein,
the adjustment portion includes the front-end portion of the stroke, and the corresponding adjustment mode includes adding supplementary points at the front-end portion; and/or
the adjustment portion includes the trunk portion of the stroke, and the corresponding adjustment mode includes adjusting the stroke points of the trunk portion to change its thickness; and/or
the adjustment portion includes the bend portion of the stroke, and the corresponding adjustment mode includes adjusting the stroke points of the bend portion to increase its sharpness; and/or
the adjustment portion includes the tail-end portion of the stroke, and the corresponding adjustment mode includes adjusting the stroke points of the tail-end portion to increase its degree of thickness convergence, or adding supplementary points at the tail-end portion.
7. The method of any of claims 1-5, wherein the drawing of the determined adjustment points comprises:
obtaining expected drawing parameters of the determined adjustment points based on the positions of the determined adjustment points;
obtaining target drawing parameters of the determined adjustment points in each animation transition frame based on the expected drawing parameters, a preset drawing frequency, and the number of animation transition frames;
and, according to the frame refresh rate, adjusting the drawing parameters of the determined adjustment points frame by frame to the target drawing parameters corresponding to each animation transition frame, and drawing each animation transition frame.
8. A text drawing apparatus, comprising:
A touch information obtaining module, configured to, in response to each screen touch event generated while no new pen-lift action has been recognized, obtain screen touch information corresponding to the current event and trigger the following modules;
The stroke point position determining module is used for determining the position of the first stroke point based on the screen touch information;
A stroke point drawing module for drawing a first stroke point based on the determined position;
A stroke type recognition module, configured to identify, based on the relative positional relationship between adjacent stroke points, the overall stroke type of the text stroke formed by second stroke points, wherein the second stroke points comprise: all stroke points drawn after a previous pen-lift action, if a previous pen-lift action exists; or all stroke points drawn, if no previous pen-lift action exists;
An adjustment point determining module, configured to determine adjustment points based on a target font style, the overall stroke type, and the second stroke points, and to draw the determined adjustment points.
9. An electronic device, comprising:
A memory for storing a computer program;
a processor, configured to implement the method of any of claims 1-7 when executing the program stored in the memory.
10. A computer program product comprising instructions which, when run on a computer, cause the computer to perform the method of any of claims 1-7.
CN202410338536.8A 2024-03-22 2024-03-22 Text drawing method and device Active CN117930995B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410338536.8A CN117930995B (en) 2024-03-22 2024-03-22 Text drawing method and device

Publications (2)

Publication Number Publication Date
CN117930995A true CN117930995A (en) 2024-04-26
CN117930995B CN117930995B (en) 2024-07-02

Family

ID=90761465

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410338536.8A Active CN117930995B (en) 2024-03-22 2024-03-22 Text drawing method and device

Country Status (1)

Country Link
CN (1) CN117930995B (en)

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6973214B1 (en) * 2001-07-30 2005-12-06 Mobigence, Inc. Ink display for multi-stroke hand entered characters
WO2015075931A1 (en) * 2013-11-19 2015-05-28 Wacom Co., Ltd. Method and system for ink data generation, ink data rendering, ink data manipulation and ink data communication
US20170236020A1 (en) * 2016-02-12 2017-08-17 Wacom Co., Ltd. Method and system for generating and selectively outputting two types of ink vector data
CN110175539A (en) * 2019-05-10 2019-08-27 Guangdong Zhimeiyuntu Technology Co Ltd Text creation method and device, terminal device, and readable storage medium
CN112215061A (en) * 2020-08-27 2021-01-12 TRS Information Technology Co Ltd Method and device for detecting copying on a handwriting screen, electronic device, and storage medium
CN112269481A (en) * 2020-10-27 2021-01-26 Vivo Mobile Communication Co Ltd Friction force adjustment control method and device, and electronic device
CN113191257A (en) * 2021-04-28 2021-07-30 Beijing Youzhuju Network Technology Co Ltd Stroke order detection method and device, and electronic device
CN113657330A (en) * 2021-08-24 2021-11-16 Shenzhen Kuaiyidian Education Technology Co Ltd Font writing stroke order generation method and system, and application method thereof
CN115330916A (en) * 2022-09-01 2022-11-11 Beijing Zitiao Network Technology Co Ltd Method, device and equipment for generating drawing animation, readable storage medium and product
WO2024046284A1 (en) * 2022-09-01 2024-03-07 Beijing Zitiao Network Technology Co Ltd Drawing animation generation method and apparatus, device, readable storage medium and product
CN117406903A (en) * 2023-06-06 2024-01-16 Shenzhen TCL New Technology Co Ltd Handwriting adjustment method, device, medium and equipment for touch screens

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Dai Qinghui; Zhang Junsong: "A glyph beautification method considering strokes and topological structure", Scientia Sinica Informationis (Science China: Information Sciences), no. 04, 20 April 2017 (2017-04-20) *
Cao Zhejiong, Wang Yongcheng: "Online handwritten Chinese character recognition independent of stroke order and cursive connections", Computer Engineering and Applications, no. 29, 1 May 2007 (2007-05-01) *
Meng Shibin, Zhou Mingquan: "Stroke information extraction technology based on real-time video", Computer Engineering, no. 06, 5 June 2005 (2005-06-05) *

Also Published As

Publication number Publication date
CN117930995B (en) 2024-07-02

Similar Documents

Publication Publication Date Title
JP7078808B2 (en) Real-time handwriting recognition management
CN109215098B (en) Handwriting erasing method and device
CN112558812B (en) Pen point generation method and device, intelligent device and storage medium
CN112114734B (en) Online document display method, device, terminal and storage medium
CN111475045A (en) Handwriting drawing method, device, equipment and storage medium
JP2013546081A (en) Method, apparatus, and computer program product for overwriting input
CN115048027A (en) Handwriting input method, device, system, electronic equipment and storage medium
CN117930995B (en) Text drawing method and device
US20220284169A1 (en) Font customization based on stroke properties
CN116521043B (en) Method, system and computer program product for quick drawing response
CN115774513B (en) System, method, electronic device and medium for determining drawing direction based on ruler
CN113760167B (en) Method for copying object by using gesture, electronic equipment and storage medium
CN113157194B (en) Text display method, electronic equipment and storage device
CN118915912A (en) Interaction method, device, equipment, medium and product
CN118819325A (en) Electronic whiteboard writing method, system, readable storage medium and computer
CN117813636A (en) Text conversion method and device, storage medium and interaction equipment
CN118823803A (en) Hand-drawing figure recognition method and device, computer program product and electronic equipment
CN115083230A (en) Learning assisting method and device
CN118302739A (en) Method, device, display system and medium for editing space-free gestures
CN116597463A (en) Text image generation method and device
CN114756143A (en) Handwriting element deleting method and device, storage medium and electronic equipment
CN114237432A (en) Handwriting processing method and device, electronic equipment and storage medium
CN115705139A (en) Note generation method and device, storage medium and computer equipment
CN114356205A (en) Note processing method, electronic device and computer storage medium
CN115167750A (en) Handwritten note processing method, computer equipment and readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant