CN111488104B - Font editing method and electronic equipment - Google Patents


Info

Publication number
CN111488104B
CN111488104B (application CN202010298914.6A)
Authority
CN
China
Prior art keywords
image
region
target
editing
character
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010298914.6A
Other languages
Chinese (zh)
Other versions
CN111488104A (en)
Inventor
毛爱玲
程林
孙东慧
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd
Priority to CN202010298914.6A
Publication of CN111488104A
Application granted
Publication of CN111488104B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0484 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F 3/04845 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, for image manipulation, e.g. dragging, rotation, expansion or change of colour

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Processing Or Creating Images (AREA)
  • Controls And Circuits For Display Device (AREA)

Abstract

The invention provides a font editing method and an electronic device. The method includes: acquiring a first image, where the first image includes a first character in a first font; determining a target editing region of the first character in the first image based on a first input of a user; performing deformation processing on the target editing region to obtain the deformed first image; and packaging the deformed first image into a font file. The invention enables a user to edit the glyphs of characters according to his or her own preferences, so that the glyphs of characters used on the electronic device better meet the user's personalized needs.

Description

Font editing method and electronic equipment
Technical Field
The present invention relates to the field of communications technologies, and in particular, to a font editing method and an electronic device.
Background
In recent years, with the popularization of electronic devices, users have placed higher demands on the fonts of the characters used on them, and a variety of fonts of different styles, such as cute "Q-style" fonts, have come into use on electronic devices.
At present, the various fonts used on electronic devices are usually edited in advance by font designers; that is, for each font used on an electronic device, a font designer edits the glyph of every character stroke by stroke in advance, and the result is then packaged by software into a font file for users to download and use. Because the fonts used on electronic devices are edited in advance by font designers, the glyphs of the characters used on electronic devices can hardly meet users' personalized needs.
Disclosure of Invention
The embodiments of the invention provide a font editing method and an electronic device, and aim to solve the problem that, because the glyphs of characters used on electronic devices are edited in advance by font designers, those glyphs can hardly meet users' personalized needs.
In order to solve the technical problem, the invention is realized as follows:
in a first aspect, an embodiment of the present invention provides a font editing method, including:
acquiring a first image, wherein the first image comprises first characters of a first font;
determining a target editing area of a first character in the first image based on a first input of a user;
carrying out deformation processing on the target editing area to obtain the first image after deformation processing;
and packaging the first image after the deformation processing into a font file.
In a second aspect, an embodiment of the present invention further provides an electronic device, including:
the acquisition module is used for acquiring a first image, wherein the first image comprises first characters of a first font;
the determining module is used for determining a target editing area of a first character in the first image based on first input of a user;
the deformation module is used for carrying out deformation processing on the target editing area to obtain the first image after the deformation processing;
and the packaging module is used for packaging the first image after the deformation processing into a font file.
In a third aspect, an embodiment of the present invention further provides an electronic device, which includes a processor, a memory, and a computer program stored in the memory and executable on the processor, where the computer program, when executed by the processor, implements the steps of the above font editing method.
In a fourth aspect, an embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the computer program implements the steps of the above font editing method.
In the embodiment of the invention, the target editing area of the first character of the first font in the first image can be determined based on the first input of the user, the target editing area is subjected to deformation processing, and the first image subjected to deformation processing can be packaged into the font file, so that the deformed first character can be used on the electronic equipment, the user can edit the font of the character according to the preference of the user, and the font of the character used on the electronic equipment can better meet the personalized requirements of the user.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed to be used in the description of the embodiments of the present invention will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art that other drawings can be obtained according to these drawings without inventive exercise.
FIG. 1 is a flow chart of a method for editing a font according to an embodiment of the present invention;
FIG. 2 is a first example diagram of a font editing method according to an embodiment of the present invention;
FIG. 3 is a second example diagram of a font editing method according to an embodiment of the present invention;
FIG. 4 is a third example diagram of a font editing method according to an embodiment of the present invention;
FIG. 5 is a fourth example diagram of a font editing method according to an embodiment of the present invention;
FIG. 6 is a fifth example diagram of a font editing method according to an embodiment of the present invention;
FIG. 7 is a sixth example diagram of a font editing method according to an embodiment of the present invention;
FIG. 8 is a seventh example diagram of a font editing method according to an embodiment of the present invention;
FIG. 9 is an eighth example diagram of a font editing method according to an embodiment of the present invention;
FIG. 10 is a ninth example diagram of a font editing method according to an embodiment of the present invention;
FIG. 11 is a block diagram of an electronic device according to an embodiment of the present invention;
FIG. 12 is a block diagram of an electronic device according to another embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to FIG. 1, FIG. 1 is a flowchart of a font editing method according to an embodiment of the present invention. The font editing method provided by the embodiment of the invention may be applied to an electronic device, including but not limited to: a mobile phone, a tablet personal computer, a laptop computer, a personal digital assistant (PDA), a mobile Internet device (MID), a wearable device, and the like.
As shown in fig. 1, a font editing method provided by an embodiment of the present invention may include the following steps:
step 101, obtaining a first image, wherein the first image comprises a first character of a first font.
In this embodiment of the present invention, the acquiring of the first image may refer to receiving the first image uploaded by the user, receiving the first image sent by another electronic device, or generating the first image according to an input of the user, which is not limited in this embodiment of the present invention.
The first font may be one of various existing fonts, such as regular script (Kai), Song, or Hei typefaces; alternatively, the first font may be a font designed by the user. The first character may be any existing character; specifically, it may be a Chinese character, an English letter, a digit, or another kind of character.
The number of the first images may be one or more, for example, a user may upload a font packet, and the font packet may include one or more first images. The first image may include one or more first characters.
Step 102, determining a target editing area of a first character in the first image based on a first input of a user.
In the embodiment of the present invention, the first input may be various touch inputs such as clicking, double clicking, sliding, long pressing, or dragging, or may be a voice control input.
The position of the target editing region may be determined according to a first input of a user. The shape of the target editing region may be various regular shapes such as a circle, a square, a rectangle, a parallelogram, or a trapezoid, or may be an irregular shape. The shape of the target editing region may be set in advance, or may be determined based on a first input by a user. The area of the target editing region may be set in advance, or may be determined according to a first input by a user.
Step 102 may directly determine the target editing region of the first character in the first image based on the first input of the user. For ease of understanding, two examples follow:
For example one, assume that the first character is "吃" (the Chinese character for "eat"). If the first image acquired in step 101 is as shown in FIG. 2, the user may circle the target editing region directly on the first image by hand; the display effect of the first image after the target editing region is circled may be as shown in FIG. 3, where 31 in FIG. 3 denotes the target editing region.
For example two, assume again that the first character is "吃" (eat). If the first image acquired in step 101 is as shown in FIG. 2 and the shape of the target editing region is preset to be circular, the user may mark the center point and the radius of the target editing region by hand; the system then determines the target editing region from the marked center point and radius, and the first image after the target editing region is determined may be as shown in FIG. 4, where 41 in FIG. 4 denotes the target editing region.
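The center-and-radius flow in example two can be sketched as a simple mask computation. The boolean-mask representation below is an illustrative assumption, not something the patent specifies:

```python
# Illustrative sketch: build a boolean mask for a circular target editing
# region from a user-marked center point and radius. The mask/bitmap
# representation is an assumption, not the patented implementation.
def circular_region_mask(width, height, center, radius):
    cx, cy = center
    r2 = radius * radius
    return [[(x - cx) ** 2 + (y - cy) ** 2 <= r2 for x in range(width)]
            for y in range(height)]

mask = circular_region_mask(8, 8, (4, 4), 2)
```

A pixel belongs to the target editing region exactly when its squared distance from the marked center does not exceed the squared radius.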
Step 102 may also indirectly determine the target editing region of the first character in the first image based on the first input of the user. For example, a template image may be acquired based on the first input, where the template image includes the first character in a preset font and an indication frame indicating an editing region of the first character in the template image; the target editing region of the first character in the first image is then determined based on that editing region.
And 103, performing deformation processing on the target editing area to obtain the first image after the deformation processing.
In an embodiment of the present invention, the font of the first character in the first image after the transformation may be a target font, where the target font is a different font from the first font.
The deformation processing may include at least one of: expansion and squeezing. Deforming the target editing region refers to deforming the target editing region within the first image.
And 104, packaging the first image after the deformation processing into a font file.
In this embodiment of the present invention, step 104 may specifically include: cutting out a minimum area of the first image after deformation processing according to the minimum circumscribed graph to obtain a cut first image; converting the format of the cut first image from a bitmap format into a vector diagram format to obtain a first image in the vector diagram format; and packaging the first image in the vector diagram format into a font file for downloading and using on different electronic equipment by a user. Here, the minimum circumscribed figure may be a minimum circumscribed rectangle, a minimum circumscribed square, or another figure.
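The minimum-circumscribed-rectangle crop in step 104 can be sketched as follows, assuming a binary row-major bitmap in which 1 marks an ink pixel (an illustrative representation; the patent does not specify one). The subsequent bitmap-to-vector conversion and font-file packaging are separate steps not covered by this sketch:

```python
def min_bounding_box(bitmap):
    """Smallest (left, top, right, bottom) rectangle enclosing all ink
    pixels (value 1). Sketch of the 'minimum circumscribed rectangle'
    crop; assumes at least one ink pixel exists."""
    rows = [y for y, row in enumerate(bitmap) if any(row)]
    cols = [x for x in range(len(bitmap[0])) if any(row[x] for row in bitmap)]
    return min(cols), min(rows), max(cols), max(rows)

def crop_to_box(bitmap, box):
    """Cut the deformed first image down to the bounding box."""
    left, top, right, bottom = box
    return [row[left:right + 1] for row in bitmap[top:bottom + 1]]
```

Cropping first keeps the later vectorization from tracing empty margins around the glyph.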
It should be noted that, when the number of the first images is multiple, the above steps 101 to 104 may be performed for each first image to edit the glyphs of multiple characters, and in addition, in step 104, the multiple first images after deformation processing may be packaged into multiple font files respectively, or may be packaged into one font file collectively.
According to the embodiment of the invention, the target editing area of the first character of the first font in the first image can be determined based on the first input of the user, the target editing area is subjected to deformation processing, and the first image subjected to deformation processing can be packaged into the font file, so that the deformed first character can be used on the electronic equipment, the user can edit the font of the character according to the preference of the user, the font of the character used on the electronic equipment can better meet the personalized requirements of the user, and the use experience of the user is improved; meanwhile, the workload of a font designer can be reduced.
Optionally, the determining a target editing region of a first text in the first image based on the first input of the user includes:
acquiring a target template image based on first input of a user, wherein the target template image comprises first characters of a second font and a first indication frame, and the first indication frame is used for indicating a first editing area of the first characters in the target template image;
and determining a target editing area of the first character in the first image based on the first editing area.
In the embodiment of the present invention, acquiring the target template image based on the first input of the user may refer to determining the target template image from a plurality of preset template images based on the first input of the user, or may refer to generating the target template image based on the first input of the user.
The second font may be one of various existing fonts, such as regular script (Kai), Song, or Hei typefaces; alternatively, it may be a font designed by the user. The second font may be a different font from the first font.
The first indication frame may be used to indicate a first editing region of the first text in the target template image, and the first indication frame and the first editing region may correspond to each other. The shape of the first editing region may be various regular shapes such as a circle, a square, a rectangle, a parallelogram, or a trapezoid, or may be an irregular shape.
The determining the target editing region of the first character in the first image based on the first editing region may refer to determining the target editing region of the first character in the first image corresponding to the first editing region based on the first editing region.
The shape of the target editing region may be the same as the shape of the first editing region, and the area of the target editing region may be the same as the area of the first editing region. The position of the target editing region may correspond to the position of the first editing region.
The target template image is determined based on the first input of the user, and the target editing area of the first character in the first image is determined based on the first editing area of the first character in the target template image, so that the user can select the character part needing to be deformed according to the preference of the user, and the character pattern of the character used on the electronic equipment can better meet the personalized requirements of the user.
Optionally, the obtaining the target template image based on the first input of the user includes:
displaying a template selection interface, wherein the template selection interface comprises N preset template images, N is a positive integer, and each template image in the N template images comprises a first character of a second font and an indication frame for indicating an editing area of the first character in the template image;
determining a target template image from the N template images based on a first input of a user.
In the embodiment of the present invention, the template selection interface may be displayed when the user does not select the free design template. Specifically, displaying the template selection interface may include:
displaying a first query message for querying a user whether to select a free design template;
receiving a first response message input by a user in response to the first inquiry message;
in a case where the first response message indicates that the user does not select a free-form design template, a template selection interface is displayed.
N may be equal to 1, or a positive integer greater than 1.
When N is equal to 1, the displaying a template selection interface may include:
displaying a template category selection interface, wherein the template category selection interface comprises P template categories, and P is a positive integer greater than 1;
receiving a second input of a user to a target template category in the P template categories;
and responding to the second input, and displaying a template selection interface corresponding to the target template category.
Therefore, the templates are classified and displayed, and a user can more conveniently and accurately select the template images required by the user.
When N is a positive integer greater than 1, the N template images may be N template images different from each other, and specifically, the editing regions of the first text in each of the N template images may be different from each other, that is, the indication frames in each of the N template images may be different from each other, where the indication frames different from each other may include at least one of: the positions of the indication frames are different from each other, the shapes of the indication frames are different from each other, and the areas of the indication frames are different from each other.
The determining the target template image from the N template images based on the first input of the user may specifically include:
receiving a first input of a user to a first template image in the N template images;
in response to the first input, determining the first template image as a target template image.
The template selection interface is displayed and comprises N preset template images, and the target template image is determined from the N template images based on the first input of the user, so that the user can directly select the required template image from the N preset template images, and the acquisition efficiency of the target template image can be further improved.
Optionally, the obtaining the target template image based on the first input of the user includes:
displaying a preset target editing image, wherein the target editing image comprises first characters of a second font;
determining a first editing area of a first character in the target editing image based on a first input of a user, and adding a first indication frame corresponding to the first editing area in the target editing image;
and determining the target editing image added with the first indication frame as a target template image.
In the embodiment of the present invention, displaying the preset target editing image may be displaying the preset target editing image under the condition that the user selects the free design template. Specifically, displaying the preset target editing image may include:
displaying a first query message for querying a user whether to select a free design template;
receiving a first response message input by a user in response to the first inquiry message;
and displaying a preset target editing image under the condition that the first response message indicates that the user selects the free design template.
The background color of the target editing image may be white, and the color of the first character in the target editing image may be black. The size of the target editing image may be 256 × 256.
The determining of the first editing region of the first character in the target editing image based on the first input of the user may be directly determining the first editing region of the first character in the target editing image based on the first input of the user. Here, for the description of "directly determining the first editing region of the first character in the target editing image based on the first input by the user", reference may be made to the description of relevant parts in the above description of step 102, and thus, details are not repeated here.
The first indication frame may be used to indicate a first editing region of a first character in the target editing image.
By displaying a preset target editing image, determining a first editing region of the first character in the target editing image based on the first input of the user, adding a first indication frame corresponding to the first editing region, and determining the resulting image as the target template image, the user can freely design the target template image. The target template image can thus better meet the user's personalized needs, and in turn the glyphs of the characters used on the electronic device can better meet those needs. Moreover, a target template image acquired based on the user's first input can be reused, which saves labor.
Optionally, the determining a target editing region of a first text in the first image based on the first editing region includes:
determining M first key points of a first character in the target template image, acquiring relative position relations between the M first key points and a central point of the first editing area, and acquiring the shape and the area of the first editing area, wherein M is a positive integer;
determining M second key points of a first character in the first image based on the M first key points;
determining a central point of a target editing area based on the M second key points and the relative position relation;
and determining the target editing area of the first character in the first image based on the central point of the target editing area and the shape and the area of the first editing area.
In the embodiment of the present invention, the M first key points may be any M key points in all key points of the first text in the target template image; here, the keypoint of the first word may refer to an important track point in the stroke of the first word, such as an inflection point in the stroke.
For ease of understanding, the first character is illustrated below using the character "吃" (eat):
Assume that the target template image is as shown in FIG. 5. In FIG. 5, 51 may denote the first indication frame, the area within the first indication frame 51 is the first editing region, and the center point of the first editing region is 511. Each white dot shown in FIG. 5 may represent one key point of the character "吃", and the 10 key points located within the first indication frame 51 may be regarded as the M first key points. Acquiring the relative positional relationship between the M first key points and the center point of the first editing region may then refer to acquiring the relative positional relationship between those 10 key points and the center point 511.
The determining M second key points of the first text in the first image based on the M first key points may be: and determining M second key points of the first characters in the first image, which are in one-to-one correspondence with the M first key points, based on the M first key points.
The determining, based on the central point of the target editing region and the shape and the area of the first editing region, the target editing region of the first text in the first image may specifically be: in a first image, taking the central point of the target editing area as a geometric center, making a geometric figure which is completely the same as the shape and area of the first editing area, and determining an image area covered by the geometric figure as the target editing area of the first character in the first image. Here, the image area covered by the geometry may be understood as an image area within the boundaries of the geometry.
The target editing region of the first character in the first image is determined based on the M first key points of the first character in the target template image, the relative position relationship between the M first key points and the central point of the first editing region, and the shape and the area of the first editing region, so that the accuracy of the determined target editing region can be higher.
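The text leaves the exact form of the "relative positional relationship" open. One simple, hypothetical reading — transfer the offset between the template keypoints' centroid and the first editing region's center onto the corresponding keypoints in the first image — can be sketched as:

```python
def region_center_from_keypoints(template_kps, template_center, image_kps):
    """Estimate the target editing region's center in the first image.
    template_kps: the M first key points; template_center: center of the
    first editing region; image_kps: the M corresponding second key points.
    The centroid-offset transfer is an illustrative assumption, not the
    patented formula."""
    m = len(template_kps)
    tcx = sum(x for x, _ in template_kps) / m
    tcy = sum(y for _, y in template_kps) / m
    # offset from the template keypoints' centroid to the region center
    dx, dy = template_center[0] - tcx, template_center[1] - tcy
    icx = sum(x for x, _ in image_kps) / m
    icy = sum(y for _, y in image_kps) / m
    return icx + dx, icy + dy
```

The target editing region is then drawn around this center with the same shape and area as the first editing region, as the text describes.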
Optionally, the target editing area includes: an expanded subregion and an extruded subregion;
the deforming the target editing area to obtain the deformed first image includes:
and expanding the expanded subarea, and extruding the extruded subarea to obtain the first image after deformation processing.
In the embodiment of the present invention, expanding the expansion sub-region and squeezing the squeezing sub-region may be performed by applying a convex lens effect algorithm to each sub-region respectively.
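A convex-lens style warp can be sketched as an inverse-mapped radial remap. This is an illustrative interpretation only: the patent does not disclose its formulas, and the bitmap representation, the `strength` parameter, and the mapping function below are assumptions.

```python
import math

def lens_warp(bitmap, center, radius, strength):
    """Convex-lens style warp inside a circle: strength > 0 bulges
    (expansion), strength < 0 pinches (squeezing). Illustrative sketch,
    not the patented formula."""
    h, w = len(bitmap), len(bitmap[0])
    cx, cy = center
    out = [row[:] for row in bitmap]
    for y in range(h):
        for x in range(w):
            dx, dy = x - cx, y - cy
            d = math.hypot(dx, dy)
            if 0 < d < radius:
                t = d / radius                      # 0 at center, 1 at rim
                scale = 1.0 - strength * (1.0 - t)  # <1 samples nearer center
                sx = int(round(cx + dx * scale))
                sy = int(round(cy + dy * scale))
                if 0 <= sx < w and 0 <= sy < h:
                    out[y][x] = bitmap[sy][sx]
    return out
```

Pixels outside the circle are untouched, and the distortion fades to zero at the rim, which keeps the deformed strokes continuous with the rest of the glyph.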
For ease of understanding, the character "吃" (eat) is again used as an example:
Assuming that the first image acquired in step 101 is as shown in FIG. 2, the target editing region determined in step 102 may be region 61 shown in FIG. 6; that is, the target editing region 61 may include an expansion sub-region 611 and a squeezing sub-region 612, and the system is to perform expansion processing on the expansion sub-region 611 and squeezing processing on the squeezing sub-region 612.
The target editing area comprises the expansion sub-area and the extrusion sub-area, and under the condition that the target editing area comprises the expansion sub-area and the extrusion sub-area, the expansion sub-area is expanded, and the extrusion sub-area is extruded, so that the character patterns of characters used on the electronic equipment can better meet the personalized requirements of users.
Optionally, there is a coinciding sub-region where the expanded sub-region coincides with the squeezed sub-region;
the expanding the expanded sub-region and the extruding the extruded sub-region to obtain the first image after the deformation processing includes:
expanding the expanded subareas except the overlapped subareas, and extruding the extruded subareas except the overlapped subareas; and under the condition that the distance between the center points of the overlapped sub-region and the expanded sub-region is smaller than the distance between the center points of the overlapped sub-region and the extruded sub-region, expanding the overlapped sub-region; and under the condition that the distance between the center points of the overlapped sub-region and the expanded sub-region is larger than the distance between the center points of the overlapped sub-region and the extruded sub-region, extruding the overlapped sub-region to obtain the first image after deformation processing.
For ease of understanding, the character "吃" (eat) is again used as an example:
Assuming that the first image acquired in step 101 is as shown in FIG. 2, the target editing region determined in step 102 is region 71 shown in FIG. 7; that is, the target editing region 71 includes an expansion sub-region 711 and a squeezing sub-region 712, with an overlapped sub-region 713 where the two coincide. The center point of the expansion sub-region 711 is 7111, and the center point of the squeezing sub-region 712 is 7121. As can be seen from FIG. 7, the overlapped sub-region 713 is closer to center point 7111 and farther from center point 7121. Therefore, the system performs expansion processing on the parts of the expansion sub-region 711 other than the overlapped sub-region 713, performs squeezing processing on the parts of the squeezing sub-region 712 other than the overlapped sub-region 713, and also performs expansion processing on the overlapped sub-region 713.
When an overlapped sub-region exists between the expanded sub-region and the squeezed sub-region, the system applies to the overlapped sub-region the deformation mode of whichever deformation region is closer to it, so that the overall deformation effect of the first character is better.
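The distance rule above can be sketched as a small helper. The function name and the point representation are illustrative, not part of the patent; the tie case (equal distances) is not specified in the text, and squeezing is chosen for it here arbitrarily:

```python
import math

def choose_overlap_mode(overlap_center, expand_center, squeeze_center):
    """Pick the deformation mode for an overlapped sub-region.

    Implements the rule described above: the overlapped sub-region
    takes the mode of whichever sub-region's center point lies
    closer to the overlapped sub-region's own center point.
    """
    d_expand = math.dist(overlap_center, expand_center)
    d_squeeze = math.dist(overlap_center, squeeze_center)
    return "expand" if d_expand < d_squeeze else "squeeze"
```

In the fig. 7 example, the overlapped sub-region 713 is closer to center point 7111 than to 7121, so this helper would return `"expand"`.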
Optionally, after the deforming processing is performed on the target editing region to obtain the deformed first image, and before the deformed first image is packaged into a font file, the method further includes:
and adjusting the stroke width of the first character in the first image after the deformation processing to be the same as the stroke width of the first character in the first image before the deformation processing.
In an embodiment of the present invention, the adjusting the stroke width of the first character in the first image after the deformation processing to be the same as the stroke width of the first character in the first image before the deformation processing specifically includes:
determining T third key points of the first character in the first image before deformation processing, and acquiring the stroke width of each third key point, wherein T is a positive integer greater than 1;
determining T fourth key points of the first character in the first image after deformation processing, wherein the T fourth key points correspond to the T third key points one by one;
respectively determining a first vertex and a second vertex corresponding to each fourth key point according to the stroke width of each third key point, wherein the first vertex and the second vertex corresponding to each fourth key point are symmetrical about the fourth key point, and the length of a connecting line of the first vertex and the second vertex corresponding to each fourth key point is equal to the stroke width of the third key point corresponding to the fourth key point;
determining a character adjustment contour of a first character in the first image after deformation processing based on all the first vertexes and all the second vertexes;
and filling the character adjustment contour with the same color as the font color of the first character in the first image after the deformation processing, to obtain the first character with its stroke width restored.
Here, the T third key points may be any T key points among all the key points of the first character in the first image before the deformation processing; a key point of the first character may refer to an important track point in a stroke of the first character, such as an inflection point in the stroke.
Determining the character adjustment contour of the first character in the first image after the deformation processing based on all the first vertices and all the second vertices may be implemented as follows: connect the first vertices or the second vertices corresponding to adjacent fourth key points in the same stroke, respectively, to obtain the character adjustment contour of the first character in the first image after the deformation processing.
For ease of understanding, the first character is here exemplified by the character for "friend":
Assuming that the first image after the deformation processing is as shown in fig. 8, all the first vertices and all the second vertices may be as shown in fig. 9, where each small white dot represents one first vertex or one second vertex. On the basis of fig. 9, by connecting the first vertices or the second vertices corresponding to adjacent fourth key points in the same stroke, the character adjustment contour of the first character in the first image after the deformation processing can be obtained, as shown in fig. 10, where 100 represents the character adjustment contour. Finally, the character adjustment contour is filled with black.
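As a rough sketch of how the first and second vertices could be computed: assume each fourth key point carries the stroke's local unit normal and the stroke width measured at the matching third key point (the function names and the normal-vector representation are assumptions for illustration, not the patent's method):

```python
def stroke_vertices(key_point, unit_normal, width):
    """First and second vertices for one fourth key point.

    The two vertices are symmetric about the key point along the
    stroke's normal direction, and the segment joining them has a
    length equal to the stroke width at the corresponding third
    key point (i.e., the pre-deformation width).
    """
    x, y = key_point
    nx, ny = unit_normal
    half = width / 2.0
    return (x + nx * half, y + ny * half), (x - nx * half, y - ny * half)

def adjustment_contour(key_points, normals, widths):
    """Character adjustment contour for a single stroke: walk the
    first vertices forward, then the second vertices backward, so
    that connecting consecutive vertices traces a closed outline."""
    firsts, seconds = [], []
    for p, n, w in zip(key_points, normals, widths):
        a, b = stroke_vertices(p, n, w)
        firsts.append(a)
        seconds.append(b)
    return firsts + seconds[::-1]
```

Filling such a closed contour with the first character's font color then restores the original stroke width, as described above.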
It should be noted that, various optional implementations described in the embodiments of the present invention may be implemented in combination with each other or implemented separately, and the embodiments of the present invention are not limited thereto.
Referring to fig. 11, fig. 11 is a structural diagram of an electronic device according to an embodiment of the present invention, and as shown in fig. 11, the electronic device 1100 includes:
an obtaining module 1101, configured to obtain a first image, where the first image includes a first character of a first font;
a determining module 1102, configured to determine, based on a first input of a user, a target editing region of a first text in the first image;
a deformation module 1103, configured to perform deformation processing on the target editing region to obtain the first image after the deformation processing;
and an encapsulating module 1104, configured to encapsulate the first image after the deformation processing into a font file.
Optionally, the determining module 1102 includes:
the device comprises an acquisition unit, a processing unit and a display unit, wherein the acquisition unit is used for acquiring a target template image based on first input of a user, the target template image comprises first characters of a second font and a first indication frame, and the first indication frame is used for indicating a first editing area of the first characters in the target template image;
and the determining unit is used for determining a target editing area of the first character in the first image based on the first editing area.
Optionally, the obtaining unit includes:
the template selection device comprises a first display subunit, a second display subunit and a display unit, wherein the template selection interface comprises N preset template images, N is a positive integer, and each template image in the N template images comprises a first character of a second character form and an indication frame for indicating an editing area of the first character in the template image;
a first determining subunit, configured to determine, based on a first input of a user, a target template image from the N template images.
Optionally, the obtaining unit includes:
the second display subunit is used for displaying a preset target editing image, and the target editing image comprises first characters in a second font;
the second determining subunit is configured to determine, based on a first input of a user, a first editing region of a first text in the target editing image, and add a first instruction frame corresponding to the first editing region in the target editing image;
and the third determining subunit is used for determining the target editing image added with the first indication frame as a target template image.
Optionally, the determining unit includes:
an obtaining subunit, configured to determine M first key points of a first text in the target template image, obtain a relative position relationship between the M first key points and a center point of the first editing region, and obtain a shape and an area of the first editing region, where M is a positive integer;
a fourth determining subunit, configured to determine, based on the M first key points, M second key points of the first text in the first image;
a fifth determining subunit, configured to determine a central point of the target editing region based on the M second key points and the relative position relationship;
a sixth determining subunit, configured to determine, based on a center point of the target editing region and the shape and the area of the first editing region, the target editing region of the first text in the first image.
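The key-point transfer performed by these subunits can be sketched as follows; averaging the per-key-point estimates is one plausible way of combining the M offsets, not something the patent specifies, and the names are illustrative:

```python
def target_region_center(first_kps, template_center, second_kps):
    """Estimate the center point of the target editing region.

    Each first key point stores its offset to the first editing
    region's center (the "relative position relationship"); the
    same offset is applied to the corresponding second key point
    in the first image, and the M estimates are averaged.
    """
    cx, cy = template_center
    estimates = [(sx + (cx - fx), sy + (cy - fy))
                 for (fx, fy), (sx, sy) in zip(first_kps, second_kps)]
    m = len(estimates)
    return (sum(x for x, _ in estimates) / m,
            sum(y for _, y in estimates) / m)
```

The target editing region is then laid out around this center point with the same shape and area as the first editing region.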
Optionally, the target editing region includes: an expanded sub-region and a squeezed sub-region;
the deformation module 1103 is configured to:
expand the expanded sub-region, and squeeze the squeezed sub-region, to obtain the first image after the deformation processing.
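One common way to realize such expansion (bulge) and squeezing (pinch) is a radial warp around the sub-region's center point. The sketch below uses nearest-neighbor resampling on a plain 2D list and is only an illustration of the idea, not the patent's actual algorithm:

```python
import math

def radial_warp(img, center, radius, strength):
    """Expand (strength > 0) or squeeze (strength < 0) a circular
    sub-region of a grayscale image given as a 2D list of values.

    For each output pixel within `radius` of `center`, the source
    sample is taken at a radially scaled position: sampling toward
    the center magnifies (expands) central strokes, sampling away
    from it compresses (squeezes) them.
    """
    h, w = len(img), len(img[0])
    cx, cy = center
    out = [row[:] for row in img]
    for y in range(h):
        for x in range(w):
            dx, dy = x - cx, y - cy
            d = math.hypot(dx, dy)
            if d == 0 or d >= radius:
                continue  # outside the sub-region: copy unchanged
            t = d / radius
            scale = 1.0 - strength * (1.0 - t) ** 2  # 1.0 at the rim
            sx, sy = round(cx + dx * scale), round(cy + dy * scale)
            if 0 <= sx < w and 0 <= sy < h:
                out[y][x] = img[sy][sx]
    return out
```

The falloff `(1 - t) ** 2` makes the warp strongest at the center and vanish at the sub-region boundary, so the deformed region blends into the untouched parts of the first image.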
Optionally, there is an overlapped sub-region where the expanded sub-region overlaps the squeezed sub-region;
the deformation module 1103 is configured to:
expand the parts of the expanded sub-region other than the overlapped sub-region, and squeeze the parts of the squeezed sub-region other than the overlapped sub-region; expand the overlapped sub-region in a case that the distance between the center point of the overlapped sub-region and the center point of the expanded sub-region is smaller than the distance between the center point of the overlapped sub-region and the center point of the squeezed sub-region; and squeeze the overlapped sub-region in a case that the former distance is larger than the latter, to obtain the first image after the deformation processing.
Optionally, the electronic device 1100 further includes:
and the stroke width adjusting module is used for adjusting the stroke width of the first character in the first image after the deformation processing to be the same as the stroke width of the first character in the first image before the deformation processing.
The electronic device 1100 is capable of implementing each process implemented by the electronic device in the method embodiments of fig. 1 to fig. 10, and is not described here again to avoid repetition.
According to the electronic device 1100 of the embodiment of the present invention, the target editing region of the first character of the first font in the first image can be determined based on the first input of the user, the target editing region can be subjected to deformation processing, and the first image after the deformation processing can be packaged into a font file, so that the deformed first character can be used on the electronic device. In this way, the user can edit the font of characters according to his or her own preference, the fonts used on the electronic device can better meet the personalized requirements of the user, and the user experience is improved; meanwhile, the workload of font designers can be reduced.
Fig. 12 is a schematic diagram of a hardware structure of an electronic device for implementing various embodiments of the present invention, where the electronic device 1200 includes, but is not limited to: radio frequency unit 1201, network module 1202, audio output unit 1203, input unit 1204, sensor 1205, display unit 1206, user input unit 1207, interface unit 1208, memory 1209, processor 1210, and power source 1211. Those skilled in the art will appreciate that the electronic device configuration shown in fig. 12 does not constitute a limitation of the electronic device, and that the electronic device may include more or fewer components than shown, or some components may be combined, or a different arrangement of components. In the embodiment of the present invention, the electronic device includes, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted terminal, a wearable device, a pedometer, and the like.
Wherein, the processor 1210 is configured to:
acquiring a first image, wherein the first image comprises first characters of a first font;
determining a target editing area of a first character in the first image based on a first input of a user;
carrying out deformation processing on the target editing area to obtain the first image after deformation processing;
and packaging the first image after the deformation processing into a font file.
Optionally, the determining, by the processor 1210, a target editing region of a first text in the first image based on the first input of the user includes:
acquiring a target template image based on first input of a user, wherein the target template image comprises first characters of a second font and a first indication frame, and the first indication frame is used for indicating a first editing area of the first characters in the target template image;
and determining a target editing area of the first character in the first image based on the first editing area.
Optionally, the obtaining of the target template image based on the first input of the user performed by the processor 1210 includes:
controlling a display unit 1206 to display a template selection interface, where the template selection interface includes N preset template images, where N is a positive integer, and each of the N template images includes a first character of a second font and an indication frame for indicating an editing area of the first character in the template image;
determining a target template image from the N template images based on a first input of a user.
Optionally, the obtaining of the target template image based on the first input of the user performed by the processor 1210 includes:
controlling a display unit 1206 to display a preset target editing image, wherein the target editing image comprises first characters of a second font;
determining a first editing area of a first character in the target editing image based on a first input of a user, and adding a first indication frame corresponding to the first editing area in the target editing image;
and determining the target editing image added with the first indication frame as a target template image.
Optionally, the determining, by the processor 1210, a target editing region of a first text in the first image based on the first editing region includes:
determining M first key points of a first character in the target template image, acquiring relative position relations between the M first key points and a central point of the first editing area, and acquiring the shape and the area of the first editing area, wherein M is a positive integer;
determining M second key points of a first character in the first image based on the M first key points;
determining a central point of a target editing area based on the M second key points and the relative position relation;
and determining the target editing area of the first character in the first image based on the central point of the target editing area and the shape and the area of the first editing area.
Optionally, the target editing region includes: an expanded sub-region and a squeezed sub-region;
the deformation processing performed on the target editing region by the processor 1210 to obtain the first image after the deformation processing includes:
expanding the expanded sub-region, and squeezing the squeezed sub-region, to obtain the first image after the deformation processing.
Optionally, there is an overlapped sub-region where the expanded sub-region overlaps the squeezed sub-region;
the expanding of the expanded sub-region and the squeezing of the squeezed sub-region performed by the processor 1210 to obtain the first image after the deformation processing include:
expanding the parts of the expanded sub-region other than the overlapped sub-region, and squeezing the parts of the squeezed sub-region other than the overlapped sub-region; expanding the overlapped sub-region in a case that the distance between the center point of the overlapped sub-region and the center point of the expanded sub-region is smaller than the distance between the center point of the overlapped sub-region and the center point of the squeezed sub-region; and squeezing the overlapped sub-region in a case that the former distance is larger than the latter, to obtain the first image after the deformation processing.
Optionally, the processor 1210 is further configured to:
and adjusting the stroke width of the first character in the first image after the deformation processing to be the same as the stroke width of the first character in the first image before the deformation processing.
The electronic device 1200 can implement the processes implemented by the electronic device in the foregoing embodiments, and in order to avoid repetition, the details are not described here.
According to the electronic device 1200 of the embodiment of the present invention, the target editing region of the first character of the first font in the first image can be determined based on the first input of the user, the target editing region can be subjected to deformation processing, and the first image after the deformation processing can be packaged into a font file, so that the deformed first character can be used on the electronic device. In this way, the user can edit the font of characters according to his or her own preference, the fonts used on the electronic device can better meet the personalized requirements of the user, and the user experience is improved; meanwhile, the workload of font designers can be reduced.
It should be understood that, in the embodiment of the present invention, the radio frequency unit 1201 may be used for receiving and sending signals during information transmission and reception or during a call; specifically, it receives downlink data from a base station and forwards the data to the processor 1210 for processing, and sends uplink data to the base station. Typically, the radio frequency unit 1201 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 1201 can also communicate with a network and other devices through a wireless communication system.
The electronic device provides wireless broadband internet access to the user via the network module 1202, such as to assist the user in emailing, browsing web pages, and accessing streaming media.
The audio output unit 1203 may convert audio data received by the radio frequency unit 1201 or the network module 1202 or stored in the memory 1209 into an audio signal and output as sound. Also, the audio output unit 1203 may also provide audio output related to a specific function performed by the electronic apparatus 1200 (e.g., a call signal reception sound, a message reception sound, and the like). The audio output unit 1203 includes a speaker, a buzzer, a receiver, and the like.
The input unit 1204 is used to receive audio or video signals. The input unit 1204 may include a Graphics Processing Unit (GPU) 12041 and a microphone 12042; the graphics processor 12041 processes image data of still pictures or video obtained by an image capturing apparatus (such as a camera) in a video capturing mode or an image capturing mode. The processed image frames may be displayed on the display unit 1206. The image frames processed by the graphics processor 12041 may be stored in the memory 1209 (or other storage medium) or transmitted via the radio frequency unit 1201 or the network module 1202. The microphone 12042 can receive sound and process it into audio data. In the case of the phone call mode, the processed audio data may be converted into a format that can be transmitted to a mobile communication base station via the radio frequency unit 1201.
The electronic device 1200 also includes at least one sensor 1205, such as a light sensor, motion sensor, and other sensors. Specifically, the light sensor includes an ambient light sensor that adjusts the brightness of the display panel 12061 according to the brightness of ambient light, and a proximity sensor that turns off the display panel 12061 and/or the backlight when the electronic device 1200 moves to the ear. As one type of motion sensor, an accelerometer sensor can detect the magnitude of acceleration in each direction (generally three axes), detect the magnitude and direction of gravity when stationary, and can be used to identify the posture of an electronic device (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), and vibration identification related functions (such as pedometer, tapping); the sensors 1205 may also include a fingerprint sensor, a pressure sensor, an iris sensor, a molecular sensor, a gyroscope, a barometer, a hygrometer, a thermometer, an infrared sensor, etc., and will not be described further herein.
The display unit 1206 is used to display information input by the user or information provided to the user. The Display unit 1206 may include a Display panel 12061, and the Display panel 12061 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like.
The user input unit 1207 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the electronic apparatus. Specifically, the user input unit 1207 includes a touch panel 12071 and other input devices 12072. The touch panel 12071, also referred to as a touch screen, may collect touch operations by a user on or near the touch panel 12071 (e.g., operations by a user on or near the touch panel 12071 using a finger, a stylus, or any suitable object or attachment). The touch panel 12071 may include two parts of a touch detection device and a touch controller. The touch detection device detects the touch direction of a user, detects a signal brought by touch operation and transmits the signal to the touch controller; the touch controller receives touch information from the touch sensing device, converts the touch information into touch point coordinates, sends the touch point coordinates to the processor 1210, receives a command from the processor 1210, and executes the command. In addition, the touch panel 12071 may be implemented by using various types such as a resistive type, a capacitive type, an infrared ray, and a surface acoustic wave. The user input unit 1207 may include other input devices 12072 in addition to the touch panel 12071. In particular, the other input devices 12072 may include, but are not limited to, a physical keyboard, function keys (such as volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described herein again.
Further, the touch panel 12071 may be overlaid on the display panel 12061, and when the touch panel 12071 detects a touch operation thereon or nearby, the touch operation is transmitted to the processor 1210 to determine the type of the touch event, and then the processor 1210 provides a corresponding visual output on the display panel 12061 according to the type of the touch event. Although the touch panel 12071 and the display panel 12061 are shown as two separate components in fig. 12 to implement the input and output functions of the electronic device, in some embodiments, the touch panel 12071 and the display panel 12061 may be integrated to implement the input and output functions of the electronic device, and this is not limited herein.
The interface unit 1208 is an interface for connecting an external device to the electronic apparatus 1200. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 1208 may be used to receive input from an external device (e.g., data information, power, etc.) and transmit the received input to one or more elements within the electronic apparatus 1200 or may be used to transmit data between the electronic apparatus 1200 and the external device.
The memory 1209 may be used to store software programs as well as various data. The memory 1209 may mainly include a storage program area and a storage data area, where the storage program area may store an operating system, an application program (such as a sound playing function, an image playing function, etc.) required by at least one function, and the like; the storage data area may store data (such as audio data, a phonebook, etc.) created according to the use of the cellular phone, and the like. Further, the memory 1209 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid-state storage device.
The processor 1210 is a control center of the electronic device, connects various parts of the whole electronic device using various interfaces and lines, and performs various functions of the electronic device and processes data by running or executing software programs and/or modules stored in the memory 1209 and calling data stored in the memory 1209, thereby performing overall monitoring of the electronic device. Processor 1210 may include one or more processing units; preferably, the processor 1210 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It is to be appreciated that the modem processor described above may not be integrated into processor 1210.
The electronic device 1200 may further include a power source 1211 (e.g., a battery) for providing power to the various components, and preferably, the power source 1211 may be logically coupled to the processor 1210 via a power management system, such that the power management system may be configured to manage charging, discharging, and power consumption.
In addition, the electronic device 1200 includes some functional modules that are not shown, and are not described in detail herein.
Preferably, an embodiment of the present invention further provides an electronic device, which includes a processor 1210, a memory 1209, and a computer program stored in the memory 1209 and capable of running on the processor 1210, where the computer program, when executed by the processor 1210, implements each process of the above-mentioned font editing method embodiment, and can achieve the same technical effect, and in order to avoid repetition, details are not described here again.
The embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the computer program implements each process of the above-mentioned font editing method embodiment, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here. The computer-readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in the process, method, article, or apparatus that comprises the element.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
While the present invention has been described with reference to the embodiments shown in the drawings, the present invention is not limited to the embodiments, which are illustrative and not restrictive, and it will be apparent to those skilled in the art that various changes and modifications can be made therein without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (14)

1. A method for font editing, comprising:
acquiring a first image, wherein the first image comprises first characters of a first font;
determining a target editing area of a first character in the first image based on a first input of a user;
carrying out deformation processing on the target editing area to obtain the first image after deformation processing;
packaging the first image after the deformation processing into a font file;
the target editing region includes: an expanded subregion and an extruded subregion;
the deforming the target editing area to obtain the deformed first image includes:
expanding the expanded subarea, and extruding the extruded subarea to obtain the first image after deformation;
there is an overlapped sub-region where the expanded sub-region overlaps the extruded sub-region;
the expanding the expanded sub-region and the extruding the extruded sub-region to obtain the first image after the deformation processing includes:
expanding the expanded subareas except the overlapped subareas, and extruding the extruded subareas except the overlapped subareas; and under the condition that the distance between the center points of the overlapped sub-region and the expanded sub-region is smaller than the distance between the center points of the overlapped sub-region and the extruded sub-region, expanding the overlapped sub-region; and under the condition that the distance between the center points of the overlapped sub-region and the expanded sub-region is larger than the distance between the center points of the overlapped sub-region and the extruded sub-region, extruding the overlapped sub-region to obtain the first image after deformation processing.
2. The method of claim 1, wherein determining the target editing region for the first text in the first image based on the first input from the user comprises:
acquiring a target template image based on first input of a user, wherein the target template image comprises first characters of a second font and a first indication frame, and the first indication frame is used for indicating a first editing area of the first characters in the target template image;
and determining a target editing area of the first character in the first image based on the first editing area.
3. The method of claim 2, wherein obtaining the target template image based on the first input of the user comprises:
displaying a template selection interface, wherein the template selection interface comprises N preset template images, N is a positive integer, and each template image in the N template images comprises a first character of a second font and an indication frame for indicating an editing area of the first character in the template image;
determining a target template image from the N template images based on a first input of a user.
4. The method of claim 2, wherein obtaining the target template image based on the first input of the user comprises:
displaying a preset target editing image, wherein the target editing image comprises first characters of a second font;
determining a first editing area of a first character in the target editing image based on a first input of a user, and adding a first indication frame corresponding to the first editing area in the target editing image;
and determining the target editing image added with the first indication frame as a target template image.
5. The method of claim 2, wherein determining the target editing region of the first text in the first image based on the first editing region comprises:
determining M first key points of the first text in the target template image, acquiring the relative positional relationship between the M first key points and the central point of the first editing region, and acquiring the shape and the area of the first editing region, wherein M is a positive integer;
determining M second key points of the first text in the first image based on the M first key points;
determining the central point of the target editing region based on the M second key points and the relative positional relationship;
determining the target editing region of the first text in the first image based on the central point of the target editing region and the shape and the area of the first editing region.
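The region-transfer steps of claim 5 can be sketched as follows. This is a minimal illustration, not the patented implementation: key-point detection itself is out of scope here, all function and parameter names are invented for the example, and the "relative positional relationship" is modeled as the offset of the region center from each key point.

```python
import numpy as np

def transfer_editing_region(template_keypoints, template_center,
                            target_keypoints, region_shape, region_area):
    """Transfer an editing region from a template image to a target image.

    Steps, following claim 5:
    (1) express the first editing region's center relative to the M template
        key points,
    (2) take the M corresponding key points already found in the target image,
    (3) reapply the relative offsets to place the target region's center, and
    (4) reuse the template region's shape and area around that center.
    """
    template_keypoints = np.asarray(template_keypoints, dtype=float)
    target_keypoints = np.asarray(target_keypoints, dtype=float)
    template_center = np.asarray(template_center, dtype=float)

    # Relative positional relationship: offset of the center from each key point.
    offsets = template_center - template_keypoints          # shape (M, 2)

    # Apply the same offsets to the target key points; average the M estimates.
    target_center = (target_keypoints + offsets).mean(axis=0)

    # The target editing region reuses the first editing region's shape and area.
    return {"center": tuple(target_center),
            "shape": region_shape,
            "area": region_area}
```

With two key points at (0, 0) and (10, 0), a template center at (5, 5), and the matching target key points shifted to (2, 2) and (12, 2), the sketch places the target center at (7, 7) while keeping the original shape and area.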
6. The method of claim 1, wherein after performing the deformation processing on the target editing region to obtain the deformed first image and before packaging the deformed first image into a font file, the method further comprises:
adjusting the stroke width of the first text in the deformed first image to be the same as the stroke width of the first text in the first image before the deformation processing.
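The stroke-width normalization of claim 6 can be sketched with simple morphological operations. This is only an illustrative sketch under stated assumptions: the glyph is a binary image that does not touch the image border, stroke width is estimated as twice the number of erosions needed to remove all ink, and all function names are invented for the example (the patent does not prescribe this method).

```python
import numpy as np

def _dilate(img):
    # 4-neighbour binary dilation via shifted copies of the input.
    out = img.copy()
    out[1:, :] |= img[:-1, :]
    out[:-1, :] |= img[1:, :]
    out[:, 1:] |= img[:, :-1]
    out[:, :-1] |= img[:, 1:]
    return out

def _erode(img):
    # Erosion is dilation of the complement (outside counts as ink).
    return ~_dilate(~img)

def stroke_width(img):
    # Width estimate: each erosion thins a stroke by ~1 pixel per side,
    # so width ~ 2 * (erosions needed to remove all ink).
    steps = 0
    while img.any():
        img = _erode(img)
        steps += 1
    return 2 * steps

def match_stroke_width(deformed, reference_width):
    # Dilate or erode the deformed glyph until its estimated stroke
    # width matches the pre-deformation reference width.
    img = deformed.copy()
    while stroke_width(img) < reference_width:
        img = _dilate(img)
    while stroke_width(img) > reference_width:
        img = _erode(img)
    return img
```

For example, if the deformation thinned a 4-pixel-wide stroke to 2 pixels, one dilation restores the measured width to 4.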
7. An electronic device, comprising:
an acquisition module, configured to acquire a first image, wherein the first image comprises first text in a first font;
a determining module, configured to determine a target editing region of the first text in the first image based on a first input of a user;
a deformation module, configured to perform deformation processing on the target editing region to obtain the deformed first image;
a packaging module, configured to package the deformed first image into a font file;
wherein the target editing region comprises an expanded sub-region and a squeezed sub-region;
the deformation module is configured to:
expand the expanded sub-region and squeeze the squeezed sub-region to obtain the deformed first image;
wherein the expanded sub-region and the squeezed sub-region overlap in an overlapped region;
the deformation module is further configured to:
expand the portion of the expanded sub-region outside the overlapped region, and squeeze the portion of the squeezed sub-region outside the overlapped region; expand the overlapped region when the distance between the central point of the overlapped region and the central point of the expanded sub-region is smaller than the distance between the central point of the overlapped region and the central point of the squeezed sub-region; and squeeze the overlapped region when the distance between the central point of the overlapped region and the central point of the expanded sub-region is larger than the distance between the central point of the overlapped region and the central point of the squeezed sub-region, to obtain the deformed first image.
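The overlap rule in claim 7 reduces to a distance comparison between center points: the overlapped region follows whichever sub-region's center is nearer. A minimal sketch of that decision, with invented names (the claim leaves the equal-distance case unspecified, which the sketch reports explicitly):

```python
import math

def overlap_action(overlap_center, expand_center, squeeze_center):
    """Pick the deformation applied to the overlapped region.

    The overlapped region is expanded when its central point lies closer
    to the expanded sub-region's central point, and squeezed when it lies
    closer to the squeezed sub-region's central point.
    """
    d_expand = math.dist(overlap_center, expand_center)
    d_squeeze = math.dist(overlap_center, squeeze_center)
    if d_expand < d_squeeze:
        return "expand"
    if d_expand > d_squeeze:
        return "squeeze"
    return "tie"  # not specified by the claim
```

For instance, an overlap centered at the origin with the expanded sub-region centered at (1, 0) and the squeezed sub-region at (5, 0) is expanded.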
8. The electronic device of claim 7, wherein the determining module comprises:
an acquisition unit, configured to acquire a target template image based on the first input of the user, wherein the target template image comprises the first text in a second font and a first indication frame, and the first indication frame indicates a first editing region of the first text in the target template image;
a determining unit, configured to determine the target editing region of the first text in the first image based on the first editing region.
9. The electronic device of claim 8, wherein the acquisition unit comprises:
a first display subunit, configured to display a template selection interface, wherein the template selection interface comprises N preset template images, N is a positive integer, and each of the N template images comprises the first text in the second font and an indication frame indicating an editing region of the first text in that template image;
a first determining subunit, configured to determine the target template image from the N template images based on the first input of the user.
10. The electronic device of claim 8, wherein the acquisition unit comprises:
a second display subunit, configured to display a preset target editing image, wherein the target editing image comprises the first text in the second font;
a second determining subunit, configured to determine the first editing region of the first text in the target editing image based on the first input of the user, and to add a first indication frame corresponding to the first editing region to the target editing image;
a third determining subunit, configured to determine the target editing image with the first indication frame added as the target template image.
11. The electronic device of claim 8, wherein the determining unit comprises:
an acquiring subunit, configured to determine M first key points of the first text in the target template image, acquire the relative positional relationship between the M first key points and the central point of the first editing region, and acquire the shape and the area of the first editing region, wherein M is a positive integer;
a fourth determining subunit, configured to determine M second key points of the first text in the first image based on the M first key points;
a fifth determining subunit, configured to determine the central point of the target editing region based on the M second key points and the relative positional relationship;
a sixth determining subunit, configured to determine the target editing region of the first text in the first image based on the central point of the target editing region and the shape and the area of the first editing region.
12. The electronic device of claim 7, further comprising:
a stroke width adjusting module, configured to adjust the stroke width of the first text in the deformed first image to be the same as the stroke width of the first text in the first image before the deformation processing.
13. An electronic device, comprising a processor, a memory, and a computer program stored on the memory and executable on the processor, wherein the computer program, when executed by the processor, implements the steps of the font editing method of any one of claims 1 to 6.
14. A computer-readable storage medium, on which a computer program is stored, wherein the computer program, when executed by a processor, implements the steps of the font editing method of any one of claims 1 to 6.
CN202010298914.6A 2020-04-16 2020-04-16 Font editing method and electronic equipment Active CN111488104B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010298914.6A CN111488104B (en) 2020-04-16 2020-04-16 Font editing method and electronic equipment


Publications (2)

Publication Number Publication Date
CN111488104A CN111488104A (en) 2020-08-04
CN111488104B true CN111488104B (en) 2021-10-12

Family

ID=71798403

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010298914.6A Active CN111488104B (en) 2020-04-16 2020-04-16 Font editing method and electronic equipment

Country Status (1)

Country Link
CN (1) CN111488104B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112312022B (en) * 2020-10-30 2022-04-15 维沃移动通信有限公司 Image processing method, image processing apparatus, electronic device, and storage medium
CN113362426B (en) * 2021-06-21 2023-03-31 维沃移动通信(杭州)有限公司 Image editing method and image editing device
CN113515919B (en) * 2021-09-14 2022-01-07 北京江融信科技有限公司 Method and system for generating Chinese TrueType font
CN114415912A (en) * 2021-12-31 2022-04-29 乐美科技股份私人有限公司 Element editing method and device, electronic equipment and storage medium
CN114117366B (en) * 2022-01-25 2022-04-08 合肥高维数据技术有限公司 Character deformation method and system based on full character transformation
CN115048915B (en) * 2022-08-17 2022-11-01 国网浙江省电力有限公司 Data processing method and system of electric power file based on operation platform

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101667102A (en) * 2009-09-21 2010-03-10 宇龙计算机通信科技(深圳)有限公司 Realizing method for personalized fonts and electronic terminal
CN101699518A (en) * 2009-10-30 2010-04-28 华南理工大学 Method for beautifying handwritten Chinese character based on trajectory analysis
CN102955765A (en) * 2011-08-22 2013-03-06 文鼎科技开发股份有限公司 Method for finely adjusting Chinese characters according to font sizes and Chinese character fine adjustment system
CN104077269A (en) * 2013-03-25 2014-10-01 三星电子株式会社 Display apparatus and method of outputting text thereof
CN104834389A (en) * 2015-05-13 2015-08-12 安阳师范学院 Chinese character Webfont generation method
CN108459999A (en) * 2018-02-05 2018-08-28 杭州时趣信息技术有限公司 A kind of font design method, system, equipment and computer readable storage medium
CN109242796A (en) * 2018-09-05 2019-01-18 北京旷视科技有限公司 Character image processing method, device, electronic equipment and computer storage medium
CN110377167A (en) * 2019-07-08 2019-10-25 三星电子(中国)研发中心 Font production method and font generation device
CN110968991A (en) * 2018-09-28 2020-04-07 北京国双科技有限公司 Method and related device for editing characters

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1302010A (en) * 1999-12-29 2001-07-04 陈青 Highly compressed stroke-base hand writing Chinese character processing technology
CN102541393A (en) * 2010-12-09 2012-07-04 上海无戒空间信息技术有限公司 Handwritten text editing method
KR20130004654A (en) * 2011-07-04 2013-01-14 삼성전자주식회사 Method and device for editing text in wireless terminal
CN105653165A (en) * 2015-12-24 2016-06-08 小米科技有限责任公司 Method and device for regulating character display
US9971854B1 (en) * 2017-06-29 2018-05-15 Best Apps, Llc Computer aided systems and methods for creating custom products
CN108132727A (en) * 2017-12-07 2018-06-08 青岛海信电器股份有限公司 Handwriting regulation method and device based on touch control face plates
CN108132749B (en) * 2017-12-21 2020-02-11 维沃移动通信有限公司 Image editing method and mobile terminal
CN109448069B (en) * 2018-10-30 2023-07-18 维沃移动通信有限公司 Template generation method and mobile terminal
CN110909524B (en) * 2019-11-27 2023-12-26 维沃移动通信有限公司 Editing method and electronic equipment



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant