CN112840308A - Method for optimizing font and related equipment - Google Patents

Method for optimizing font and related equipment

Info

Publication number
CN112840308A
Authority
CN
China
Prior art keywords
font
data
character
image data
library
Prior art date
Legal status
Granted
Application number
CN201880098687.3A
Other languages
Chinese (zh)
Other versions
CN112840308B (en)
Inventor
陈岩
Current Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Shenzhen Huantai Technology Co Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Shenzhen Huantai Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd, Shenzhen Huantai Technology Co Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Publication of CN112840308A publication Critical patent/CN112840308A/en
Application granted granted Critical
Publication of CN112840308B publication Critical patent/CN112840308B/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Controls And Circuits For Display Device (AREA)

Abstract

A method for optimizing fonts, applied to an electronic device, comprises the following steps: acquiring character data and preprocessing it to obtain font data (201); inputting the font data into a trained font scoring model to obtain a font score corresponding to the font data (202); judging whether the font score is smaller than a score threshold and, if so, determining a target font according to the font data (203); and acquiring the characters corresponding to the font data from the target font and displaying them (204). By optimizing low-scoring handwritten fonts, the method improves the attractiveness of the handwritten-font display on a mobile phone and provides a better user experience.

Description

Method for optimizing font and related equipment

Technical Field
The application relates to the field of intelligent equipment, in particular to a method for optimizing fonts and related equipment.
Background
With the popularization of electronic devices such as smartphones, mobile phones are used ever more widely in daily life. When setting the phone font, many users want to use their own handwriting as the system font. However, handwriting varies from person to person, and using it directly easily degrades the aesthetics of the phone's display interface, leading to a poor user experience.
Disclosure of Invention
The embodiment of the application discloses a method and related equipment for optimizing fonts, which can score handwritten fonts and optimize the handwritten fonts with low scores.
In a first aspect, an embodiment of the present application discloses a method for optimizing a font, which is applied to an electronic device, and the method includes:
acquiring character data, and preprocessing the character data to obtain font data;
inputting the font data into a trained font scoring model to obtain a font score corresponding to the font data;
judging whether the font score is smaller than a score threshold value, and if the font score is smaller than the score threshold value, determining a target font according to the font data;
and acquiring characters corresponding to the font data from the target font, and displaying the characters.
In a second aspect, an embodiment of the present application discloses an optimized font device, where the optimized font device includes:
the character preprocessing unit is used for preprocessing the character data to obtain font data;
the scoring unit is used for inputting the font data into a trained font scoring model to obtain a font score corresponding to the font data;
the determining unit is used for judging whether the font score is smaller than a score threshold value or not, and if the font score is smaller than the score threshold value, determining a target font according to the font data;
and the display unit is used for acquiring the characters corresponding to the font data from the target font and displaying the characters.
In a third aspect, an embodiment of the present application discloses a mobile terminal, including a processor, a memory, a communication interface, and one or more programs, where the one or more programs are stored in the memory and configured to be executed by the processor, and the program includes instructions for executing the steps in the first aspect of the embodiment of the present application.
In a fourth aspect, an embodiment of the present application discloses a computer-readable storage medium, where the computer-readable storage medium stores a computer program for electronic data exchange, where the computer program enables a computer to perform some or all of the steps as described in the first aspect of the embodiment of the present application.
In a fifth aspect, embodiments of the present application disclose a computer program product, wherein the computer program product comprises a non-transitory computer-readable storage medium storing a computer program, the computer program being operable to cause a computer to perform some or all of the steps as described in the first aspect of the embodiments of the present application. The computer program product may be a software installation package.
It can be seen that, in the embodiments of the application, the electronic device acquires character data, obtains font data through preprocessing, scores the font data, replaces a low-scoring font with a target font, and acquires and displays the characters corresponding to the font data. Because the electronic device displays characters only after optimizing low-scoring handwritten fonts, both the attractiveness of the display interface and the user experience are improved.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed to be used in the embodiments will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and it is obvious for those skilled in the art that other drawings can be obtained according to these drawings without creative efforts.
Fig. 1 is a schematic system structure diagram of a method for font optimization disclosed in an embodiment of the present application.
Fig. 2 is a flowchart illustrating a method for optimizing a font according to an embodiment of the present application.
Fig. 3 is a flowchart illustrating a method for preprocessing text data according to an embodiment of the present disclosure.
Fig. 4 is a flowchart illustrating a method for determining a target font, disclosed in an embodiment of the present application.
Fig. 5 is a schematic structural diagram of an apparatus 500 for optimizing fonts disclosed in an embodiment of the present application.
Fig. 6 is a block diagram of a partial structure of a mobile phone related to a mobile terminal disclosed in an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The terms "first," "second," "third," and "fourth," etc. in the description and claims of the invention and in the accompanying drawings are used for distinguishing between different objects and not for describing a particular order. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, result, or characteristic described in connection with the embodiment can be included in at least one embodiment of the invention. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
The mobile terminal according to the embodiments of the present application may include various handheld devices, vehicle-mounted devices, wearable devices, computing devices or other processing devices connected to a wireless modem, and various forms of User Equipment (UE), Mobile Stations (MS), terminal devices (terminal device), and so on. For convenience of description, the above-mentioned devices are collectively referred to as electronic devices.
The following describes embodiments of the present application in detail.
Referring to fig. 1, fig. 1 is a schematic system structure diagram of a method for optimizing fonts according to an embodiment of the present application. The mobile terminal 101 is in communication connection with the cloud server 102. The mobile terminal 101 collects text data, preprocesses it to obtain font data, and sends the font data to the cloud server 102. After receiving the font data, the cloud server 102 feeds it into the font scoring model to compute the font score corresponding to the font data and judges whether the font score is smaller than the score threshold. For a font whose score is below the threshold, the cloud server 102 feeds the font into the feature extraction model to obtain font features, matches these features against the fonts in the font library to determine the target font, and sends the target font to the mobile terminal 101. The mobile terminal 101 then obtains the characters corresponding to the font data from the target font and displays them.
Referring to fig. 2, fig. 2 is a flowchart illustrating a method for optimizing a font according to an embodiment of the present application.
Step 201, obtaining text data, and preprocessing the text data to obtain font data.
Optionally, the text data includes character image data and touch-screen handwritten character data, where the character image data is picture data stored in the electronic device and the touch-screen handwritten character data is handwriting received through the touch screen of the electronic device.
Further, if the text data is text image data, all straight lines contained in the text image data are detected. The detection works as follows: for each edge pixel of the text image, all straight lines y = kx + b that may pass through that pixel are mapped into the k-b space and voted on. Since a line perpendicular to the x axis has no defined slope and cannot be represented by y = kx + b, the parametric equation r = x·cos(θ) + y·sin(θ) is used instead, where r is the distance from the origin to the line and θ is the angle between r and the positive x axis. After every edge point is mapped, voting is performed in Hough space: each time a line equation satisfies a point (r, θ), the pixel value at that point is incremented by 1, yielding a Hough-space image. The Hough-space image is filtered, local maxima are computed, and the line equations at the local maxima give all straight lines in the text image data.
Further, with all straight lines in the character image data obtained, the inclination angle of each line is calculated and the average of all inclination angles is taken; this average is the tilt of the character image data. The character image data is rotation-corrected according to the average to obtain first character image data.
Optionally, a search frame and a search step length are set. Starting from the first pixel of the first text image data, a full-image search is performed and the first text image data is cut to obtain second text image data comprising a plurality of test samples. The second text image data is thinned to obtain thinned second text image data, which is then normalized to obtain normalized second text image data. A matching template is acquired, and the normalized second text image data is matched against the template to obtain third text image data.
Further, the third character image data is fed into the feature extraction model to obtain the character features corresponding to the third character image, and the feature weights corresponding to those features are determined. A character library is acquired, and the sample character data in the library is fed into the trained feature extraction model to obtain the sample features corresponding to the sample character data. A character classifier is established according to the sample features and trained to obtain the trained character classifier. The character features and feature weights are then fed into the trained classifier to obtain a classification result, the recognized character corresponding to the character features is determined from that result, and the character features, the feature weights and the recognized character together form the font data.
Step 202, inputting the font data into the trained font scoring model to obtain the font score corresponding to the font data.
Optionally, before the font data is input into the trained font scoring model, a font scoring model is constructed: a plurality of handwritten-font training samples are acquired and scored, and the model is trained on the samples and their scores to obtain the trained font scoring model.
Further, the font data is input into the trained font scoring model to obtain a calculation result, and the calculation result is determined as the font score corresponding to the font data.
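The patent does not fix the scoring model's architecture. Below is a minimal sketch, assuming a small regression CNN over 64×64 glyph images producing scores in [0, 1]; the class name, layer sizes and random input batch are all illustrative, not from the patent.

```python
import torch
import torch.nn as nn

class FontScorer(nn.Module):
    """Hypothetical scoring model: 64x64 glyph image -> score in [0, 1]."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 64), nn.ReLU(),
            nn.Linear(64, 1), nn.Sigmoid(),   # squash to a [0, 1] font score
        )

    def forward(self, glyphs):                # glyphs: (N, 1, 64, 64)
        return self.head(self.features(glyphs)).squeeze(1)

model = FontScorer().eval()                   # weights would come from training on
glyph_batch = torch.rand(8, 1, 64, 64)        # scored handwriting samples; random here
with torch.no_grad():
    font_score = model(glyph_batch).mean().item()  # one aggregate score for the font data
```

Averaging the per-glyph scores into one font score is one plausible aggregation; the patent only requires that the model output a score for the font data.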
Step 203, determining whether the font score is smaller than a score threshold, and if the font score is smaller than the score threshold, determining a target font according to the font data.
Optionally, a score threshold is acquired and the font score is compared against it. If the font score is not smaller than the score threshold, a mapping relation between the font data and the font score is established and stored in the personalized font library. If the font score is smaller than the score threshold, a font library is acquired and one font is selected from it as the target font according to the font data.
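Read as control flow, step 203 reduces to a single branch. A minimal sketch follows, assuming the font data carries the recognized character and the personalized font library is a plain mapping; the threshold value and all names are illustrative, and the library lookup stands in for the feature matching of fig. 4.

```python
SCORE_THRESHOLD = 0.6          # assumed value; the patent leaves the threshold open

def handle_font(font_data, font_score, personal_library, font_library):
    if font_score < SCORE_THRESHOLD:
        # Low-scoring handwriting: pick a library font as the target font.
        # Stand-in for the feature matching of fig. 4: first font in the library.
        return next(iter(font_library))
    # Acceptable handwriting: record the mapping in the personalized font library.
    personal_library[font_data["char"]] = font_score
    return None
```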
And 204, acquiring characters corresponding to the font data from the target font, and displaying the characters.
Optionally, after the target font is determined, the text library corresponding to the target font is acquired, the text corresponding to the font data is fetched from that library as the replacement text, the original text data is replaced with the replacement text, and the replacement text is shown in the display interface.
Referring to fig. 3, fig. 3 is a flowchart illustrating a method for preprocessing text data according to an embodiment of the present disclosure.
Step 301, obtain text data, where the text data includes character image data, and detect the straight lines in the character image data.
Optionally, before detecting straight lines in the text image data, binarization is applied to the text image data, dividing its content into foreground information (black) and background information (white). The binarized text image data is then denoised; the denoising method includes, without limitation, mean filtering, median filtering, wavelet denoising and the like. The denoised text image data is obtained, and the text image data is updated accordingly.
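A sketch of this preprocessing with OpenCV. Otsu thresholding yields black foreground on white background as described; the patent leaves the denoising filter open, so a median filter stands in here, and the input file path is a hypothetical name.

```python
import cv2

image = cv2.imread("handwriting.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input file
# Otsu binarization: dark strokes become black foreground, paper becomes white.
_, binary = cv2.threshold(image, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
# One of the listed options; mean or wavelet filtering would work equally well.
denoised = cv2.medianBlur(binary, 3)
```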
Optionally, text data is obtained, and if the text data is text image data, a line-detection step is performed to obtain all straight lines contained in the text image data. The line-detection step is as follows: for each edge pixel of the text image, all straight lines y = kx + b that may pass through that pixel are mapped into the k-b space and voted on. Since a line perpendicular to the x axis has no defined slope and cannot be represented by y = kx + b, the parametric equation r = x·cos(θ) + y·sin(θ) is used instead, where r is the distance from the origin to the line and θ is the angle between r and the positive x axis. After every edge point is mapped, voting is performed in Hough space: each time a line equation satisfies a point (r, θ), the pixel value at that point is incremented by 1, yielding a Hough-space image. The Hough-space image is filtered, local maxima are computed, and the straight lines in the text image data are obtained from the local maxima. For example, if the pixel value of a first point in the Hough-space image is 210 and the pixel value of a second point is 10, more lines pass through the first point than through the second.
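OpenCV's cv2.HoughLines implements exactly this (r, θ) voting scheme, so the detection step can be sketched directly on the denoised image from the previous snippet; the vote threshold of 80 is an assumption.

```python
import cv2
import numpy as np

edges = cv2.Canny(denoised, 50, 150)      # edge pixels of the text image
# Accumulate votes in (r, theta) space; entries above the threshold are the
# local maxima, i.e. the detected line equations r = x*cos(theta) + y*sin(theta).
lines = cv2.HoughLines(edges, 1, np.pi / 180, 80)
```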
Step 302, calculating the inclination angle of the straight line, calculating the average value of the inclination angles, and determining the average value as the inclination angle of the character image data.
Optionally, all the straight lines in the text data subjected to the straight line detection step are obtained, the inclination angle of each straight line is respectively calculated, the average value of the inclination angles is calculated according to all the inclination angles, the average value is determined as an average inclination angle, and the inclination angle and the inclination direction of the text image data are determined according to the average inclination angle.
And 303, performing rotation correction on the character image data according to the inclination angle to obtain first character image data.
Optionally, according to the tilt angle and the tilt direction of the text image data, performing rotation correction on the text image data, that is, rotating the text image data by the tilt angle in the opposite direction of the tilt direction, to obtain first text image data.
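A sketch of steps 302 and 303 on top of the detected lines: average the line angles to estimate the tilt, then rotate by that amount around the image centre. The 90° offset converts the normal angle returned by cv2.HoughLines into the inclination of the line itself; variable names continue from the snippets above.

```python
import cv2
import numpy as np

# theta from cv2.HoughLines is the angle of the line's normal; subtracting 90
# degrees gives the inclination of the line itself.
angles = [np.degrees(line[0][1]) - 90.0 for line in lines]
tilt = float(np.mean(angles))             # average tilt of the text image data

h, w = denoised.shape
M = cv2.getRotationMatrix2D((w / 2.0, h / 2.0), tilt, 1.0)
first_image = cv2.warpAffine(denoised, M, (w, h), borderValue=255)  # white background
```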
And step 304, setting a search box and a search step length, and performing search cutting on the first character image data to obtain second character image data.
Optionally, a search box and a search step length are set, where the search box is used to search the characters in the first character image data, the search step length is used to set the number of pixel points of each search movement of the search box, and according to the search box and the search step length, a full-image search is performed on the first character image data and the characters in the first character image data are cut to obtain second character image data.
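A minimal sliding-window sketch of step 304. The box size, step length and ink threshold are assumptions; windows containing enough foreground pixels are kept as the test samples.

```python
import numpy as np

def search_and_cut(image, box=64, step=8):
    """Scan the full image with a box-sized window and cut out windows with ink."""
    crops = []
    h, w = image.shape
    for y in range(0, h - box + 1, step):
        for x in range(0, w - box + 1, step):
            window = image[y:y + box, x:x + box]
            if (window < 128).mean() > 0.05:   # at least 5% dark (foreground) pixels
                crops.append(window.copy())
    return crops

second_image_data = search_and_cut(first_image)  # the plurality of test samples
```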
And 305, thinning the second character image data to obtain thinned second character image data, normalizing the thinned second character image data to obtain normalized second character image data, and performing template matching on the normalized second character image data to obtain third character image data.
Optionally, the second text image data is refined to obtain thinned second text image data; the refinement continuously erases edge pixels of the second text image data until only an image skeleton preserving the original characters' topological connection relations remains. The thinned second text image data is normalized, the normalization method including, without limitation, min-max normalization, Z-score normalization and the like. A template is acquired, and the normalized second text image data is matched against it to obtain third text image data whose structure is consistent with the template.
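A sketch of step 305 using scikit-image's skeletonize for the thinning and min-max scaling for the normalization. cv2.matchTemplate stands in for the template matcher, and the template itself is a random placeholder.

```python
import cv2
import numpy as np
from skimage.morphology import skeletonize

template = np.float32(np.random.rand(64, 64))    # placeholder for the real template

def refine_and_match(crop):
    # Thinning: erase edge pixels until only the stroke skeleton remains,
    # preserving the character's topological connection relations.
    skeleton = skeletonize(crop < 128).astype(np.float32)
    lo, hi = skeleton.min(), skeleton.max()
    normalised = ((skeleton - lo) / (hi - lo + 1e-8)).astype(np.float32)  # min-max
    score = cv2.matchTemplate(normalised, template, cv2.TM_CCOEFF_NORMED)[0, 0]
    return normalised, float(score)

# Keep the sample whose structure agrees best with the template.
third_image_data = max((refine_and_match(c) for c in second_image_data),
                       key=lambda pair: pair[1])[0]
```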
Step 306, input the third text image into the trained feature extraction model to obtain the text features corresponding to the third text image, and determine the feature weights corresponding to the text features.
Optionally, a trained feature extraction model is acquired; the feature extraction method includes, without limitation, convolutional neural network operations and the like. The third text image data is used as the input of the feature extraction model to obtain the text features corresponding to the third text image, and the feature weights corresponding to the text features are determined according to the feature extraction model.
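Since the patent names convolutional neural networks as one option for the feature extraction model, a minimal CNN sketch follows. The architecture, the 128-dimensional feature size and the reading of "feature weights" as normalized per-dimension magnitudes are all assumptions.

```python
import torch
import torch.nn as nn

class FeatureExtractor(nn.Module):
    """Hypothetical CNN: 64x64 glyph image -> 128-dimensional feature vector."""
    def __init__(self, dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(4),
            nn.Flatten(), nn.Linear(32 * 4 * 4, dim),
        )

    def forward(self, x):                        # x: (N, 1, 64, 64)
        return self.net(x)

extractor = FeatureExtractor().eval()
with torch.no_grad():
    glyph = torch.from_numpy(third_image_data).view(1, 1, 64, 64)
    text_features = extractor(glyph)             # features of the handwritten glyph
    weights = text_features.abs()
    feature_weights = weights / weights.sum()    # assumed per-dimension weighting
```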
Step 307, obtaining a character library, using the sample character data in the character library as the input of the trained feature extraction model to obtain sample features corresponding to the sample character data, and training a character classifier according to the sample features to obtain a trained character classifier.
Optionally, a text library is acquired, and the sample text in the library is matched with the template to obtain sample text data consistent with the template's structure. The sample text data is used as the input of the trained feature extraction model to obtain the sample features corresponding to it, and the sample feature weights are determined according to the model. A text classifier is constructed from the sample features and sample feature weights and trained to obtain the trained text classifier.
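A sketch of steps 307 and 308 combined: run the same extractor over the sample glyphs of the character library and fit a classifier over the resulting features. The patent does not fix the classifier type, so k-nearest-neighbours stands in, and the sample glyphs and labels here are random stand-in data.

```python
import numpy as np
import torch
from sklearn.neighbors import KNeighborsClassifier

# Stand-in for the character library: 100 sample glyphs with known characters.
sample_glyphs = torch.rand(100, 1, 64, 64)
sample_labels = np.random.choice(list("永和九年"), size=100)

with torch.no_grad():
    sample_features = extractor(sample_glyphs).numpy()   # extractor from step 306

classifier = KNeighborsClassifier(n_neighbors=3)
classifier.fit(sample_features, sample_labels)

# Step 308: classify the handwritten glyph's features to get the recognized character.
recognised_char = classifier.predict(text_features.numpy())[0]
```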
And 308, taking the character features and the feature weights as the input of the trained character classifier to obtain a classification result, and determining that the classification result is the recognition characters corresponding to the character features, wherein the character features, the feature weights and the recognition characters form font data.
Optionally, the character features and the feature weights are used as input of a trained character classifier to obtain a classification result corresponding to the character features, the classification result is determined to be an identification character corresponding to the character features, and the character features, the feature weights and the identification character form font data.
Referring to fig. 4, fig. 4 is a flowchart illustrating a method for determining a target font according to an embodiment of the present application.
Step 401, obtaining a font library, and detecting whether the font library contains a set font.
Optionally, a font library is acquired and queried for a setting mark, which marks a set font. If the font library contains the setting mark, the font corresponding to the mark is determined to be the set font and the set font is determined to be the target font; if the font library does not contain the setting mark, it is determined that the library contains no set font.
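An illustrative reading of the set-font query in step 401, treating the font library as a mapping from font name to metadata with a boolean setting mark; all names are assumptions.

```python
def find_set_font(font_library):
    """Return the name of the font carrying the setting mark, or None."""
    for name, meta in font_library.items():
        if meta.get("set_mark"):          # the setting mark described above
            return name                   # this set font becomes the target font
    return None                           # no mark: fall through to steps 402-405

# Example: the user previously set "kaiti" as their preferred font.
library = {"songti": {"set_mark": False}, "kaiti": {"set_mark": True}}
assert find_set_font(library) == "kaiti"
```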
Step 402, if the font library does not contain the set font, determining the font in the font library as a font to be selected.
Optionally, if it is determined that the font library does not contain the set font, the font in the font library is obtained, and the font in the font library is determined to be the font to be selected.
Step 403, obtain the recognized characters from the character libraries of the fonts to be selected as the characters to be selected.
Optionally, a character library of the fonts to be selected is obtained, the identification characters are matched in the character library of each font to be selected, and the characters corresponding to the identification characters in the character library of the fonts to be selected are determined as the characters to be selected.
Step 404, obtaining the feature of the character to be selected as a feature to be selected, determining a feature weight to be selected corresponding to the feature to be selected, matching the character feature with the feature to be selected, and calculating according to the feature weight and the feature weight to be selected to obtain a matching degree.
Optionally, the characters to be selected are used as input of a trained feature extraction model to obtain features to be selected corresponding to the characters to be selected, a feature weight to be selected corresponding to the features to be selected is determined according to the feature extraction model, the character features and the features to be selected are matched to obtain a first matching degree, the first matching degree is calculated according to the feature weight and the feature weight to be selected to obtain a calculation result, and the calculation result is determined to be the matching degree of the font data and the characters to be selected.
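The patent does not spell out how the two weight vectors combine into a matching degree; a weighted cosine similarity is one plausible reading and is sketched below with random stand-in candidates.

```python
import numpy as np

def matching_degree(feat, w, cand_feat, cand_w):
    """Weighted cosine similarity: per-dimension agreement scaled by both weights."""
    joint = w * cand_w
    num = np.sum(joint * feat * cand_feat)
    den = np.sqrt(np.sum(joint * feat**2)) * np.sqrt(np.sum(joint * cand_feat**2))
    return num / (den + 1e-8)

rng = np.random.default_rng(0)
text_feat, text_w = rng.random(128), np.full(128, 1.0 / 128)
candidates = [{"name": f"font_{i}", "feat": rng.random(128),
               "weights": np.full(128, 1.0 / 128)} for i in range(3)]

# Step 405: the candidate with the maximum matching degree yields the target font.
best = max(candidates, key=lambda c: matching_degree(
    text_feat, text_w, c["feat"], c["weights"]))
target_font = best["name"]
```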
Step 405, determining the feature to be selected corresponding to the maximum value in the matching degree as a target feature, and determining the font to be selected corresponding to the target feature as the target font.
Optionally, the candidate feature corresponding to the maximum matching degree is determined as the target feature; the higher the matching degree, the more similar the character features are to the candidate feature, so the font data is judged similar to that candidate font, and the candidate font corresponding to the target feature is determined as the target font.
Referring to fig. 5, fig. 5 is a schematic structural diagram of an apparatus 500 for optimizing fonts according to an embodiment of the present disclosure.
An obtaining unit 501, configured to obtain text data, and perform preprocessing on the text data to obtain font data;
a scoring unit 502, configured to input the font data into a trained font scoring model to obtain a font score corresponding to the font data;
a determining unit 503, configured to determine whether the font score is smaller than a score threshold, and if the font score is smaller than the score threshold, determine a target font according to the font data;
a display unit 504, configured to obtain a text corresponding to the font data from the target font, and display the text.
In a possible example, in terms of obtaining text data and preprocessing the text data to obtain font data, the obtaining unit 501 is specifically configured to: acquire text data, where the text data includes character image data and touch-handwritten character data, the character image data being picture data in the electronic device and the touch-handwritten character data being handwriting received through the touch screen of the electronic device; if the text data is text image data, detect all straight lines contained in the text image data, where the line detection maps, for each edge pixel of the text image data, all straight lines y = kx + b that may pass through that pixel into the k-b space and votes on them; since a line perpendicular to the x axis has no defined slope and cannot be represented by y = kx + b, the parametric equation r = x·cos(θ) + y·sin(θ) is used instead, where r is the distance from the origin to the line and θ is the angle between r and the positive x axis; after every edge point is mapped, voting is performed in Hough space, each line equation satisfying a point (r, θ) incrementing the pixel value at that point by 1 to yield a Hough-space image, which is filtered so that local maxima can be computed and the line equations at the local maxima give all straight lines in the text image data; acquire all straight lines in the character image data, calculate the inclination angle of each line and the average of all inclination angles, the average being the tilt of the character image data, and rotation-correct the character image data according to the average to obtain first character image data; set a search frame and a search step length, perform a full-image search on the first character image data starting from its first pixel, and cut the first character image data to obtain second character image data comprising a plurality of test samples; thin the second character image data, normalize the thinned data, acquire a matching template, and match the normalized second character image data against the template to obtain third character image data; and use the third character image data as the input of the feature extraction model to obtain the character features corresponding to the third character image, determine the feature weights corresponding to the character features, acquire a character library, use the sample character data in the library as the input of the trained feature extraction model to obtain sample features, establish and train a character classifier according to the sample features, use the character features and feature weights as the input of the trained classifier to obtain a classification result, and determine the recognized characters corresponding to the character features from the classification result, the character features, the feature weights and the recognized characters forming the font data.
In a possible example, in terms of inputting the font data into a trained font scoring model to obtain a font score corresponding to the font data, the scoring unit 502 is specifically configured to: before inputting the font data into the trained font scoring model, construct the font scoring model, obtain a plurality of handwritten-font training samples, score them, and train the model on the samples and their scores to obtain the trained font scoring model; then input the font data into the trained model to obtain a calculation result and determine the calculation result as the font score corresponding to the font data.
In a possible example, in terms of judging whether the font score is smaller than a score threshold and, if so, determining a target font according to the font data, the determining unit 503 is specifically configured to: acquire a score threshold and judge whether the font score is smaller than it; if the font score is not smaller than the score threshold, establish a mapping relation between the font data and the font score and store it in the personalized font library; if the font score is smaller than the score threshold, acquire a font library and query it for a setting mark, which marks a set font; if the font library contains the setting mark, take the font corresponding to the mark as the set font and determine the set font as the target font; if the font library does not contain the setting mark, determine that the library contains no set font; in that case, acquire the fonts in the font library as the fonts to be selected, acquire the character library of each font to be selected, match the recognized characters in each of those character libraries, and determine the characters corresponding to the recognized characters as the characters to be selected; use the characters to be selected as the input of the trained feature extraction model to obtain the features to be selected, determine their feature weights according to the model, match the character features against the features to be selected to obtain a first matching degree, and compute the matching degree of the font data and each font to be selected from the first matching degree, the feature weights and the weights of the features to be selected; and determine the feature to be selected corresponding to the maximum matching degree as the target feature (the higher the matching degree, the more similar the character features are to the feature to be selected, so the font data is judged similar to that font), and determine the font to be selected corresponding to the target feature as the target font.
In a possible example, the determining unit 503 is specifically configured to determine whether the font score is smaller than a score threshold, and if the font score is smaller than the score threshold, determine the target font according to the font data, where: and if the font library is determined not to contain the set font, acquiring the font in the font library as the font to be selected, acquiring the use frequency corresponding to the font to be selected, and determining the font to be selected corresponding to the maximum use frequency in the font library as the target font.
In a possible example, the display unit 504 is configured to obtain a text corresponding to the font data from the target font, and display the text, and specifically configured to: after the target font is determined, a character library corresponding to the target font is obtained, characters corresponding to the font data are obtained from the character library corresponding to the target font and serve as replacement characters, the character data are replaced with the replacement characters, and the replacement characters are displayed in a display interface.
In a possible example, in terms of acquiring the text corresponding to the font data from the target font and displaying the text, the display unit 504 is specifically configured to: acquire the fonts of the font library and the fonts of the personalized font library; and display a magnification floating layer on the interface of the text, the magnification floating layer including the fonts of the font library and the fonts of the personalized font library.
Referring to fig. 6, fig. 6 is a block diagram of a part of a structure of a mobile phone related to a mobile terminal disclosed in the embodiment of the present application. Referring to fig. 6, the handset includes: radio Frequency (RF) circuit 910, memory 920, input/output unit 930, sensor 950, audio collector 960, Wireless Fidelity (WiFi) module 970, application processor AP980, power supply 990, and the like. Those skilled in the art will appreciate that the handset configuration shown in fig. 6 is not intended to be limiting and may include more or fewer components than shown, or some components may be combined, or a different arrangement of components, for example, the rf circuitry 910 may be coupled to multiple antennas.
The following describes each component of the mobile phone in detail with reference to fig. 6:
the input and output unit 930 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the cellular phone. Specifically, the input-output unit 930 may include a touch display 933 and other input devices 932. The input-output unit 930 may also include other input devices 932. In particular, other input devices 932 may include, but are not limited to, one or more of physical keys, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, a joystick, and the like. Wherein the content of the first and second substances,
a radio frequency circuit 910 configured to receive a connection request of a wearable device;
an AP980 for establishing a connection with the wearable device according to the connection request.
The AP980 is a control center of the mobile phone, connects various parts of the entire mobile phone by using various interfaces and lines, and performs various functions and processes of the mobile phone by operating or executing software programs and/or modules stored in the memory 920 and calling data stored in the memory 920, thereby integrally monitoring the mobile phone. Optionally, AP980 may include one or more processing units; alternatively, the AP980 may integrate an application processor that handles primarily the operating system, user interface, and applications, etc., and a modem processor that handles primarily wireless communications. It will be appreciated that the modem processor described above may not be integrated into the AP 980.
Further, the memory 920 may include high-speed random access memory and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device.
RF circuitry 910 may be used for the reception and transmission of information. In general, the RF circuit 910 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a Low Noise Amplifier (LNA), a duplexer, and the like. In addition, the RF circuit 910 may also communicate with networks and other devices via wireless communication. The wireless communication may use any communication standard or protocol, including, but not limited to, Global System for Mobile Communications, General Packet Radio Service, Code Division Multiple Access, Wideband Code Division Multiple Access, Long Term Evolution, New Radio, e-mail, Short Message Service, and so on.
The handset may also include at least one sensor 950, such as an ultrasonic sensor, an angle sensor, a light sensor, a motion sensor, and others. Specifically, the light sensor may include an ambient light sensor and a proximity sensor, wherein the ambient light sensor may adjust the brightness of the touch display screen according to the brightness of ambient light, and the proximity sensor may turn off the touch display screen and/or the backlight when the mobile phone moves to the ear. As one of the motion sensors, the accelerometer sensor can detect the magnitude of acceleration in each direction (generally, three axes), can detect the magnitude and direction of gravity when stationary, and can be used for applications of recognizing the posture of a mobile phone (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), vibration recognition related functions (such as pedometer and tapping), and the like; as for other sensors such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor, which can be configured on the mobile phone, further description is omitted here.
The audio collector 960, speaker 961 and microphone 962 may provide an audio interface between the user and the phone. The audio collector 960 can transmit the electrical signal converted from received audio data to the speaker 961, which converts it into a sound signal for playing. In the other direction, the microphone 962 converts a collected sound signal into an electrical signal, which the audio collector 960 receives and converts into audio data; after the audio data is processed by the AP980, it is either sent to another phone through the RF circuit 910 or output to the memory 920 for further processing.
WiFi belongs to short-distance wireless transmission technology, and the mobile phone can help a user to receive and send e-mails, browse webpages, access streaming media and the like through the WiFi module 970, and provides wireless broadband Internet access for the user. Although fig. 6 shows the WiFi module 970, it is understood that it does not belong to the essential constitution of the handset, and can be omitted entirely as needed within the scope of not changing the essence of the application.
The handset also includes a power supply 990 (e.g., a battery) for supplying power to various components, and optionally, the power supply may be logically connected to the AP980 via a power management system, so that functions of managing charging, discharging, and power consumption are implemented via the power management system.
Although not shown, the mobile phone may further include a camera, a bluetooth module, a light supplement device, a light sensor, and the like, which are not described herein again.
Embodiments of the present application also provide a computer storage medium, where the computer storage medium stores a computer program for electronic data exchange, and the computer program enables a computer to perform part or all of the steps of any method for optimizing a font as described in the above method embodiments.
Embodiments of the present application also provide a computer program product, which includes a non-transitory computer-readable storage medium storing a computer program, and the computer program is operable to cause a computer to perform part or all of the steps of any one of the methods for optimizing a font as set forth in the above method embodiments.
In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus may be implemented in other manners. For example, the above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one type of division of logical functions, and there may be other divisions when actually implementing, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted, or not implemented. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of some interfaces, devices or units, and may be an electric or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit may be implemented in the form of hardware, or may be implemented in the form of a software program module.
The foregoing describes implementations of the embodiments of the present application. It should be noted that those skilled in the art can make several improvements and modifications without departing from the principle of the embodiments of the present application, and such improvements and modifications also fall within the protection scope of the present application.

Claims (20)

  1. A method for optimizing fonts, which is applied to an electronic device, and comprises the following steps:
    acquiring character data, and preprocessing the character data to obtain font data;
    inputting the font data into a trained font scoring model to obtain a font score corresponding to the font data;
    judging whether the font score is smaller than a score threshold value, and if the font score is smaller than the score threshold value, determining a target font according to the font data;
    and acquiring characters corresponding to the font data from the target font, and displaying the characters.
  2. The method of claim 1, wherein obtaining the text data and preprocessing the text data to obtain font data comprises:
    the text data includes: character image data;
    detecting a straight line in the character image data;
    calculating the inclination angle of the straight line, calculating the average value of the inclination angles, and determining the average value as the inclination angle of the character image data;
    carrying out rotation correction on the character image data according to the inclination angle to obtain first character image data;
    and executing a character recognition step on the first character image data to obtain font data.
  3. The method of claim 2, wherein said step of performing character recognition on said first text image data to obtain font data further comprises:
    setting a search box and a search step length, and performing search cutting on the first character image data to obtain second character image data;
    thinning the second character image data to obtain thinned second character image data, normalizing the thinned second character image data to obtain normalized second character image data, and performing template matching on the normalized second character image data to obtain third character image data;
    and taking the third character image data as the input of the trained feature extraction model to obtain character features corresponding to the third character image, and determining feature weights corresponding to the character features.
  4. The method of claim 3, wherein said step of performing character recognition on said first text image data to obtain font data comprises:
    obtaining a character library, and using sample character data in the character library as the input of the trained feature extraction model to obtain sample features corresponding to the sample character data;
    training a character classifier according to the sample characteristics to obtain a trained character classifier;
    the character features and the feature weights are used as the input of the trained character classifier to obtain a classification result, and the classification result is determined to be the recognition characters corresponding to the character features;
    the character features, the feature weights, and the recognition characters constitute the font data.
  5. The method of claim 1, wherein determining a target font from the font data comprises:
    detecting whether the font library contains a set font;
    if the font library contains the set font, determining the target font as the set font;
    and if the font library does not contain the set font, matching the font data with the font library to determine a target font.
  6. The method of claim 5, wherein said matching the font data to the font library to determine a target font comprises:
    determining the fonts in the font library as fonts to be selected;
acquiring the recognized characters from the character libraries of the fonts to be selected as the characters to be selected;
    acquiring the characteristics of the characters to be selected as the characteristics to be selected, determining the weight of the characteristics to be selected corresponding to the characteristics to be selected, matching the characteristics of the characters with the characteristics to be selected, and calculating according to the weight of the characteristics and the weight of the characteristics to be selected to obtain the matching degree;
    and determining the to-be-selected feature corresponding to the maximum value in the matching degree as a target feature, and determining the font to be selected corresponding to the target feature as the target font.
  7. The method of claim 1, wherein determining a target font from the font data comprises:
acquiring the font use frequency corresponding to the fonts in the font library;
    and determining the font corresponding to the maximum value of the font use frequency as the target font.
  8. The method of claim 1, further comprising:
and if the font score is not less than the score threshold value, establishing a mapping relation between the font data and the font score, and storing the mapping relation into a personalized font library.
  9. The method of claim 1, wherein the displaying the text further comprises:
acquiring fonts of the font library and acquiring fonts of the personalized font library;
    displaying a magnification floating layer on the interface of the characters, wherein the magnification floating layer comprises: fonts of the font library and fonts of the personalized font library.
  10. An apparatus for optimizing font comprising:
the character preprocessing unit is used for preprocessing the character data to obtain font data;
    the scoring unit is used for inputting the font data into a trained font scoring model to obtain a font score corresponding to the font data;
    the determining unit is used for judging whether the font score is smaller than a score threshold value or not, and if the font score is smaller than the score threshold value, determining a target font according to the font data;
    and the display unit is used for acquiring the characters corresponding to the font data from the target font and displaying the characters.
11. The apparatus of claim 10, wherein, in terms of obtaining text data and preprocessing the text data to obtain font data, the obtaining unit is configured to:
    the text data includes: character image data; detecting a straight line in the character image data; calculating the inclination angle of the straight line, calculating the average value of the inclination angles, and determining the average value as the inclination angle of the character image data; carrying out rotation correction on the character image data according to the inclination angle to obtain first character image data; and executing a character recognition step on the first character image data to obtain font data.
12. The apparatus according to claim 11, wherein, in terms of performing the character recognition step on the first text image data to obtain font data, the obtaining unit is configured to:
    setting a search box and a search step length, and performing search cutting on the first character image data to obtain second character image data; thinning the second character image data to obtain thinned second character image data, normalizing the thinned second character image data to obtain normalized second character image data, and performing template matching on the normalized second character image data to obtain third character image data; and taking the third character image as the input of the trained feature extraction model to obtain character features corresponding to the third character image, and determining feature weights corresponding to the character features.
13. The apparatus according to claim 12, wherein, in terms of performing the character recognition step on the first text image data to obtain font data, the obtaining unit is further configured to:
    obtaining a character library, and using sample character data in the character library as the input of the trained feature extraction model to obtain sample features corresponding to the sample character data; training a character classifier according to the sample characteristics to obtain a trained character classifier; the character features and the feature weights are used as the input of the trained character classifier to obtain a classification result, and the classification result is determined to be the recognition characters corresponding to the character features; the character features, the feature weights, and the recognition characters constitute the font data.
14. The apparatus of claim 10, wherein, in terms of determining a target font according to the font data, the determining unit is configured to:
    detecting whether the font library contains a set font; if the font library contains the set font, determining the target font as the set font; and if the font library does not contain the set font, matching the font data with the font library to determine a target font.
15. The apparatus of claim 14, wherein, in terms of matching the font data with the font library to determine a target font, the determining unit is configured to:
determining the fonts in the font library as fonts to be selected; acquiring the recognized characters from the character libraries of the fonts to be selected as the characters to be selected; acquiring the characteristics of the characters to be selected as the characteristics to be selected, determining the weight of the characteristics to be selected corresponding to the characteristics to be selected, matching the characteristics of the characters with the characteristics to be selected, and calculating according to the weight of the characteristics and the weight of the characteristics to be selected to obtain the matching degree; and determining the to-be-selected feature corresponding to the maximum value in the matching degree as a target feature, and determining the font to be selected corresponding to the target feature as the target font.
  16. The apparatus according to claim 10, wherein, in the aspect of determining the target font according to the font data, the determining unit is configured to:
    acquire the use frequency corresponding to each font in the font library; and determine the font whose use frequency is the maximum as the target font.
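    [Illustrative sketch — not part of the claims: claim 16's frequency-based selection, assuming per-font use counts are tracked in a dict.]

        def most_used_font(use_frequency):
            """Return the font whose recorded use frequency is maximal."""
            return max(use_frequency, key=use_frequency.get)

        # e.g. most_used_font({"kaiti": 12, "songti": 47}) == "songti"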
  17. The apparatus according to claim 10, wherein the determining unit is further configured to:
    if the font score is not less than the score threshold, establish a mapping relation between the font data and the font score, and store the mapping relation into a personalized font library.
  18. The apparatus according to claim 10, wherein the display unit is configured to:
    acquire the fonts of the font library and the fonts of the personalized font library; and display an enlarged floating layer on the character display interface, wherein the enlarged floating layer comprises the fonts of the font library and the fonts of the personalized font library.
  19. A mobile terminal, comprising a processor, a memory, a communication interface, and one or more programs stored in the memory and configured to be executed by the processor, the one or more programs including instructions for performing the steps of the method according to any one of claims 1-9.
  20. A computer-readable storage medium, on which a computer program for electronic data exchange is stored, wherein the computer program causes a computer to perform the method according to any one of claims 1-9.
CN201880098687.3A 2018-12-19 2018-12-19 Font optimizing method and related equipment Active CN112840308B (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2018/122152 WO2020124455A1 (en) 2018-12-19 2018-12-19 Font optimizing method and related device

Publications (2)

Publication Number Publication Date
CN112840308A (en) 2021-05-25
CN112840308B (en) 2024-06-14

Family

ID=71102558

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201880098687.3A Active CN112840308B (en) 2018-12-19 2018-12-19 Font optimizing method and related equipment

Country Status (2)

Country Link
CN (1) CN112840308B (en)
WO (1) WO2020124455A1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102021109845A1 (en) * 2021-04-19 2022-10-20 Technische Universität Darmstadt, Körperschaft des öffentlichen Rechts Method and device for generating optimized fonts
CN115620302B (en) * 2022-11-22 2023-12-01 山东捷瑞数字科技股份有限公司 Picture font identification method, system, electronic equipment and storage medium

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1755707A (en) * 2004-09-30 2006-04-05 德鑫科技股份有限公司 Automatic correction method for tilted image
CN101393645A (en) * 2008-09-12 2009-03-25 浙江大学 Hand-writing Chinese character computer generation and beautification method
CN102938062A (en) * 2012-10-16 2013-02-20 山东山大鸥玛软件有限公司 Document image slant angle estimation method based on content
CN103136769A (en) * 2011-12-02 2013-06-05 北京三星通信技术研究有限公司 Method and device of generation of writing style font of user
CN103164865A (en) * 2011-12-12 2013-06-19 北京三星通信技术研究有限公司 Method and device of beautifying handwriting input
US20150055869A1 (en) * 2013-08-26 2015-02-26 Samsung Electronics Co., Ltd. Method and apparatus for providing layout based on handwriting input
CN106778456A (en) * 2015-11-19 2017-05-31 北京博智科创科技有限公司 A kind of optimization method and device of handwriting input
CN107680108A (en) * 2017-07-28 2018-02-09 平安科技(深圳)有限公司 Inclination value-acquiring method, device, terminal and the storage medium of tilted image
CN108228069A (en) * 2017-12-21 2018-06-29 北京壹人壹本信息科技有限公司 Hand-written script input method, mobile terminal and device

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102609735B (en) * 2012-02-06 2014-03-12 安徽科大讯飞信息科技股份有限公司 Method and apparatus for assessing standard fulfillment of character writing

Also Published As

Publication number Publication date
WO2020124455A1 (en) 2020-06-25
CN112840308B (en) 2024-06-14

Legal Events

Code Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant