CN108459999B - Font design method, system, equipment and computer readable storage medium - Google Patents

Font design method, system, equipment and computer readable storage medium

Info

Publication number
CN108459999B
CN108459999B (application CN201810113650.5A)
Authority
CN
China
Prior art keywords
font
chinese character
character image
target
conversion model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810113650.5A
Other languages
Chinese (zh)
Other versions
CN108459999A (en)
Inventor
黄文波
张洪明
张帆
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Shiqu Information Technology Co ltd
Original Assignee
Hangzhou Shiqu Information Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Shiqu Information Technology Co ltd filed Critical Hangzhou Shiqu Information Technology Co ltd
Priority to CN201810113650.5A priority Critical patent/CN108459999B/en
Publication of CN108459999A publication Critical patent/CN108459999A/en
Application granted granted Critical
Publication of CN108459999B publication Critical patent/CN108459999B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 40/00 Handling natural language data
    • G06F 40/10 Text processing
    • G06F 40/103 Formatting, i.e. changing of presentation of documents
    • G06F 40/109 Font handling; Temporal or kinetic typography
    • G06F 40/12 Use of codes for handling textual entities
    • G06F 40/151 Transformation
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G06N 3/08 Learning methods
    • G06T 3/18

Abstract

The application discloses a font design method, a system, equipment and a computer readable storage medium. The method comprises the following steps: acquiring a Chinese character image to be designed in a source font; and inputting the Chinese character image to be designed into a pre-trained font conversion model to obtain a target font Chinese character image output by the font conversion model. The font conversion model is obtained by training, in advance, an asymmetric Transfer network constructed based on deep learning theory with training samples, where each training sample comprises a target font Chinese character image designed in advance by a font designer and the corresponding source font Chinese character image. The font design method can effectively improve the generality of font generation, and can therefore significantly reduce the difficulty of font design and greatly improve its efficiency.

Description

Font design method, system, equipment and computer readable storage medium
Technical Field
The present invention relates to the technical field of graphic design, and in particular, to a font design method, system, device, and computer-readable storage medium.
Background
Because Chinese characters are complex and diverse, designing a brand-new font is very time-consuming and labor-intensive for a font designer. Conventional automatic font generation methods generally split a Chinese character into radicals and recombine those radicals according to the character structure. This approach has the following drawbacks. First, it works only for some character structures: characters with top-bottom or left-right structures, such as 'knot' and 'spirit', can be handled, but characters with a semi-enclosed structure, such as 'construction', are difficult to assemble. Second, the structural proportions of the generated characters cannot be adjusted dynamically; for example, the radical '十' ('ten') occupies a different proportion in '千' ('thousand') than in '克' ('gram'). Third, too much human-machine interaction is required; for example, to generate '种' ('seed'), the components '禾' ('rice') and '中' ('middle') must be selected manually before they are combined. The conventional font generation method therefore has poor generality, which makes font design difficult and inefficient.
Therefore, a technical problem that those skilled in the art urgently need to solve is how to overcome the poor generality of conventional font generation methods, which makes font design difficult and inefficient.
Disclosure of Invention
In view of the above, the present invention provides a font design method, a system, a device and a computer readable storage medium, so as to solve the problems of high font design difficulty and low efficiency caused by the poor generality of conventional font generation methods. The specific scheme is as follows:
a font design method, comprising:
acquiring a Chinese character image to be designed in a source font;
inputting the Chinese character image to be designed into a pre-trained font conversion model to obtain a target font Chinese character image output by the font conversion model;
the font conversion model is obtained by training an asymmetric Transfer network constructed based on a deep learning theory by utilizing a training sample in advance, wherein the training sample comprises a target font Chinese character image and a corresponding source font Chinese character image which are designed by a font designer in advance.
Optionally, the step of obtaining the font conversion model includes:
obtaining the training sample;
establishing an asymmetric Transfer network by using a deep learning framework TensorFlow;
and training the asymmetric Transfer network by using the training samples to obtain the font conversion model.
Optionally, the step of training the asymmetric Transfer network by using the training samples to obtain the font conversion model includes:
preprocessing a target font Chinese character image and a corresponding source font Chinese character image in the training sample to obtain a corresponding Chinese character image pair;
carrying out data enhancement processing on the Chinese character image pair to obtain an enhanced Chinese character image pair;
and training the asymmetric Transfer network by using the enhanced Chinese character image pair to obtain the font conversion model.
Optionally, the step of training the asymmetric Transfer network by using the enhanced chinese character image pair includes:
acquiring the enhanced source font Chinese character image, and then extracting, through an Encoder network, the high-level characteristics of the enhanced source font Chinese character image to obtain target high-level characteristics;
and performing a deconvolution (reverse convolution) operation on the target high-level characteristics through a Decoder network to obtain the corresponding target font Chinese character image, thereby completing the font style conversion training of the asymmetric Transfer network.
Optionally, the step of training the asymmetric Transfer network by using the processed chinese character image pair includes:
and training the asymmetric Transfer network by utilizing the processed Chinese character image pair according to a preset loss function.
Optionally, the preset loss function includes GAN loss, Constant loss, and L1 loss.
Optionally, after the step of inputting the acquired image of the Chinese character to be designed to the pre-trained font conversion model to obtain the Chinese character image of the target font output by the font conversion model, the method further includes:
if any character does not accord with the preset target font style in the target font Chinese character image output by the font conversion model, acquiring the finely-tuned target font Chinese character;
performing Fine tuning operation on the font conversion network by using the finely tuned target font Chinese character to obtain a new font conversion network;
and the finely-tuned target font Chinese character is a new target font Chinese character obtained by finely tuning the Chinese character which does not accord with the preset target font style by a font designer.
Correspondingly, the invention also provides a font design system, which comprises:
the Chinese character image acquisition module is used for acquiring a Chinese character image to be designed in a source font;
the Chinese character image input module is used for inputting the Chinese character image to be designed to a pre-trained font conversion model to obtain a target font Chinese character image output by the font conversion model;
the font conversion model is obtained by training an asymmetric Transfer network constructed based on a deep learning theory by utilizing a training sample in advance, wherein the training sample comprises a target font Chinese character image and a corresponding source font Chinese character image which are designed by a font designer in advance.
Accordingly, the present invention also provides a font design apparatus comprising a memory and a processor, wherein the processor is configured to execute a computer program stored in the memory to implement the steps of the font design method as described above.
Accordingly, the present invention also provides a computer readable storage medium having stored thereon a computer program which, when being executed by a processor, carries out the steps of the font design method as described above.
The font design method disclosed by the invention obtains the Chinese character image to be designed in the source font; inputting the Chinese character image to be designed into a pre-trained font conversion model to obtain a target font Chinese character image output by the font conversion model; the font conversion model is obtained by training an asymmetric Transfer network constructed based on a deep learning theory by utilizing a training sample in advance, wherein the training sample comprises a target font Chinese character image and a corresponding source font Chinese character image which are designed by a font designer in advance.
Compared with the conventional automatic font generation method, the font design method of the present invention has the following beneficial effects. The Chinese character image to be designed in the source font is acquired and input into a font conversion model obtained by training, in advance, an asymmetric Transfer network constructed based on deep learning theory with training samples, and the target font Chinese character image output by the font conversion model is obtained. In other words, the present application uses image conversion technology from deep learning to convert the Chinese character to be designed in the source font, represented as an image, into the corresponding target font Chinese character image. This process requires no splitting or recombination of radicals and character structures, so the cumbersome assembly of characters with complex structures is avoided, the difficulty of font design is effectively reduced, and the generality of the font generation method is improved. Moreover, because the model is obtained by training the asymmetric Transfer network on target font Chinese character images designed in advance by a font designer and the corresponding source font Chinese character images, it learns the relevant characteristics and style of the target font. Only the image of the Chinese character to be designed in the source font needs to be acquired, and the model converts it into the corresponding target font Chinese character image according to the characteristics and style it has learned. This overcomes the defect of the conventional method that the structural proportions of the generated characters cannot be adjusted dynamically, and also reduces the human-machine interaction required in the font design process. The font design method can therefore effectively improve the generality of font generation, significantly reduce the difficulty of font design, and greatly improve its efficiency.
It should be noted that the font design system, the font design device and the computer readable storage medium disclosed in the present invention have similar or identical advantageous effects to the above-mentioned advantageous effects, and are not described herein again.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained from the provided drawings without creative effort.
FIG. 1 is a flowchart of a font design method disclosed in an embodiment of the present invention;
FIG. 2 is a flowchart of a specific font design method disclosed in an embodiment of the present invention;
FIG. 3 is a schematic diagram of the network structure of the asymmetric Transfer network in the specific font design method disclosed in an embodiment of the present invention;
FIG. 4 is a schematic diagram of the specific structure of the discriminator network in the specific font design method disclosed in an embodiment of the present invention;
FIG. 5 is a schematic diagram of the connection between the discriminator network and the asymmetric Transfer network in the specific font design method disclosed in an embodiment of the present invention;
FIG. 6 is a schematic structural diagram of a font design system according to an embodiment of the present invention;
FIG. 7 is a schematic structural diagram of a font design device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The embodiment of the invention discloses a font design method, which is shown in figure 1 and specifically comprises the following steps:
step S11: and acquiring the image of the Chinese character to be designed in the source font.
It should be noted that, in the embodiment of the present application, the image of the chinese character to be designed in the source font may be obtained by predetermined and corresponding technical means, and of course, may also be obtained according to the actual situation during the process of font design.
Step S12: and inputting the Chinese character image to be designed into a pre-trained font conversion model to obtain a target font Chinese character image output by the font conversion model.
The font conversion model is obtained by training an asymmetric Transfer network constructed based on a deep learning theory by utilizing a training sample in advance, wherein the training sample comprises a target font Chinese character image and a corresponding source font Chinese character image which are designed by a font designer in advance.
It can be understood that, in the embodiment of the present application, font generation is abstracted as an image conversion problem: the image of the Chinese character to be designed in the source font is acquired and then input into a font conversion model obtained by training, in advance, an asymmetric Transfer network constructed based on deep learning theory with training samples, and the target font Chinese character image output by the font conversion model is obtained.
Thus, the present application uses image conversion technology from deep learning to convert the Chinese character to be designed in the source font, represented as an image, into the corresponding target font Chinese character image. This process requires no splitting or recombination of radicals and character structures, so the cumbersome assembly of characters with complex structures is avoided, the difficulty of font design is effectively reduced, and the generality of the font generation method is improved. Because the trained model is obtained by training the asymmetric Transfer network on target font Chinese character images designed in advance by a font designer and the corresponding source font Chinese character images, it learns the relevant characteristics and style of the target font. Only the image of the Chinese character to be designed in the source font needs to be acquired, and the model converts it into the corresponding target font Chinese character image according to the characteristics and style it has learned. This overcomes the defect of the conventional method that the structural proportions of the generated characters cannot be adjusted dynamically, and also reduces the human-machine interaction required in the font design process. The font design method can therefore effectively improve the generality of font generation, significantly reduce the difficulty of font design, and greatly improve its efficiency.
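By way of illustration only (not part of the patent's disclosure), the inference step described above can be sketched in TensorFlow/Keras as follows; the model file name, the 256 × 256 grayscale input size, and the [-1, 1] normalization are assumptions made for this sketch.

```python
# Illustrative sketch only: load a trained font conversion model and convert one
# source-font character image into the target font. File names, the 256x256
# grayscale input size and the [-1, 1] normalization are assumptions.
import numpy as np
import tensorflow as tf
from PIL import Image

def convert_character(model_path: str, image_path: str) -> np.ndarray:
    model = tf.keras.models.load_model(model_path, compile=False)

    # Read the source-font Chinese character image as 256x256 grayscale.
    img = Image.open(image_path).convert("L").resize((256, 256))
    x = np.asarray(img, dtype=np.float32) / 127.5 - 1.0   # scale to [-1, 1]
    x = x[np.newaxis, ..., np.newaxis]                     # shape (1, 256, 256, 1)

    y = model.predict(x)                                   # target-font character image
    return ((y[0, ..., 0] + 1.0) * 127.5).astype(np.uint8)

# Hypothetical usage:
# out = convert_character("font_transfer.h5", "source_char.png")
# Image.fromarray(out).save("target_char.png")
```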
The embodiment of the present application also correspondingly discloses a specific font design method, and this embodiment further describes and optimizes the technical solution, compared with the previous embodiment. Referring to fig. 2, the method specifically includes the following steps:
step S21: and acquiring the training sample.
The training samples are Chinese character images corresponding to a small number of target font Chinese characters and Chinese character images corresponding to predetermined source font Chinese characters, which are designed by a font designer according to actual conditions.
It should be noted that, in order to reduce the difficulty of learning the convolutional neural network in the asymmetric Transfer network, the source font chinese character may be a font close to the design style of the target font chinese character, for example, if the existing black body is relatively close to the new mushroom font to be designed, that is, the design style of the target font, the black body may be determined as the source font. It can be understood that the more the number of training samples, the more diversified the structure of the chinese character is, for example, the chinese character with left and right structures, the chinese character with upper and lower structures, the single-body character, the character with less stroke order, the character with more stroke order, and the like, the closer the style of the target font chinese character output by the finally trained font conversion model is to the design style of the font designer.
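As an illustrative sketch of how such training pairs might be prepared (assuming the source font is available as a TrueType file and the designer's target characters can likewise be rendered or loaded as images; the file names, canvas size, and centering logic below are assumptions, not part of the patent):

```python
# Illustrative sketch only: render (source, target) Chinese character image pairs
# for training from two TrueType fonts. Font file names, the 256x256 canvas and
# the centering logic are assumptions (requires Pillow >= 8 for textbbox).
from PIL import Image, ImageDraw, ImageFont

def render_char(char: str, ttf_path: str, size: int = 256) -> Image.Image:
    font = ImageFont.truetype(ttf_path, int(size * 0.8))
    canvas = Image.new("L", (size, size), color=255)        # white background
    draw = ImageDraw.Draw(canvas)
    left, top, right, bottom = draw.textbbox((0, 0), char, font=font)
    x = (size - (right - left)) // 2 - left                 # center the glyph
    y = (size - (bottom - top)) // 2 - top
    draw.text((x, y), char, fill=0, font=font)              # black glyph
    return canvas

def build_pairs(chars, source_ttf="heiti.ttf", target_ttf="designer_target.ttf"):
    # One (source image, target image) pair per character designed by the font designer.
    return [(render_char(c, source_ttf), render_char(c, target_ttf)) for c in chars]
```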
Step S22: creating an asymmetric Transfer network by using the deep learning framework TensorFlow.
It should be noted that, in contrast to the conventional U-Net (a convolutional network originally proposed for biomedical image segmentation), the asymmetric Transfer network adds a convolution operation after each deconvolution in the Decoder network. Specifically, taking Heiti as the source font and the new 'mushroom' font as the target font as an example, see fig. 3, which shows the network structure of the asymmetric Transfer network in the embodiment of the present application. Through this asymmetric design, the Decoder network has twice as many layers as the Encoder network, which enhances the expressive power of each deconvolution layer in the Decoder.
Step S23: training the asymmetric Transfer network by using the training samples to obtain the font conversion model.
It should be noted that, in the font conversion model of the embodiment of the present application, the Encoder network mainly learns abstract features of the font. Specifically, the Encoder network takes an image of a target size, for example 256 × 256 × 1, and through a series of convolution operations finally obtains high-level features of a corresponding dimension, for example 4 × 4 × 512. These features can be understood as the ideographic features of the character; for example, the high-level features of the character 'bright' in Heiti obtained through the Encoder network should be as consistent as possible with the high-level features of the same character in the regular font. The Decoder network in the font conversion model learns to generate the target font: it takes the output of the Encoder network as its input and performs a series of deconvolution operations on it, so that the finally generated target font Chinese character image has the target size, for example 256 × 256 × 1.
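The following is a minimal sketch, in TensorFlow/Keras, of an encoder-decoder with the properties described above: the Encoder maps a 256 × 256 × 1 character image to 4 × 4 × 512 high-level features, and the Decoder adds a convolution after every deconvolution, giving it roughly twice as many layers as the Encoder. The filter counts, kernel sizes, and activations are assumptions; the patent itself only specifies the asymmetric structure and the tensor sizes.

```python
# Illustrative sketch only: asymmetric encoder-decoder. Encoder: 256x256x1 -> 4x4x512
# via six stride-2 convolutions; Decoder: a convolution after every deconvolution,
# so roughly twice the Encoder's depth. Filter counts and kernel sizes are assumptions.
import tensorflow as tf
from tensorflow.keras import layers

def build_encoder():
    inp = layers.Input(shape=(256, 256, 1))
    x = inp
    for f in [64, 128, 256, 512, 512, 512]:                 # 256 -> 128 -> ... -> 4
        x = layers.Conv2D(f, 4, strides=2, padding="same")(x)
        x = layers.BatchNormalization()(x)
        x = layers.LeakyReLU(0.2)(x)
    return tf.keras.Model(inp, x, name="encoder")           # output: 4x4x512

def build_decoder():
    inp = layers.Input(shape=(4, 4, 512))
    x = inp
    for f in [512, 512, 256, 128, 64, 32]:
        # Deconvolution (transposed convolution) doubles the spatial size ...
        x = layers.Conv2DTranspose(f, 4, strides=2, padding="same", activation="relu")(x)
        # ... and the extra convolution after it is what makes the network asymmetric.
        x = layers.Conv2D(f, 3, padding="same", activation="relu")(x)
    out = layers.Conv2D(1, 3, padding="same", activation="tanh")(x)  # 256x256x1 image
    return tf.keras.Model(inp, out, name="decoder")

encoder, decoder = build_encoder(), build_decoder()
generator = tf.keras.Model(encoder.input, decoder(encoder.output), name="font_transfer")
```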
It should be noted that, the step of training the asymmetric Transfer network by using the training samples to obtain the font conversion model may specifically include:
preprocessing a target font Chinese character image and a corresponding source font Chinese character image in the training sample to obtain a corresponding Chinese character image pair; carrying out data enhancement processing on the Chinese character image pair to obtain an enhanced Chinese character image pair; and training the asymmetric Transfer network by using the enhanced Chinese character image pair to obtain the font conversion model.
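The patent does not specify which data enhancement operations are used; one common choice for paired character images is a small random translation applied identically to both images of a pair, sketched below purely as an assumed example.

```python
# Illustrative sketch only: paired data enhancement. The same small random
# translation is applied to both images of a pair so they stay aligned; the
# choice of enhancement and the shift range are assumptions.
import tensorflow as tf

def enhance_pair(source, target, max_shift=8):
    # source/target: float32 tensors of shape (256, 256, 1) in [-1, 1].
    dx = tf.random.uniform([], -max_shift, max_shift + 1, dtype=tf.int32)
    dy = tf.random.uniform([], -max_shift, max_shift + 1, dtype=tf.int32)
    source = tf.roll(source, shift=[dy, dx], axis=[0, 1])
    target = tf.roll(target, shift=[dy, dx], axis=[0, 1])
    return source, target

# Hypothetical usage in a tf.data pipeline of (source, target) pairs:
# dataset = dataset.map(enhance_pair)
```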
Specifically, the step of training the asymmetric Transfer network by using the enhanced chinese character image pair may specifically include:
acquiring the enhanced source font Chinese character image, and then extracting, through the Encoder network, the high-level characteristics of the enhanced source font Chinese character image to obtain target high-level characteristics; and performing a deconvolution (reverse convolution) operation on the target high-level characteristics through the Decoder network to obtain the corresponding target font Chinese character image, thereby completing the font style conversion training of the asymmetric Transfer network.
It should be noted that, in the training process, after the target font image is generated by the asymmetric Transfer network, a discriminator network may further be used to judge whether the generated character is a real font. The discriminator network may be a relatively simple binary classification network; if the discriminator network is too deep, the generator's task becomes too difficult, so usually only 4 convolution layers are needed. The specific structure of the discriminator network and its connection to the asymmetric Transfer network are shown in fig. 4 and fig. 5. The positive samples received by the discriminator network are the Chinese character images corresponding to the target font Chinese characters, and the negative samples are the target font Chinese character images generated from the corresponding source font Chinese characters by the font conversion network. In order to make the target font generated by the font conversion network as lifelike as possible, that is, to conform to the design style actually intended by the font designer, in the embodiment of the present application, the step of training the asymmetric Transfer network by using the processed Chinese character image pair may specifically include:
training the asymmetric Transfer network with the processed Chinese character image pairs according to a preset loss function.
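Before turning to the loss functions, the shallow discriminator described above can be sketched as follows; the 4 convolution layers follow the description, while the filter counts and the sigmoid dense head are assumptions.

```python
# Illustrative sketch only: a shallow binary discriminator with 4 convolution
# layers, judging whether a target-font character image is real or generated.
# Filter counts and the sigmoid dense head are assumptions.
import tensorflow as tf
from tensorflow.keras import layers

def build_discriminator():
    inp = layers.Input(shape=(256, 256, 1))            # a target-font character image
    x = inp
    for f in [64, 128, 256, 512]:                      # exactly 4 convolution layers
        x = layers.Conv2D(f, 4, strides=2, padding="same")(x)
        x = layers.LeakyReLU(0.2)(x)
    x = layers.Flatten()(x)
    out = layers.Dense(1, activation="sigmoid")(x)     # probability of being a real font
    return tf.keras.Model(inp, out, name="discriminator")

# Positive samples: designer-drawn target-font character images.
# Negative samples: images produced by the font conversion (generator) network.
```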
Specifically, the preset loss function includes GAN loss, Constant loss, and L1 loss.
The design of the GAN loss fully embodies the idea of the game theory, and the definition of the GAN loss in the font conversion network is as follows:
L_{GAN} = \mathbb{E}_{t}[\log D(t)] + \mathbb{E}_{s}[\log(1 - D(T(s)))]
where s denotes a source font Chinese character image, t denotes the corresponding target font Chinese character image, T(s) denotes the output of the font conversion network, which can be understood as the generator network of a GAN (Generative Adversarial Network), and D(·) denotes the discriminator network, whose main purpose is to estimate the probability that a generated character image is a real font.
During training, the source font and the target font of the same Chinese character express the same semantics, so adding this guidance information to the font conversion process can accelerate the convergence of the font generation network. Inspired by the Domain Transfer Network (DTN), after a character image passes through the Encoder network, the L2 loss between the high-level semantic features of the source font and those of the target font can be used as the Constant loss. The specific formula is as follows:
L_{const} = \| f(s) - f(t) \|_2^2
where f(s) and f(t) denote the high-dimensional feature vectors of the source font Chinese character image and of the corresponding target font Chinese character image, respectively, obtained from the Encoder network.
It should be noted that, because the stroke order variations of Chinese characters are complex and diverse, the GAN loss, which guides the generation network with the global information of a character, can produce characters that are as close to the target font as possible, but its guidance for detail information such as stroke order variations is limited. Therefore, the embodiment of the present application introduces an L1 loss to constrain the detail information of the generated font. The formula is as follows:
L_{1} = \| t - T(s) \|_1
finally, the three above-mentioned loss functions are weighted and summed in the training phase to form the final loss, which is expressed as follows.
L = L_{GAN} + \lambda_1 L_{const} + \lambda_2 L_{1}
where λ1 and λ2 denote the weights of the corresponding loss terms.
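An illustrative training step combining the three losses above (GAN loss on the discriminator output, Constant loss on the Encoder features f(s) and f(t), and L1 loss on the pixels) might look as follows; the λ values, optimizer, and learning rate are assumptions, not values given in the patent.

```python
# Illustrative sketch only: one generator update combining GAN loss, Constant loss
# (L2 on Encoder features f(s), f(t)) and L1 loss on pixels. The weights lambda1,
# lambda2, the optimizer and the learning rate are assumptions.
import tensorflow as tf

bce = tf.keras.losses.BinaryCrossentropy()
g_opt = tf.keras.optimizers.Adam(2e-4)
lambda1, lambda2 = 15.0, 100.0                         # assumed loss weights

def generator_step(source, target, encoder, decoder, discriminator):
    with tf.GradientTape() as tape:
        feat_s = encoder(source, training=True)        # f(s): source high-level features
        fake_t = decoder(feat_s, training=True)        # T(s): generated target-font image
        feat_t = encoder(target, training=True)        # f(t): target high-level features

        d_fake = discriminator(fake_t, training=False)
        gan_loss = bce(tf.ones_like(d_fake), d_fake)                 # fool the discriminator
        const_loss = tf.reduce_mean(tf.square(feat_s - feat_t))      # Constant loss
        l1_loss = tf.reduce_mean(tf.abs(target - fake_t))            # L1 loss

        total = gan_loss + lambda1 * const_loss + lambda2 * l1_loss  # final weighted loss
    variables = encoder.trainable_variables + decoder.trainable_variables
    g_opt.apply_gradients(zip(tape.gradient(total, variables), variables))
    return total
```

A corresponding discriminator update, with real target-font images as positive samples and generated images as negative samples, would alternate with this step, as in standard GAN training.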
Step S24: acquiring the image of the Chinese character to be designed in the source font.
Step S25: inputting the Chinese character image to be designed into the font conversion model to obtain the target font Chinese character image output by the font conversion model.
It should be noted that, because the number of target font samples is very small in the early stage of training, the target font learned by the machine may not be perfect; for example, the model may fail to learn certain stroke styles, and the designer then needs to manually correct the characters that do not match the target font. Accordingly, after the step of inputting the acquired Chinese character image to be designed into the pre-trained font conversion model to obtain the target font Chinese character image output by the font conversion model, the method may further include:
if any character in the target font Chinese character image output by the font conversion model does not conform to the preset target font style, acquiring the finely-tuned target font Chinese character; performing a Fine tuning operation on the font conversion network by using the finely-tuned target font Chinese character to obtain a new font conversion network; where the finely-tuned target font Chinese character is a new target font Chinese character obtained after the font designer finely tunes the character that does not conform to the preset target font style.
Specifically, a condition for accumulating finely-tuned target font Chinese characters may be set; for example, when the number of currently accumulated finely-tuned target font Chinese characters reaches 500, a Fine tuning operation is performed on the font conversion network by using these characters to obtain a new font conversion network.
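As a rough sketch of the Fine tuning step (assuming the corrected characters are collected into a dataset of (source, corrected target) pairs; the learning rate, epoch count, and the use of a plain L1-style pixel loss instead of the full adversarial objective are simplifying assumptions):

```python
# Illustrative sketch only: Fine tuning on designer-corrected characters. Uses a
# small learning rate and a plain L1-style pixel loss for simplicity; the batch
# size, epoch count and learning rate are assumptions.
import tensorflow as tf

def fine_tune(generator: tf.keras.Model, corrected_pairs: tf.data.Dataset,
              epochs: int = 5) -> tf.keras.Model:
    # corrected_pairs yields (source image, designer-corrected target image) pairs.
    generator.compile(optimizer=tf.keras.optimizers.Adam(1e-5),
                      loss=tf.keras.losses.MeanAbsoluteError())
    generator.fit(corrected_pairs.batch(16), epochs=epochs)
    return generator
```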
Therefore, after multiple rounds of iterative training, the target font Chinese character generated by the font conversion network is closer to the style of a font designer, and the efficiency of the font designer in designing the font is further improved remarkably.
The method disclosed in the embodiment of the present application has been described above by way of a representative example; for the beneficial effects brought by the related technical content, reference may be made to the foregoing embodiments, and details are not repeated here.
Correspondingly, an embodiment of the present application further discloses a font design system, as shown in fig. 6, the system specifically includes:
and the Chinese character image to be designed acquiring module 61 is used for acquiring the Chinese character image to be designed in the source font.
And the Chinese character image to be designed input module 62 is used for inputting the Chinese character image to be designed into a pre-trained font conversion model to obtain a target font Chinese character image output by the font conversion model.
The font conversion model is obtained by training an asymmetric Transfer network constructed based on a deep learning theory by utilizing a training sample in advance, wherein the training sample comprises a target font Chinese character image and a corresponding source font Chinese character image which are designed by a font designer in advance.
It should be noted that, for specific working processes between modules and beneficial effects brought by the working processes, please refer to the font design method disclosed in the foregoing embodiments of the present application, which is not described herein again.
Further, an embodiment of the present application also discloses a font design device, which is shown in fig. 7 and includes a memory and a processor, wherein the processor is configured to execute a computer program stored in the memory to implement the steps of the font design method disclosed in the foregoing embodiment.
It should be noted that specific contents of technical portions and corresponding advantageous effects of the embodiments of the present application can be referred to in the embodiments described herein, and are not described herein again.
Further, an embodiment of the present application also discloses a computer-readable storage medium, on which a computer program is stored, and the computer program, when executed by a processor, implements the steps of the font design method disclosed in the foregoing embodiment.
It should be noted that specific contents of technical portions and corresponding advantageous effects of the embodiments of the present application can be referred to in the embodiments described herein, and are not described herein again.
The embodiments are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same or similar parts among the embodiments are referred to each other. The device disclosed by the embodiment corresponds to the method disclosed by the embodiment, so that the description is simple, and the relevant points can be referred to the method part for description.
Those of skill would further appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both, and that the various illustrative components and steps have been described above generally in terms of their functionality in order to clearly illustrate this interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in Random Access Memory (RAM), memory, Read Only Memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
Finally, it should also be noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
The present invention provides a font design method, system, device and computer readable storage medium, which are introduced in detail above, and the specific examples are applied herein to illustrate the principle and implementation of the present invention, and the above descriptions of the embodiments are only used to help understanding the method and the core idea of the present invention; meanwhile, for a person skilled in the art, according to the idea of the present invention, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present invention.

Claims (7)

1. A font design method, comprising:
acquiring a Chinese character image to be designed in a source font;
inputting the Chinese character image to be designed into a pre-trained font conversion model to obtain a target font Chinese character image output by the font conversion model;
the font conversion model is a model obtained by training an asymmetric Transfer network constructed based on a deep learning theory by utilizing a training sample in advance, wherein the training sample comprises a target font Chinese character image and a corresponding source font Chinese character image which are designed by a font designer in advance;
wherein the loss function in the asymmetric Transfer network training process is a function constructed based on GAN loss, Constant loss and L1 loss; the Constant loss is a loss function constructed based on the high-dimensional feature vector of the source font Chinese character image and the high-dimensional feature vector of the corresponding target font Chinese character image, and the L1 loss is a loss function constructed based on the target font Chinese character image and used for restricting font detail information, wherein the detail information comprises stroke order change;
after the step of inputting the acquired Chinese character image to be designed into the pre-trained font conversion model to obtain the target Chinese character image output by the font conversion model, the method further comprises the following steps:
if any character does not accord with the preset target font style in the target font Chinese character image output by the font conversion model, acquiring the finely-tuned target font Chinese character;
performing Fine tuning operation on the font conversion network by using the finely tuned target font Chinese character to obtain a new font conversion network;
and the finely-tuned target font Chinese character is a new target font Chinese character obtained by finely tuning the Chinese character which does not accord with the preset target font style by a font designer.
2. The font design method according to claim 1, wherein the step of obtaining the font conversion model comprises:
obtaining the training sample;
establishing an asymmetric Transfer network by using a deep learning framework TensorFlow;
and training the asymmetric Transfer network by using the training samples to obtain the font conversion model.
3. The font design method according to claim 2, wherein the step of training the asymmetric Transfer network by using the training samples to obtain the font conversion model comprises:
preprocessing a target font Chinese character image and a corresponding source font Chinese character image in the training sample to obtain a corresponding Chinese character image pair;
carrying out data enhancement processing on the Chinese character image pair to obtain an enhanced Chinese character image pair;
and training the asymmetric Transfer network by using the enhanced Chinese character image pair to obtain the font conversion model.
4. The font design method according to claim 3, wherein the step of training the asymmetric Transfer network using the enhanced Hanzi image pair comprises:
acquiring an enhanced source font Chinese character image, and then acquiring the high-level characteristics of the processed source font Chinese character image through an Encoder network to obtain target high-level characteristics;
and performing reverse convolution operation on the high-level characteristics of the target through a Decoder network to obtain a corresponding target font Chinese character image, and finishing the training process of font style conversion of the asymmetric Transfer network.
5. A font design system, comprising:
the Chinese character image acquisition module is used for acquiring a Chinese character image to be designed in a source font;
the Chinese character image input module is used for inputting the Chinese character image to be designed to a pre-trained font conversion model to obtain a target font Chinese character image output by the font conversion model;
the font conversion model is a model obtained by training an asymmetric Transfer network constructed based on a deep learning theory by utilizing a training sample in advance, wherein the training sample comprises a target font Chinese character image and a corresponding source font Chinese character image which are designed by a font designer in advance;
wherein the loss function in the asymmetric Transfer network training process is a function constructed based on GAN loss, Constant loss and L1 loss; the Constant loss is a loss function constructed based on the high-dimensional feature vector of the source font Chinese character image and the high-dimensional feature vector of the corresponding target font Chinese character image, and the L1 loss is a loss function constructed based on the target font Chinese character image and used for restricting font detail information, wherein the detail information comprises stroke order change;
after the step of inputting the acquired Chinese character image to be designed into the pre-trained font conversion model to obtain the target Chinese character image output by the font conversion model, the method further comprises the following steps:
if any character does not accord with the preset target font style in the target font Chinese character image output by the font conversion model, acquiring the finely-tuned target font Chinese character;
performing Fine tuning operation on the font conversion network by using the finely tuned target font Chinese character to obtain a new font conversion network;
and the finely-tuned target font Chinese character is a new target font Chinese character obtained by finely tuning the Chinese character which does not accord with the preset target font style by a font designer.
6. A font design apparatus comprising a memory and a processor, wherein the processor is configured to execute a computer program stored in the memory to implement the steps of the font design method according to any one of claims 1 to 4.
7. A computer-readable storage medium, characterized in that the computer-readable storage medium has stored thereon a computer program which, when being executed by a processor, carries out the steps of the font design method according to any one of claims 1 to 4.
CN201810113650.5A 2018-02-05 2018-02-05 Font design method, system, equipment and computer readable storage medium Active CN108459999B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810113650.5A CN108459999B (en) 2018-02-05 2018-02-05 Font design method, system, equipment and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810113650.5A CN108459999B (en) 2018-02-05 2018-02-05 Font design method, system, equipment and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN108459999A CN108459999A (en) 2018-08-28
CN108459999B (en) 2022-02-22

Family

ID=63239654

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810113650.5A Active CN108459999B (en) 2018-02-05 2018-02-05 Font design method, system, equipment and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN108459999B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111222307B (en) * 2018-11-23 2024-03-12 珠海金山办公软件有限公司 Font editing method and device, computer storage medium and terminal
CN110033054B (en) * 2019-03-14 2021-05-25 上海交通大学 Personalized handwriting migration method and system based on collaborative stroke optimization
CN110533737A (en) * 2019-08-19 2019-12-03 大连民族大学 The method generated based on structure guidance Chinese character style
CN111488104B (en) * 2020-04-16 2021-10-12 维沃移动通信有限公司 Font editing method and electronic equipment
CN112183027B (en) * 2020-08-31 2022-09-06 同济大学 Artificial intelligence based artwork generation system and method
CN112861471A (en) * 2021-02-10 2021-05-28 上海臣星软件技术有限公司 Object display method, device, equipment and storage medium
CN112862025A (en) * 2021-03-08 2021-05-28 成都字嗅科技有限公司 Chinese character stroke filling method, system, terminal and medium based on computer

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107577651A (en) * 2017-08-25 2018-01-12 上海交通大学 Chinese character style migratory system based on confrontation network
CN107644006A (en) * 2017-09-29 2018-01-30 北京大学 A kind of Chinese script character library automatic generation method based on deep neural network

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9679558B2 (en) * 2014-05-15 2017-06-13 Microsoft Technology Licensing, Llc Language modeling for conversational understanding domains using semantic web resources

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107577651A (en) * 2017-08-25 2018-01-12 上海交通大学 Chinese character style migratory system based on confrontation network
CN107644006A (en) * 2017-09-29 2018-01-30 北京大学 A kind of Chinese script character library automatic generation method based on deep neural network

Also Published As

Publication number Publication date
CN108459999A (en) 2018-08-28


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant