CN112381910A - Handwriting stylization method for characters of printed body - Google Patents

Handwriting stylization method for characters of printed body

Info

Publication number
CN112381910A
CN112381910A
Authority
CN
China
Prior art keywords
handwriting
picture
stylized
neural network
network model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011574519.2A
Other languages
Chinese (zh)
Inventor
罗中
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Yixin Huachen Software Co ltd Wuhan Branch
Original Assignee
Beijing Yixin Huachen Software Co ltd Wuhan Branch
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Yixin Huachen Software Co ltd Wuhan Branch filed Critical Beijing Yixin Huachen Software Co ltd Wuhan Branch
Priority to CN202011574519.2A
Publication of CN112381910A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 11/00 2D [Two Dimensional] image generation
    • G06T 11/20 Drawing from basic elements, e.g. lines or circles
    • G06T 11/203 Drawing of straight lines or curves
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 30/00 Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V 30/40 Document-oriented image-based pattern recognition
    • G06V 30/41 Analysis of document content
    • G06V 30/412 Layout analysis of documents structured with printed lines or input boxes, e.g. business forms or tables

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Character Discrimination (AREA)

Abstract

The invention relates to a handwriting stylization method for printed characters, which generates character pictures in a handwritten style from printed pictures of the characters. The method builds a neural network model based on a generative adversarial network, the model comprising two parts: a handwriting stylization generation network and a handwriting stylization discrimination network. A training data set for the neural network model is constructed from printed pictures and handwritten pictures of common radicals together with Chinese character frame structure templates; the neural network model is trained; and the effect of the handwriting stylization generation network is verified by comparison using printed pictures and handwritten pictures of real Chinese characters. The method can conveniently generate character pictures with a specific handwriting style.

Description

Handwriting stylization method for characters of printed body
Technical Field
The invention relates to a method for handwriting stylization of printed characters, belonging to the field of artificial intelligence data synthesis.
Background
With the popularization of personal electronic devices, people increasingly personalize their devices, and some users want to replace the fonts in a device's user interface with a font in their own handwriting style. Electronic devices generally offer many fonts to choose from, including standardized printed fonts and some handwritten fonts (usually commissioned by software companies from famous calligraphers or celebrities), but they cannot offer a user a font in his or her own handwriting style. To meet users' personalized demands for handwritten fonts, a technology for generating fonts stylized after a specific user's handwriting is urgently needed.
Disclosure of Invention
A handwriting stylization method for printed characters comprises the following steps:
step 1) constructing a handwriting stylization neural network model: a model is built based on a generative adversarial network; the neural network model comprises a handwriting stylization generation network and a handwriting stylization discrimination network, wherein the generation network generates a handwriting-stylized character picture from an input printed character picture, and the discrimination network judges whether an input picture is a character picture with the specific handwriting style;
step 2) constructing a training data set and a verification data set for the handwriting stylization neural network model:
step 2A) constructing the training data set:
step 2A.1) collecting commonly used Chinese character radicals and, for each radical, obtaining a printed picture using a printing font and a corresponding handwritten picture by manual handwriting;
step 2A.2) constructing common Chinese character frame structure template pictures, including: a left-right structure template, a left-middle-right structure template, a top-bottom structure template, a top-middle-bottom structure template and an enclosing structure template;
step 2A.3) randomly selecting one of the template pictures prepared in step 2A.2) and making two copies of it, P and H; according to the number of regions in the template, randomly selecting the same number of radicals from those collected in step 2A.1); filling the printed pictures of the radicals into the region positions of P and the handwritten pictures of the same radicals into the corresponding region positions of H, so that the resulting picture pair (P, H) is one randomly generated training data record;
step 2A.4) repeating step 2A.3) to randomly generate the required number of training data records, which together form the training data set;
step 2B) constructing the verification data set:
step 2B.1) selecting a segment of text and, for each character, obtaining a printed picture using a printing font and a corresponding handwritten picture by manual handwriting;
step 2B.2) the printed picture of each character obtained in step 2B.1) and the corresponding handwritten picture form one verification data record, and all the verification data records form the verification data set;
step 3) training the handwriting stylization neural network model: constructing optimizers for the handwriting stylization neural network model constructed in step 1), setting the loss functions, the optimization algorithm and the training parameters, and training the neural network model with the training data set constructed in step 2A) to obtain a trained neural network model;
step 4) verifying the handwriting stylization neural network model: inputting the verification data set constructed in step 2B) into the neural network model trained in step 3), and manually comparing, by visual inspection, the style difference between the output pictures of the handwriting stylization generation network and the handwritten pictures; when the style difference is large, appropriately adjusting the hyper-parameters of the network model, the scale of the training data set and the parameters of the optimizers, and repeating steps 1), 2), 3) and 4) until the output pictures of the handwriting stylization generation network are similar in style to the handwritten pictures, thereby obtaining a handwriting style generation network that meets the requirements;
step 5) applying the handwriting stylization neural network model: inputting the printed picture of any character into the handwriting stylization generation network obtained in step 4); the output picture of the network is the handwriting-stylized picture of that character; end.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the principles of the invention. In the drawings:
FIG. 1 is a flow chart of the handwriting stylization method for printed characters;
FIG. 2 is a diagram of the handwriting stylization generation network architecture;
FIG. 3 is a diagram of the handwriting stylization discrimination network architecture;
FIG. 4 shows the printed pictures and handwritten pictures of 100 radicals;
FIG. 5 shows templates of common Chinese character frame structures;
FIG. 6 is the printed part of a randomly generated training data record;
FIG. 7 is the handwritten part of a randomly generated training data record;
FIG. 8 is the verification data set;
FIG. 9 is a comparison of handwriting-stylized characters generated from the verification data set with characters handwritten by Zhang San.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In this embodiment, we want to use printed character pictures to generate character pictures in Zhang San's handwriting style, both 256 pixels wide and 256 pixels high.
A handwriting stylization method for printed characters is implemented as follows:
step 1) constructing a handwriting stylization neural network model: a model is built based on a generative adversarial network; the neural network model comprises a handwriting stylization generation network and a handwriting stylization discrimination network, wherein the generation network generates a handwriting-stylized character picture from an input printed character picture, and the discrimination network judges whether an input picture is a character picture with the specific handwriting style;
in this embodiment, a network structure of a handwriting stylized generation network (hereinafter referred to as "generation network" or "generator") is shown in fig. 2, where each node (network layer or sub-network) is set as follows:
PRINT_CH (input layer): the input of this layer is the printed character picture part of the input data; the input tensor dimension is (N, 256, 256, 3) (N pictures, each 256 pixels high and 256 pixels wide, with 3 channels);
DOWN1 (downsampling sequential structure sub-network): the sub-network sequentially comprises an input layer, a 2D convolution layer (kernel size 4, 64 filters, stride 2, 'same' boundary padding) and a LeakyReLU activation layer;
DOWN2 (downsampling sequential structure sub-network): the sub-network sequentially comprises an input layer, a 2D convolution layer (kernel size 4, 128 filters, stride 2, 'same' boundary padding), a batch normalization layer and a LeakyReLU activation layer;
DOWN3 (downsampling sequential structure sub-network): the sub-network sequentially comprises an input layer, a 2D convolution layer (kernel size 4, 256 filters, stride 2, 'same' boundary padding), a batch normalization layer and a LeakyReLU activation layer;
DOWN4 (downsampling sequential structure sub-network): the sub-network sequentially comprises an input layer, a 2D convolution layer (kernel size 4, 512 filters, stride 2, 'same' boundary padding), a batch normalization layer and a LeakyReLU activation layer;
DOWN5 (downsampling sequential structure sub-network): the sub-network sequentially comprises an input layer, a 2D convolution layer (kernel size 4, 512 filters, stride 2, 'same' boundary padding), a batch normalization layer and a LeakyReLU activation layer;
DOWN6 (downsampling sequential structure sub-network): the sub-network sequentially comprises an input layer, a 2D convolution layer (kernel size 4, 512 filters, stride 2, 'same' boundary padding), a batch normalization layer and a LeakyReLU activation layer;
DOWN7 (downsampling sequential structure sub-network): the sub-network sequentially comprises an input layer, a 2D convolution layer (kernel size 4, 512 filters, stride 2, 'same' boundary padding), a batch normalization layer and a LeakyReLU activation layer;
DOWN8 (downsampling sequential structure sub-network): the sub-network sequentially comprises an input layer, a 2D convolution layer (kernel size 4, 512 filters, stride 2, 'same' boundary padding), a batch normalization layer and a LeakyReLU activation layer;
UP7 (upsampling sequential structure sub-network): the sub-network sequentially comprises an input layer, a 2D deconvolution layer (kernel size 4, 512 filters, stride 2, 'same' boundary padding), a batch normalization layer, a Dropout layer and a ReLU activation layer;
CONC7 (splicing layer): concatenates the output tensors of UP7 and DOWN7 along the last dimension into a new tensor;
UP6 (upsampling sequential structure sub-network): the sub-network sequentially comprises an input layer, a 2D deconvolution layer (kernel size 4, 512 filters, stride 2, 'same' boundary padding), a batch normalization layer, a Dropout layer and a ReLU activation layer;
CONC6 (splicing layer): concatenates the output tensors of UP6 and DOWN6 along the last dimension into a new tensor;
UP5 (upsampling sequential structure sub-network): the sub-network sequentially comprises an input layer, a 2D deconvolution layer (kernel size 4, 512 filters, stride 2, 'same' boundary padding), a batch normalization layer, a Dropout layer and a ReLU activation layer;
CONC5 (splicing layer): concatenates the output tensors of UP5 and DOWN5 along the last dimension into a new tensor;
UP4 (upsampling sequential structure sub-network): the sub-network sequentially comprises an input layer, a 2D deconvolution layer (kernel size 4, 512 filters, stride 2, 'same' boundary padding), a batch normalization layer and a ReLU activation layer;
CONC4 (splicing layer): concatenates the output tensors of UP4 and DOWN4 along the last dimension into a new tensor;
UP3 (upsampling sequential structure sub-network): the sub-network sequentially comprises an input layer, a 2D deconvolution layer (kernel size 4, 256 filters, stride 2, 'same' boundary padding), a batch normalization layer and a ReLU activation layer;
CONC3 (splicing layer): concatenates the output tensors of UP3 and DOWN3 along the last dimension into a new tensor;
UP2 (upsampling sequential structure sub-network): the sub-network sequentially comprises an input layer, a 2D deconvolution layer (kernel size 4, 128 filters, stride 2, 'same' boundary padding), a batch normalization layer and a ReLU activation layer;
CONC2 (splicing layer): concatenates the output tensors of UP2 and DOWN2 along the last dimension into a new tensor;
UP1 (upsampling sequential structure sub-network): the sub-network sequentially comprises an input layer, a 2D deconvolution layer (kernel size 4, 64 filters, stride 2, 'same' boundary padding), a batch normalization layer and a ReLU activation layer;
CONC1 (splicing layer): concatenates the output tensors of UP1 and DOWN1 along the last dimension into a new tensor;
CONVT (deconvolution layer/output layer): a 2D deconvolution layer (kernel size 4, 3 filters, stride 2, 'same' boundary padding) used to produce the final handwriting-stylized character picture; the output tensor dimension is (N, 256, 256, 3).
The input layer, splicing layer, 2D convolution layer, 2D deconvolution layer, batch normalization layer, Dropout layer, LeakyReLU activation layer and ReLU activation layer used in the above generation network are all prior art and are not described here; constructing the actual neural network from the generation network structure diagram of FIG. 2 and the above descriptions of its nodes is likewise common knowledge and is not repeated here.
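For concreteness, the following is a minimal sketch of the generation network of FIG. 2 in TensorFlow/Keras; the framework is an assumption (the patent names none), and the Dropout rate of 0.5 is also an assumption, since the text does not give one. The node names in the comments follow the description above.

import tensorflow as tf
from tensorflow.keras import layers

def down(filters, batchnorm=True):
    # Downsampling block: Conv2D (kernel 4, stride 2, 'same' padding), optional BatchNorm, LeakyReLU
    block = tf.keras.Sequential()
    block.add(layers.Conv2D(filters, 4, strides=2, padding='same', use_bias=False))
    if batchnorm:
        block.add(layers.BatchNormalization())
    block.add(layers.LeakyReLU())
    return block

def up(filters, dropout=False):
    # Upsampling block: Conv2DTranspose (kernel 4, stride 2, 'same' padding), BatchNorm, optional Dropout, ReLU
    block = tf.keras.Sequential()
    block.add(layers.Conv2DTranspose(filters, 4, strides=2, padding='same', use_bias=False))
    block.add(layers.BatchNormalization())
    if dropout:
        block.add(layers.Dropout(0.5))  # rate 0.5 is an assumption; the text gives none
    block.add(layers.ReLU())
    return block

def build_generator():
    print_ch = layers.Input(shape=(256, 256, 3))               # PRINT_CH
    downs = [down(64, batchnorm=False),                        # DOWN1 (no batch normalization)
             down(128), down(256), down(512),                  # DOWN2..DOWN4
             down(512), down(512), down(512), down(512)]       # DOWN5..DOWN8
    ups = [up(512, dropout=True), up(512, dropout=True),       # UP7, UP6
           up(512, dropout=True), up(512),                     # UP5, UP4
           up(256), up(128), up(64)]                           # UP3..UP1
    x = print_ch
    skips = []
    for d in downs:                                            # encoder path
        x = d(x)
        skips.append(x)
    for u, skip in zip(ups, reversed(skips[:-1])):             # decoder path
        x = u(x)
        x = layers.Concatenate()([x, skip])                    # CONC7..CONC1
    out = layers.Conv2DTranspose(3, 4, strides=2, padding='same')(x)  # CONVT -> (N, 256, 256, 3)
    return tf.keras.Model(inputs=print_ch, outputs=out)

generator = build_generator()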
The network structure of the handwriting stylization discrimination network (hereinafter referred to as the "discrimination network" or "discriminator") is shown in FIG. 3, where the nodes (network layers or sub-networks) are set as follows:
PRINT_CH (input layer): the input of this layer is the printed character picture part of the input data; the input tensor dimension is (N, 256, 256, 3);
GEN_CH (input layer): the input of this layer is the handwritten character picture part of the input data, or a picture generated by the generation network; the input tensor dimension is (N, 256, 256, 3);
CONCAT (splicing layer): concatenates the output tensors of PRINT_CH and GEN_CH along the last dimension into a new tensor;
DOWN_A (downsampling sequential structure sub-network): the sub-network sequentially comprises an input layer, a 2D convolution layer (kernel size 4, 64 filters, stride 2, 'same' boundary padding) and a LeakyReLU activation layer;
DOWN_B (downsampling sequential structure sub-network): the sub-network sequentially comprises an input layer, a 2D convolution layer (kernel size 4, 128 filters, stride 2, 'same' boundary padding), a batch normalization layer and a LeakyReLU activation layer;
DOWN_C (downsampling sequential structure sub-network): the sub-network sequentially comprises an input layer, a 2D convolution layer (kernel size 4, 256 filters, stride 2, 'same' boundary padding), a batch normalization layer and a LeakyReLU activation layer;
PADDING1 (padding layer): expands the input by one pixel on each side, filling the expanded part with the value 0;
CONV1 (convolution layer): a 2D convolution layer (kernel size 4, 512 filters, stride 1);
BAT_NRM (batch normalization layer);
LEAKY_RE (activation layer): a LeakyReLU activation layer;
PADDING2 (padding layer): expands the input by one pixel on each side, filling the expanded part with the value 0;
CONV2 (convolution layer/output layer): a 2D convolution layer (kernel size 4, 1 filter, stride 1); the output tensor dimension is (N, 30, 30, 1).
The input layer, splicing layer, 2D convolution layer, batch normalization layer, padding layer and LeakyReLU activation layer used in the above discrimination network are all prior art and are not described here; constructing the actual neural network from the discrimination network structure diagram of FIG. 3 and the above descriptions of its nodes is likewise common knowledge and is not repeated here.
In the above discrimination network, when the input picture at the GEN_CH input layer (a manually handwritten character picture or a handwriting-stylized character picture produced by the generation network) fully exhibits the desired handwriting style, all elements of the network's output tensor have the value 1; when it does not exhibit the desired handwriting style at all, all elements have the value 0; when it partially exhibits the desired handwriting style, the element values lie between 0 and 1, with larger values indicating that the input picture is closer to the desired handwriting style.
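Continuing the Keras sketch above (reusing its down() helper, with the framework choice again an assumption), the discrimination network of FIG. 3 can be assembled as follows:

def build_discriminator():
    print_ch = layers.Input(shape=(256, 256, 3))             # PRINT_CH
    gen_ch = layers.Input(shape=(256, 256, 3))               # GEN_CH
    x = layers.Concatenate()([print_ch, gen_ch])             # CONCAT along the last dimension
    x = down(64, batchnorm=False)(x)                         # DOWN_A -> (N, 128, 128, 64)
    x = down(128)(x)                                         # DOWN_B -> (N, 64, 64, 128)
    x = down(256)(x)                                         # DOWN_C -> (N, 32, 32, 256)
    x = layers.ZeroPadding2D()(x)                            # PADDING1 -> (N, 34, 34, 256)
    x = layers.Conv2D(512, 4, strides=1, use_bias=False)(x)  # CONV1 -> (N, 31, 31, 512)
    x = layers.BatchNormalization()(x)                       # BAT_NRM
    x = layers.LeakyReLU()(x)                                # LEAKY_RE
    x = layers.ZeroPadding2D()(x)                            # PADDING2 -> (N, 33, 33, 512)
    out = layers.Conv2D(1, 4, strides=1)(x)                  # CONV2 -> (N, 30, 30, 1)
    return tf.keras.Model(inputs=[print_ch, gen_ch], outputs=out)

discriminator = build_discriminator()

Note that the output is a 30x30 grid of per-region scores rather than a single scalar, matching the (N, 30, 30, 1) output dimension of CONV2.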
step 2) constructing a training data set and a verification data set for the handwriting stylization neural network model:
step 2A) constructing the training data set:
step 2A.1) collecting commonly used Chinese character radicals and, for each radical, obtaining a printed picture using a printing font and a corresponding handwritten picture by manual handwriting;
in this embodiment, 100 commonly used chinese character radicals (a horizontal second eight-stroke eight an ancient type of spoon lever knife vacuum control unit, two Contraband Jiong force u people ten and also torpedo all round -inch large flying dry industrial bow wide horse gate women mountain Chinese royal corpse unim sub-triangle doctor than long car fang father gouge fire see right wool wood 32896 bovine slice gas) are collected, computer characters are used to print the radicals, and the printed pictures of the radicals and the written three-dimensional hand pictures are shown in fig. 4, fig. 1, 3, 5, 7, 9, 10-th hand-written pictures of the radicals in the manual writing pictures of the radicals.
step 2A.2) constructing common Chinese character frame structure template pictures, including: a left-right structure template, a left-middle-right structure template, a top-bottom structure template, a top-middle-bottom structure template and an enclosing structure template;
in the present embodiment, the size of the constructed template picture with the chinese character trellis structure is 256 pixels high, 256 pixels wide, and 3 channels, and the background color is white, as shown in fig. 5, the numbers 1, 2, 3, and 4 in each template picture represent position indications of different regions in the template picture (the numbers do not exist in the actual template picture), and the boundaries of the template picture and the boundaries of the different regions are indicated by being separated by dotted lines (the dotted lines do not exist in the actual template picture). The same template can adjust the size of each area under the condition of keeping the relative position relationship among the areas; for example: in the left and right structure templates, the sizes of region 1 and region 2 may be changed as long as region 1 is to the left of region 2; for the case of other templates, no further description is given here.
step 2A.3) randomly selecting one of the template pictures prepared in step 2A.2) and making two copies of it, P and H; according to the number of regions in the template, randomly selecting the same number of radicals from those collected in step 2A.1); filling the printed pictures of the radicals into the region positions of P and the handwritten pictures of the same radicals into the corresponding region positions of H, so that the resulting picture pair (P, H) is one randomly generated training data record;
in this embodiment, it is assumed that one template picture randomly selected this time is a left and right structure template picture (this step is repeatedly executed, different template pictures may be randomly selected), the template picture is copied into two parts to obtain P and H, there are 2 regions in the left and right structure template pictures, so 2 radicals are randomly selected from 100 radicals, it is assumed that the 2 radicals randomly selected this time are "early" and "two" (this step is repeatedly executed, different radicals may be randomly selected), print images of "early" and "two" are filled into regions 1 and 2 of the template picture P, and the obtained result is shown in fig. 6; the results of filling the artificial handwriting pictures of "you" and "two" into the region 1 and the region 2 of the template picture H are shown in fig. 7. The resulting (P, H) is a randomly generated training data record that includes two parts: a print volume portion P and a handwriting portion H. It should be noted that, if the randomly selected template picture is an enclosing structure template picture, in order to avoid the overlap between the radicals filling in the area 1 and the area 2 in the template picture, the radical filling in the area 1 randomly selected from the 100 radicals, which can satisfy the non-overlapping condition, should be " u Contraband Jiong u .
step 2A.4) repeating step 2A.3) to randomly generate the required number of training data records, which together form the training data set;
in this embodiment, if the required training data set size is 100000, step 2a.3) is repeated 100000 times, and the generated 100000 pairs of pictures form the training data set.
step 2B) constructing the verification data set:
step 2B.1) selecting a segment of text and, for each character, obtaining a printed picture using a printing font and a corresponding handwritten picture by manual handwriting;
in this embodiment, the selected segment of text is "china", and the print pictures and the manual handwriting pictures of "china" and "china" are obtained by using the print font and the three-page handwriting, respectively.
step 2B.2) the printed picture of each character obtained in step 2B.1) and the corresponding handwritten picture form one verification data record, and all the verification data records form the verification data set;
in the present embodiment, the print picture and the artificial handwriting picture of the character "middle" constitute one verification data record, the print picture and the artificial handwriting picture of the character "nation" constitute one verification data record, and the two verification data records constitute a verification data set, as shown in fig. 8.
Step 3), training a handwriting stylized neural network model: constructing an optimizer for the handwritten stylized neural network model constructed in the step 1), setting a loss function, an optimization algorithm and training parameters, and training the neural network model by using the training data set constructed in the step 2A) to obtain a trained neural network model;
in the present embodiment, the neural network model is trained using a training strategy that generates a countermeasure network (GAN). According to the training strategy of the GAN network model, a generating network and a judging network in the model can be alternately trained in sequence, loss functions are required to be set for an optimizer of the generating network and an optimizer of the judging network respectively, and the loss functions of the two neural networks are set as follows: note that (P, H) is a piece of training data record (the meaning of P and H is as described in step 2 above), G = generator (P) is a picture generated by a network, and it is apparent that G and P have the same size (height 256 pixels, width 256 pixels, 3 channels), S1= discriminator (P, H) is an output when two inputs of the network are respectively P and H, S2= discriminator (P, G) is an output when two inputs of the network are respectively P and G, it is apparent that S1 and S2 have the same size (height 30 pixels, width 30 pixels, 1 channel), S _ ONE is a full 1 tensor (height 30 pixels, width 30 pixels, 1 channel, value all 1) having the same size as S1, and S _ ZERO is a full 0 tensor (height 30 pixels, width 30 pixels, 1 channel, value all 0) having the same size as S1; generating a network loss function: generating Loss (P, H) = binarycrosenstrophy (S _ ONE, S2) + LAMBDA @ MEAN (ABS (G-H)), where binarycrosenstrophy () is a cross entropy function, LAMBDA is a weight parameter (a model hyper parameter, which may be adjusted according to a training convergence speed and a training effect), MEAN () is a function averaging all elements of a tensor, and ABS () is a function taking an absolute value by a tensor element; and (3) judging a network loss function: there is discrimination Loss (P, H) = binarycrosssensortopy (S _ ONE, S1) + binarycrosssensorypy (S _ ZERO, S2). Usually, several pieces of training data are trained together as a batch during training, so the Loss function of the batch of training data = the average value of the Loss function value of each piece of training data, and Ps = [ P1, P2, … …, PN ], Hs = [ H1, H2, … …, HN ], Ps and Hs form a batch consisting of N pieces of training data, then Loss (Ps, Hs) = (generate Loss (P1, H1) + generate Loss (P2, H2) + … … + generate Loss (PN, HN))/N, and determine Loss (Ps, Hs) = (determine Loss (P1, H1) + determine Loss (P2, H2) + … … + determine Loss (PN, HN))/N).
In this embodiment, the optimization algorithm used is Adam (learning rate 0.0002, beta1 = 0.5, beta2 = 0.999) and the value of the LAMBDA parameter is 100; the number of training epochs is 50, and the 100000 training data records are divided into 1000 batches of 100 records each. The training process for the generation network and the discrimination network is as follows:
for epoch = 1 to 50:
    for batch = 1 to 1000:
        Ps and Hs are the 100 training data records of the current batch;
        compute GenerationLoss(Ps, Hs) and DiscriminationLoss(Ps, Hs);
        compute the gradients of all model parameters of the generation network from GenerationLoss(Ps, Hs) and update them with the Adam optimization algorithm;
        compute the gradients of all model parameters of the discrimination network from DiscriminationLoss(Ps, Hs) and update them with the Adam optimization algorithm.
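Under the same Keras assumption as above (with generator and discriminator from the earlier sketches), the loss functions and one iteration of this loop can be realized with a tf.GradientTape training step. This is a sketch, not a prescribed implementation: it assumes Ps and Hs are float32 tensors of shape (N, 256, 256, 3), and it treats the discriminator output as logits, whereas the text reads the output as values between 0 and 1.

loss_bce = tf.keras.losses.BinaryCrossentropy(from_logits=True)
LAMBDA = 100
gen_opt = tf.keras.optimizers.Adam(2e-4, beta_1=0.5, beta_2=0.999)
disc_opt = tf.keras.optimizers.Adam(2e-4, beta_1=0.5, beta_2=0.999)

@tf.function
def train_step(Ps, Hs):
    with tf.GradientTape() as g_tape, tf.GradientTape() as d_tape:
        G = generator(Ps, training=True)
        S1 = discriminator([Ps, Hs], training=True)   # real pairs
        S2 = discriminator([Ps, G], training=True)    # generated pairs
        # GenerationLoss = BinaryCrossEntropy(S_ONE, S2) + LAMBDA * MEAN(ABS(G - H))
        gen_loss = (loss_bce(tf.ones_like(S2), S2)
                    + LAMBDA * tf.reduce_mean(tf.abs(G - Hs)))
        # DiscriminationLoss = BinaryCrossEntropy(S_ONE, S1) + BinaryCrossEntropy(S_ZERO, S2)
        disc_loss = (loss_bce(tf.ones_like(S1), S1)
                     + loss_bce(tf.zeros_like(S2), S2))
    gen_opt.apply_gradients(zip(
        g_tape.gradient(gen_loss, generator.trainable_variables),
        generator.trainable_variables))
    disc_opt.apply_gradients(zip(
        d_tape.gradient(disc_loss, discriminator.trainable_variables),
        discriminator.trainable_variables))

The epoch/batch loop above then simply calls train_step(Ps, Hs) for each of the 1000 batches in each of the 50 epochs.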
Step 4), verifying a handwriting stylized neural network model: inputting the verification data set constructed in the step 2B) into the neural network model trained in the step 3), manually and visually comparing style differences between an output picture of the handwriting stylization generation network and an artificial handwriting picture, and repeating the steps 1), 2), 3) and 4) (properly trimming the hyper-parameters of the network model, the scale of the training data set and the parameters of an optimizer when the style differences are large) until the styles of the output picture of the handwriting stylization generation network and the artificial handwriting picture are similar, so as to obtain a handwriting style generation network meeting requirements;
in this embodiment, the print images of the two characters "medium" and "country" in the verification data set are input into the trained generation network to obtain the generated handwritten stylized image (as shown in fig. 9), and the generated handwritten stylized image and the artificial handwritten image written in zhang san are observed and compared manually. Through comparison, the generated handwriting stylized picture is found to have a three-page handwriting style basically, and the generated network can meet the requirements.
step 5) applying the handwriting stylization neural network model: inputting the printed picture of any character into the handwriting stylization generation network obtained in step 4); the output picture of the network is the handwriting-stylized picture of that character; end.
In this embodiment, Zhang San receives the trained generation network; whenever character pictures in his own handwriting style are needed, a printed picture of each character is first rendered with a computer font and then input into the generation network, which outputs the handwriting-stylized picture of the corresponding character; end.
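This application step reduces to rendering a printed picture of a character and running it through the trained generator. In the sketch below, the font file path, the glyph size and position, and the 0..1 pixel scaling are all assumptions:

import numpy as np
from PIL import Image, ImageDraw, ImageFont

def stylize(ch, generator, font_path='simsun.ttc'):
    # Render ch as a 256x256 printed picture, then stylize it.
    img = Image.new('RGB', (256, 256), 'white')
    font = ImageFont.truetype(font_path, 220)
    ImageDraw.Draw(img).text((18, 18), ch, fill='black', font=font)
    x = np.asarray(img, dtype=np.float32)[None, ...] / 255.0   # (1, 256, 256, 3)
    y = generator.predict(x)[0]
    return Image.fromarray((np.clip(y, 0.0, 1.0) * 255).astype('uint8'))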
The handwriting stylization method for printed characters according to the invention achieves the following technical effect: by providing handwritten samples of only the commonly used radicals, a user can obtain highly personalized handwriting-stylized pictures for all characters.

Claims (1)

1. A handwriting stylization method for printed characters, characterized by comprising the following steps:
step 1) constructing a handwriting stylization neural network model: a model is built based on a generative adversarial network; the neural network model comprises a handwriting stylization generation network and a handwriting stylization discrimination network, wherein the generation network generates a handwriting-stylized character picture from an input printed character picture, and the discrimination network judges whether an input picture is a character picture with the specific handwriting style;
step 2) constructing a training data set and a verification data set for the handwriting stylization neural network model:
step 2A) constructing the training data set:
step 2A.1) collecting commonly used Chinese character radicals and, for each radical, obtaining a printed picture using a printing font and a corresponding handwritten picture by manual handwriting;
step 2A.2) constructing common Chinese character frame structure template pictures, including: a left-right structure template, a left-middle-right structure template, a top-bottom structure template, a top-middle-bottom structure template and an enclosing structure template;
step 2A.3) randomly selecting one of the template pictures prepared in step 2A.2) and making two copies of it, P and H; according to the number of regions in the template, randomly selecting the same number of radicals from those collected in step 2A.1); filling the printed pictures of the radicals into the region positions of P and the handwritten pictures of the same radicals into the corresponding region positions of H, so that the resulting picture pair (P, H) is one randomly generated training data record;
step 2A.4) repeating step 2A.3) to randomly generate the required number of training data records, which together form the training data set;
step 2B) constructing the verification data set:
step 2B.1) selecting a segment of text and, for each character, obtaining a printed picture using a printing font and a corresponding handwritten picture by manual handwriting;
step 2B.2) the printed picture of each character obtained in step 2B.1) and the corresponding handwritten picture form one verification data record, and all the verification data records form the verification data set;
step 3) training the handwriting stylization neural network model: constructing optimizers for the handwriting stylization neural network model constructed in step 1), setting the loss functions, the optimization algorithm and the training parameters, and training the neural network model with the training data set constructed in step 2A) to obtain a trained neural network model;
step 4) verifying the handwriting stylization neural network model: inputting the verification data set constructed in step 2B) into the neural network model trained in step 3), and manually comparing, by visual inspection, the style difference between the output pictures of the handwriting stylization generation network and the handwritten pictures; when the style difference is large, appropriately adjusting the hyper-parameters of the network model, the scale of the training data set and the parameters of the optimizers, and repeating steps 1), 2), 3) and 4) until the output pictures of the handwriting stylization generation network are similar in style to the handwritten pictures, thereby obtaining a handwriting style generation network that meets the requirements;
step 5) applying the handwriting stylization neural network model: inputting the printed picture of any character into the handwriting stylization generation network obtained in step 4); the output picture of the network is the handwriting-stylized picture of that character; end.
CN202011574519.2A 2020-12-28 2020-12-28 Handwriting stylization method for characters of printed body Pending CN112381910A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011574519.2A CN112381910A (en) 2020-12-28 2020-12-28 Handwriting stylization method for characters of printed body

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011574519.2A CN112381910A (en) 2020-12-28 2020-12-28 Handwriting stylization method for characters of printed body

Publications (1)

Publication Number Publication Date
CN112381910A true CN112381910A (en) 2021-02-19

Family

ID=74589974

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011574519.2A Pending CN112381910A (en) 2020-12-28 2020-12-28 Handwriting stylization method for characters of printed body

Country Status (1)

Country Link
CN (1) CN112381910A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113052143A (en) * 2021-04-26 2021-06-29 中国建设银行股份有限公司 Handwritten digit generation method and device
CN114970447A (en) * 2022-05-26 2022-08-30 华侨大学 Chinese character font conversion method, device, equipment and storage medium

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101853313A (en) * 2010-07-01 2010-10-06 无锡骏聿科技有限公司 Handwriting font object library generating method based on font categorization
CN104182520A (en) * 2014-08-26 2014-12-03 严永亮 Implementing method of automatic Chinese character generation through components
CN107644006A (en) * 2017-09-29 2018-01-30 北京大学 A kind of Chinese script character library automatic generation method based on deep neural network
CN109064522A (en) * 2018-08-03 2018-12-21 厦门大学 The Chinese character style generation method of confrontation network is generated based on condition
CN109165376A (en) * 2018-06-28 2019-01-08 西交利物浦大学 Style character generating method based on a small amount of sample
JP2019028094A (en) * 2017-07-25 2019-02-21 大日本印刷株式会社 Character generation device, program and character output device
CN110211032A (en) * 2019-06-06 2019-09-06 北大方正集团有限公司 Generation method, device and the readable storage medium storing program for executing of chinese character
CN110969681A (en) * 2019-11-29 2020-04-07 山东浪潮人工智能研究院有限公司 Method for generating handwriting characters based on GAN network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination