CN110969681B - Handwriting word generation method based on GAN network - Google Patents

Handwriting word generation method based on GAN network

Info

Publication number
CN110969681B
Authority
CN
China
Prior art keywords
handwriting
word
character
network
page
Prior art date
Legal status
Active
Application number
CN201911197267.3A
Other languages
Chinese (zh)
Other versions
CN110969681A (en)
Inventor
孙善宝
金长新
于�玲
谭强
徐驰
马辰
Current Assignee
Shandong Inspur Scientific Research Institute Co Ltd
Original Assignee
Shandong Inspur Scientific Research Institute Co Ltd
Priority date
Filing date
Publication date
Application filed by Shandong Inspur Scientific Research Institute Co Ltd
Priority to CN201911197267.3A
Publication of CN110969681A
Application granted
Publication of CN110969681B
Legal status: Active
Anticipated expiration


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 2D [Two Dimensional] image generation
    • G06T11/60 Editing figures and text; Combining figures or text
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/044 Recurrent networks, e.g. Hopfield networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The invention provides a handwriting word generation method based on a GAN network, belonging to the technical fields of handwriting generation, deep learning and neural networks. The method collects handwriting words as images, extracts features from the images, and generates handwriting word images conditioned on style and Chinese character content through a generative adversarial network. The whole GAN network model consists of a single-word generation network and a page text generation network. The generator and the discriminator of the GAN network are trained alternately to learn the single-word network, and an LSTM network learns the connections between single words, forming a final page handwriting word network model used to generate handwriting for specified text.

Description

Handwriting word generation method based on GAN network
Technical Field
The invention relates to handwriting generation, deep learning and neural network technologies, in particular to a handwriting word generation method based on a GAN network.
Background
A Generative Adversarial Network (GAN) is a deep learning model originally proposed by Ian Goodfellow, and is one of the most important approaches in recent years for unsupervised learning over complex distributions. A GAN produces high-quality outputs through the mutual game between two modules in the framework, a generator and a discriminator; the goal is to train a generative model that fits the real data distribution so well that the discriminative model cannot tell real samples from generated ones. The generative model simulates the distribution of the real data, while the discriminative model judges whether a sample is real or generated. The discriminator and the generator are trained in turn so that they compete with each other, sampling from a complex probability distribution until training of the neural networks is complete. GAN networks are now widely used in the field of image generation to generate corresponding images, and have become the most important framework for learning generative models of arbitrarily complex data distributions.
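For readers unfamiliar with this alternating game, a minimal, generic sketch of GAN training in PyTorch is given below. The generator, discriminator and real_loader objects are placeholders assumed only for illustration (they are not part of the patent), and the discriminator is assumed to output one logit per sample.

    import torch
    import torch.nn as nn

    def train_gan(generator, discriminator, real_loader, latent_dim=128, epochs=1):
        """Minimal alternating GAN training sketch (illustrative only)."""
        bce = nn.BCEWithLogitsLoss()
        opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
        opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

        for _ in range(epochs):
            for real in real_loader:              # real: a batch of real images
                z = torch.randn(real.size(0), latent_dim)
                fake = generator(z)

                # Discriminator step: distinguish real samples from generated ones.
                opt_d.zero_grad()
                d_loss = (bce(discriminator(real), torch.ones(real.size(0), 1))
                          + bce(discriminator(fake.detach()), torch.zeros(real.size(0), 1)))
                d_loss.backward()
                opt_d.step()

                # Generator step: try to make the discriminator accept generated samples.
                opt_g.zero_grad()
                g_loss = bce(discriminator(fake), torch.ones(real.size(0), 1))
                g_loss.backward()
                opt_g.step()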
Handwriting calligraphy is a unique artistic expression form of characters in China and in neighboring countries and regions deeply influenced by Chinese culture, and is a distinctive traditional art of Chinese characters. Calligraphy is written with a hard pen or a soft pen, and handwritten characters embody the essence of Chinese character culture. Handwritten characters vary in size and shape, so their staggered effect is difficult to reproduce with a computer font library, and the art can only be passed down through handwriting itself. With the popularization of computers and the development of the mobile internet, people increasingly view traditional calligraphy and handwriting through electronic devices: on the one hand they appreciate traditional calligraphy on an electronic screen, and on the other hand they also hope to read personalized handwritten text articles. In this context, how to effectively generate personalized calligraphy character images with a GAN network becomes a problem to be solved.
Disclosure of Invention
To solve the above technical problem, the invention provides a handwriting word generation method based on a GAN (Generative Adversarial Network). Handwriting words are collected as images, features are extracted from the images, and single-word image generation conditioned on style and Chinese character content is realized through the GAN. The relevance between written words is fully considered: the relationships among single words are learned through extensive training of an LSTM (Long Short-Term Memory) network, forming a final handwriting word network model for generating handwriting of specified text. In addition, by collecting a user's personalized text content and training on the basis of the existing models, a personalized handwriting text generation model for that user can be formed.
The technical scheme of the invention is as follows:
a handwriting word generation method based on GAN network includes collecting a large number of handwriting words by a high-definition image acquisition device, preprocessing images to form independent word images, and recording the line text sequence of the words; the whole GAN network model consists of a single word generation network and a page character generation network, wherein the single word generation network consists of a style characteristic extractor Es, a semantic characteristic extractor Ec, a single word handwriting generator Gs and a single word handwriting discriminator Ds, and the page character generation network consists of a page character generator Gw and a page handwriting discriminator Dw; in the training process, firstly training a network composed of a style feature extractor Es, a semantic feature extractor Ec and a single-word handwriting generator Gs, and then alternately training the network composed of the style feature extractor Es, the semantic feature extractor Ec, the single-word handwriting generator Gs and the single-word handwriting discriminator Ds to finally form a single-word handwriting generation model; after the training of the single-word handwriting generator is completed, the page handwriting character generation model is finally formed by alternately training the page character generator Gw and the page handwriting discriminator Dw and is used for generating the appointed character handwriting. In addition, through the personalized text content collection of the user, training is performed by utilizing the existing model base, and a personalized handwriting text generation model of the user can be formed. Wherein, the liquid crystal display device comprises a liquid crystal display device,
the text pictures are real handwriting characters collected by the high-definition image acquisition device; the images are preprocessed into single-word images of consistent size, and the order of the text is recorded. The single-word generation network is a GAN: feature vectors are formed by extracting features from the word images, Gs generates calligraphy word images by minimizing the reconstruction error of the word images, and the handwriting discriminator Ds cannot distinguish whether a word image was generated by Gs or actually acquired. The style feature extractor Es is a CNN that extracts handwriting style features from the word image, which may be the handwriting font or a personalized writing manner, and forms a feature vector from the result. The semantic feature extractor Ec is a CNN that extracts the semantic content of the Chinese character in the word image and forms a semantic feature vector. The single-word handwriting generator Gs is a neural network that generates single-word pictures from the feature vectors. The single-word handwriting discriminator Ds is a neural network that judges whether an input picture is real, whether it conforms to the handwriting style, and whether it is consistent with the input Chinese character semantics. The page text generation network consists of the page text generator Gw and the page handwriting discriminator Dw and is responsible for generating a page of text. The page text generator Gw consists of the single-word generator Gs and a single-word combiner Gl, and generates a page image according to the text content and style. The single-word combiner is an LSTM (long short-term memory) sequence network responsible for combining single words according to the line-text order. The page handwriting discriminator Dw is a discriminative network used to judge whether the generated page text is real, conforms to the handwriting style, and is consistent with the semantics of the input page of Chinese characters. The user-personalized handwriting word model performs targeted training and parameter adjustment on the basis of the existing model by collecting the writer's handwriting data, generating a word generation model unique to that writer.
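As a rough illustration of this composition (a sketch under assumed shapes and module names, not the patented implementation), the page text generator Gw can be viewed as the single-word generator Gs followed by an LSTM combiner Gl that consumes the generated word images in line-text order:

    import torch
    import torch.nn as nn

    class PageGeneratorSketch(nn.Module):
        """Illustrative sketch of Gw: single-word generator Gs plus LSTM combiner Gl.
        The word size, hidden size and the patch-refinement step are assumptions."""
        def __init__(self, single_word_generator, word_size=64, hidden_size=256):
            super().__init__()
            self.gs = single_word_generator                  # Gs: feature vector -> word image
            self.gl = nn.LSTM(word_size * word_size, hidden_size, batch_first=True)
            self.to_patch = nn.Linear(hidden_size, word_size * word_size)
            self.word_size = word_size

        def forward(self, word_features):
            # word_features: (batch, num_chars, feat_dim), one feature vector per character
            b, t, d = word_features.shape
            words = self.gs(word_features.reshape(b * t, d))   # (b*t, 1, word_size, word_size)
            flat = words.reshape(b, t, -1)                     # flatten each generated word image
            context, _ = self.gl(flat)                         # model the connections between words
            patches = self.to_patch(context)                   # context-adjusted word patches
            return patches.reshape(b, t, 1, self.word_size, self.word_size)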
Training of the handwriting generation network model comprises the following steps:
Step 101, collecting a large number of handwriting characters through the high-definition image acquisition device to form image data, preprocessing the images into single-character images of consistent size, annotating the single-character images with the character semantics and the handwriting style, and recording the line-text order of the characters;
Step 102, designing the network structures and objective functions of the single-word generation network and the page text generation network;
Step 103, starting to train the single-word generation network, and initializing the style feature extractor Es, the semantic feature extractor Ec, the single-word handwriting generator Gs and the single-word handwriting discriminator Ds;
Step 104, sampling Img_i multiple times from the single-word image set acquired in step 101; extracting features through the style feature extractor Es and the semantic feature extractor Ec and adding random content to form a feature vector z_i; then inputting the feature vector z_i into the single-word handwriting generator Gs to generate a single-word picture GenImg_i;
Step 106, selecting a distribution P (for example a normal distribution), sampling feature vectors from it multiple times and adding random content to form a feature vector pz_i; then inputting the feature vector pz_i into the single-word handwriting generator Gs to generate a single-word picture PGenImg_i;
Step 107, updating the parameters of the style feature extractor Es and the semantic feature extractor Ec so that the reconstruction error between the real image and the generated image is smaller than a threshold value, and the distribution of the feature vectors z generated by Es and Ec is close to the distribution P selected in step 106 (for example, by computing the KL divergence of the two feature vectors);
Step 108, updating the parameters of the single-word handwriting generator Gs so that the reconstruction error between the real image and the generated image is smaller than a threshold value, while fooling the single-word handwriting discriminator Ds so that Ds cannot distinguish a real picture from a picture generated by Gs and the generated picture satisfies the handwriting style and semantic content of the single word;
Step 109, updating the parameters of the single-word handwriting discriminator Ds so that it can distinguish the real picture Img from the generated pictures GenImg and PGenImg;
Step 110, alternately training to finally form the single word generation network model;
Step 111, starting to train the page text generation network, and initializing the page text generator Gw and the page handwriting discriminator Dw;
Step 112, processing the image PageImage of a page of text to obtain the handwriting style and the semantic content of each single character; inputting the characters one by one, according to the content of the page, into the single-word handwriting generator Gs to generate single-word pictures; inputting the single-word pictures into the single-word combiner LSTM network in line-text order; and finally generating the handwriting picture GenPageImage of the page of text;
Step 113, updating the parameters of the single-word combiner LSTM network Gl so that the page handwriting discriminator Dw cannot distinguish the real image PageImage from an image generated by the page text generator Gw, and the generated image satisfies the handwriting style and semantic content of the single words;
Step 114, updating the parameters of the page handwriting discriminator Dw so that it can distinguish the real picture PageImage from the generated picture GenPageImage;
Step 115, alternately training to finally form the page text generation network model.
The calligraphic text generation comprises the following steps:
step 201, paging the text content of the handwriting image to be generated to form single-word semantic feature vectors, and recording the sequence of the text;
step 202, setting a target handwriting style vector;
step 203, (optional) adopting a user personalized handwriting word model, performing targeted training based on the existing model by collecting handwriting word data of the writer, adjusting parameters, and generating a unique word generation model Gw of the writer;
Step 204, adding the single-word semantic feature vector, the target calligraphy style vector and a random vector together to form a single-word feature vector, and inputting the single-word feature vectors into the page text generation network Gw in line-text order;
Step 205, the page text generation network Gw generating handwriting pictures from the input vectors;
Step 206, converting all the text content to be generated into handwriting images to form multi-page content.
The invention has the following beneficial effects:
The method is used for generating handwriting for specified text. In addition, by collecting a user's personalized text content and training on the basis of the existing models, a personalized handwriting text generation model for that user can be formed, producing personalized handwriting image text pictures.
Drawings
FIG. 1 is a schematic diagram of a handwriting generation network;
FIG. 2 is a flow chart of training a handwriting generating network model;
fig. 3 is a flow chart of handwriting generation.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are some embodiments of the present invention, but not all embodiments, and all other embodiments obtained by those skilled in the art without making any inventive effort based on the embodiments of the present invention are within the scope of protection of the present invention.
As shown in fig. 1, a large number of handwriting characters are collected with a high-definition image acquisition device, the images are preprocessed into independent single-word images, and the line-text order of the characters is recorded. The whole GAN network model consists of a single-word generation network and a page text generation network: the single-word generation network consists of a style feature extractor Es, a semantic feature extractor Ec, a single-word handwriting generator Gs and a single-word handwriting discriminator Ds, and the page text generation network consists of a page text generator Gw and a page handwriting discriminator Dw. In the training process, the network composed of the style feature extractor Es, the semantic feature extractor Ec and the single-word handwriting generator Gs is trained first, and then the network composed of Es, Ec, Gs and the single-word handwriting discriminator Ds is trained alternately, finally forming a single-word handwriting generation model. After the single-word handwriting generator is trained, the page text generator Gw and the page handwriting discriminator Dw are trained alternately, finally forming a page handwriting text generation model used to generate handwriting for specified text. In addition, by collecting a user's personalized text content and training on the basis of the existing models, a personalized handwriting text generation model for that user can be formed.
Wherein:
the text pictures are real handwriting characters collected by the high-definition image acquisition device; the images are preprocessed into single-word images of consistent size, and the order of the text is recorded. The single-word generation network is a GAN: feature vectors are formed by extracting features from the word images, Gs generates calligraphy word images by minimizing the reconstruction error of the word images, and the handwriting discriminator Ds cannot distinguish whether a word image was generated by Gs or actually acquired. The style feature extractor Es is a CNN that extracts handwriting style features from the word image, which may be the handwriting font or a personalized writing manner, and forms a feature vector from the result. The semantic feature extractor Ec is a CNN that extracts the semantic content of the Chinese character in the word image and forms a semantic feature vector. The single-word handwriting generator Gs is a neural network that generates single-word pictures from the feature vectors. The single-word handwriting discriminator Ds is a neural network that judges whether an input picture is real, whether it conforms to the handwriting style, and whether it is consistent with the input Chinese character semantics. The page text generation network consists of the page text generator Gw and the page handwriting discriminator Dw and is responsible for generating a page of text. The page text generator Gw consists of the single-word generator Gs and a single-word combiner Gl, and generates a page image according to the text content and style. The single-word combiner is an LSTM (long short-term memory) sequence network responsible for combining single words according to the line-text order. The page handwriting discriminator Dw is a discriminative network used to judge whether the generated page text is real, conforms to the handwriting style, and is consistent with the semantics of the input page of Chinese characters. The user-personalized handwriting word model performs targeted training and parameter adjustment on the basis of the existing model by collecting the writer's handwriting data, generating a word generation model unique to that writer.
For convenience of description, the following process uses a high-definition digital camera as the acquisition device; the style feature extractor Es and the semantic feature extractor Ec may use a fully convolutional network, and the main network structures of the handwriting generators, handwriting discriminators and the like may use CNN networks. Those skilled in the art will appreciate that, besides the networks above, configurations according to embodiments of the present invention can also be applied with other approaches.
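A minimal sketch of such modules is shown below, assuming 64x64 single-word images, 128-dimensional style and semantic features and a 32-dimensional random-content vector; the layer counts and channel sizes are illustrative assumptions, not values fixed by the patent.

    import torch
    import torch.nn as nn

    class FeatureExtractorSketch(nn.Module):
        """Illustrative fully convolutional extractor, usable for Es (style) or Ec (semantics)."""
        def __init__(self, out_dim=128):
            super().__init__()
            self.body = nn.Sequential(
                nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(64, out_dim, 3, stride=2, padding=1),
                nn.AdaptiveAvgPool2d(1),
            )

        def forward(self, img):                       # img: (batch, 1, 64, 64)
            return self.body(img).flatten(1)          # (batch, out_dim)

    class WordGeneratorSketch(nn.Module):
        """Illustrative single-word generator Gs: feature vector -> 64x64 word image."""
        def __init__(self, in_dim=288):               # 128 style + 128 semantic + 32 random (assumed)
            super().__init__()
            self.fc = nn.Linear(in_dim, 256 * 4 * 4)
            self.body = nn.Sequential(
                nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.ReLU(),
                nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.ReLU(),
                nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.ReLU(),
                nn.ConvTranspose2d(32, 1, 4, 2, 1), nn.Tanh(),
            )

        def forward(self, z):                         # z: (batch, in_dim)
            return self.body(self.fc(z).view(-1, 256, 4, 4))

The single-word handwriting discriminator Ds can be built analogously to the extractor, ending in a single logit per image.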
1. Training of handwriting generation network model
As shown in fig. 2, the training of the handwriting-generated network model includes the steps of:
Step 101, collecting a large number of handwriting characters through the high-definition image acquisition device to form image data, preprocessing the images into single-character images of consistent size, annotating the single-character images with the character semantics and the handwriting style, and recording the line-text order of the characters;
Step 102, designing the network structures and objective functions of the single-word generation network and the page text generation network;
Step 103, starting to train the single-word generation network, and initializing the style feature extractor Es, the semantic feature extractor Ec, the single-word handwriting generator Gs and the single-word handwriting discriminator Ds;
Step 104, sampling Img_i multiple times from the single-word image set acquired in step 101; extracting features through the style feature extractor Es and the semantic feature extractor Ec and adding random content to form a feature vector z_i; then inputting the feature vector z_i into the single-word handwriting generator Gs to generate a single-word picture GenImg_i;
Step 106, selecting a distribution P (for example a normal distribution), sampling feature vectors from it multiple times and adding random content to form a feature vector pz_i; then inputting the feature vector pz_i into the single-word handwriting generator Gs to generate a single-word picture PGenImg_i;
Step 107, updating the parameters of the style feature extractor Es and the semantic feature extractor Ec so that the reconstruction error between the real image and the generated image (for example, the L1 distance between the images) is less than 0.01, and the distribution of the feature vectors z generated by Es and Ec is close to the distribution P selected in step 106 (for example, by computing the KL divergence of the two feature vectors);
Step 108, updating the parameters of the single-word handwriting generator Gs so that the reconstruction error between the real image and the generated image (for example, the L1 distance between the images) is less than 0.01, while fooling the single-word handwriting discriminator Ds so that Ds cannot distinguish a real picture from a picture generated by Gs and the generated picture satisfies the handwriting style and semantic content of the single word;
Step 109, updating the parameters of the single-word handwriting discriminator Ds so that it can distinguish the real picture Img from the generated pictures GenImg and PGenImg;
Step 110, alternately training to finally form the single word generation network model;
Step 111, starting to train the page text generation network, and initializing the page text generator Gw and the page handwriting discriminator Dw;
Step 112, processing the image PageImage of a page of text to obtain the handwriting style and the semantic content of each single character; inputting the characters one by one, according to the content of the page, into the single-word handwriting generator Gs to generate single-word pictures; inputting the single-word pictures into the single-word combiner LSTM network in line-text order; and finally generating the handwriting picture GenPageImage of the page of text;
Step 113, updating the parameters of the single-word combiner LSTM network Gl so that the page handwriting discriminator Dw cannot distinguish the real image PageImage from an image generated by the page text generator Gw, and the generated image satisfies the handwriting style and semantic content of the single words;
Step 114, updating the parameters of the page handwriting discriminator Dw so that it can distinguish the real picture PageImage from the generated picture GenPageImage;
Step 115, alternately training to finally form the page text generation network model.
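The alternating updates of steps 104 to 109 can be sketched as follows. This is an illustrative sketch only: the loss weights, the standard-normal choice of P, and the moment-matching term used here as a stand-in for the KL-divergence criterion of step 107 are assumptions, and es, ec, gs, ds and the optimizers are assumed to be defined as in the module sketch above.

    import torch
    import torch.nn as nn

    def single_word_training_step(es, ec, gs, ds, real_img,
                                  opt_enc, opt_gen, opt_dis, noise_dim=32):
        """One illustrative alternating update for steps 104-109.
        real_img: a batch (n > 1) of single-word images, shape (n, 1, 64, 64)."""
        l1, bce = nn.L1Loss(), nn.BCEWithLogitsLoss()
        n = real_img.size(0)

        def encode():
            # Step 104: style features + semantic features + random content.
            noise = torch.randn(n, noise_dim, device=real_img.device)
            return torch.cat([es(real_img), ec(real_img), noise], dim=1)

        # Step 107: update Es/Ec -- small reconstruction error and z close to the prior P.
        z = encode()
        recon = l1(gs(z), real_img)
        prior = z.mean(0).pow(2).mean() + (z.std(0) - 1.0).pow(2).mean()  # crude stand-in for KL
        opt_enc.zero_grad()
        (recon + prior).backward()
        opt_enc.step()

        # Step 108: update Gs -- reconstruct the real image and fool Ds.
        z = encode().detach()
        gen = gs(z)
        g_loss = l1(gen, real_img) + bce(ds(gen), torch.ones(n, 1))
        opt_gen.zero_grad()
        g_loss.backward()
        opt_gen.step()

        # Steps 106 and 109: sample pz from the prior P, then update Ds so it separates
        # the real picture Img from the generated pictures GenImg and PGenImg.
        pz = torch.randn_like(z)
        d_loss = (bce(ds(real_img), torch.ones(n, 1))
                  + bce(ds(gs(z).detach()), torch.zeros(n, 1))
                  + bce(ds(gs(pz).detach()), torch.zeros(n, 1)))
        opt_dis.zero_grad()
        d_loss.backward()
        opt_dis.step()
        return recon.item()

A caller could monitor the returned reconstruction error against the 0.01 threshold mentioned in steps 107 and 108 to decide when to stop the alternating training.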
2. Handwriting word generation
As shown in fig. 3, the calligraphic text generation includes the steps of:
step 201, paging the text content of the handwriting image to be generated to form single-word semantic feature vectors, and recording the sequence of the text;
step 202, setting a target handwriting style vector;
step 203, (optional) adopting a user personalized handwriting word model, performing targeted training based on the existing model by collecting handwriting word data of the writer, adjusting parameters, and generating a unique word generation model Gw of the writer;
Step 204, adding the single-word semantic feature vector, the target calligraphy style vector and a random vector together to form a single-word feature vector, and inputting the single-word feature vectors into the page text generation network Gw in line-text order;
Step 205, the page text generation network Gw generating handwriting pictures from the input vectors;
Step 206, converting all the text content to be generated into handwriting images to form multi-page content.
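Steps 201 to 206 can be sketched as follows; gw, the per-character semantic vectors, the style vector dimensions and the page split are assumptions made only for illustration.

    import torch

    def generate_pages(gw, semantic_vectors, style_vector, noise_dim=32):
        """Illustrative sketch of steps 201-206: build one feature vector per character
        (semantics + target style + random content) and feed them to the page network Gw
        in line-text order, one page at a time."""
        pages = []
        for page in semantic_vectors:               # page: list of per-character 1D tensors
            feats = []
            for sem in page:
                rand = torch.randn(noise_dim)
                feats.append(torch.cat([sem, style_vector, rand]))
            seq = torch.stack(feats).unsqueeze(0)   # (1, num_chars, feat_dim), line order
            with torch.no_grad():
                pages.append(gw(seq))               # handwriting image(s) for this page
        return pages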
The above examples are only one of the specific embodiments of the present invention, and the ordinary changes and substitutions made by those skilled in the art within the scope of the technical solution of the present invention should be included in the scope of the present invention.
The foregoing description is only illustrative of the preferred embodiments of the present invention, and is not intended to limit the scope of the present invention. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present invention are included in the protection scope of the present invention.

Claims (4)

1. A handwriting word generation method based on GAN network is characterized in that,
collecting handwriting characters as images, extracting features from the images, and generating handwriting word images based on style and Chinese character content through a generative adversarial network;
the whole GAN network model consists of a single word generation network and a page word generation network,
the single word generation network consists of a style characteristic extractor, a semantic characteristic extractor, a single word handwriting generator and a single word handwriting discriminator,
the page character generating network consists of a page character generator and a page handwriting discriminator;
the method comprises the steps of completing learning of a single word network through a GAN (gate-word network) alternating training generator and a discriminator, and learning the connection between single words through an LSTM (least squares) network to form a final page handwriting word network model for generating appointed word handwriting;
in addition, through the collection of the personalized text content of the user, training is carried out by utilizing the existing model base, a personalized handwriting text generation model of the user is formed, and personalized handwriting image text pictures are formed;
collecting handwriting characters through a high-definition image acquisition device, preprocessing the images to form independent character images, and recording the line character sequence of the characters;
in the training process, firstly training a network consisting of a style characteristic extractor, a semantic characteristic extractor and a single-word handwriting generator, and then alternately training the network consisting of the style characteristic extractor, the semantic characteristic extractor, the single-word handwriting generator and the single-word handwriting discriminator to finally form a single-word handwriting generation model;
after the training of the single-word handwriting generator is completed, the page handwriting character generation model is finally formed by alternately training the page character generator and the page handwriting discriminator and is used for generating appointed character handwriting;
the character pictures are real handwriting characters collected by a high-definition image collecting device, the images are preprocessed to form single-character images with consistent sizes, and the sequence of the characters is recorded;
the single-word generation network is a GAN network; feature vectors are formed by extracting features from the word images, and by minimizing the reconstruction error of the word images the single-word handwriting generator generates handwriting word images such that the single-word handwriting discriminator cannot distinguish whether a word image was generated by the single-word handwriting generator or actually collected;
the style feature extractor is a CNN neural network, extracts handwriting style features in the text images, is a handwriting font or personalized writing mode, and forms feature vectors from the extraction results;
the semantic feature extractor is a CNN neural network, extracts the semantic content of Chinese characters in the text image and forms semantic feature vectors;
the single-word handwriting generator is a neural network and generates single-word text pictures according to the feature vectors;
the single-word handwriting discriminator is a neural network, judges whether the input picture is real or not, accords with the handwriting word style or not, and is consistent with the input Chinese character semantics;
the page character generation network consists of a page character generator and a page handwriting discriminator and is responsible for generating a page character;
the page character generator consists of a single character generator and a single character combiner, and generates a page image according to the character content and the style;
the single-word combiner is an LSTM (long short-term memory) sequence network responsible for combining single words according to the line-text order;
the page handwriting discriminator is a discriminative network for judging whether the generated page text is real, conforms to the handwriting style, and is consistent with the semantics of the input page of Chinese characters.
2. The method of claim 1, wherein
the training of the handwriting generation network model comprises the following steps:
step 101, collecting handwriting characters through the high-definition image acquisition device to form image data, preprocessing the images into single-character images of consistent size, annotating the single-character images with the character semantics and the handwriting style, and recording the line-text order of the characters;
step 102, designing the network structures and objective functions of the single-word generation network and the page text generation network;
step 103, training the single-word generation network, and initializing the style feature extractor Es, the semantic feature extractor Ec, the single-word handwriting generator Gs and the single-word handwriting discriminator Ds;
step 104, sampling Img_i more than once from the single-word image set acquired in step 101; extracting features through the style feature extractor Es and the semantic feature extractor Ec and adding random content to form a feature vector z_i; then inputting the feature vector z_i into the single-word handwriting generator Gs to generate a single-word picture GenImg_i;
step 106, selecting a distribution P and sampling feature vectors from it more than once, adding random content to form a feature vector pz_i; then inputting the feature vector pz_i into the single-word handwriting generator Gs to generate a single-word picture PGenImg_i;
step 107, updating the parameters of the style feature extractor Es and the semantic feature extractor Ec so that the reconstruction error between the real image and the generated image is smaller than a threshold value, and the distribution of the feature vectors z generated by Es and Ec is close to the distribution P selected in step 106;
step 108, updating the parameters of the single-word handwriting generator Gs so that the reconstruction error between the real image and the generated image is smaller than a threshold value, while fooling the single-word handwriting discriminator Ds so that Ds cannot distinguish a real picture from a picture generated by Gs and the generated picture satisfies the handwriting style and semantic content of the single word;
step 109, updating the parameters of the single-word handwriting discriminator Ds so that it can distinguish the real picture Img from the generated pictures GenImg and PGenImg;
step 110, alternately training to finally form a single word generation network model;
step 111, starting training a page character generation network, and initializing a page character handwriting generator Gw and a page character handwriting discriminator Dw;
step 112, processing the image PageImage of a page of text to obtain the handwriting style and the semantic content of each single character; inputting the characters one by one, according to the content of the page, into the single-word handwriting generator Gs to generate single-word pictures; inputting the single-word pictures into the single-word combiner LSTM network in line-text order; and finally generating the handwriting picture GenPageImage of the page of text;
step 113, updating the parameters of the single-word combiner LSTM network Gl so that the page handwriting discriminator Dw cannot distinguish the real picture PageImage from a picture generated by the page text generator Gw, and the generated picture satisfies the handwriting style and semantic content of the single words;
step 114, updating the parameters of the page handwriting discriminator Dw so that it can distinguish the real picture PageImage from the generated picture GenPageImage;
step 115, alternately training to finally form the page text generation network model.
3. The method of claim 1, wherein
the calligraphic text generation comprises the following steps:
step 201, paging the text content of the handwriting image to be generated to form single-word semantic feature vectors, and recording the sequence of the text;
step 202, setting a target handwriting style vector;
step 203, a single word semantic feature vector, a target calligraphic style vector and a random vector are added to form a single word feature vector, and the single word feature vector is input into a page character generation network Gw according to a line character sequence;
step 204, the page character generation network Gw generating handwriting pictures from the input vectors;
step 205, converting all the text contents to be generated into handwriting images to form multi-page contents.
4. The method of claim 3, wherein
a user-personalized handwriting word model is adopted: the writer's handwriting data is collected, targeted training is carried out on the basis of the existing model, the parameters are adjusted, and a word generation model Gw unique to that writer is generated.
CN201911197267.3A 2019-11-29 2019-11-29 Handwriting word generation method based on GAN network Active CN110969681B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911197267.3A CN110969681B (en) 2019-11-29 2019-11-29 Handwriting word generation method based on GAN network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911197267.3A CN110969681B (en) 2019-11-29 2019-11-29 Handwriting word generation method based on GAN network

Publications (2)

Publication Number Publication Date
CN110969681A CN110969681A (en) 2020-04-07
CN110969681B (en) 2023-08-29

Family

ID=70032131

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911197267.3A Active CN110969681B (en) 2019-11-29 2019-11-29 Handwriting word generation method based on GAN network

Country Status (1)

Country Link
CN (1) CN110969681B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112132916B (en) * 2020-08-18 2023-11-14 浙江大学 Seal cutting work customized design generating device for generating countermeasure network
CN111898373B (en) * 2020-08-21 2023-09-26 中国工商银行股份有限公司 Handwriting date sample generation method and device
CN112183027B (en) * 2020-08-31 2022-09-06 同济大学 Artificial intelligence based artwork generation system and method
CN113326009B (en) * 2021-03-05 2022-05-31 临沂大学 Paper calligraphy work copying method and device
CN113807430B (en) * 2021-09-15 2023-08-08 网易(杭州)网络有限公司 Model training method, device, computer equipment and storage medium

Citations (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106326907A (en) * 2015-06-23 2017-01-11 王东锐 Handwriting automatic evaluation method and system
CN106570456A (en) * 2016-10-13 2017-04-19 华南理工大学 Handwritten Chinese character recognition method based on full-convolution recursive network
CN107578017A (en) * 2017-09-08 2018-01-12 百度在线网络技术(北京)有限公司 Method and apparatus for generating image
CN107644014A (en) * 2017-09-25 2018-01-30 南京安链数据科技有限公司 A kind of name entity recognition method based on two-way LSTM and CRF
CN107644006A (en) * 2017-09-29 2018-01-30 北京大学 A kind of Chinese script character library automatic generation method based on deep neural network
CN108170649A (en) * 2018-01-26 2018-06-15 广东工业大学 A kind of Hanzi font library generation method and device based on DCGAN depth networks
CN108268444A (en) * 2018-01-10 2018-07-10 南京邮电大学 A kind of Chinese word cutting method based on two-way LSTM, CNN and CRF
CN108304357A (en) * 2018-01-31 2018-07-20 北京大学 A kind of Chinese word library automatic generation method based on font manifold
CN109086408A (en) * 2018-08-02 2018-12-25 腾讯科技(深圳)有限公司 Document creation method, device, electronic equipment and computer-readable medium
CN109241904A (en) * 2018-08-31 2019-01-18 平安科技(深圳)有限公司 Text region model training, character recognition method, device, equipment and medium
CN109408776A (en) * 2018-10-09 2019-03-01 西华大学 A kind of calligraphy font automatic generating calculation based on production confrontation network
CN109492764A (en) * 2018-10-24 2019-03-19 平安科技(深圳)有限公司 Training method, relevant device and the medium of production confrontation network
CN109543165A (en) * 2018-11-21 2019-03-29 中国人民解放军战略支援部队信息工程大学 Document creation method and device based on cyclic convolution attention model
CN109635883A (en) * 2018-11-19 2019-04-16 北京大学 The Chinese word library generation method of the structural information guidance of network is stacked based on depth
CN110162751A (en) * 2019-05-13 2019-08-23 百度在线网络技术(北京)有限公司 Text generator training method and text generator training system
CN110196972A (en) * 2019-04-24 2019-09-03 北京奇艺世纪科技有限公司 Official documents and correspondence generation method, device and computer readable storage medium
CN110211032A (en) * 2019-06-06 2019-09-06 北大方正集团有限公司 Generation method, device and the readable storage medium storing program for executing of chinese character
CN110232337A (en) * 2019-05-29 2019-09-13 中国科学院自动化研究所 Chinese character image stroke extraction based on full convolutional neural networks, system

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040253565A1 (en) * 1997-07-10 2004-12-16 Kyu Jin Park Caption type language learning system using caption type learning terminal and communication network


Also Published As

Publication number Publication date
CN110969681A (en) 2020-04-07

Similar Documents

Publication Publication Date Title
CN110969681B (en) Handwriting word generation method based on GAN network
CN110750959B (en) Text information processing method, model training method and related device
CN111582241B (en) Video subtitle recognition method, device, equipment and storage medium
CN113254599B (en) Multi-label microblog text classification method based on semi-supervised learning
Lian et al. EasyFont: a style learning-based system to easily build your large-scale handwriting fonts
CN109614944A (en) A kind of method for identifying mathematical formula, device, equipment and readable storage medium storing program for executing
CN108090400A (en) A kind of method and apparatus of image text identification
CN111507330B (en) Problem recognition method and device, electronic equipment and storage medium
CN114255159A (en) Handwritten text image generation method and device, electronic equipment and storage medium
CN107357785A (en) Theme feature word abstracting method and system, feeling polarities determination methods and system
CN109598185A (en) Image recognition interpretation method, device, equipment and readable storage medium storing program for executing
CN111444905B (en) Image recognition method and related device based on artificial intelligence
CN101339703A (en) Character calligraph exercising method based on computer
CN110427864B (en) Image processing method and device and electronic equipment
CN110659702A (en) Calligraphy copybook evaluation system and method based on generative confrontation network model
CN107463624A (en) A kind of method and system that city interest domain identification is carried out based on social media data
CN112839185B (en) Method, apparatus, device and medium for processing image
CN113627260A (en) Method, system and computing device for recognizing stroke order of handwritten Chinese characters
CN116469111B (en) Character generation model training method and target character generation method
CN113239967A (en) Character recognition model training method, recognition method, related equipment and storage medium
CN112084788A (en) Automatic marking method and system for implicit emotional tendency of image captions
Zhu et al. How to Evaluate Semantic Communications for Images with ViTScore Metric?
CN112784579B (en) Reading understanding choice question answering method based on data enhancement
Sun et al. A mongolian handwritten word images generation approach based on generative adversarial networks
CN113822521A (en) Method and device for detecting quality of question library questions and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20230801

Address after: 250100 building S02, No. 1036, Langchao Road, high tech Zone, Jinan City, Shandong Province

Applicant after: Shandong Inspur Scientific Research Institute Co.,Ltd.

Address before: North 6th floor, S05 building, Langchao Science Park, 1036 Langchao Road, hi tech Zone, Jinan City, Shandong Province, 250100

Applicant before: SHANDONG INSPUR ARTIFICIAL INTELLIGENCE RESEARCH INSTITUTE Co.,Ltd.

GR01 Patent grant
EE01 Entry into force of recordation of patent licensing contract

Application publication date: 20200407

Assignee: Shandong Inspur Digital Business Technology Co.,Ltd.

Assignor: Shandong Inspur Scientific Research Institute Co.,Ltd.

Contract record no.: X2023980053547

Denomination of invention: A Handwritten Calligraphy Text Generation Method Based on GAN Network

Granted publication date: 20230829

License type: Exclusive License

Record date: 20231226