CN109408776A - A calligraphy font automatic generation algorithm based on generative adversarial networks - Google Patents

A calligraphy font automatic generation algorithm based on generative adversarial networks

Info

Publication number
CN109408776A
Authority
CN
China
Prior art keywords
image
network
loss
discriminator
input
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201811172321.4A
Other languages
Chinese (zh)
Inventor
彭宏
张国洲
陈茹
王军
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xihua University
Original Assignee
Xihua University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xihua University
Priority to CN201811172321.4A
Publication of CN109408776A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00 Handling natural language data
    • G06F40/10 Text processing
    • G06F40/103 Formatting, i.e. changing of presentation of documents
    • G06F40/109 Font handling; Temporal or kinetic typography
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • General Engineering & Computer Science (AREA)
  • Biomedical Technology (AREA)
  • Evolutionary Computation (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Biophysics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Controls And Circuits For Display Device (AREA)

Abstract

A calligraphy font automatic generation algorithm based on generative adversarial networks. First, two generative adversarial networks are built; their generators are G and F respectively, and the discriminators of G and F are Dy and Dx. Minibatches are then extracted from the printed-font dataset X and the calligraphy-font dataset Y and fed into G and F respectively to generate the corresponding calligraphy font image G(x) and printed-font image F(y); Dy and Dx estimate the probability that each generated image is real, the loss functions are calculated, and the network parameters are optimized. Next, G(x) is input into F and F(y) is input into G; after convolution and deconvolution the generated F(G(x)) and G(F(y)) are output, their authenticity is discriminated again, the losses are calculated, and the parameters are adjusted. Fourth, these steps are repeated until the networks converge or the number of iterations is reached. With the algorithm of the invention, any Chinese character can be input into the trained neural network and output as a character with the desired calligraphic style.

Description

A calligraphy font automatic generation algorithm based on generative adversarial networks
Technical field
The invention belongs to the field of deep learning, and in particular relates to a calligraphy font automatic generation algorithm based on generative adversarial networks.
Background art
A Chinese character is input into a computer and the calligraphic style to be converted to is selected; the final result is to output the calligraphy character in the corresponding style. For example, if a regular-script character meaning 'open' is input and Wang Xizhi's font style is selected, the output is that character rendered in Wang Xizhi's font style.
The existing font style conversion method first takes the Chinese character whose style is to be converted, for example 'open', then looks it up in the corresponding calligraphy font database to find the calligraphy form of that character, and finally outputs it. The calligraphy font style database is built in advance by manually imitating the writing of commonly used Chinese characters in a given calligraphic style, for example imitating Wang Xizhi's font, and then scanning the results into the database. This method has the following shortcomings: first, only characters that have been stored in the database in advance can be converted; if a character has not been entered, only the original font can be output. Second, because the characters are imitated by hand, the method cannot truly reproduce a font with the corresponding style (for example an imitation of Wang Xizhi's font). Third, storing the calligraphy fonts occupies a large amount of memory.
Generative adversarial networks (GANs) have achieved good results in image generation, image editing, representation learning, and other tasks. The main goal of the original GAN model is to use the discriminative model D to force the generative model G to generate pseudo data whose distribution is similar to the real data distribution. G and D are generally nonlinear mapping functions, usually realized by network structures such as multi-layer perceptrons or convolutional neural networks. Given a random noise variable z obeying a simple distribution Pz(z), the generative model G implicitly defines a generated distribution Pg by mapping z to G(z) in order to fit the real sample distribution Pdata. The discriminative model D acts as a binary classifier that takes real samples x and generated samples G(z) as input and outputs a scalar probability expressing D's confidence that the current input is real data rather than generated pseudo data; this is used to judge the quality of the data generated by G. When the input is a real training sample x ~ Pdata, D(x) should output a high probability; when the input is a generated sample G(z), D(G(z)) should output a low probability. G, for its part, tries to make D(G(z)) output as high a probability as possible, so that D cannot distinguish real data from generated data. The two models are trained alternately, forming a competition and confrontation, and the whole optimization process can be regarded as a minimax game.
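The alternating minimax game described above can be written compactly as the standard GAN objective (restated here for reference, in the notation of the preceding paragraph):

\[
\min_G \max_D V(D, G) = \mathbb{E}_{x \sim P_{data}(x)}\big[\log D(x)\big] + \mathbb{E}_{z \sim P_z(z)}\big[\log\big(1 - D(G(z))\big)\big]
\]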
The CycleGAN method is built on the pix2pix framework of Isola et al.; pix2pix uses a cGAN to learn a mapping from input to output, but it relies on paired training data. CycleGAN's innovation is that it can realize this transfer between a source domain and a target domain without establishing one-to-one correspondences between training samples. The method performs a two-step transformation of the original image: the original image is first mapped to the target domain, and then mapped back from the target domain to obtain a secondary generated image, thereby eliminating the requirement for paired images in the target domain. A generator network maps the image to the target domain, and a discriminator improves the quality of the generated image.
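In the standard CycleGAN formulation, which the two-step transformation above follows, the requirement that the secondary generated image match the original is expressed as a cycle-consistency loss (restated here for reference; the specific form used in this patent appears only as formula (5) in the embodiment):

\[
\mathcal{L}_{cyc}(G, F) = \mathbb{E}_{x \sim P_{data}(x)}\big[\lVert F(G(x)) - x \rVert_1\big] + \mathbb{E}_{y \sim P_{data}(y)}\big[\lVert G(F(y)) - y \rVert_1\big]
\]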
Summary of the invention
The object of the present invention is to provide a calligraphy font automatic generation algorithm based on generative adversarial networks, with which any Chinese character input into the trained neural network can be output as a character with the desired calligraphic style.
The technical solution adopted by the present invention is as follows:
A calligraphy font automatic generation algorithm based on generative adversarial networks, comprising the following steps:
One: build two generative adversarial networks whose generators are G and F respectively, where the discriminator of G is Dy and the discriminator of F is Dx;
Two: extract minibatches (for minibatch gradient descent) from the printed-font dataset X and the calligraphy-font dataset Y respectively, and feed them into G and F to generate the corresponding calligraphy font image G(x) and printed-font image F(y); then use Dy and Dx respectively to estimate the probability that each generated image is real, calculate the loss functions, and optimize the network parameters;
Three: input G(x) into F and F(y) into G; after convolution and deconvolution, output the generated printed-font image F(G(x)) and calligraphy font image G(F(y)); then use Dy and Dx to discriminate whether the images are real or fake, calculate the losses, and adjust the parameters;
Four: repeat step one to step three until the networks converge or the number of iterations is reached (a minimal training-loop sketch illustrating these steps is given below).
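To make the four steps concrete, the following is a minimal PyTorch-style sketch of one training iteration. It is an illustrative assumption, not the patent's implementation: the embodiment described later uses Torch 7, the tiny convolutional generators and discriminators here merely stand in for the full convolution/deconvolution architectures, and the learning rate and cycle-consistency weight are likewise assumptions.

```python
import torch
import torch.nn as nn

# Placeholder generator: 256x256 grayscale image -> image. Stands in for the full
# convolution/deconvolution generator described in the patent.
def make_generator():
    return nn.Sequential(
        nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
        nn.Conv2d(16, 1, 3, padding=1), nn.Tanh(),
    )

# Placeholder discriminator: image -> probability (close to 1 for real, 0 for fake).
def make_discriminator():
    return nn.Sequential(
        nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
        nn.Flatten(),
        nn.Linear(16 * 128 * 128, 1), nn.Sigmoid(),
    )

G, F = make_generator(), make_generator()      # G: printed -> calligraphy, F: calligraphy -> printed
D_y, D_x = make_discriminator(), make_discriminator()

opt_G = torch.optim.Adam(list(G.parameters()) + list(F.parameters()), lr=2e-4)
opt_D = torch.optim.Adam(list(D_y.parameters()) + list(D_x.parameters()), lr=2e-4)
bce, l1 = nn.BCELoss(), nn.L1Loss()

def d_loss_term(disc, real, fake):
    """Discriminator loss for one domain: real images -> 1, generated images -> 0."""
    p_real, p_fake = disc(real), disc(fake.detach())   # detach: only the discriminator is updated here
    return bce(p_real, torch.ones_like(p_real)) + bce(p_fake, torch.zeros_like(p_fake))

def g_loss_term(disc, fake):
    """Adversarial loss for a generator: it wants the discriminator to output 1 on its images."""
    p = disc(fake)
    return bce(p, torch.ones_like(p))

def train_step(x, y, lambda_cyc=10.0):
    """One iteration over minibatches x (printed-font dataset X) and y (calligraphy dataset Y)."""
    # Step two: generate cross-domain images and update the discriminators Dy, Dx.
    g_x, f_y = G(x), F(y)
    d_loss = d_loss_term(D_y, y, g_x) + d_loss_term(D_x, x, f_y)
    opt_D.zero_grad(); d_loss.backward(); opt_D.step()

    # Step three: map the generated images back and update the generators G, F.
    adv = g_loss_term(D_y, g_x) + g_loss_term(D_x, f_y)
    cyc = l1(F(g_x), x) + l1(G(f_y), y)                # cycle consistency: F(G(x)) ~ x, G(F(y)) ~ y
    g_total = adv + lambda_cyc * cyc
    opt_G.zero_grad(); g_total.backward(); opt_G.step()
    return d_loss.item(), g_total.item()

# Step four: repeat with fresh random minibatches until convergence or an iteration limit.
if __name__ == "__main__":
    for _ in range(2):                                  # toy loop with random stand-in images
        x = torch.rand(2, 1, 256, 256) * 2 - 1
        y = torch.rand(2, 1, 256, 256) * 2 - 1
        print(train_step(x, y))
```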
Beneficial effects of the present invention:
The present invention extends GAN research to the field of Chinese calligraphy. A large number of authentic calligraphy works are collected as samples, and a deep neural network is trained on the collected samples so that it learns the calligraphic style of the font, for example Wang Xizhi's font style. After the network has learned the calligraphy font style, the neural network model is saved; thereafter any Chinese character can be passed through the network to output the corresponding calligraphy-style character.
Compared with the existing technology, this method improves on four aspects. First, any Chinese character can be converted into the corresponding style, without restriction. Second, manpower and time are saved: there is no longer a need to imitate a large number of characters by hand and scan them into a database. Third, because the style-conversion neural network extracts the style directly from the original style font, the output calligraphy font is guaranteed to be in the style actually required. Fourth, only one trained neural network model needs to be saved, which does not occupy a large amount of memory.
Description of the drawings
Fig. 1 is the training flow chart of the algorithm of the invention;
Fig. 2 shows the output results of the algorithm in the embodiment of the invention.
Specific embodiment
This embodiment is implemented with Torch 7 on the Ubuntu 16.04 platform. The processor is an Intel Core i7-6700, 3.4 GHz, 8 cores; the memory is 16 GB; the graphics card is an NVIDIA GeForce GTX 1060 with 3 GB of video memory.
In this embodiment, images of Chinese characters in regular script are used as the printed-font image dataset X. It consists of 1000 randomly chosen Chinese characters that were converted to regular script with Microsoft Word and then rendered as pictures. Images of Wang Xizhi's characters are used as the calligraphy font image dataset Y; this dataset was cropped from pictures of Wang Xizhi's authentic works such as the Preface to the Orchid Pavilion Collection (Lanting Ji Xu), and it contains 1000 training sample images and 100 test sample images. Both font datasets are uniformly normalized, and each Chinese character image is 256×256 pixels. The following procedure covers only the x → G(x) → F(G(x)) direction; the y → F(y) → G(F(y)) direction is analogous.
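For reference, a character-image dataset of this kind can also be produced programmatically instead of through Microsoft Word. The sketch below is an assumption, not part of the patent: it renders individual characters as 256×256 grayscale images with Pillow, and the font file path and example characters are hypothetical placeholders that must be replaced with an actual regular-script (KaiTi) TTF and the chosen character list.

```python
import os
from PIL import Image, ImageDraw, ImageFont

def render_char(ch, font_path, size=256):
    """Render one Chinese character as a size x size grayscale image (black glyph on white)."""
    img = Image.new("L", (size, size), color=255)
    draw = ImageDraw.Draw(img)
    font = ImageFont.truetype(font_path, int(size * 0.8))   # glyph size relative to canvas is an assumption
    left, top, right, bottom = draw.textbbox((0, 0), ch, font=font)
    x = (size - (right - left)) / 2 - left                   # center the glyph via its bounding box
    y = (size - (bottom - top)) / 2 - top
    draw.text((x, y), ch, fill=0, font=font)
    return img

if __name__ == "__main__":
    os.makedirs("X", exist_ok=True)
    # Hypothetical font path and characters; substitute the real KaiTi font and the selected characters.
    for i, ch in enumerate("永和九年"):
        render_char(ch, "simkai.ttf").save(os.path.join("X", f"{i:04d}.png"))
```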
The detailed process includes the following steps, as shown in Fig. 1:
Step 1: build two generative adversarial networks; their generators are G and F, and their discriminators are Dy and Dx respectively;
Step 2: extract a minibatch {x(1), ..., x(n)} from the printed-font image dataset X and a minibatch {y(1), ..., y(m)} from the calligraphy image dataset Y; feed x(i) into G and, after convolution and deconvolution, output the generated calligraphy font image G(x). Extracting a minibatch means that every training iteration draws a small portion of the data from dataset X for training, and the data extracted each time are sampled at random from X;
Step 3: calculate the loss of generator G in the previous step using formula (1) (a reconstruction of formulas (1) to (5) is sketched after this list of steps);
Step 4: input G(x) and a calligraphy font image y(j) ∈ {y(1), ..., y(m)} into the discriminator network Dy to discriminate real from fake; if an image is judged to be real, the discriminator outputs a probability close to 1, otherwise it outputs a probability close to 0; then calculate the loss of discriminator Dy using formula (2), and after obtaining the loss, use the Adam optimizer to update the network parameters of Dy;
Step 5: input G(x) into the generator network F; after convolution and deconvolution, output the generated printed-font image F(G(x));
Step 6: calculate the loss of generator F using formula (3), and update its parameters using the Adam optimizer;
Step 7: input F(G(x)) and the printed-font image x(i) ∈ {x(1), ..., x(n)} into the discriminator network Dx to discriminate real from fake; if an image is judged to be real, the discriminator outputs a probability close to 1, otherwise it outputs a probability close to 0;
Step 8: calculate the loss of discriminator Dx using formula (4), and after obtaining the loss, use the Adam optimizer to update the network parameters of Dx;
Step 9: calculate the cycle-consistency loss using formula (5); the sum of formula (1) and formula (5) is the total loss of generator G; after obtaining this total loss, use the Adam optimizer to update the network parameters of generator G;
Step 10: judge whether the number of iterations has been reached; if not, repeat step 1 to step 9.
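The formula drawings for formulas (1) to (5) are not reproduced above. Under the standard CycleGAN formulation that the steps follow, and restricting to the x → G(x) → F(G(x)) direction described here, a plausible reconstruction is the following; these expressions are an assumption, not the patent's verbatim formulas:

\[
\text{(1)}\quad \mathcal{L}_{G} = -\,\mathbb{E}_{x \sim X}\big[\log D_y(G(x))\big]
\]
\[
\text{(2)}\quad \mathcal{L}_{D_y} = -\,\mathbb{E}_{y \sim Y}\big[\log D_y(y)\big] - \mathbb{E}_{x \sim X}\big[\log\big(1 - D_y(G(x))\big)\big]
\]
\[
\text{(3)}\quad \mathcal{L}_{F} = -\,\mathbb{E}_{x \sim X}\big[\log D_x\big(F(G(x))\big)\big]
\]
\[
\text{(4)}\quad \mathcal{L}_{D_x} = -\,\mathbb{E}_{x \sim X}\big[\log D_x(x)\big] - \mathbb{E}_{x \sim X}\big[\log\big(1 - D_x\big(F(G(x))\big)\big)\big]
\]
\[
\text{(5)}\quad \mathcal{L}_{cyc} = \lambda\,\mathbb{E}_{x \sim X}\big[\lVert F(G(x)) - x \rVert_1\big]
\]

The y → F(y) → G(F(y)) direction contributes the symmetric terms, and λ is a cycle-consistency weight (10 in the original CycleGAN paper; the value used in the patent is not stated in this text).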
In Fig. 2, (A) a regular-script character ('dusk') is input; generator G produces the character in Wang Xizhi's calligraphic style, and generator F then converts it back to regular script. (B) A character in Wang Xizhi's calligraphic style ('Huai') is input; generator F produces the regular-script character, and generator G then regenerates the Wang Xizhi-style calligraphy character. Dx and Dy discriminate the authenticity of the regular-script images and the Wang Xizhi-style calligraphy images respectively.

Claims (2)

1. A calligraphy font automatic generation algorithm based on generative adversarial networks, characterized by comprising the following steps:
One: build two generative adversarial networks whose generators are G and F respectively, where the discriminator of G is Dy and the discriminator of F is Dx;
Two: extract minibatches from the printed-font dataset X and the calligraphy-font dataset Y respectively, and feed them into G and F to generate the corresponding calligraphy font image G(x) and printed-font image F(y); then use Dy and Dx respectively to estimate the probability that each generated image is real, calculate the loss functions, and optimize the network parameters;
Three: input G(x) into F and F(y) into G; after convolution and deconvolution, output the generated printed-font image F(G(x)) and calligraphy font image G(F(y)); then use Dy and Dx to discriminate whether the images are real or fake, calculate the losses, and adjust the parameters;
Four: repeat step one to step three until the networks converge or the number of iterations is reached.
2. A calligraphy font automatic generation algorithm based on generative adversarial networks, characterized by comprising the following steps:
Step 1: build two generative adversarial networks; their generators are G and F, and their discriminators are Dy and Dx respectively;
Step 2: extract a minibatch {x(1), ..., x(n)} from the printed-font image dataset X and a minibatch {y(1), ..., y(m)} from the calligraphy image dataset Y; feed x(i) into G and, after convolution and deconvolution, output the generated calligraphy font image G(x);
Step 3: calculate the loss of generator G in step 2 using formula (1);
Step 4: input G(x) and a calligraphy font image y(j) ∈ {y(1), ..., y(m)} into the discriminator network Dy to discriminate real from fake; if an image is judged to be real, the discriminator outputs a probability close to 1, otherwise it outputs a probability close to 0; then calculate the loss of discriminator Dy using formula (2), and after obtaining the loss, use the Adam optimizer to update the network parameters of Dy;
Step 5: input G(x) into the generator network F; after convolution and deconvolution, output the generated printed-font image F(G(x));
Step 6: calculate the loss of generator F using formula (3), and update its parameters using the Adam optimizer;
Step 7: input F(G(x)) and the printed-font image x(i) ∈ {x(1), ..., x(n)} into the discriminator network Dx to discriminate real from fake; if an image is judged to be real, the discriminator outputs a probability close to 1, otherwise it outputs a probability close to 0;
Step 8: calculate the loss of discriminator Dx using formula (4), and after obtaining the loss, use the Adam optimizer to update the network parameters of Dx;
Step 9: calculate the cycle-consistency loss using formula (5); the sum of formula (1) and formula (5) is the total loss of generator G; after obtaining this total loss, use the Adam optimizer to update the network parameters of generator G;
Step 10: repeat step 1 to step 9 until the number of iterations is reached.
CN201811172321.4A 2018-10-09 2018-10-09 A calligraphy font automatic generation algorithm based on generative adversarial networks Pending CN109408776A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811172321.4A CN109408776A (en) A calligraphy font automatic generation algorithm based on generative adversarial networks

Publications (1)

Publication Number Publication Date
CN109408776A true CN109408776A (en) 2019-03-01

Family

ID=65466251

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811172321.4A Pending CN109408776A (en) A calligraphy font automatic generation algorithm based on generative adversarial networks

Country Status (1)

Country Link
CN (1) CN109408776A (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107577651A (en) * 2017-08-25 2018-01-12 上海交通大学 Chinese character style migratory system based on confrontation network
CN107644006A (en) * 2017-09-29 2018-01-30 北京大学 A kind of Chinese script character library automatic generation method based on deep neural network
CN107945118A (en) * 2017-10-30 2018-04-20 南京邮电大学 A kind of facial image restorative procedure based on production confrontation network
CN108038818A (en) * 2017-12-06 2018-05-15 电子科技大学 A kind of generation confrontation type network image style transfer method based on Multiple Cycle uniformity
CN108171173A (en) * 2017-12-29 2018-06-15 北京中科虹霸科技有限公司 A kind of pupil generation of iris image U.S. and minimizing technology

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
BO CHANG et al.: "Generating Handwritten Chinese Characters using CycleGAN", HTTPS://ARXIV.ORG/PDF/1801.08624.PDF *
PAUL VICOL: "Programming Assignment 4: CycleGAN", HTTPS://WEB.ARCHIVE.ORG/WEB/20180713103642/HTTPS://WWW.CS.TORONTO.EDU/~RGROSSE/COURSES/CSC321_2018/ASSIGNMENTS/A4-HANDOUT.PDF *

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110399845A (en) * 2019-07-29 2019-11-01 上海海事大学 Continuously at section text detection and recognition methods in a kind of image
CN110570481A (en) * 2019-07-31 2019-12-13 中国地质大学(武汉) calligraphy word stock automatic repairing method and system based on style migration
CN110695995A (en) * 2019-10-11 2020-01-17 星际(重庆)智能装备技术研究院有限公司 Robot calligraphy method based on deep learning
CN110930471A (en) * 2019-11-20 2020-03-27 大连交通大学 Image generation method based on man-machine interactive confrontation network
CN110930471B (en) * 2019-11-20 2024-05-28 大连交通大学 Image generation method based on man-machine interaction type countermeasure network
CN110969681B (en) * 2019-11-29 2023-08-29 山东浪潮科学研究院有限公司 Handwriting word generation method based on GAN network
CN110969681A (en) * 2019-11-29 2020-04-07 山东浪潮人工智能研究院有限公司 Method for generating handwriting characters based on GAN network
CN111325661A (en) * 2020-02-21 2020-06-23 京工数演(福州)科技有限公司 Seasonal style conversion model and method for MSGAN image
CN111325661B (en) * 2020-02-21 2024-04-09 京工慧创(福州)科技有限公司 Seasonal style conversion model and method for image named MSGAN
CN111666950A (en) * 2020-06-17 2020-09-15 大连民族大学 Font family generation method based on stream model
CN112818634A (en) * 2021-01-29 2021-05-18 上海海事大学 Calligraphy work style migration system, method and terminal
CN112818634B (en) * 2021-01-29 2024-04-05 上海海事大学 Handwriting style migration system, method and terminal
CN115240201A (en) * 2022-09-21 2022-10-25 江西师范大学 Chinese character generation method for alleviating network mode collapse problem by utilizing Chinese character skeleton information
CN117058266A (en) * 2023-10-11 2023-11-14 江西师范大学 Handwriting word generation method based on skeleton and outline
CN117058266B (en) * 2023-10-11 2023-12-26 江西师范大学 Handwriting word generation method based on skeleton and outline

Similar Documents

Publication Publication Date Title
CN109408776A (en) A calligraphy font automatic generation algorithm based on generative adversarial networks
CN104463209B (en) Method for recognizing digital code on PCB based on BP neural network
CN108764195A (en) Handwriting model training method, hand-written character recognizing method, device, equipment and medium
CN107506722A (en) One kind is based on depth sparse convolution neutral net face emotion identification method
CN107330444A (en) A kind of image autotext mask method based on generation confrontation network
CN109376582A (en) A kind of interactive human face cartoon method based on generation confrontation network
CN109684912A (en) A kind of video presentation method and system based on information loss function
CN106022392B (en) A kind of training method that deep neural network sample is accepted or rejected automatically
CN109993164A (en) A kind of natural scene character recognition method based on RCRNN neural network
CN108229490A (en) Critical point detection method, neural network training method, device and electronic equipment
CN107463954B (en) A kind of template matching recognition methods obscuring different spectrogram picture
CN106022363B (en) A kind of Chinese text recognition methods suitable under natural scene
CN106372581A (en) Method for constructing and training human face identification feature extraction network
CN110009057A (en) A kind of graphical verification code recognition methods based on deep learning
CN104463101A (en) Answer recognition method and system for textual test question
CN108121975A (en) A kind of face identification method combined initial data and generate data
CN108764242A (en) Off-line Chinese Character discrimination body recognition methods based on deep layer convolutional neural networks
CN108681735A (en) Optical character recognition method based on convolutional neural networks deep learning model
CN108364037A (en) Method, system and the equipment of Handwritten Chinese Character Recognition
CN108345833A (en) The recognition methods of mathematical formulae and system and computer equipment
CN110163567A (en) Classroom roll calling system based on multitask concatenated convolutional neural network
CN112381082A (en) Table structure reconstruction method based on deep learning
CN108985442A (en) Handwriting model training method, hand-written character recognizing method, device, equipment and medium
CN111144407A (en) Target detection method, system, device and readable storage medium
CN110210371A (en) A kind of aerial hand-written inertia sensing signal creating method based on depth confrontation study

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20190301