CN112633430A - Chinese font style migration method - Google Patents

Chinese font style migration method

Info

Publication number
CN112633430A
CN112633430A (application CN202011564611.0A)
Authority
CN
China
Prior art keywords
style
generator
font
network
generated
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011564611.0A
Other languages
Chinese (zh)
Other versions
CN112633430B (en)
Inventor
叶晨
杨煜
魏宇翔
李昊龙
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tongji University
Original Assignee
Tongji University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tongji University filed Critical Tongji University
Priority to CN202011564611.0A priority Critical patent/CN112633430B/en
Publication of CN112633430A publication Critical patent/CN112633430A/en
Application granted granted Critical
Publication of CN112633430B publication Critical patent/CN112633430B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00 Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10 Character recognition
    • G06V30/24 Character recognition characterised by the processing or recognition method
    • G06V30/242 Division of the character sequences into groups prior to recognition; Selection of dictionaries
    • G06V30/244 Division of the character sequences into groups prior to recognition; Selection of dictionaries using graphical properties, e.g. alphabet type or font
    • G06V30/245 Font recognition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/088 Non-supervised learning, e.g. competitive learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T9/00 Image coding
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00 Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10 Character recognition
    • G06V30/28 Character recognition specially adapted to the type of the alphabet, e.g. Latin alphabet
    • G06V30/287 Character recognition specially adapted to the type of the alphabet, e.g. Latin alphabet of Kanji, Hiragana or Katakana characters

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Controls And Circuits For Display Device (AREA)

Abstract

A Chinese font style migration method performs unsupervised style transfer learning on handwritten Chinese characters. On the basis of the original generative adversarial network, two auxiliary networks are added to a generator built on a cycle-consistent generative adversarial network. First, a residual network pre-trained for Chinese character classification and recognition extracts font-structure features from both the original image and the image produced by the generator, and the feature matrix of a specific layer serves as an auxiliary loss function that enforces font-structure consistency between the original image and the generated image. Second, a style encoder extracts style features from the generator's output and feeds them back to the generator, so as to enforce style consistency between the target style and the generated image.

Description

Chinese font style migration method
Technical Field
The invention relates to a Chinese font style migration method.
Background Art
Characters are the tools people use to convey information and an important component of human culture. The invention of Chinese characters not only carries China's long history; the many Chinese typefaces (such as the calligraphic scripts of ancient literati and the artistic typefaces created by designers) also play a role in visual communication and stand as artistic symbols.
However, the field of Chinese typeface design has long suffered from pain points that make expanding Chinese font libraries difficult. First, when designing a typeface, designers often spend a great deal of time on each individual character in order to keep the typeface uniform. Moreover, compared with English and its 26 letters, Chinese has a very large number of commonly used characters, so designing a complete Chinese character library can take years or even decades. Chinese character style migration technology emerged to address this problem.
Chinese character style migration refers to extracting the style of one Chinese character and applying it to another. With this technology, new Chinese characters can be generated automatically, without character-by-character design. For example, the technique can extract the style of a piece of handwriting and apply it to a known bold typeface, thereby generating the handwriting style automatically.
Current concrete implementations of Chinese character style migration fall into two classes: the first is a single-character encoder-decoder approach; the second is point-to-point font conversion based on paired fonts.
The first class, based on a single-character encoder-decoder, mainly uses a convolutional neural network to extract content and style features of individual Chinese characters and map the extracted features onto new content; it produces high-quality target characters by continually reducing the content and style feature losses. Its disadvantage is that it struggles to preserve each character's stroke characteristics during style migration, and the generated content is limited and hard to generalize to a full character library.
The second class, point-to-point font conversion, brings a source font and a target font into the dataset in pairs and then trains a style conversion model from the source font to the target font based on the currently flourishing generative adversarial network (GAN); with this model, any Chinese character can be generalized to the desired target font, achieving font generation. Generative adversarial networks, as referred to here, are an image generation technique that renders an image in one style as a corresponding image in another style. Applying this image-generation idea to font generation yields the second method described above.
Compared with the first method, the second adds the two font classes of the source domain and the target domain as training data, so the resulting model generates better results and generalizes more easily. Its disadvantages are that it requires a training set of matched source-font/target-font pairs, which is costly to build, and that a single model can only convert one font into one other font: whenever a new target font is encountered, the model must be retrained.
1.1 related prior art
"a method for generating personalized Chinese character font picture based on feature fusion", patent publication No.: CN 111667008A.
1.1.1 technical solution of the prior art
CN111667008A uses generative adversarial network technology to build a font-style conversion model from one Chinese typeface to another, thereby achieving automatic generation of personalized Chinese fonts.
First, that invention requires a character library with a standard typeface. For example, taking the Song typeface, the characters of the GB2312 national-standard character set are saved as individual pictures in a font-library atlas, and a predetermined number of them (about 670) are selected as the standard-character training dataset.
Second, that invention requires a font designer to design, character by character, each character in the selected standard set, obtaining the corresponding personalized Chinese font; these characters are converted into images and stored in a personalized-character training image set. The image sizes of the standard and personalized character image sets must be kept consistent.
Next, the generative adversarial network model is constructed from three parts: a pre-trained font-feature extraction network, a generator and a discriminator. For the pre-trained font-feature extraction network, 75% of the pictures in the font-library image set are randomly selected as the training set and the remaining 25% as the test set, and the trained parameters are saved; the generator adopts an encoder-decoder structure whose purpose is to extract the style of the Chinese characters and map it onto the generated characters; the discriminator is a convolutional neural network used to judge whether a character image is real or generated.
Then, the standard character images in the training set are fed into the generator's encoder to produce generated characters, and the discriminator continually compares the generated characters with the characters provided by the designer; training finishes when the value of the GAN loss function reaches its minimum.
Font generation with the trained model works as follows: any character in the standard library (including characters that never appeared in the training set) is fed into both the trained generator and the font-feature extraction network, yielding a font-feature code and a font-feature vector respectively; the two are combined into one feature vector and fed into the decoder of the generative adversarial network model, producing a personalized character image in the target style.
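The combination step in the prior-art pipeline can be sketched as follows. This is an illustration only: CN111667008A publishes no API, so the function name, the concatenation choice, and the dimensions here are all assumptions.

```python
import numpy as np

def combine_features(font_feature_code, font_feature_vector):
    """Concatenate the generator-encoder's font-feature code with the feature
    vector from the pre-trained font-feature extraction network, forming the
    combined feature vector fed to the decoder. (Hypothetical helper; the
    patent does not specify how the two vectors are combined.)"""
    return np.concatenate([font_feature_code, font_feature_vector])

# Illustrative dimensions: a 128-dim code and a 64-dim feature vector.
code = np.zeros(128)
vector = np.ones(64)
combined = combine_features(code, vector)
```

With concatenation, the decoder simply receives a vector whose length is the sum of the two inputs' lengths; element-wise fusion would be an equally plausible reading of the text.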
1.1.2 Shortcomings of the prior art
The first major drawback of CN111667008A is that the standard font and the personalized font must appear in matched pairs: the designer must provide exactly the same characters as those selected for the standard-font dataset, designed character by character. If the batch of standard characters selected for the dataset changes, the designed characters must be redone, which is far too costly for the designer.
The second drawback is that the technique can generate only one target font for a given source font; modifying the style of the generated personalized characters requires a new round of design and training. The resulting model therefore has a limited range of application, lacks flexibility and diversity in personalized character generation, and cannot be reused generally.
Disclosure of Invention
Existing Chinese character style migration models based on GANs can only convert the current font into one specific font on which they were trained; they cannot convert one font into an arbitrary choice of different styles. That is the pain point this scheme addresses: with a single trained network, the present invention can convert a font into a font of any style, a task existing models cannot accomplish.
Meanwhile, existing models can only be trained on paired fonts: each training input must be a one-to-one match between a content font and a style font, i.e. the style-font image and the content-font image must show the same character. In other words, in existing models the network cannot extract font structure and style separately; it must extract them simultaneously. The scheme of this invention solves this: Chinese character style migration can be completed even when the character content of the input source-font image and the style-font image differ.
With the Chinese character style migration method of this invention, first, the characters of the source and target fonts need not be matched during training, and the source and target domains may contain different character content; second, a single round of training lets one font be migrated to many different fonts. Compared with existing methods this greatly reduces cost without hurting generation quality, and increases the diversity, flexibility and generality of the generated character styles.
Technical scheme
A Chinese font style migration method performs unsupervised style transfer learning on handwritten Chinese characters. On the basis of the original generative adversarial network, two auxiliary networks are added to a generator built on a cycle-consistent generative adversarial network. First, a residual network pre-trained for Chinese character classification and recognition extracts font-structure features from both the original image and the image produced by the generator, and the feature matrix of a specific layer serves as an auxiliary loss function that enforces font-structure consistency between the original image and the generated image. Second, a style encoder extracts style features from the generator's output and feeds them back to the generator, so as to enforce style consistency between the target style and the generated image.
The specific method comprises the following steps:
First, a style code acceptable to the generator is obtained in one of two ways: the target font is passed through the style encoder to extract its style code, or a random noise vector is passed through a mapping network to produce an unsupervised style code. The style code represents the statistical style commonality of the font.
Then the style code and the original image are passed together to the generator network G, which produces the corresponding target picture, completing the overall Chinese font style migration process. Meanwhile, the generated target picture is passed back to the discriminator network D for judgment and fed into the corresponding optimization loss function in the network; the generated target picture and the original picture are also passed together to a residual network, which extracts and compares font-structure features and feeds the result back to the generator network as an auxiliary signal.
The final target picture shows the style of the target font, or of the noise-derived style, after migration and conversion. Moreover, the style encoder is conceptually reusable and extensible: it can accomplish migration from one font to a font of any style and avoids the need to pair the source font with the target font.
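The two routes for obtaining a style code and the generator call can be sketched as follows. Every class here is a hypothetical stand-in showing only the data flow; the patent does not publish its network architectures, and the 64-dimensional code size and 32x32 glyph size are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

class StyleEncoder:
    """Stand-in for the style encoder: target-font image -> style code."""
    def __call__(self, font_image):
        return np.full(64, font_image.mean())  # 64-dim code (size assumed)

class MappingNetwork:
    """Stand-in for the mapping network: random noise -> style code."""
    def __call__(self, z):
        return np.tanh(z)  # squashed into the same 64-dim code space

class Generator:
    """Stand-in for generator G: source image + style code -> target picture."""
    def __call__(self, source_image, style_code):
        return np.clip(source_image + style_code.mean(), 0.0, 1.0)

E, F, G = StyleEncoder(), MappingNetwork(), Generator()
source = rng.random((32, 32))  # glyph whose content is kept

# Route 1: style code extracted from a reference target-font image.
s = E(rng.random((32, 32)))
target_from_ref = G(source, s)

# Route 2: unsupervised style code from random noise via the mapping network.
s_tilde = F(rng.standard_normal(64))
target_from_noise = G(source, s_tilde)
```

The point of the sketch is that the generator accepts a style code regardless of which route produced it, which is what makes the mapping-network route reusable for arbitrary styles.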
Drawings
FIG. 1 is a schematic diagram of the network architecture
FIG. 2 is a schematic diagram of the network associated with the adversarial loss function of the generative adversarial network (GAN)
FIG. 3 is a schematic diagram of the network associated with the cycle-consistency loss function
FIG. 4 is a schematic diagram of the network associated with the character-structure extraction loss function
FIG. 5 is a schematic diagram of the network associated with the loss function that optimizes the generator at the style-encoder layer
Detailed Description
The technical scheme of the invention is further explained below with reference to the drawings and an embodiment.
The method performs unsupervised style transfer learning on handwritten Chinese characters. On the basis of the original generative adversarial network, two auxiliary networks are added to a generator built on a cycle-consistent generative adversarial network. First, a residual network pre-trained for Chinese character classification and recognition extracts font-structure features from both the original image and the image produced by the generator, and the feature matrix of a specific layer serves as an auxiliary loss function that enforces font-structure consistency between the original image and the generated image. Second, a style encoder extracts style features from the generator's output and feeds them back to the generator, so as to enforce style consistency between the target style and the generated image. The specific network architecture is shown in fig. 1.
As can be seen in fig. 1, this is a typical generative adversarial network (GAN) model.
First, a style code acceptable to the generator is obtained in one of two ways: the target font is passed through the style encoder to extract its style code, or a random noise vector is passed through a mapping network to produce an unsupervised style code. The style code represents the statistical style commonality of the font.
Then the style code and the original image are passed together to the generator network G, which produces the corresponding target picture, completing the overall Chinese font style migration process. Meanwhile, the generated target picture is passed back to the discriminator network D for judgment and fed into the corresponding optimization loss function in the network; the generated target picture and the original picture are also passed together to a residual network, which extracts and compares font-structure features and feeds the result back to the generator network as an auxiliary signal.
The final target picture shows the style of the target font, or of the noise-derived style, after migration and conversion. Moreover, the style encoder is conceptually reusable and extensible: it can accomplish migration from one font to a font of any style and avoids the need to pair the source font with the target font.
The invention jointly optimizes the style-migration learning effect through several loss functions and the dynamic game of the adversarial generative network; the loss functions involved fall into the following four types:
1) Adversarial loss function of the GAN.

L_adv = E_x[log D_y(x)] + λ_E · E_{x,s}[log(1 − D_y(G(x, s)))] + λ_F · E_{x,z}[log(1 − D_y(G(x, s̃)))]

where D_y(x) is the discriminator, which judges whether a picture was produced by the generator; s denotes the style code produced by the style encoder; z denotes the noise fed into the mapping network, and s̃ denotes the style code generated from it by the mapping network; G(x, s) denotes the picture the generator produces from the encoder's style code, and G(x, s̃) the picture it produces from the mapping network's style code; λ_F and λ_E indicate whether the generator's style code comes from the mapping network or from the style encoder, each taking the value 0 or 1; and E[·] denotes the expectation of the bracketed expression.
This loss function optimizes both the generator and the discriminator. The generator's goal is to make the generated target font indistinguishable to the discriminator from the original input; the discriminator's goal is to identify, as accurately as possible, which picture was generated by the generator and which is original. This is the adversarial process shown in FIG. 2.
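The adversarial objective described above can be sketched numerically. The helper below is an illustration with numpy arrays, not the patent's implementation; the indicator weights lam_E and lam_F select which style-code source is active, mirroring λ_E and λ_F.

```python
import numpy as np

def adversarial_loss(d_real, d_fake_enc=None, d_fake_map=None,
                     lam_E=1.0, lam_F=0.0):
    """GAN value function: E[log D(x)] plus, per active style-code source,
    lam_E * E[log(1 - D(G(x, s)))] and/or lam_F * E[log(1 - D(G(x, s_tilde)))].
    All d_* arguments are discriminator outputs in (0, 1)."""
    loss = np.mean(np.log(d_real))
    if d_fake_enc is not None:
        loss += lam_E * np.mean(np.log(1.0 - d_fake_enc))
    if d_fake_map is not None:
        loss += lam_F * np.mean(np.log(1.0 - d_fake_map))
    return loss

# The discriminator tries to maximize this value; the generator to minimize it.
val = adversarial_loss(np.array([0.9]), d_fake_enc=np.array([0.1]), lam_E=1.0)
```

A confident discriminator (real scored 0.9, fake scored 0.1) yields a value near zero from below, while a fooled discriminator drives it strongly negative.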
2) Cycle-consistency loss function.

L_cyc = λ_E · E_{x,s}[‖x − G(G(x, s), ŝ)‖] + λ_F · E_{x,z}[‖x − G(G(x, s̃), ŝ)‖]

where λ_F and λ_E are as above, and G(x, s) and G(x, s̃) have the meanings given above; G(G(x, ·), ŝ) denotes putting the picture produced by the generator back into the generator, and ŝ denotes the style code extracted from the original image x.
The goal of this cycle-consistency loss function is to optimize the generator. As shown in fig. 3, the original image and a style code are processed by the generator to produce the target font image G(x, s). The target font image G(x, s) and the style code ŝ of the original image are then put back into the generator to restore an image in the original style, G(G(x, s), ŝ). The restored image is compared with, and pulled towards, the original image x, and the generator is continuously optimized in the process. This loop back to agreement with the original image is the essence of the cycle GAN.
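The round trip through the generator can be sketched with toy stand-ins. Nothing here is the patent's code: G and E below are hypothetical callables chosen only so the cycle term is easy to trace by hand.

```python
import numpy as np

def l1(a, b):
    """Mean absolute difference between two images."""
    return float(np.mean(np.abs(a - b)))

def cycle_consistency_loss(x, G, E, s=None, s_tilde=None, lam_E=1.0, lam_F=0.0):
    """||x - G(G(x, code), s_hat)|| terms, where s_hat = E(x) is the style
    code recovered from the original image; G and E are stand-in callables."""
    s_hat = E(x)
    loss = 0.0
    if s is not None:
        loss += lam_E * l1(x, G(G(x, s), s_hat))
    if s_tilde is not None:
        loss += lam_F * l1(x, G(G(x, s_tilde), s_hat))
    return loss

# Toy stand-ins: the "generator" adds the style code to the image and the
# "encoder" reads off a zero style, so one round trip leaves an offset of s.
G = lambda img, code: img + code
E = lambda img: 0.0
loss = cycle_consistency_loss(np.zeros((4, 4)), G, E, s=0.5)
```

With these stand-ins the residual offset s is exactly what the cycle term penalizes; a generator that truly inverts itself under ŝ drives the loss to zero.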
3) Loss function for character-structure extraction.

L_fon = λ_E · E_{x,s}[‖R(x) − R(G(x, s))‖] + λ_F · E_{x,z}[‖R(x) − R(G(x, s̃))‖]

where λ_F and λ_E are as above, and G(x, s) and G(x, s̃) have the meanings given above; R(x) denotes the output of the pre-trained handwritten-Chinese-character recognition neural network described below, which appears as an embedding layer when used.
The goal of this loss function is also to optimize the generator. As shown in fig. 4, a residual network is used in this optimization. The original image and the target-font image produced by the generator are both put through the residual neural network, and an L1 or L2 loss is taken between their embeddings and driven towards zero. The final effect is that the character structures of the original image and the target font agree on the embedding layer. A generator optimized with this loss preserves the textual content during generation, ensuring the output retains the content of the original font.
As shown in fig. 4, the original image and the target font are fed in; after passing through the neural network for classification and recognition, the last part of the network's layers is taken out as an embedding layer and added to the adversarial network to assist in constraining the font structure.
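The embedding comparison described above can be sketched as follows. The recognition network R is replaced by a trivial stand-in (flattening the glyph); the real patent uses a pre-trained residual network, which is not reproduced here.

```python
import numpy as np

def font_structure_loss(x, generated, R, p=1):
    """Distance between embeddings R(x) and R(generated) of the original and
    generated glyphs under a recognition network R (stand-in callable here);
    p=1 gives the L1 form, p=2 the L2 form mentioned in the description."""
    diff = R(x) - R(generated)
    if p == 1:
        return float(np.mean(np.abs(diff)))
    return float(np.mean(diff ** 2))

# Toy embedding: flatten the glyph image. Identical glyphs give zero loss.
R = lambda img: img.reshape(-1)
same = font_structure_loss(np.ones((3, 3)), np.ones((3, 3)), R)
different = font_structure_loss(np.ones((3, 3)), np.zeros((3, 3)), R)
```

Driving this quantity to zero is what forces the generated glyph to keep the source character's structure while its style changes.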
4) Loss function that optimizes the generator at the style-encoder layer.

L_sty = λ_F · E_{x,z}[‖s̃ − E(G(x, s̃))‖] + λ_E · E_{x,s}[‖s − E(G(x, s))‖]

where λ_F and λ_E are as above, G(x, s) and G(x, s̃) have the meanings given above, and E(·) here denotes the style encoder.
As shown in FIG. 5, this loss function works as follows: first a noise vector z is drawn at random and passed through the mapping network to obtain a style code s̃, i.e. a particular style. This style code is put into the generator together with the original image x to generate a target image G(x, s̃). The style of the generated image is then extracted with the style encoder and compared with the noise-generated style, and an L1 or L2 loss pulls them together as far as possible. This ensures that the style extracted after passing through the generator remains consistent with the intended style.
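The style-consistency check described above reduces to comparing two style codes. The sketch below uses a toy encoder (image mean broadcast to the code size) purely for illustration; the code size of 4 and the encoder itself are assumptions, not the patent's networks.

```python
import numpy as np

def style_reconstruction_loss(style_code, generated, E):
    """L1 distance between the style code fed to the generator and the style
    code the encoder E recovers from the generated image."""
    return float(np.mean(np.abs(style_code - E(generated))))

# Toy encoder: summarize an image by its mean, broadcast to the code size.
E = lambda img: np.full(4, img.mean())
generated = np.full((2, 2), 0.25)
loss = style_reconstruction_loss(np.full(4, 0.75), generated, E)
```

A perfectly style-consistent generator would make the recovered code match the intended one, sending this loss to zero.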
Combining the above four losses, we obtain the overall objective:

min_{G,E,F} max_D  λ_adv · L_adv + λ_sty · L_sty + λ_cyc · L_cyc + λ_fon · L_fon
where λ_adv, λ_sty, λ_cyc and λ_fon are the weights used when combining the loss functions. This is the objective equation used when training the network; λ_fon is set to 10, and λ_adv, λ_sty and λ_cyc are set to 1. The subscripts G, E, F and D denote the generator, the style encoder, the mapping network and the discriminator, respectively.
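The weighted combination above can be written as a one-line helper; a sketch only, with the defaults taken from the weights the description gives (λ_fon = 10, the others 1).

```python
def total_loss(l_adv, l_sty, l_cyc, l_fon,
               lam_adv=1.0, lam_sty=1.0, lam_cyc=1.0, lam_fon=10.0):
    """Weighted sum of the four losses; defaults follow the description's
    settings (lam_fon = 10, the other weights 1)."""
    return lam_adv * l_adv + lam_sty * l_sty + lam_cyc * l_cyc + lam_fon * l_fon

# With all four component losses equal to 1, the objective is 1+1+1+10 = 13.
objective = total_loss(1.0, 1.0, 1.0, 1.0)
```

The tenfold weight on the structure term reflects how strongly the method prioritizes preserving character content over the other objectives.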
Advantages of the technical scheme of the invention
In the field of Chinese character style and font generation, this scheme ultimately achieves the following two effects:
1. Migration from one font to a font of any style can be accomplished, without being limited to the fonts seen in training.
2. The source font and the target font need not be matched during training; two pictures containing different Chinese characters can be used.
In conclusion, this scheme provides a substantial beneficial effect for advancing Chinese character style migration in its field of application.

Claims (2)

1. A Chinese font style migration method, characterized in that unsupervised style transfer learning is performed on handwritten Chinese characters; on the basis of the original generative adversarial network, two auxiliary networks are added to a generator built on a cycle-consistent generative adversarial network: first, a residual network pre-trained for Chinese character classification and recognition extracts font-structure features from both the original image and the image produced by the generator, and the feature matrix of a specific layer serves as an auxiliary loss function that enforces font-structure consistency between the original image and the generated image; and second, a style encoder extracts style features from the generator's output and feeds them back to the generator, so as to enforce style consistency between the target style and the generated image.
2. The method of claim 1, wherein the specific method comprises:
first, obtaining a style code acceptable to the generator either by passing the target font through the style encoder to extract its style code, or by passing a random noise vector through a mapping network to produce an unsupervised style code, the style code representing the statistical style commonality of the font;
then passing the style code and the original image together to a generator network G to produce the corresponding target picture, completing the overall Chinese font style migration process; meanwhile, passing the generated target picture back to a discriminator network D for judgment and feeding it into the corresponding optimization loss function in the network; and passing the generated target picture and the original picture together to a residual network, which extracts and compares font-structure features and feeds the result back to the generator network as an auxiliary signal;
and finally, the resulting target picture shows the style of the target font, or of the noise-derived style, after migration and conversion.
CN202011564611.0A 2020-12-25 2020-12-25 Chinese font style migration method Active CN112633430B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011564611.0A CN112633430B (en) 2020-12-25 2020-12-25 Chinese font style migration method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011564611.0A CN112633430B (en) 2020-12-25 2020-12-25 Chinese font style migration method

Publications (2)

Publication Number Publication Date
CN112633430A true CN112633430A (en) 2021-04-09
CN112633430B CN112633430B (en) 2022-10-14

Family

ID=75324945

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011564611.0A Active CN112633430B (en) 2020-12-25 2020-12-25 Chinese font style migration method

Country Status (1)

Country Link
CN (1) CN112633430B (en)

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113095038A (en) * 2021-05-08 2021-07-09 杭州王道控股有限公司 Font generation method and device for generating countermeasure network based on multitask discriminator
CN113553932A (en) * 2021-07-14 2021-10-26 同济大学 Calligraphy character erosion repairing method based on style migration
CN113792853A (en) * 2021-09-09 2021-12-14 北京百度网讯科技有限公司 Training method of character generation model, character generation method, device and equipment
CN113792850A (en) * 2021-09-09 2021-12-14 北京百度网讯科技有限公司 Font generation model training method, font library establishing method, device and equipment
CN113807430A (en) * 2021-09-15 2021-12-17 网易(杭州)网络有限公司 Model training method and device, computer equipment and storage medium
JP2022058691A * 2021-04-30 2022-04-12 Beijing Baidu Netcom Science Technology Co., Ltd. Method for training adversarial network model, method for establishing character library, apparatus therefor, electronic device, storage medium, and program
CN114399427A (en) * 2022-01-07 2022-04-26 福州大学 Character effect migration method based on cyclic generation countermeasure network
CN114495118A (en) * 2022-04-15 2022-05-13 华南理工大学 Personalized handwritten character generation method based on countermeasure decoupling
CN114821602A (en) * 2022-06-28 2022-07-29 北京汉仪创新科技股份有限公司 Method, system, apparatus and medium for training an antagonistic neural network to generate a word stock

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20200364860A1 (en) * 2019-05-16 2020-11-19 Retrace Labs Artificial Intelligence Architecture For Identification Of Periodontal Features
CN111145112A (en) * 2019-12-18 2020-05-12 华东师范大学 Two-stage image rain removing method and system based on residual error countermeasure refinement network
CN112070658A (en) * 2020-08-25 2020-12-11 西安理工大学 Chinese character font style migration method based on deep learning

Non-Patent Citations (2)

Title
Teng Shaohua et al.: "Chinese Font Style Transfer Based on Generative Adversarial Network", Application Research of Computers (《计算机应用研究》) *
Bai Haijuan et al.: "Font Style Transfer Method Based on Generative Adversarial Network", Journal of Dalian Minzu University (《大连民族大学学报》) *

Cited By (15)

Publication number Priority date Publication date Assignee Title
JP2022058691A (en) * 2021-04-30 2022-04-12 ベイジン バイドゥ ネットコム サイエンス テクノロジー カンパニー リミテッド Method for training adversarial network model, method for establishing character library, apparatus therefor, electronic device, storage medium, and program
CN113095038A (en) * 2021-05-08 2021-07-09 杭州王道控股有限公司 Font generation method and device based on a generative adversarial network with a multi-task discriminator
CN113095038B (en) * 2021-05-08 2024-04-16 杭州王道控股有限公司 Font generation method and device based on a generative adversarial network with a multi-task discriminator
CN113553932A (en) * 2021-07-14 2021-10-26 同济大学 Calligraphy character erosion repairing method based on style migration
CN113553932B (en) * 2021-07-14 2022-05-13 同济大学 Calligraphy character erosion repairing method based on style migration
CN113792853B (en) * 2021-09-09 2023-09-05 北京百度网讯科技有限公司 Training method of character generation model, character generation method, device and equipment
CN113792853A (en) * 2021-09-09 2021-12-14 北京百度网讯科技有限公司 Training method of character generation model, character generation method, device and equipment
CN113792850A (en) * 2021-09-09 2021-12-14 北京百度网讯科技有限公司 Font generation model training method, font library establishing method, device and equipment
US11875584B2 (en) 2021-09-09 2024-01-16 Beijing Baidu Netcom Science Technology Co., Ltd. Method for training a font generation model, method for establishing a font library, and device
CN113792850B (en) * 2021-09-09 2023-09-01 北京百度网讯科技有限公司 Font generation model training method, font library building method, font generation model training device and font library building equipment
CN113807430A (en) * 2021-09-15 2021-12-17 网易(杭州)网络有限公司 Model training method and device, computer equipment and storage medium
CN113807430B (en) * 2021-09-15 2023-08-08 网易(杭州)网络有限公司 Model training method, device, computer equipment and storage medium
CN114399427A (en) * 2022-01-07 2022-04-26 福州大学 Character effect migration method based on cyclic generation countermeasure network
CN114495118A (en) * 2022-04-15 2022-05-13 华南理工大学 Personalized handwritten character generation method based on countermeasure decoupling
CN114821602A (en) * 2022-06-28 2022-07-29 北京汉仪创新科技股份有限公司 Method, system, apparatus and medium for training an adversarial neural network to generate a font library

Also Published As

Publication number Publication date
CN112633430B (en) 2022-10-14

Similar Documents

Publication Publication Date Title
CN112633430B (en) Chinese font style migration method
US11250252B2 (en) Simulated handwriting image generator
CN107577651B (en) Chinese character font migration system based on countermeasure network
Wang et al. Deepfont: Identify your font from an image
CN111488931B (en) Article quality evaluation method, article recommendation method and corresponding devices
CN110196913A (en) Multiple entity relationship joint abstracting method and device based on text generation formula
CN108874174A (en) A kind of text error correction method, device and relevant device
Krishnan et al. Textstylebrush: transfer of text aesthetics from a single example
CN111160452A (en) Multi-modal network rumor detection method based on pre-training language model
CN111177366A (en) Method, device and system for automatically generating extraction type document abstract based on query mechanism
CN113408535B (en) OCR error correction method based on Chinese character level features and language model
CN112036137A (en) Deep learning-based multi-style calligraphy digital ink simulation method and system
CN114255159A (en) Handwritten text image generation method and device, electronic equipment and storage medium
CN110097615B (en) Stylized and de-stylized artistic word editing method and system
CN110570484A (en) Text-guided image coloring method under image decoupling representation
Badry et al. Quranic script optical text recognition using deep learning in IoT systems
CN116958700A (en) Image classification method based on prompt engineering and contrast learning
Han-wen et al. Fingerspelling identification for American sign language based on Resnet-18
CN115984842A (en) Multi-mode-based video open tag extraction method
JP2023007432A (en) Handwriting recognition method and apparatus by augmenting content aware and style aware data
CN111598075A (en) Picture generation method and device and readable storage medium
Bi et al. Chinese character captcha sequential selection system based on convolutional neural network
CN116311275B (en) Text recognition method and system based on seq2seq language model
CN116975344B (en) Chinese character library generation method and device based on Stable Diffusion
CN116383428B (en) Graphic encoder training method, graphic matching method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant