CN117236284A - Font generation method and device based on style information and content information adaptation - Google Patents

Font generation method and device based on style information and content information adaptation

Info

Publication number
CN117236284A
CN202311503006.6A (published as CN117236284A)
Authority
CN
China
Prior art keywords
style
font
picture
content
feature
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311503006.6A
Other languages
Chinese (zh)
Inventor
曾锦山
杨孙哲
汪叶飞
熊康悦
章燕
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jiangxi Normal University
Original Assignee
Jiangxi Normal University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jiangxi Normal University filed Critical Jiangxi Normal University
Priority to CN202311503006.6A
Publication of CN117236284A
Legal status: Pending


Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Controls And Circuits For Display Device (AREA)

Abstract

The invention discloses a font generation method and device based on the adaptation of style information and content information. The method comprises: acquiring a current source font picture and a current reference style font picture set, inputting the current source font picture and the current reference style font picture set into a preset font generation model for calculation, and generating a target style font picture corresponding to the current source font picture. In this scheme, the target style font picture is generated automatically by a pre-built preset font generation model, which improves generation efficiency; the preset font generation model makes full use of the style features of multiple current reference style font pictures, which improves generation accuracy; and because a preset number of historical reference style font pictures are selected at random during training, the preset font generation model can learn more features, further improving the accuracy of the generated target style font pictures.

Description

Font generation method and device based on style information and content information adaptation
Technical Field
The present invention relates to the field of computer technologies, and in particular, to a font generating method and apparatus based on style information and content information adaptation.
Background
Chinese characters are numerous and structurally complex, and styles differ greatly between fonts. Because manual font design is time-consuming, labor-intensive, and requires professional designers, font generation has become an important research direction.
In the related art, fonts of different styles are generated by manually annotating information such as strokes, components, and structures, and using the annotations as prior information to generate fonts in the desired style. However, generating fonts in this manner is inefficient.
Disclosure of Invention
The present invention aims to solve at least one of the technical problems in the prior art. To this end, a first aspect of the present invention provides a font generation method based on the adaptation of style information and content information, the method comprising:
acquiring a current source font picture and a current reference style font picture set, wherein the current reference style font picture set comprises a preset number of randomly selected current reference style font pictures, each current reference style font picture corresponds to different content information, and all current reference style font pictures correspond to the same style information;
inputting the current source font picture and the current reference style font picture set into a preset font generation model for calculation, and generating a target style font picture corresponding to the current source font picture, wherein the content information of the target style font picture is the same as that of the current source font picture, and the style information of the target style font picture is the same as that of each current reference style font picture.
In one possible implementation, the preset font generation model includes a content encoder, a style encoder, a style-content feature adaptation module and a decoder, and inputting the current source font picture and the current reference style font picture set into the preset font generation model for calculation and generating the target style font picture corresponding to the current source font picture includes:
inputting the current source font picture and the current reference style font picture set into the preset font generation model, and performing feature extraction on the current source font picture through the content encoder to generate a first content feature;
performing feature extraction on the current reference style font picture set through the style encoder to generate a first style feature set, wherein the first style feature set comprises a first style feature corresponding to each current reference style font picture;
performing fusion processing on the first content feature and the first style feature set through the style-content feature adaptation module to generate a fusion feature;
and decoding the fusion feature through the decoder to generate the target style font picture.
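The four steps above can be sketched end to end. The following is a minimal, hypothetical Python sketch of the data flow only: random linear maps stand in for the learned networks E_c, E_s, and D, the plain average stands in for the learned style-content feature adaptation module, and the picture and feature sizes are assumed; none of these stand-ins reflect the patent's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

C, H, W = 8, 16, 16                            # assumed feature channels and picture size
W_c = rng.standard_normal((C, H * W)) * 0.01   # stand-in weights for the content encoder E_c
W_s = rng.standard_normal((C, H * W)) * 0.01   # stand-in weights for the style encoder E_s
W_d = rng.standard_normal((H * W, C)) * 0.01   # stand-in weights for the decoder D

def content_encoder(x):
    # E_c: source font picture -> first content feature (simplified to a C-dim vector)
    return W_c @ x.ravel()

def style_encoder(refs):
    # E_s: k reference pictures -> first style feature set, one feature per picture
    return np.stack([W_s @ r.ravel() for r in refs])   # shape (k, C)

def adapt(f_c, f_set):
    # Placeholder for the style-content feature adaptation module:
    # a plain average stands in for the learned weighted fusion.
    return (f_c + f_set.mean(axis=0)) / 2

def decoder(fused):
    # D: fusion feature -> generated target style font picture
    return (W_d @ fused).reshape(H, W)

source = rng.standard_normal((H, W))           # current source font picture
refs = rng.standard_normal((6, H, W))          # k = 6 current reference style font pictures
generated = decoder(adapt(content_encoder(source), style_encoder(refs)))
print(generated.shape)   # (16, 16): same spatial size as the source picture
```

The sketch only shows how the content branch and the k-picture style branch meet at the adaptation module before decoding.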
In one possible implementation, performing fusion processing on the first content feature and the first style feature set through the style-content feature adaptation module to generate the fusion feature includes:
performing a connection operation on the first content feature and the first style feature set through the style-content feature adaptation module to generate a connection feature, wherein the connection feature comprises a second content feature corresponding to the first content feature and a second style feature set corresponding to the first style feature set, and the second style feature set comprises second style features corresponding to the first style features;
calculating a first weight corresponding to the second content feature and a second weight corresponding to the second style feature set;
and performing weighted fusion processing on the second content feature and the second style feature set based on the first weight and the second weight to generate a fusion feature.
In one possible implementation, calculating the first weight corresponding to the second content feature and the second weight corresponding to the second style feature set includes:
summing the connection features to generate a combined feature;
performing global average pooling on the combined feature to generate a feature vector;
compressing the feature vector to generate a compressed feature;
converting the compressed feature to generate a first probability distribution corresponding to the second content feature and a second probability distribution corresponding to the second style feature set;
and calculating, based on the first probability distribution and the second probability distribution, the first weight corresponding to the second content feature and the second weight corresponding to the second style feature set, respectively.
In one possible implementation, the preset number of current reference style font pictures is less than or equal to six.
In one possible implementation manner, the construction process of the preset font generation model includes:
acquiring a training sample set and a target picture; the training sample set comprises historical source font pictures and a plurality of types of historical reference style font picture sets, wherein different types of historical reference style font picture sets correspond to different style information, each historical reference style font picture set comprises a preset number of randomly selected historical reference style font pictures, each historical reference style font picture corresponds to different content information, and all historical reference style font pictures in a set correspond to the same style information;
Inputting the training sample set and the target picture into an initial font generating model for training to obtain an output result;
and calculating an overall loss value based on the output result, updating model parameters according to the overall loss value, and generating a preset font generating model based on the updated model parameters.
In one possible implementation, calculating the overall loss value based on the output result includes:
calculating an average absolute error loss value based on the output result and the target picture;
calculating a content adversarial loss value based on the output result and the historical source font picture;
calculating a style adversarial loss value based on the output result and each type of historical reference style font picture set;
and generating the overall loss value based on the average absolute error loss value, the content adversarial loss value, and the style adversarial loss value.
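A hedged sketch of how the three loss terms above might be combined. The binary cross-entropy form of the adversarial losses and the weighting coefficients are assumptions; the patent only states that the overall loss is generated from the average absolute error loss, the content adversarial loss, and the style adversarial loss.

```python
import numpy as np

def mae_loss(generated, target):
    # Average absolute error (L1) between the generated and target pictures.
    return np.abs(generated - target).mean()

def adversarial_loss(real_logits, fake_logits):
    # Non-saturating binary cross-entropy form, sketched as one assumption;
    # the discriminators producing these logits are not shown.
    def bce(logits, label):
        p = 1.0 / (1.0 + np.exp(-logits))
        eps = 1e-7
        return -(label * np.log(p + eps) + (1 - label) * np.log(1 - p + eps)).mean()
    return bce(real_logits, 1.0) + bce(fake_logits, 0.0)

def overall_loss(generated, target, content_logits, style_logits,
                 lam_mae=1.0, lam_content=1.0, lam_style=1.0):
    # The lambda weighting coefficients are assumptions for illustration.
    l1 = mae_loss(generated, target)
    l_c = adversarial_loss(*content_logits)   # content adversarial loss
    l_s = adversarial_loss(*style_logits)     # style adversarial loss
    return lam_mae * l1 + lam_content * l_c + lam_style * l_s
```

In practice the two adversarial terms would be driven by separate content and style discriminators evaluated against the historical source pictures and the historical reference style picture sets, respectively.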
A second aspect of the present invention proposes a font generating device based on adaptation of style information and content information, the device comprising:
the acquisition module is used for acquiring a current source font picture and a current reference style font picture set, wherein the current reference style font picture set comprises a preset number of randomly selected current reference style font pictures, each current reference style font picture corresponds to different content information, and all current reference style font pictures correspond to the same style information;
the generation module is used for inputting the current source font picture and the current reference style font picture set into the preset font generation model for calculation and generating a target style font picture corresponding to the current source font picture, wherein the content information of the target style font picture is the same as that of the current source font picture, and the style information of the target style font picture is the same as that of each current reference style font picture.
In one possible implementation, the preset font generation model includes a content encoder, a style encoder, a style-content feature adaptation module and a decoder, and the generation module is specifically configured to:
inputting a current source font picture and a current reference style font picture set into a preset font generation model, and extracting features of the current source font picture through a content encoder to generate first content features;
performing feature extraction on the current reference style font picture set through a style encoder to generate a first style feature set; the first style characteristic set comprises first style characteristics corresponding to each current reference style font picture;
the method comprises the steps of carrying out fusion processing on first content features and a first style feature set through a style content feature adaptation module to generate fusion features;
And decoding the fusion characteristics through a decoder to generate the target style font picture.
In one possible implementation manner, the generating module is further configured to:
connect the first content feature and the first style feature set through the style-content feature adaptation module to generate a connection feature, wherein the connection feature comprises a second content feature corresponding to the first content feature and a second style feature set corresponding to the first style feature set, and the second style feature set comprises second style features corresponding to the first style features;
calculate a first weight corresponding to the second content feature and a second weight corresponding to the second style feature set;
and perform weighted fusion processing on the second content feature and the second style feature set based on the first weight and the second weight to generate a fusion feature.
In one possible implementation manner, the generating module is further configured to:
sum the connection features to generate a combined feature;
perform global average pooling on the combined feature to generate a feature vector;
compress the feature vector to generate a compressed feature;
convert the compressed feature to generate a first probability distribution corresponding to the second content feature and a second probability distribution corresponding to the second style feature set;
and calculate, based on the first probability distribution and the second probability distribution, the first weight corresponding to the second content feature and the second weight corresponding to the second style feature set, respectively.
In one possible implementation, the preset number of current reference style font pictures is less than or equal to six.
In one possible implementation manner, the font generating device adapted based on the style information and the content information is further configured to:
acquire a training sample set and a target picture; the training sample set comprises historical source font pictures and a plurality of types of historical reference style font picture sets, wherein different types of historical reference style font picture sets correspond to different style information, each historical reference style font picture set comprises a preset number of randomly selected historical reference style font pictures, each historical reference style font picture corresponds to different content information, and all historical reference style font pictures in a set correspond to the same style information;
inputting the training sample set and the target picture into an initial font generating model for training to obtain an output result;
and calculating an overall loss value based on the output result, updating model parameters according to the overall loss value, and generating a preset font generating model based on the updated model parameters.
In one possible implementation manner, the font generating device adapted based on the style information and the content information is further configured to:
calculate an average absolute error loss value based on the output result and the target picture;
calculate a content adversarial loss value based on the output result and the historical source font picture;
calculate a style adversarial loss value based on the output result and each type of historical reference style font picture set;
and generate the overall loss value based on the average absolute error loss value, the content adversarial loss value, and the style adversarial loss value.
A third aspect of the present invention proposes an electronic device, the electronic device comprising a processor and a memory, the memory storing at least one instruction, at least one program, a set of codes or a set of instructions, the at least one instruction, the at least one program, the set of codes or the set of instructions being loaded and executed by the processor to implement the font generating method based on style information and content information adaptation as described in the first aspect.
A fourth aspect of the present invention proposes a computer readable storage medium having stored therein at least one instruction, at least one program, a set of codes or a set of instructions, the at least one instruction, the at least one program, the set of codes or the set of instructions being loaded and executed by a processor to implement the font generating method based on style information and content information adaptation as described in the first aspect.
The embodiment of the invention has the following beneficial effects:
the font generating method based on style information and content information adaptation provided by the embodiment of the invention comprises the following steps: acquiring a current source font picture and a current reference style font picture set; the current reference style font picture set comprises a plurality of current reference style font pictures which are selected randomly and have preset numbers, each current reference style font picture corresponds to different content information, and each current reference style font picture corresponds to the same style information; and inputting the current source font picture and the current reference style font picture set into a preset font generation model for calculation, and generating a target style font picture corresponding to the current source font picture. According to the scheme, the target style font picture is automatically generated through the preset font generation model which is built in advance, so that the efficiency of generating the target style font picture is improved; in addition, the preset font generation model fully utilizes the style characteristics of a plurality of current reference style font pictures, and improves the accuracy of generating the target style font pictures by effectively matching the style characteristics with the content characteristics of the current source font pictures; meanwhile, as a plurality of historical reference style font pictures with preset quantity are randomly selected, the preset font generation model can learn more characteristics, so that the accuracy of generating the target style font pictures is further improved.
Drawings
FIG. 1 is a schematic diagram of different font styles provided by an embodiment of the present application;
FIG. 2 is a block diagram of a computer device according to an embodiment of the present application;
FIG. 3 is a flowchart illustrating steps of a font generating method based on style information and content information adaptation according to an embodiment of the present application;
FIG. 4 is a flowchart for generating a target style font picture according to an embodiment of the present application;
FIG. 5 is an overall framework diagram for generating a target style font picture according to an embodiment of the present application;
FIG. 6 is a flow chart of generating fusion features according to an embodiment of the present application;
FIG. 7 is a flowchart of calculating a first weight and a second weight according to an embodiment of the present application;
FIG. 8 is a flowchart for constructing a preset font generation model according to an embodiment of the present application;
FIG. 9 is a flowchart of calculating an overall loss value according to an embodiment of the present application;
FIG. 10 is a schematic diagram of the results of an ablation experiment provided by an embodiment of the present application;
FIG. 11 is a schematic diagram of test results of the preset font generation model and other models on the unseen character set of seen fonts according to an embodiment of the present application;
FIG. 12 is a schematic diagram of test results of the preset font generation model and other models on the seen character set of unseen fonts and the unseen character set of unseen fonts according to an embodiment of the present application;
Fig. 13 is a block diagram of a font generating device adapted based on style information and content information according to an embodiment of the present application.
Detailed Description
The following clearly and completely describes the technical solutions in the embodiments of the present application with reference to the accompanying drawings. Apparently, the described embodiments are only some rather than all of the embodiments of the present application. All other embodiments obtained by those skilled in the art based on the embodiments of the present application without creative effort shall fall within the protection scope of the present application.
Chinese characters are numerous and structurally complex, and styles differ greatly between fonts. Because manual font design is time-consuming, labor-intensive, and requires professional designers, font generation has become an important research direction. In the related art, fonts of different styles are generated by manually annotating information such as strokes, components, and structures, and using the annotations as prior information to generate fonts in the desired style.
Specifically, existing manual font design techniques are time-consuming and labor-intensive and require specialized personnel. In addition, current mainstream small-sample generation models are based on the style-content separation paradigm and rely mainly on font information such as strokes, components, and structures, without considering that the style features behind each character of the same style differ. As shown in FIG. 1, which is a schematic diagram of different font styles provided by an embodiment of the present application, differences between font styles are mainly embodied in shape, thickness, angle, curvature, and the like, and these factors affect visual presentation and emotional expression differently. Therefore, both the efficiency and the accuracy of existing methods for generating style font pictures are low.
In view of the above, the present application provides a font generation method and device based on the adaptation of style information and content information. The target style font picture is generated automatically through a pre-built preset font generation model, which improves generation efficiency. In addition, the preset font generation model makes full use of the style features of multiple current reference style font pictures and effectively matches them with the content features of the current source font picture, which improves generation accuracy. Moreover, because a preset number of historical reference style font pictures are selected at random during training, the preset font generation model can learn more features, further improving the accuracy of the generated target style font pictures.
The terms "first" and "second" below are used for descriptive purposes only and shall not be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined with "first" or "second" may explicitly or implicitly include one or more such features. In the description of the embodiments of the present disclosure, unless otherwise indicated, "a plurality of" means two or more. In addition, "based on" or "according to" is intended to be open and inclusive, in that a process, step, calculation, or other action "based on" or "according to" one or more stated conditions or values may in practice be based on additional conditions or values beyond those stated.
The font generation method based on the adaptation of style information and content information provided by the application can be applied to a computer device (electronic device). The computer device may be a server or a terminal. The server may be a single server or a server cluster composed of multiple servers, which is not specifically limited in the embodiments of the application. The terminal may be, but is not limited to, various personal computers, notebook computers, smart phones, tablet computers, and portable wearable devices.
Taking a server as the computer device as an example, FIG. 2 shows a block diagram of a server. As shown in FIG. 2, the server may include a processor and a memory connected by a system bus. The processor of the server is configured to provide computing and control capabilities. The memory of the server includes a nonvolatile storage medium and an internal memory. The nonvolatile storage medium stores an operating system, a computer program, and a database. The internal memory provides an environment for running the operating system and the computer program in the nonvolatile storage medium. The computer program, when executed by the processor, implements the font generation method based on the adaptation of style information and content information.
Those skilled in the art will appreciate that the structure shown in FIG. 2 is merely a block diagram of part of the structure related to the solution of the present application and does not limit the server to which the solution is applied; a specific server may include more or fewer components than shown in the figure, combine some components, or have a different arrangement of components.
The execution subject of the embodiment of the present application may be a computer device, or may be a font generating device adapted based on style information and content information, and the following method embodiments will be described with reference to the computer device as the execution subject.
Fig. 3 is a flowchart of steps of a font generating method based on style information and content information adaptation according to an embodiment of the present application. As shown in fig. 3, the method comprises the steps of:
step 302, acquiring a current source font picture and a current reference style font picture set.
The current reference style font picture set comprises a plurality of current reference style font pictures which are selected randomly and have preset numbers, each current reference style font picture corresponds to different content information, and each current reference style font picture corresponds to the same style information.
Therefore, when generating the target style font picture, the current source font picture and the current reference style font picture set need to be acquired first. The current source font picture carries the content information, namely the specific character, and the current reference style font picture set carries the style information, such as regular script (Kaishu), Song typeface (Songti), and other font styles, so that the target style font picture can be generated based on the current source font picture and the current reference style font picture set.
Step 304, inputting the current source font picture and the current reference style font picture set into a preset font generating model for calculation, and generating a target style font picture corresponding to the current source font picture.
The content information of the target style font picture is the same as the content information of the current source font picture, and the style information of the target style font picture is the same as the style information of each current reference style font picture.
In some optional embodiments, the foregoing preset font generation model is a model for generating a target style font picture based on the current source font picture and the current reference style font picture set, and may include a content encoder, a style encoder, a style-content feature adaptation module, and a decoder. When the current source font picture and the current reference style font picture set are input into the preset font generation model for calculation to generate the target style font picture corresponding to the current source font picture, the process is as shown in FIG. 4, which is a flowchart for generating the target style font picture according to an embodiment of the present invention, and includes:
Step 402, inputting the current source font picture and the current reference style font picture set into a preset font generating model, and extracting features of the current source font picture by a content encoder to generate a first content feature.
And step 404, extracting characteristics of the current reference style font picture set through a style encoder to generate a first style characteristic set.
Step 406, performing fusion processing on the first content feature and the first style feature set through the style content feature adapting module to generate fusion features.
And 408, decoding the fusion characteristics through a decoder to generate the target style font picture.
Referring to FIG. 5, FIG. 5 is an overall framework diagram for generating a target style font picture according to an embodiment of the present invention. The current source font picture is the content picture x_c, the current reference style font picture set is the style picture set, the content encoder is E_c, the style encoder is E_s, and the decoder is D.
After the current source font picture and the current reference style font picture set are input into the preset font generation model, feature extraction can be performed on the current source font picture through the content encoder E_c to generate the first content feature f_c, whose size is C×H×W, where C denotes the number of channels and H and W denote the height and width of f_c, respectively. Feature extraction can also be performed on the current reference style font picture set through the style encoder E_s to generate the first style feature set f_i, which includes the first style feature corresponding to each current reference style font picture, namely f_1 to f_k; the size of f_i is k×C×H×W, where k denotes the maximum number of current reference style font pictures.
In this way, the first content feature f_c and the first style feature set f_i can be fused through the style-content feature adaptation module to generate the fusion feature. In some alternative embodiments, as shown in FIG. 6, which is a flowchart of generating the fusion feature according to an embodiment of the present invention, the process includes:
step 602, performing connection operation on the first content feature and the first style feature set through the style content feature adapting module, and generating connection features.
Step 604, calculating a first weight corresponding to the second content feature and a second weight corresponding to the second set of style features.
Step 606, based on the first weight and the second weight, performing weighted fusion processing on the second content feature and the second style feature set, and generating a fusion feature.
After the first content feature and the first style feature set are connected through the style content feature adaptation module, the resulting connection feature comprises a second content feature f'_c corresponding to the first content feature f_c and a second style feature set corresponding to the first style feature set, where the second style features f'_1 to f'_k correspond to the first style features f_1 to f_k respectively. The size of the connection feature is (k+1)×C×H×W.
Then, a first weight corresponding to the second content feature and second weights corresponding to the second style feature set can be calculated; finally, the second content feature and the second style feature set are subjected to weighted fusion processing using these weights to generate the fusion feature.
In this embodiment, since the current reference style font picture set includes a plurality of current reference style font pictures, the preset font generation model makes full use of the style features of multiple reference pictures and, by effectively matching them with the content features of the current source font picture, achieves a strong font generation effect. By effectively fusing the second content feature and the second style feature set, the expressiveness and diversity of the generated target style font pictures are further enhanced; the fusion process not only improves the model's font generation capability but also enables font designs with a unique style and artistic quality, providing a new and effective solution for few-shot font generation.
In some alternative embodiments, as shown in fig. 7, fig. 7 is a flowchart for calculating a first weight and a second weight according to an embodiment of the present invention, including:
step 702, summing the connection features to generate a combined feature.
Step 704, carrying out global average pooling processing on the combined features to generate a feature vector.
Step 706, compressing the feature vector to generate a compressed feature.
Step 708, performing a conversion process on the compressed feature to generate a first probability distribution corresponding to the second content feature and a second probability distribution corresponding to the second style feature set.
Step 710, respectively calculating a first weight corresponding to the second content feature and a second weight corresponding to the second style feature set based on the first probability distribution and the second probability distribution.
The connection features may be summed to generate a combined feature U; optionally, each branch of the connection feature is added element-wise to obtain U, whose size is C×H×W, as calculated by formula (1). Global information can then be extracted by applying global average pooling to U, reducing its size to generate a feature vector S of size C×1×1, as calculated by formula (2).
U = f'_c + Σ_{i=1}^{k} f'_i    (1)

S_c = (1/(H·W)) Σ_{h=1}^{H} Σ_{w=1}^{W} U_c(h, w),  c = 1, …, C    (2)
To further reduce the size of the feature vector S and improve efficiency, a simple fully connected (FC) layer may be used to compress the feature vector into a compressed feature Z of size d×1×1.
In addition, the compressed feature Z may be transformed by a softmax mapping to generate a first probability distribution corresponding to the second content feature and a second probability distribution corresponding to the second style feature set. Based on these distributions, the first weight w_c corresponding to the second content feature can be calculated by formula (3), and the second weights corresponding to the second style feature set, namely w_1 to w_k for f'_1 to f'_k respectively, can be calculated by formula (4).
w_c = exp(A_c Z) / (exp(A_c Z) + Σ_{i=1}^{k} exp(A_i Z))    (3)

w_i = exp(A_i Z) / (exp(A_c Z) + Σ_{j=1}^{k} exp(A_j Z)),  i = 1, …, k    (4)
where A_c and A_i denote learnable fully connected layer parameters, w_c is the first probability distribution corresponding to the second content feature, and w_1 to w_k form the second probability distribution corresponding to the second style feature set.
Then, based on the first weight and the second weights, a weighted fusion process may be performed on the second content feature and the second style feature set to generate a fusion feature F of size C×H×W. Optionally, the second content feature and each second style feature are multiplied by the first weight and the corresponding second weight, respectively, so that each branch contributes to the final fusion feature F. Essentially, this process performs a weighted fusion of the feature information from all branches under the direction of their respective weights, with the aim of creating a unified and comprehensive feature representation. Specifically, it can be calculated by formula (5).
F = w_c ⊙ f'_c + Σ_{i=1}^{k} w_i ⊙ f'_i    (5)
Finally, after the decoder D decodes the fusion feature F, the target style font picture y can be generated.
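Formulas (1) to (5) above can be illustrated with a minimal numpy sketch of the style-content feature adaptation module. This is a sketch under stated assumptions, not the patented implementation: the function and parameter names (adapt_features, W_fc, A_c, A_s) are illustrative, and fixed random matrices stand in for the learned FC layers.

```python
import numpy as np

def adapt_features(f_c, f_styles, W_fc, A_c, A_s):
    """Fuse a content feature with k style features via learned channel weights.

    f_c      : second content feature, shape (C, H, W)
    f_styles : second style feature set, shape (k, C, H, W)
    W_fc     : compression FC weights, shape (d, C)   -- illustrative stand-in
    A_c, A_s : per-branch logit FC weights, shapes (C, d) and (k, C, d)
    """
    # (1) sum all branches of the connection feature -> combined feature U
    U = f_c + f_styles.sum(axis=0)                       # (C, H, W)
    # (2) global average pooling -> feature vector S
    S = U.mean(axis=(1, 2))                              # (C,)
    # compress S with an FC layer -> compressed feature Z
    Z = W_fc @ S                                         # (d,)
    # (3)/(4) per-channel softmax across the k+1 branches
    logits = np.concatenate([(A_c @ Z)[None, :], A_s @ Z])   # (k+1, C)
    e = np.exp(logits - logits.max(axis=0, keepdims=True))
    w = e / e.sum(axis=0, keepdims=True)                 # w[0]=w_c, w[1:]=w_1..w_k
    # (5) channel-wise weighted fusion -> fusion feature F
    F = w[0][:, None, None] * f_c
    F = F + (w[1:, :, None, None] * f_styles).sum(axis=0)
    return F, w

# demo with random tensors
rng = np.random.default_rng(0)
C, H, W, k, d = 4, 3, 3, 2, 5
F, w = adapt_features(
    rng.normal(size=(C, H, W)), rng.normal(size=(k, C, H, W)),
    rng.normal(size=(d, C)), rng.normal(size=(C, d)), rng.normal(size=(k, C, d)))
```

Because the weights are a softmax over the k+1 branches, they sum to 1 per channel, so F is a convex combination of the content and style branches channel by channel.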
In some optional embodiments, the preset font generating model is a model for generating a target font picture based on a current source font picture and a current reference font picture set, a process of constructing the preset font generating model is shown in fig. 8, and fig. 8 is a flowchart for constructing the preset font generating model according to an embodiment of the present invention, where the flowchart includes:
step 802, obtaining a training sample set and a target picture.
Step 804, inputting the training sample set and the target picture into the initial font generating model for training, and obtaining an output result.
And step 806, calculating an overall loss value based on the output result, updating model parameters according to the overall loss value, and generating a preset font generation model based on the updated model parameters.
The training sample set comprises historical source font pictures and multiple types of historical reference style font picture sets, where different types of historical reference style font picture sets correspond to different style information. Each historical reference style font picture set comprises a preset number of randomly selected historical reference style font pictures, each corresponding to different content information but to the same style information. The target picture ȳ is a preset standard picture used to calculate the corresponding loss values during model training. It should be noted that the historical source font picture may also be denoted x_c, and the historical reference style font picture set may also be denoted X_s.
When the training sample set and the target picture are input into the initial font generation model for training, the output result may also be denoted y; the specific process of obtaining the output result is the same as the process of generating the target style font picture y when the model is used in the above embodiment, and is not repeated here.
In some alternative embodiments, when calculating the overall loss value based on the output result, as shown in fig. 9, fig. 9 is a flowchart for calculating the overall loss value according to an embodiment of the present invention, including:
step 902, calculating an average absolute error loss value based on the output result and the target picture.
Step 904, calculating a content adversarial loss value based on the output result and the historical source font picture.

Step 906, calculating a style adversarial loss value for each type of historical reference style font picture set based on the output result and the historical reference style font picture set.

Step 908, generating an overall loss value based on the average absolute error loss value, the content adversarial loss value, and the style adversarial loss value.
The whole preset font generation model is a generative adversarial network architecture: the generator network generates new data by learning the distribution of the training sample set, the discriminator network tries to distinguish the data generated by the generator from real training data, the two networks compete against each other, and finally the generator network can generate new data similar to the training data.
During training, the relationship between the generator and the discriminators is constrained by an average absolute error loss, i.e. an L_1 loss, and an adversarial loss, where the adversarial loss consists of a content adversarial loss and a style adversarial loss. The preset font generation model may therefore further include a style discriminator D_s for calculating the style adversarial loss value L_advs and a content discriminator D_c for calculating the content adversarial loss value L_advc.
Then, based on the output result and the target picture, the average absolute error loss value L_1, namely the pixel-level difference between the output result and the target picture, can be calculated by formula (6).
L_1 = E[ ||ȳ − y||_1 ]    (6)
where E_{x∼p}[·] denotes the expectation of the expression over x drawn from the distribution p, y denotes the output result, and ȳ denotes the target picture.
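Formula (6) reduces to a mean absolute pixel difference between the output and the target picture; a minimal numpy sketch (the function name is illustrative):

```python
import numpy as np

def l1_loss(y, y_bar):
    # formula (6): mean absolute pixel-level difference between
    # the output result y and the target picture y_bar
    return float(np.abs(y_bar - y).mean())

# an all-black output against an all-white target differs by 1 per pixel
loss = l1_loss(np.zeros((2, 2)), np.ones((2, 2)))
```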
Based on the output result and the historical source font picture, the content adversarial loss value L_advc can be calculated. The content adversarial loss consists of a content generation loss L_gc and a content discrimination loss L_dc. Through the content discrimination loss L_dc, the discriminator can better discriminate whether the generated picture, i.e. the output result, has the same content as the historical source font picture; through the content generation loss L_gc, the generator makes the output result more similar in content to the historical source font picture. Specifically, the content adversarial loss value L_advc is calculated by formula (7), the content generation loss L_gc by formula (8), and the content discrimination loss L_dc by formula (9).
L_advc = L_gc + L_dc    (7)

L_gc = −E_{y∼P_g}[ log D_c(y) ]    (8)

L_dc = −E_{x_c∼P_data}[ log D_c(x_c) ] − E_{y∼P_g}[ log(1 − D_c(y)) ]    (9)
where E_{y∼P_g}[·] denotes the expectation over output results y drawn from the generator distribution P_g, E_{x_c∼P_data}[·] denotes the expectation over historical source font pictures x_c drawn from the data distribution P_data, and D_c(·) denotes the result value obtained by inputting a picture into the content discriminator.
In addition, the style adversarial loss value L_advs may be calculated for each type of historical reference style font picture set based on the output result and that set. The style adversarial loss consists of a style generation loss L_gs and a style discrimination loss L_ds. Through the style discrimination loss L_ds, the discriminator can better discriminate whether the generated picture, i.e. the output result, has the same style as the historical reference style font pictures; through the style generation loss L_gs, the generator makes the output result more similar in style to the historical reference style font pictures. Specifically, the style adversarial loss value L_advs is calculated by formula (10), the style generation loss L_gs by formula (11), and the style discrimination loss L_ds by formula (12).
L_advs = L_gs + L_ds    (10)

L_gs = −E_{y∼P_g}[ log D_s(y) ]    (11)

L_ds = −E_{x_s∼P_data}[ log D_s(x_s) ] − E_{y∼P_g}[ log(1 − D_s(y)) ]    (12)
where x_s denotes a historical reference style font picture drawn from the data distribution P_data, and D_s(·) denotes the result value obtained by inputting a picture into the style discriminator.
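Formulas (8), (9), (11), and (12) share the standard cross-entropy GAN form, so one pair of helpers covers both the content and style discriminators. This is a sketch assuming the discriminators output probabilities in (0, 1); the function names are illustrative:

```python
import numpy as np

def generator_loss(d_fake):
    # formulas (8)/(11): -E[log D(y)] over generated samples;
    # small when the discriminator is fooled (D(y) near 1)
    return float(-np.mean(np.log(d_fake)))

def discriminator_loss(d_real, d_fake):
    # formulas (9)/(12): -E[log D(x)] - E[log(1 - D(y))];
    # small when real samples score near 1 and fakes near 0
    return float(-np.mean(np.log(d_real)) - np.mean(np.log(1.0 - d_fake)))
```

In training, the generator minimizes generator_loss while each discriminator minimizes its own discriminator_loss, which realizes the min-max competition described above.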
Finally, the overall loss value L_total can be generated based on the average absolute error loss value L_1, the content adversarial loss value L_advc, and the style adversarial loss value L_advs. Specifically, it can be calculated by formula (13).
L_total = min_G max_{D_c, D_s} ( L_1 + λ_adv (L_advc + L_advs) )    (13)
where min_G indicates that, from the generator's point of view, the formula is minimized, max_{D_c, D_s} indicates that, from the discriminators' point of view, the formula is maximized, and λ_adv denotes a hyperparameter that can be preset in advance, taking a value ranging from 1 to 100.
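From the generator's side, formula (13) is simply a weighted sum of the three loss terms; a one-line sketch (the default λ_adv value is an arbitrary choice within the stated 1–100 range):

```python
def total_loss(l1, l_adv_c, l_adv_s, lam_adv=10.0):
    # formula (13), generator view: L_1 + lambda_adv * (L_advc + L_advs);
    # lam_adv is the preset hyperparameter in [1, 100]
    return l1 + lam_adv * (l_adv_c + l_adv_s)
```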
In this embodiment, by integrating the average absolute error loss value, the content adversarial loss value, and the style adversarial loss value, the parameters of the model can be adjusted more reasonably and accurately, so that the finally generated preset font generation model has higher accuracy.
In some optional embodiments, during use of the preset font generation model, the preset number of current reference style font pictures is less than or equal to six. Likewise, in the process of constructing the preset font generation model, each type of historical reference style font picture set comprises a randomly selected preset number of historical reference style font pictures that is less than or equal to six.
In this embodiment, the preset font generation model can be trained with a small amount of sample data, and a user can generate various new fonts by providing a small number of sample characters, greatly improving the efficiency and creativity of font design. In addition, since the preset number of historical reference style font pictures is randomly selected, the preset font generation model can learn more features, further improving the accuracy of generating target style font pictures.
Moreover, an ablation experiment shows that the effect is optimal when the number of historical reference style font pictures in a same-type set is six, as shown in fig. 10, which is a schematic diagram of the ablation experiment results provided by an embodiment of the present application. The X-axis represents the number k of historical reference style font pictures in the same-type set, and the Y-axis represents the Fréchet Inception Distance (FID), an image similarity evaluation index that measures the distance between the feature vectors of real and generated images. A smaller FID value indicates higher similarity; ideally FID = 0, meaning the two image distributions are identical.
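FID compares two Gaussians fitted to the feature vectors: FID = ||μ1 − μ2||² + Tr(Σ1 + Σ2 − 2(Σ1 Σ2)^{1/2}). Below is a minimal numpy sketch restricted to diagonal covariances, an assumption made here so the matrix square root becomes elementwise; the real metric uses full covariances of Inception features.

```python
import numpy as np

def fid_diagonal(mu1, var1, mu2, var2):
    # Frechet distance between N(mu1, diag(var1)) and N(mu2, diag(var2));
    # smaller is better, 0 means the two distributions are identical
    return float(np.sum((mu1 - mu2) ** 2)
                 + np.sum(var1 + var2 - 2.0 * np.sqrt(var1 * var2)))
```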
In addition, to evaluate the preset font generation model, the seen-fonts seen-characters set (Seen Fonts Seen Characters, SFSC) may be used for training and the seen-fonts unseen-characters set (Seen Fonts Unseen Characters, SFUC) for testing; the other task is divided into two subtasks, the unseen-fonts seen-characters set (Unseen Fonts Seen Characters, UFSC) and the unseen-fonts unseen-characters set (Unseen Fonts Unseen Characters, UFUC), and both subtasks are tested directly using the model trained on the SFSC task.
Thus, as shown in fig. 11 and fig. 12, fig. 11 is a schematic diagram of the test results of the preset font generation model and other models on the seen-fonts unseen-characters task provided by an embodiment of the present application, and fig. 12 is a schematic diagram of the test results on the unseen-fonts seen-characters and unseen-fonts unseen-characters tasks. The parts outlined in fig. 11 and fig. 12 are fonts with poor generation results; the second-to-last row is the result generated by the preset font generation model provided by the present application, which can be seen to be good, and the last row is the target picture. The other rows are the test results of the other models on these tasks. In addition, in fig. 12, the left half separated by the vertical line corresponds to the unseen-fonts seen-characters task, and the right half corresponds to the unseen-fonts unseen-characters task.
In the embodiment of the application, the method comprises the following steps: acquiring a current source font picture and a current reference style font picture set; the current reference style font picture set comprises a plurality of current reference style font pictures which are selected randomly and have preset numbers, each current reference style font picture corresponds to different content information, and each current reference style font picture corresponds to the same style information; and inputting the current source font picture and the current reference style font picture set into a preset font generation model for calculation, and generating a target style font picture corresponding to the current source font picture. According to the scheme, the target style font picture is automatically generated through the preset font generation model which is built in advance, so that the efficiency of generating the target style font picture is improved; in addition, the preset font generation model fully utilizes the style characteristics of a plurality of current reference style font pictures, and improves the accuracy of generating the target style font pictures by effectively matching the style characteristics with the content characteristics of the current source font pictures; meanwhile, as a plurality of historical reference style font pictures with preset quantity are randomly selected, the preset font generation model can learn more characteristics, so that the accuracy of generating the target style font pictures is further improved.
It should be understood that, although the steps in the flowcharts related to the embodiments described above are sequentially shown as indicated by arrows, these steps are not necessarily sequentially performed in the order indicated by the arrows. The steps are not strictly limited to the order of execution unless explicitly recited herein, and the steps may be executed in other orders. Moreover, at least some of the steps in the flowcharts described in the above embodiments may include a plurality of steps or a plurality of stages, which are not necessarily performed at the same time, but may be performed at different times, and the order of the steps or stages is not necessarily performed sequentially, but may be performed alternately or alternately with at least some of the other steps or stages.
Fig. 13 is a block diagram of a font generating device adapted based on style information and content information according to an embodiment of the present invention.
As shown in fig. 13, the font generating apparatus 1300 adapted based on style information and content information includes:
an obtaining module 1302, configured to obtain a current source font picture and a current reference style font picture set; the current reference style font picture set comprises a plurality of current reference style font pictures which are selected randomly and have preset numbers, each current reference style font picture corresponds to different content information, and each current reference style font picture corresponds to the same style information.
The generating module 1304 is configured to input the current source font picture and the current reference style font picture set into a preset font generating model for calculation, and generate a target style font picture corresponding to the current source font picture; the content information of the target style font picture is the same as the content information of the current source font picture, and the style information of the target style font picture is the same as the style information of each current reference style font picture.
The specific manner in which the various modules perform the operations in the apparatus of the above embodiments have been described in detail in connection with the embodiments of the method, and will not be described in detail herein. The respective modules in the font generating device adapted based on the style information and the content information described above may be implemented in whole or in part by software, hardware, and combinations thereof. The above modules may be embedded in hardware or may be independent of a processor in the computer device, or may be stored in software in a memory in the computer device, so that the processor may invoke and perform the operations of the above modules.
In one embodiment of the present application, there is provided a computer device including a memory and a processor, the memory having stored therein a computer program which when executed by the processor performs the steps of:
Acquiring a current source font picture and a current reference style font picture set; the current reference style font picture set comprises a plurality of current reference style font pictures which are selected randomly and have preset numbers, each current reference style font picture corresponds to different content information, and each current reference style font picture corresponds to the same style information;
inputting the current source font picture and the current reference style font picture set into a preset font generating model for calculation, and generating a target style font picture corresponding to the current source font picture; the content information of the target style font picture is the same as the content information of the current source font picture, and the style information of the target style font picture is the same as the style information of each current reference style font picture.
In one embodiment of the present application, the preset font generating model includes a content encoder, a style content feature adapting module and a decoder, and the processor when executing the computer program further implements the steps of:
inputting a current source font picture and a current reference style font picture set into a preset font generation model, and extracting features of the current source font picture through a content encoder to generate first content features;
Performing feature extraction on the current reference style font picture set through a style encoder to generate a first style feature set; the first style characteristic set comprises first style characteristics corresponding to each current reference style font picture;
the method comprises the steps of carrying out fusion processing on first content features and a first style feature set through a style content feature adaptation module to generate fusion features;
and decoding the fusion characteristics through a decoder to generate the target style font picture.
In one embodiment of the application, the processor when executing the computer program further performs the steps of:
connecting the first content features and the first style feature set through a style content feature adaptation module to generate connection features; the connection features comprise second content features corresponding to the first content features and second style feature sets corresponding to the first style feature sets, and the second style feature sets comprise second style features corresponding to the first style features;
calculating a first weight corresponding to the second content feature and a second weight corresponding to the second style feature set;
and carrying out weighted fusion processing on the second content features and the second style feature set based on the first weight and the second weight to generate fusion features.
In one embodiment of the application, the processor when executing the computer program further performs the steps of:
summing the connection features to generate a combined feature;
carrying out global average pooling treatment on the combined features to generate feature vectors;
compressing the feature vector to generate a compressed feature;
converting the compressed features to generate a first probability distribution corresponding to the second content features and a second probability distribution corresponding to the second style feature set;
based on the first probability distribution and the second probability distribution, a first weight corresponding to the second content feature and a second weight corresponding to the second style feature set are calculated, respectively.
In one embodiment of the present application, the preset number of current reference style font pictures is less than or equal to six.
In one embodiment of the application, the processor when executing the computer program further performs the steps of:
acquiring a training sample set and a target picture; the training sample set comprises historical source font pictures and historical reference style font picture sets of various types, the various types of the historical reference style font picture sets correspond to different style information, the historical reference style font picture sets comprise a plurality of historical reference style font pictures which are randomly selected and have preset numbers, each historical reference style font picture corresponds to different content information, and each historical reference style font picture corresponds to the same style information;
Inputting the training sample set and the target picture into an initial font generating model for training to obtain an output result;
and calculating an overall loss value based on the output result, updating model parameters according to the overall loss value, and generating a preset font generating model based on the updated model parameters.
In one embodiment of the application, the processor when executing the computer program further performs the steps of:
calculating an average absolute error loss value based on the output result and the target picture;
calculating a content adversarial loss value based on the output result and the historical source font picture;

calculating a style adversarial loss value for each type of historical reference style font picture set based on the output result and the historical reference style font picture set;

the overall loss value is generated based on the average absolute error loss value, the content adversarial loss value, and the style adversarial loss value.
The implementation principle and technical effects of the computer device provided by the embodiment of the present application are similar to those of the above method embodiment, and are not described herein.
In one embodiment of the present application, there is provided a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of:
Acquiring a current source font picture and a current reference style font picture set; the current reference style font picture set comprises a plurality of current reference style font pictures which are selected randomly and have preset numbers, each current reference style font picture corresponds to different content information, and each current reference style font picture corresponds to the same style information;
inputting the current source font picture and the current reference style font picture set into a preset font generating model for calculation, and generating a target style font picture corresponding to the current source font picture; the content information of the target style font picture is the same as the content information of the current source font picture, and the style information of the target style font picture is the same as the style information of each current reference style font picture.
In one embodiment of the present application, the preset font generating model comprises a content encoder, a style content feature adapting module and a decoder, and the computer program when executed by the processor further implements the steps of:
inputting a current source font picture and a current reference style font picture set into a preset font generation model, and extracting features of the current source font picture through a content encoder to generate first content features;
Performing feature extraction on the current reference style font picture set through a style encoder to generate a first style feature set; the first style characteristic set comprises first style characteristics corresponding to each current reference style font picture;
the method comprises the steps of carrying out fusion processing on first content features and a first style feature set through a style content feature adaptation module to generate fusion features;
and decoding the fusion characteristics through a decoder to generate the target style font picture.
In one embodiment of the application, the computer program when executed by a processor performs the steps of:
connecting the first content features and the first style feature set through a style content feature adaptation module to generate connection features; the connection features comprise second content features corresponding to the first content features and second style feature sets corresponding to the first style feature sets, and the second style feature sets comprise second style features corresponding to the first style features;
calculating a first weight corresponding to the second content feature and a second weight corresponding to the second style feature set;
and carrying out weighted fusion processing on the second content features and the second style feature set based on the first weight and the second weight to generate fusion features.
In one embodiment of the application, the computer program when executed by a processor performs the steps of:
summing the connection features to generate a combined feature;
carrying out global average pooling treatment on the combined features to generate feature vectors;
compressing the feature vector to generate a compressed feature;
converting the compressed features to generate a first probability distribution corresponding to the second content features and a second probability distribution corresponding to the second style feature set;
based on the first probability distribution and the second probability distribution, a first weight corresponding to the second content feature and a second weight corresponding to the second style feature set are calculated, respectively.
In one embodiment of the present application, the preset number of current reference style font pictures is less than or equal to six.
In one embodiment of the application, the computer program when executed by a processor performs the steps of:
acquiring a training sample set and a target picture; the training sample set comprises historical source font pictures and multiple types of historical reference style font picture sets, wherein different types of historical reference style font picture sets correspond to different style information, each historical reference style font picture set comprises a preset number of randomly selected historical reference style font pictures, each historical reference style font picture in a set corresponds to different content information, and all historical reference style font pictures in a set correspond to the same style information;
inputting the training sample set and the target picture into an initial font generation model for training to obtain an output result;
and calculating an overall loss value based on the output result, updating model parameters according to the overall loss value, and generating a preset font generation model based on the updated model parameters.
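At this level the construction process is an ordinary iterative training loop: forward pass, loss, parameter update. A schematic NumPy sketch, where the single linear `model`, the L1-only stand-in loss, and the finite-difference gradient step are all hypothetical placeholders (the patent specifies neither the optimizer nor the network internals):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical stand-in for the initial font generation model:
# a single linear map from a flattened source picture to a picture.
params = rng.standard_normal((16, 16)) * 0.1

def model(source, params):
    return params @ source                     # forward pass (placeholder)

def overall_loss(output, target):
    return np.abs(output - target).mean()      # L1 stand-in for the full loss

def numeric_grad(source, target, params, eps=1e-5):
    """Finite-difference gradient of the loss w.r.t. params (illustrative only)."""
    grad = np.zeros_like(params)
    base = overall_loss(model(source, params), target)
    for i in range(params.shape[0]):
        for j in range(params.shape[1]):
            p = params.copy()
            p[i, j] += eps
            grad[i, j] = (overall_loss(model(source, p), target) - base) / eps
    return grad

source = rng.standard_normal(16)
target = rng.standard_normal(16)

before = overall_loss(model(source, params), target)
for _ in range(50):                            # training iterations
    params -= 0.05 * numeric_grad(source, target, params)
after = overall_loss(model(source, params), target)
print(after < before)                          # the loss decreases
```

A real implementation would use backpropagation and an adversarial discriminator pair rather than finite differences, but the control flow (forward, loss, parameter update, repeat) is the same.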
In one embodiment of the application, the computer program, when executed by a processor, performs the steps of:
calculating an average absolute error loss value based on the output result and the target picture;
calculating a content adversarial loss value based on the output result and the historical source font picture;
calculating a style adversarial loss value for each type of historical reference style font picture set based on the output result and that historical reference style font picture set;
the overall loss value is generated based on the average absolute error loss value, the content adversarial loss value, and the style adversarial loss value.
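Combining the three terms is typically a weighted sum. A NumPy sketch of the overall loss, where the hinge-style generator objective, the equal weighting coefficients, and the discriminator logits are assumptions the text leaves open:

```python
import numpy as np

rng = np.random.default_rng(1)

def mae_loss(pred, target):
    # Average absolute error between the generated picture and the target picture
    return np.abs(pred - target).mean()

def adv_g_loss(fake_logits):
    # Generator-side hinge adversarial loss (the hinge form is an assumption;
    # the patent does not name the exact GAN objective)
    return -fake_logits.mean()

generated = rng.uniform(size=(1, 64, 64))       # output result (stand-in)
target = rng.uniform(size=(1, 64, 64))          # target picture (stand-in)
content_logits = rng.standard_normal(10)        # content discriminator scores (hypothetical)
style_logits = rng.standard_normal(10)          # style discriminator scores (hypothetical)

lam_mae, lam_content, lam_style = 1.0, 1.0, 1.0  # assumed equal weighting coefficients
overall = (lam_mae * mae_loss(generated, target)
           + lam_content * adv_g_loss(content_logits)
           + lam_style * adv_g_loss(style_logits))
print(np.isfinite(overall))
```

In practice the weighting coefficients would be tuned so that the reconstruction term and the two adversarial terms are balanced during training.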
The computer-readable storage medium provided in this embodiment operates on principles similar to those of the method embodiments above and achieves similar technical effects, which are not described again here.
Those skilled in the art will appreciate that all or part of the above-described methods may be implemented by a computer program stored on a non-transitory computer-readable storage medium which, when executed, may include the steps of the method embodiments described above. Any reference to memory, storage, a database, or other medium used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM), among others.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It is to be understood that the present disclosure is not limited to the precise arrangements and instrumentalities shown in the drawings, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (8)

1. A font generation method based on style information and content information adaptation, the method comprising:
acquiring a current source font picture and a current reference style font picture set; the current reference style font picture set comprises a preset number of randomly selected current reference style font pictures, each current reference style font picture corresponds to different content information, and all current reference style font pictures correspond to the same style information;
inputting the current source font picture and the current reference style font picture set into a preset font generation model, and extracting features of the current source font picture through a content encoder in the preset font generation model to generate first content features;
extracting features of the current reference style font picture set through a style encoder in the preset font generation model to generate a first style feature set; the first style feature set comprises a first style feature corresponding to each current reference style font picture;
performing a connection operation on the first content features and the first style feature set through a style content feature adaptation module in the preset font generation model to generate connection features; the connection features comprise a second content feature corresponding to the first content feature and a second style feature set corresponding to the first style feature set, and the second style feature set comprises second style features corresponding to the first style features;
calculating a first weight corresponding to the second content feature and a second weight corresponding to the second style feature set;
based on the first weight and the second weight, carrying out weighted fusion processing on the second content feature and the second style feature set to generate fusion features;
decoding the fusion features through a decoder in the preset font generation model to generate a target style font picture corresponding to the current source font picture; the content information of the target style font picture is the same as the content information of the current source font picture, and the style information of the target style font picture is the same as the style information of each current reference style font picture.
2. The method of claim 1, wherein the computing a first weight corresponding to the second content feature and a second weight corresponding to the second set of style features comprises:
summing the connection features to generate a combined feature;
carrying out global average pooling processing on the combined features to generate a feature vector;
compressing the feature vector to generate a compressed feature;
converting the compressed feature to generate a first probability distribution corresponding to the second content feature and a second probability distribution corresponding to the second style feature set;
based on the first probability distribution and the second probability distribution, a first weight corresponding to the second content feature and a second weight corresponding to the second style feature set are calculated, respectively.
3. The method of claim 1 or 2, wherein the preset number of current reference style font pictures is less than or equal to six.
4. The method according to claim 1 or 2, wherein the construction process of the preset font generation model comprises:
acquiring a training sample set and a target picture; the training sample set comprises historical source font pictures and multiple types of historical reference style font picture sets, wherein different types of historical reference style font picture sets correspond to different style information, each historical reference style font picture set comprises a preset number of randomly selected historical reference style font pictures, each historical reference style font picture in a set corresponds to different content information, and all historical reference style font pictures in a set correspond to the same style information;
inputting the training sample set and the target picture into an initial font generation model for training to obtain an output result;
and calculating an overall loss value based on the output result, updating model parameters according to the overall loss value, and generating the preset font generation model based on the updated model parameters.
5. The method of claim 4, wherein calculating an overall loss value based on the output result comprises:
calculating an average absolute error loss value based on the output result and the target picture;
calculating a content adversarial loss value based on the output result and the historical source font picture;
calculating a style adversarial loss value for each type of historical reference style font picture set based on the output result and that historical reference style font picture set;
the overall loss value is generated based on the average absolute error loss value, the content adversarial loss value, and the style adversarial loss value.
6. A font generating device adapted based on style information and content information, the device comprising:
an acquisition module, used for acquiring a current source font picture and a current reference style font picture set; the current reference style font picture set comprises a preset number of randomly selected current reference style font pictures, each current reference style font picture corresponds to different content information, and all current reference style font pictures correspond to the same style information;
a generation module, used for inputting the current source font picture and the current reference style font picture set into a preset font generation model, and extracting features of the current source font picture through a content encoder in the preset font generation model to generate first content features; extracting features of the current reference style font picture set through a style encoder in the preset font generation model to generate a first style feature set; the first style feature set comprises a first style feature corresponding to each current reference style font picture; performing a connection operation on the first content features and the first style feature set through a style content feature adaptation module in the preset font generation model to generate connection features; the connection features comprise a second content feature corresponding to the first content feature and a second style feature set corresponding to the first style feature set, and the second style feature set comprises second style features corresponding to the first style features; calculating a first weight corresponding to the second content feature and a second weight corresponding to the second style feature set; based on the first weight and the second weight, carrying out weighted fusion processing on the second content feature and the second style feature set to generate fusion features; decoding the fusion features through a decoder in the preset font generation model to generate a target style font picture corresponding to the current source font picture; the content information of the target style font picture is the same as the content information of the current source font picture, and the style information of the target style font picture is the same as the style information of each current reference style font picture.
7. An electronic device comprising a processor and a memory, wherein the memory stores at least one instruction, at least one program, a set of codes, or a set of instructions, the at least one instruction, the at least one program, the set of codes, or the set of instructions being loaded and executed by the processor to implement the font generation method based on style information and content information adaptation as claimed in any one of claims 1-5.
8. A computer-readable storage medium, wherein at least one instruction, at least one program, a set of codes, or a set of instructions is stored in the storage medium, the at least one instruction, the at least one program, the set of codes, or the set of instructions being loaded and executed by a processor to implement the font generation method based on style information and content information adaptation as claimed in any one of claims 1-5.
CN202311503006.6A 2023-11-13 2023-11-13 Font generation method and device based on style information and content information adaptation Pending CN117236284A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311503006.6A CN117236284A (en) 2023-11-13 2023-11-13 Font generation method and device based on style information and content information adaptation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311503006.6A CN117236284A (en) 2023-11-13 2023-11-13 Font generation method and device based on style information and content information adaptation

Publications (1)

Publication Number Publication Date
CN117236284A true CN117236284A (en) 2023-12-15

Family

ID=89096958

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311503006.6A Pending CN117236284A (en) 2023-11-13 2023-11-13 Font generation method and device based on style information and content information adaptation

Country Status (1)

Country Link
CN (1) CN117236284A (en)

Patent Citations (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109255772A (en) * 2018-08-27 2019-01-22 平安科技(深圳)有限公司 License plate image generation method, device, equipment and medium based on Style Transfer
CN111553246A (en) * 2020-04-15 2020-08-18 山东大学 Chinese character style migration method and system based on multi-task antagonistic learning network
CN113052143A (en) * 2021-04-26 2021-06-29 中国建设银行股份有限公司 Handwritten digit generation method and device
CN113962192A (en) * 2021-04-28 2022-01-21 江西师范大学 Method and device for generating Chinese character font generation model and Chinese character font generation method and device
CN113393370A (en) * 2021-06-02 2021-09-14 西北大学 Method, system and intelligent terminal for migrating Chinese calligraphy character and image styles
US20220147695A1 (en) * 2021-09-09 2022-05-12 Beijing Baidu Netcom Science Technology Co., Ltd. Model training method and apparatus, font library establishment method and apparatus, and storage medium
CN115828848A (en) * 2021-09-15 2023-03-21 浙江大学 Font generation model training method, device, equipment and storage medium
CN114139495A (en) * 2021-11-29 2022-03-04 合肥高维数据技术有限公司 Chinese font style migration method based on adaptive generation countermeasure network
CN114418834A (en) * 2021-12-29 2022-04-29 北京字跳网络技术有限公司 Character generation method and device, electronic equipment and storage medium
WO2023125361A1 (en) * 2021-12-29 2023-07-06 北京字跳网络技术有限公司 Character generation method and apparatus, electronic device, and storage medium
CN114820871A (en) * 2022-06-29 2022-07-29 北京百度网讯科技有限公司 Font generation method, model training method, device, equipment and medium
CN115311130A (en) * 2022-07-16 2022-11-08 西北大学 Method, system and terminal for migrating styles of Chinese, calligraphy and digital images in multiple lattices
CN115222845A (en) * 2022-08-01 2022-10-21 北京元亦科技有限公司 Method and device for generating style font picture, electronic equipment and medium
CN116152368A (en) * 2022-12-23 2023-05-23 浙江大学 Font generation method, training method, device and equipment of font generation model
CN116433474A (en) * 2023-05-09 2023-07-14 北京华文众合科技有限公司 Model training method, font migration device and medium
CN116681581A (en) * 2023-05-17 2023-09-01 网易(杭州)网络有限公司 Font generation method and device, electronic equipment and readable storage medium
CN116416628A (en) * 2023-06-06 2023-07-11 广州宏途数字科技有限公司 Handwriting font recognition based method and recognition system
CN116469111A (en) * 2023-06-08 2023-07-21 江西师范大学 Character generation model training method and target character generation method

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
LIU Minghao: "Image style transfer based on VGG-16", Practical Electronics, no. 12 *
LI Jin; GAO Jing; CHEN Junjie; WANG Yongjun: "A Mongolian font style transfer model based on conditional generative adversarial networks", Journal of Chinese Information Processing, no. 04 *
WANG Xiaohong; LU Hui; MA Xiangcai: "Stylized calligraphy image generation based on generative adversarial networks", Packaging Engineering, no. 11 *
BAI Haijuan; ZHOU Wei; WANG Cunrui; WANG Lei: "Font style transfer method based on generative adversarial networks", Journal of Dalian Minzu University, no. 03 *

Similar Documents

Publication Publication Date Title
CN112418292B (en) Image quality evaluation method, device, computer equipment and storage medium
CN109583509B (en) Data generation method and device and electronic equipment
CN112270686B (en) Image segmentation model training method, image segmentation device and electronic equipment
CN113435522A (en) Image classification method, device, equipment and storage medium
CN111274999B (en) Data processing method, image processing device and electronic equipment
CN111739115B (en) Unsupervised human body posture migration method, system and device based on cycle consistency
WO2021223738A1 (en) Method, apparatus and device for updating model parameter, and storage medium
CN114067057A (en) Human body reconstruction method, model and device based on attention mechanism
CN112560964A (en) Method and system for training Chinese herbal medicine pest and disease identification model based on semi-supervised learning
CN112801215A (en) Image processing model search, image processing method, image processing apparatus, and storage medium
CN113221645B (en) Target model training method, face image generating method and related device
CN115050064A (en) Face living body detection method, device, equipment and medium
CN113139462A (en) Unsupervised face image quality evaluation method, electronic device and storage medium
CN114202456A (en) Image generation method, image generation device, electronic equipment and storage medium
CN113192175A (en) Model training method and device, computer equipment and readable storage medium
CN114547267A (en) Intelligent question-answering model generation method and device, computing equipment and storage medium
CN114202615A (en) Facial expression reconstruction method, device, equipment and storage medium
CN110415341B (en) Three-dimensional face model generation method and device, electronic equipment and medium
CN115049769A (en) Character animation generation method and device, computer equipment and storage medium
CN116993864A (en) Image generation method and device, electronic equipment and storage medium
CN113902848A (en) Object reconstruction method and device, electronic equipment and storage medium
CN117236284A (en) Font generation method and device based on style information and content information adaptation
Liu et al. Ranking-preserving cross-source learning for image retargeting quality assessment
CN104317892B (en) The temporal aspect processing method and processing device of Portable executable file
US20210224947A1 (en) Computer Vision Systems and Methods for Diverse Image-to-Image Translation Via Disentangled Representations

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20231215