CN110176050B - Aesthetic optimization method for text generated image


Info

Publication number
CN110176050B
CN110176050B (application CN201910464250.3A)
Authority
CN
China
Prior art keywords
model, aesthetic, training, text, generated
Prior art date
Legal status
Active
Application number
CN201910464250.3A
Other languages
Chinese (zh)
Other versions
CN110176050A (en)
Inventor
徐天宇 (Xu Tianyu)
王智 (Wang Zhi)
Current Assignee
Shenzhen Graduate School Tsinghua University
Original Assignee
Shenzhen Graduate School Tsinghua University
Priority date
Filing date
Publication date
Application filed by Shenzhen Graduate School Tsinghua University
Priority to CN201910464250.3A priority Critical patent/CN110176050B/en
Publication of CN110176050A publication Critical patent/CN110176050A/en
Application granted granted Critical
Publication of CN110176050B publication Critical patent/CN110176050B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G06N 3/084 Backpropagation, e.g. using gradient descent
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 11/00 2D [Two Dimensional] image generation
    • G06T 11/60 Editing figures and text; Combining figures or text
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T 10/00 Road transport of goods or passengers
    • Y02T 10/10 Internal combustion engine [ICE] based vehicles
    • Y02T 10/40 Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Electrically Operated Instructional Devices (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides an aesthetic optimization method for text-generated images. A StackGAN++ (Stacked Generative Adversarial Network++) is selected as the base text-to-image model, and an aesthetic judgment model is integrated into its training stage: the judgment model scores the intermediate results produced during training, and the resulting scores help guide the training of the generative model. The trained text-to-image model is then used to generate images from text. By integrating the aesthetic judgment model into the text-to-image GAN, the invention supplies a new loss for GAN training that guides the GAN to improve the aesthetic quality of the generated images. The scheme only increases the cost of model training; the trained generative model is structurally identical to the original and differs only in its parameters, so runtime efficiency is unchanged while the generated results change accordingly, yielding results of higher aesthetic quality.

Description

Aesthetic optimization method for text generated image
Technical Field
The present invention relates to the fields of deep learning and computer vision, and more particularly to optimizing the aesthetic quality of text-generated images.
Background
In the field of computer vision, generating an image from a given piece of text is a popular research topic; among these efforts, work that tackles text-to-image generation with generative adversarial networks (GANs) has attracted the most attention, and such models can generate relatively high-quality images of size 256 x 256. However, research in this field currently focuses mainly on improving the resolution (size) and diversity of the generated images and the complexity of the text the models can handle, while little attention is paid to improving the aesthetic quality of the generated images. In practical application scenarios, generating better-looking pictures improves the user experience and thereby the quality of the related applications.
The task of improving the aesthetic quality of an image can also be accomplished with another line of computer-vision research, namely image enhancement; but that approach essentially chains two processes in series: a generative model first produces an image, and the image is then fed into an image-enhancement model to improve its quality, so the overall complexity is the combination of the complexities of the two tasks.
Disclosure of Invention
The object of the present invention is to solve the above problems of the prior art by proposing an aesthetic optimization method for text-generated images.
In order to solve the technical problems, the invention provides an aesthetic optimization method for text-generated images: a StackGAN++ (Stacked Generative Adversarial Network++) is selected as the base text-to-image model; an aesthetic judgment model is integrated into the training stage of the text-to-image model; the judgment model scores the intermediate results generated during training; and the resulting scores help guide the training of the generative model. The trained text-to-image model is then used to generate images from text.
In some embodiments of the present invention, the following technical features are further included:
The aesthetic judgment model is integrated into the training stage of the text-to-image model as follows: a loss function, the aesthetic loss, is defined from the score given by the judgment model, and this aesthetic loss is added as one component of the generative model's loss function, so that the training process is guided toward generating results of higher aesthetic quality.
The StackGAN++ is trained by mini-batch gradient descent: the training data are input to the StackGAN++ model batch by batch.
The model is trained for a number of epochs, each epoch comprising a number of Steps; one Step is the training process over the data of one batch, and one epoch is defined as one training pass over all input data.
In each Step, the steps are as follows: s1, after three groups of image results are obtained in the generation stage, taking a group with the highest resolution, and introducing an aesthetic degree judgment model to carry out aesthetic degree judgment on the group of image results to obtain aesthetic degree scores corresponding to the group of images; s2, calculating aesthetic losses of a plurality of aesthetic scores in the group of images respectively, and finally taking the average value of the aesthetic losses as the aesthetic loss of the batch, and recording the aesthetic loss as L aes The method comprises the steps of carrying out a first treatment on the surface of the S3, with L G +β·L aes The new loss of the generator is carried out gradient return to finish a Step training process, wherein beta is an aesthetic coefficient and L G Loss for the original stackgan++ generator.
The score interval given by the aesthetic degree judgment model is [0,1].
For the score given by the aesthetic judgment model, if it falls outside this interval it is clipped to just inside the nearest boundary: a score of 1 or more is set to 0.9999, and a score of 0 or less is set to 0.0001.
The aesthetic loss is: the Euclidean distance of the aesthetic score from the upper threshold, i.e., the L2 distance.
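Written out under the definitions above (a restatement for clarity, not an additional limitation), for a batch of N generated images with clipped scores s_i in (0,1), and reading the scalar L2 distance as the squared difference (a common implementation choice; the absolute difference would serve the same purpose):

    L_aes = (1/N) · Σ_{i=1}^{N} (1.0 − s_i)²,

and the generator is trained with the combined loss L_G + β · L_aes.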
Different β parameters are selected, and under each the model is trained for a number of epochs.
The present invention also includes a computer medium storing a computer program executable to implement the above-described method.
Compared with the prior art, the invention has the following beneficial effects: the aesthetic quality of the generated results is improved while the overall quality of the text-to-image model's results is not reduced; and this improvement is achieved without adding extra operation steps at generation time.
Drawings
FIG. 1 is a schematic flowchart of one Step during training according to an embodiment of the present invention.
FIG. 2 is a flow chart of an embodiment of the present invention.
Detailed Description
In order to make the technical problems to be solved, the technical solutions, and the beneficial effects of the embodiments of the present invention clearer, the invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are for purposes of illustration only and are not intended to limit the scope of the invention.
The embodiments of the invention are based on the following consideration: if the generative model can automatically judge the aesthetic quality of its results during training and adjust itself according to that judgment, it will be more efficient than pipelined image enhancement (i.e., first generating an image with the text-to-image model and then enhancing it with an image-enhancement model; this also improves the aesthetic quality of the text-to-image model's results, but it is the more direct, traditional approach, and the whole pipeline runs less efficiently than the present scheme). Judging the aesthetic quality of a generated image corresponds to another active research topic in computer vision: automatic aesthetic assessment. Rating the aesthetics of pictures is a highly subjective task, but as the number of pictures on the internet grows year by year, the demand for automatic, computer-based aesthetic judgment keeps rising, particularly in the field of image retrieval. Driven by this demand, automatic aesthetic judgment has become a popular research direction in computer vision, and a number of fairly successful aesthetic judgment models have been developed to date. By the form of their output, these models can be classified into classification models (discrete labels, e.g., a high-quality/low-quality binary) and regression models (continuous scores within some interval).
The following embodiments integrate the aesthetic judgment model into a text-to-image GAN, using the judgment model to supply a new loss for GAN training that guides the GAN to improve the aesthetic quality of its output. The scheme only increases the cost of model training; the trained generative model is structurally identical to the original, differing only in its parameters, so runtime efficiency is unchanged while the generated results change accordingly. The key points can be summarized as follows: 1. Select StackGAN++ (Stacked Generative Adversarial Network++) as the base text-to-image model, integrate an aesthetic judgment model into its training stage, score the intermediate results produced during training with the judgment model, and use the scores to help guide the training of the generative model. 2. Define a new loss function, the aesthetic loss, from the score given by the aesthetic judgment model, and add it as one component of the generative model's loss function, so that training is guided toward generating results of higher aesthetic quality.
Example 1
Training of the StackGAN++ uses mini-batch gradient descent: the training data are input to the StackGAN++ model batch by batch. That is, the input data are organized as follows: all input data are divided into batches, one batch is taken as input at a time, and the batch size is set to 24.
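As an illustrative sketch only (the patent specifies no code; text_image_dataset, num_epochs, and train_step are hypothetical names), this batch organization could be expressed in PyTorch as:

    from torch.utils.data import DataLoader

    # Mini-batch organization: all input data divided into batches of 24,
    # one batch consumed per Step, one full pass over the data per epoch.
    loader = DataLoader(text_image_dataset, batch_size=24,
                        shuffle=True, drop_last=True)

    for epoch in range(num_epochs):   # one epoch = one pass over all input data
        for batch in loader:          # one Step = training on one batch
            train_step(batch)         # generation, discrimination, parameter update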
A Step in training is defined as the training process over the data of one batch, and consists of the following three stages in sequence:
the generation stage: a batch of input data is passed through a generator (generator) to generate three sets of image results of sizes 64 x 64, 128 x 128, 256 x 256, respectively.
The basic composition of a generative adversarial network (GAN) comprises a generator and a discriminator. The generator produces an output from its input data (usually a random Gaussian noise vector of a certain length; in the present invention, because images are generated from text, the input is the concatenation of the random noise vector and a processed text vector). In essence the generator fits a data distribution, and the goal of adversarial training is to bring this fitted distribution as close as possible to the real data distribution, so that the discriminator cannot tell whether a given result comes from the generator or from the real data.
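For instance (a sketch; the vector dimensions are illustrative and embed_text is a hypothetical text encoder, neither taken from the patent), the generator input described here is simply the concatenation of the two vectors:

    import torch

    noise = torch.randn(24, 100)      # random Gaussian noise, one row per image in the batch
    text_vec = embed_text(captions)   # processed text vector, e.g. shape (24, 128)
    gen_input = torch.cat([noise, text_vec], dim=1)   # concatenation fed to the generator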
The discrimination stage: the image results are input, by size, into the corresponding discriminators to obtain the corresponding discrimination results.
In a generative adversarial network (GAN), the discriminator receives the generator's outputs and training-data samples as inputs; its task is to distinguish whether a given input comes from the generator or from the training data, i.e., whether the data is real or fake (the generator can be regarded as counterfeiting the real data distribution). The training process of a GAN is the adversarial game between the generator and the discriminator.
The parameter-updating stage: the losses of the generator and the discriminator are computed from the discrimination results, gradients are propagated back, and the respective parameters are updated.
This embodiment is an optimization of the StackGAN++ generator; the discriminator part is the same as in StackGAN++. L_G in FIG. 1 is the loss of the generator in StackGAN++; the loss of the discriminator is not relevant here because no changes are made to it.
Propagating the gradient backward is called backpropagation, the in-network parameter-update method used by neural networks. The generator and the discriminator can each be regarded as a neural network, so the gradient return here refers to backpropagating gradients within each network using the respective losses described above.
Completing this sequence of stages constitutes one Step. One epoch of training is defined as one training pass over all input data.
The implementation of this example is as follows:
s1, in each Step of training, after three groups of image results are obtained in the generation stage, taking a group with the largest resolution, namely 24 images with 256 x 256 size, introducing an aesthetic degree judgment model to carry out aesthetic degree judgment on the images, and obtaining the aesthetic degree score corresponding to the group of images. The aesthetic degree judgment model selected in this embodiment gives a score interval of [0,1].
This example selects the model proposed in the paper "Photo Aesthetics Ranking Network with Attributes and Content Adaptation" by S. Kong et al. The authors do not name their model in the paper; another contribution of the paper is a picture database named AADB, so for ease of reference the model is referred to here as the "AADB aesthetic model". In fact the aesthetic judgment model can be chosen freely in this embodiment, because all that is really needed is an index of the picture's aesthetic quality, and in principle the way that index is obtained (i.e., the choice of model) is not restricted (a particular model may be preferred on grounds of efficiency, convenience, and so on).
Meanwhile, because the computed score can fall outside the interval, the obtained score must be checked, and if it exceeds the interval it is clipped to just inside the nearest boundary. In this embodiment a score of 1 or more is set to 0.9999 and a score of 0 or less is set to 0.0001, keeping the score within its meaningful range.
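A minimal sketch of this clipping rule (assuming the scores arrive as a PyTorch tensor; the function name is ours):

    def clip_aesthetic_scores(scores):
        # Clamp all scores into [0.0001, 0.9999]; in particular, scores of 1 or
        # more become 0.9999 and scores of 0 or less become 0.0001.
        return scores.clamp(min=0.0001, max=0.9999)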
s2, calculating the L2 distance between the 24 aesthetic scores in the group and 1.0 as aesthetic loss, taking the average value of the 24 aesthetic losses as the aesthetic loss of the batch, and recording as L aes
The model selected in this embodiment gives aesthetic scores bounded above by 1.0, and a higher score represents higher aesthetic quality, so the aesthetic loss of a picture is defined as the Euclidean (L2) distance between its aesthetic score and the upper bound 1.0. The greater the aesthetic score, the smaller the corresponding aesthetic loss.
S3: define the aesthetic coefficient β and denote the loss of the original StackGAN++ generator by L_G; then take L_G + β·L_aes as the new generator loss for the gradient return (gradients are propagated back within the generator and its parameters are updated), completing one Step of training.
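Pulling S1-S3 together, a PyTorch-flavored sketch of the modified generator update might look as follows; stackgan_generator_loss and aesthetic_model stand in for components the embodiment names but does not spell out, and clip_aesthetic_scores is the helper sketched above:

    def generator_step(noise, text_embedding, generator, discriminators,
                       aesthetic_model, optimizer_g, beta):
        # Generation stage: three image scales; S1 keeps only the highest resolution.
        imgs_64, imgs_128, imgs_256 = generator(noise, text_embedding)

        # Original StackGAN++ generator loss L_G (hypothetical helper).
        loss_g = stackgan_generator_loss(discriminators, imgs_64, imgs_128, imgs_256)

        # S1: aesthetic scores of the 256 x 256 group, clipped into (0, 1).
        scores = clip_aesthetic_scores(aesthetic_model(imgs_256))

        # S2: per-image L2 distance to the upper bound 1.0, averaged over the batch.
        loss_aes = ((1.0 - scores) ** 2).mean()

        # S3: gradient return with the combined loss L_G + beta * L_aes.
        total_loss = loss_g + beta * loss_aes
        optimizer_g.zero_grad()
        total_loss.backward()
        optimizer_g.step()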
The aesthetic coefficient β is selected through repeated experiments according to the quality of the results. Step S3 above shows where the coefficient enters the training, and step S4 below describes how it is determined. For a custom loss (unlike the traditional adversarial loss, i.e. L_G, a custom loss is a loss index defined by the model designer for a particular purpose), there is no clear-cut criterion for choosing the coefficient. In the experiments of this embodiment, the magnitudes of L_G and L_aes are considered first and the coefficient is chosen to bring the two to similar magnitude; the coefficient starts at 1 (i.e., the two losses are added directly) and is then adjusted according to the training results.
S4: select different β parameters and train the model for 600 epochs under each; choose the best model according to the Inception Score (IS) and the aesthetic model's judgment of the results; compare the best model with the original model in terms of both Inception Score and the overall aesthetic quality of the generated results; and if the result is weaker than the original model, adjust β and repeat the training step.
The best model is selected as follows: using the StackGAN++ test dataset, 29280 different images can be generated, and the average aesthetic score of these images is computed as the aesthetic index. When selecting the model, first require that its Inception Score is not lower than that of the original model, i.e., StackGAN++; among those, the model with the highest aesthetic index is the best model.
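As a sketch of this selection rule (inception_score and mean_aesthetic_score are assumed helper functions that evaluate a checkpoint on the 29280 generated test images; they are not defined by the patent):

    def select_best_model(checkpoints, baseline_is):
        # Keep only checkpoints whose Inception Score is not below the original
        # StackGAN++ baseline, then take the one with the highest mean aesthetic score.
        candidates = [c for c in checkpoints if inception_score(c) >= baseline_is]
        return max(candidates, key=mean_aesthetic_score)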
The IS index is one of the most widely used indices for evaluating the quality of GAN image-generation results; the higher the value, the better the result. The name derives from Inception Net (proposed by Google), which is used to compute the index.
The advantages of the above-described embodiments of the invention are:
1. The optimization of this scheme acts directly on the generation process, which is more efficient at run time than generating first and enhancing afterwards;
2. The aesthetic quality of the optimized model's overall generation results improves (by about 2.7%), and, evaluated with the Inception Score (IS), the quality index popular in the image-generation field, its IS score also improves correspondingly (by about 3.1%).
The invention can be used in the following user scenarios:
1. The aesthetically optimized text-to-image model realized by this scheme can be applied in advertisement design, helping graphic-advertisement designers generate better-looking, more attractive background or accompanying pictures, reducing the labor spent on picture design.
2. In interior decoration, furniture customization, and similar settings, a user can rely on the text-to-image model: by entering a description of the decoration layout or furniture style, the user obtains corresponding sample pictures for reference. After aesthetic optimization the user is more likely to obtain reference samples of higher aesthetic quality, helping the user screen and choose more effectively.
The foregoing is a further detailed description of the invention in connection with specific preferred embodiments, and the invention is not to be regarded as limited to these specific embodiments. It will be apparent to those skilled in the art that several equivalent substitutions and obvious modifications can be made without departing from the spirit of the invention, and these should be considered within the scope of the invention.

Claims (6)

1. An aesthetic optimization method for text-generated images, characterized in that: a StackGAN++, i.e., a Stacked Generative Adversarial Network++, is selected as the base text-to-image model; an aesthetic judgment model is integrated into the training stage of the text-to-image model; the intermediate results generated during training are scored by the judgment model, and the obtained scores are used to help guide the training of the generative model; the obtained text-to-image model is used to generate images from text; the aesthetic judgment model is integrated into the training stage of the text-to-image model by defining a loss function, the aesthetic loss, based on the score given by the judgment model, and adding the aesthetic loss as one component of the generative model's loss function, so that the training process is guided toward generating results of higher aesthetic quality;
training the StackGAN++ by mini-batch gradient descent, the whole training data being input to the StackGAN++ model in units of batches;
training the model for a plurality of epochs, each epoch comprising a plurality of Step trainings; one Step being the training process over the data of one batch, and one epoch being defined as one training pass over all input data;
in each Step, the procedure is as follows:
s1, after three groups of image results are obtained in the generation stage, taking a group with the highest resolution, and introducing an aesthetic degree judgment model to carry out aesthetic degree judgment on the group of image results to obtain aesthetic degree scores corresponding to the group of images;
s2, pairThe aesthetic scores in the group are calculated as their aesthetic losses, and the average value of the aesthetic losses is taken as the aesthetic loss of the one batch, denoted as L aes
S3, taking L_G + β·L_aes as the new generator loss for the gradient return, completing one Step of training, where β is the aesthetic coefficient and L_G is the loss of the original StackGAN++ generator.
2. The aesthetic optimization method of text-generated images of claim 1, wherein the score interval given by the aesthetic judgment model is [0,1].
3. The aesthetic optimization method of text-generated images according to claim 1, wherein, for the score given by the aesthetic judgment model, if it falls outside the interval it is clipped to just inside the nearest boundary: a score of 1 or more is set to 0.9999 and a score of 0 or less is set to 0.0001.
4. The aesthetic optimization method of text-generated images of claim 2, wherein the aesthetic loss is the Euclidean distance of the aesthetic score from the upper threshold, i.e., the L2 distance.
5. The aesthetic optimization method of text-generated images of claim 1, wherein different β parameters are selected and the model is trained for a plurality of epochs.
6. A computer medium, characterized in that it stores a computer program executable to implement the method of any one of claims 1 to 5.
CN201910464250.3A 2019-05-30 2019-05-30 Aesthetic optimization method for text generated image Active CN110176050B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910464250.3A CN110176050B (en) 2019-05-30 2019-05-30 Aesthetic optimization method for text generated image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910464250.3A CN110176050B (en) 2019-05-30 2019-05-30 Aesthetic optimization method for text generated image

Publications (2)

Publication Number Publication Date
CN110176050A CN110176050A (en) 2019-08-27
CN110176050B true CN110176050B (en) 2023-05-09

Family

ID=67696654

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910464250.3A Active CN110176050B (en) 2019-05-30 2019-05-30 Aesthetic optimization method for text generated image

Country Status (1)

Country Link
CN (1) CN110176050B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110781633A (en) * 2019-10-30 2020-02-11 广东博智林机器人有限公司 Image-text design quality detection method, device and system based on deep learning model
CN111968193B * 2020-07-28 2023-11-21 西安工程大学 Text image generation method based on StackGAN
CN113642673B (en) * 2021-08-31 2023-12-22 北京字跳网络技术有限公司 Image generation method, device, equipment and storage medium

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107590786A (en) * 2017-09-08 2018-01-16 深圳市唯特视科技有限公司 A kind of image enchancing method based on confrontation learning network
CN107644006B (en) * 2017-09-29 2020-04-03 北京大学 Automatic generation method of handwritten Chinese character library based on deep neural network
CN107610123A (en) * 2017-10-11 2018-01-19 中共中央办公厅电子科技学院 A kind of image aesthetic quality evaluation method based on depth convolutional neural networks
CN108334497A (en) * 2018-02-06 2018-07-27 北京航空航天大学 The method and apparatus for automatically generating text
CN108648188B (en) * 2018-05-15 2022-02-11 南京邮电大学 No-reference image quality evaluation method based on generation countermeasure network
CN109543159B (en) * 2018-11-12 2023-03-24 南京德磐信息科技有限公司 Text image generation method and device

Also Published As

Publication number Publication date
CN110176050A (en) 2019-08-27


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant