CN109815465B - Deep learning-based poster generation method and device and computer equipment - Google Patents

Deep learning-based poster generation method and device and computer equipment

Info

Publication number
CN109815465B
CN109815465B
Authority
CN
China
Prior art keywords
poster
model
evaluation model
evaluation
dimensional code
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811556085.6A
Other languages
Chinese (zh)
Other versions
CN109815465A (en)
Inventor
李惠梅
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ping An Technology Shenzhen Co Ltd
Original Assignee
Ping An Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ping An Technology Shenzhen Co Ltd filed Critical Ping An Technology Shenzhen Co Ltd
Priority to CN201811556085.6A priority Critical patent/CN109815465B/en
Publication of CN109815465A publication Critical patent/CN109815465A/en
Application granted granted Critical
Publication of CN109815465B publication Critical patent/CN109815465B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Image Analysis (AREA)

Abstract

The application discloses a deep learning-based poster generation method, a deep learning-based poster generation device, computer equipment and a storage medium, wherein the deep learning-based poster generation method comprises the following steps: obtaining unstructured data provided by a user for generating a poster; fitting the unstructured data into a plurality of preset poster templates to generate a plurality of preliminary posters; inputting the plurality of preliminary posters into a preset poster evaluation model trained based on a convolutional neural network model for evaluation, the poster evaluation model being trained on sample data consisting of existing posters and the manual scores associated with those posters; outputting the evaluation scores of the plurality of preliminary posters; and adding a two-dimensional code linking to the user's shared content to the preliminary poster with the highest evaluation score to obtain the final poster. The scan rate of the two-dimensional code is thereby improved.

Description

Deep learning-based poster generation method and device and computer equipment
Technical Field
The present application relates to the field of computers, and in particular, to a deep learning-based poster generating method, apparatus, computer device, and storage medium.
Background
In network promotion, a link that promotes a store or product, or that invites registration, is typically sent to a customer. This promotion mode cannot present the promoted content intuitively, so the customer only learns the corresponding content after clicking the link. Because the customer must take extra steps and wait for the page to load, the promoted link is often abandoned, lowering its click-through rate. The prior art therefore lacks a technical scheme that displays the linked content directly in order to improve the click-through rate.
Disclosure of Invention
The main purpose of the application is to provide a deep learning-based poster generation method, device, computer equipment and storage medium, aiming to increase the readership and click-through rate of promoted products.
In order to achieve the above object, the present application provides a deep learning-based poster generation method, comprising the steps of:
obtaining unstructured data provided by a user and used for generating a poster;
fitting the unstructured data into a plurality of preset poster templates to generate a plurality of preliminary posters;
inputting the plurality of preliminary posters into a preset poster evaluation model which is trained based on a convolutional neural network model for operation; the poster evaluation model is trained based on sample data consisting of existing posters and manual scores associated with the existing posters;
outputting evaluation scores of the plurality of preliminary posters;
and adding a two-dimensional code linked to the shared content of the user into the preliminary poster with the highest evaluation score to obtain a final poster.
Further, the method for acquiring the poster evaluation model comprises the following steps:
training a first evaluation model based on a VGGNet model, taking the images in the open-source image quality assessment database TID2013 and the mean human opinion scores associated with those images as first sample data;
taking sample data consisting of the existing poster and the manual score associated with the existing poster as second sample data;
and inputting the second sample data into the first evaluation model for training to obtain the poster evaluation model.
Further, the method for acquiring the poster evaluation model comprises the following steps:
invoking the weight parameters of each layer of an image evaluation model already trained on the same VGGNet architecture;
initializing each layer of a fresh VGGNet model with those weight parameters to obtain a second evaluation model;
taking sample data consisting of the existing posters and the manual scores associated with those posters as third sample data;
and inputting the third sample data into the second evaluation model for training to obtain the poster evaluation model.
Further, the step of adding the two-dimensional code linked to the shared content of the user to the preliminary poster with the highest evaluation score to obtain a final poster includes:
generating m×n×o copies of the preliminary poster with the highest evaluation score, each with a two-dimensional code added, wherein m is the number of color categories of the two-dimensional code, n is the number of shapes of the two-dimensional code, and o is the number of positions of the two-dimensional code in the poster;
inputting the m×n×o preliminary posters into a preset two-dimensional code evaluation model trained based on a convolutional neural network model for evaluation; the two-dimensional code evaluation model is trained on sample data consisting of existing posters with two-dimensional codes added and the manual scores associated with those posters;
and outputting the evaluation scores of the m×n×o preliminary posters with two-dimensional codes added, and taking the one with the highest evaluation score as the final poster.
Further, the method for acquiring the two-dimensional code evaluation model comprises the following steps:
training a third evaluation model based on a VGGNet model, taking the images in the open-source image quality assessment database TID2013 and the mean human opinion scores associated with those images as fourth sample data;
taking sample data consisting of posters with two-dimensional codes added and the manual scores associated with those posters as fifth sample data;
and inputting the fifth sample data into the third evaluation model for training to obtain the two-dimensional code evaluation model.
Further, the method for acquiring the two-dimensional code evaluation model comprises the following steps:
invoking the weight parameters of each layer of an evaluation model already trained on the same VGGNet architecture;
initializing each layer of a fresh VGGNet model with those weight parameters to obtain a fourth evaluation model;
taking sample data consisting of posters with two-dimensional codes added and the manual scores associated with those posters as sixth sample data;
and inputting the sixth sample data into the fourth evaluation model for training to obtain the two-dimensional code evaluation model.
Further, the step of inputting the plurality of preliminary posters into a preset poster evaluation model trained based on a convolutional neural network model to perform operation includes:
according to the category of the poster input by the user in advance, invoking a poster evaluation model corresponding to the category of the poster;
and inputting the plurality of preliminary posters into the preset poster evaluation model corresponding to the poster category for evaluation.
The application provides a poster generation device based on deep learning, comprising:
the unstructured data acquisition unit is used for acquiring unstructured data provided by a user and used for generating a poster;
the preliminary poster generation unit is used for fitting the unstructured data into a plurality of preset poster templates to generate a plurality of preliminary posters;
the preliminary poster evaluation unit is used for inputting the plurality of preliminary posters into a preset poster evaluation model which is trained based on a convolutional neural network model for operation; the poster evaluation model is trained based on sample data consisting of existing posters and manual scores associated with the existing posters;
an evaluation score output unit configured to output evaluation scores of the plurality of preliminary posters;
and the two-dimensional code adding unit is used for adding the two-dimensional code linked to the user's shared content to the preliminary poster with the highest evaluation score to obtain a final poster.
The present application provides a computer device comprising a memory storing a computer program and a processor implementing the steps of any of the methods described above when the processor executes the computer program.
The present application provides a computer readable storage medium having stored thereon a computer program which when executed by a processor performs the steps of the method of any of the preceding claims.
According to the deep learning-based poster generation method, device, computer equipment and storage medium, a poster evaluation model which is trained by using a deep learning convolutional neural network model is utilized to screen out the poster with the highest score, and a two-dimensional code is added into the poster, so that the technical effect of improving the scanning times of the two-dimensional code is achieved.
Drawings
Fig. 1 is a flow chart of a deep learning-based poster generation method according to an embodiment of the application;
fig. 2 is a schematic block diagram of a deep learning-based poster generating apparatus according to an embodiment of the present application;
fig. 3 is a schematic block diagram of a computer device according to an embodiment of the present application.
The achievement of the objects, functional features and advantages of the present application will be further described with reference to the accompanying drawings, in conjunction with the embodiments.
Detailed Description
The present application will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present application more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the application.
Referring to fig. 1, an embodiment of the present application provides a deep learning-based poster generation method, including the steps of:
s1, obtaining unstructured data provided by a user for generating a poster;
s2, fitting the unstructured data into a plurality of preset poster templates to generate a plurality of preliminary posters;
s3, inputting the plurality of preliminary posters into a preset poster evaluation model which is trained based on a convolutional neural network model for operation; the poster evaluation model is trained based on sample data consisting of existing posters and manual scores associated with the existing posters;
s4, outputting evaluation scores of the plurality of preliminary posters;
and S5, adding a two-dimensional code linked to the shared content of the user into the preliminary poster with the highest evaluation score to obtain a final poster.
As described in step S1 above, unstructured data provided by the user for generating a poster is obtained. The unstructured data relates to the linked content the user wants to share: if the user wants to share a product, the unstructured data may be a picture of the product; if the user wants to share a store, it may be a picture of the store. A poster can then be generated by combining structured data (i.e., a poster template) with the unstructured data.
As described in step S2, the unstructured data is fitted into a plurality of preset poster templates to generate a plurality of preliminary posters. The poster template is structured data. Designers design templates of different styles for different poster categories, for example one set of templates for car insurance and another set for non-car insurance. A plurality of preliminary posters is thereby obtained.
As described in step S3, the plurality of preliminary posters are input into a preset poster evaluation model trained based on a convolutional neural network model for evaluation; the poster evaluation model is trained on sample data consisting of existing posters and the manual scores associated with those posters. The convolutional neural network model is a deep learning model such as GoogleNet, Xception, ResNet or VGGNet, preferably a VGGNet model such as VGG19, VGG16 or VGG-F. The manual score is, for example, a manual rating of the overall impression of the poster; if, say, the colors of the poster template clash badly with the unstructured data, the manual score is naturally low. The model training process specifically includes: taking sample data consisting of existing posters and their associated manual scores as a training set, and inputting it into a VGGNet model for training to obtain the poster evaluation model.
As described in step S4, the evaluation scores of the plurality of preliminary posters are output. The overall impression of each preliminary poster can be judged from its evaluation score, revealing which poster template suits the unstructured data.
As described in step S5, the two-dimensional code linking to the user's shared content is added to the preliminary poster with the highest evaluation score to obtain the final poster. The preliminary poster with the best overall appearance thus attracts attention, increasing the likelihood that recipients follow the two-dimensional code link to the user's shared content.
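Steps S1 to S5 can be sketched end to end as follows. This is a minimal illustration only: the scoring function is a deterministic stub standing in for the trained convolutional evaluation model, and every identifier (`Poster`, `score_poster`, the template ids, the share URL) is an assumption made for the sketch, not code from the application.

```python
from dataclasses import dataclass

@dataclass
class Poster:
    template_id: int
    image_data: str      # placeholder for the rendered poster pixels
    qr_code: str = ""    # two-dimensional code payload, added last (S5)

def score_poster(poster: Poster) -> float:
    """Stand-in for the trained CNN evaluation model (S3/S4): here the
    score is faked deterministically from the template id."""
    return 100.0 - abs(poster.template_id - 2) * 10.0

def generate_final_poster(unstructured_data, template_ids, share_link):
    # S2: fit the user's unstructured data into each preset template
    candidates = [Poster(t, f"template{t}+{unstructured_data}") for t in template_ids]
    # S3/S4: score every preliminary poster and keep the highest-scoring one
    best = max(candidates, key=score_poster)
    # S5: add the two-dimensional code linking to the shared content
    best.qr_code = share_link
    return best

final = generate_final_poster("product_photo.png", [1, 2, 3], "https://example.com/share")
print(final.template_id)  # 2 — that template scores highest under the stub model
```

In a real implementation `score_poster` would run the VGGNet-based evaluation model over the rendered poster image rather than a formula over template ids.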
In one embodiment, the method for obtaining the poster evaluation model includes:
s301, training a first evaluation model based on a VGGNet model, taking the images in the open-source image quality assessment database TID2013 and the mean human opinion scores associated with those images as first sample data;
s302, taking sample data consisting of the existing posters and the manual scores associated with those posters as second sample data;
s303, inputting the second sample data into the first evaluation model for training to obtain the poster evaluation model.
As described in the above steps, acquisition of the poster evaluation model is achieved. This embodiment trains a first evaluation model with a VGGNet model from the convolutional neural network family; the VGGNet model may be, for example, a VGG19, VGG16, or VGG-F model. The open-source image quality assessment database TID2013 is primarily used to assess how well image quality evaluation models match mean human opinion scores (MOS); the first evaluation model is trained on this data. Sample data consisting of existing posters and their associated manual scores is then taken as second sample data and input into the first evaluation model for training to obtain the poster evaluation model. The training uses stochastic gradient descent, optimizing the parameters of each layer of the model by backpropagation. Furthermore, the sample data can be split into a training set and a test set: training is performed on the training set, testing on the test set, and the model that passes the test is used as the poster evaluation model.
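The two-stage procedure just described (pretrain on generic image-quality data, then fine-tune on manually scored posters) can be sketched as below. A tiny linear scorer is substituted for VGGNet so the example is self-contained; the stand-in model, data shapes, and hyperparameters are all illustrative assumptions, but the stochastic-gradient-descent loop with a backpropagated gradient mirrors the training procedure described.

```python
import numpy as np

rng = np.random.default_rng(0)

def sgd_train(w, X, y, lr=0.01, epochs=200):
    """Stochastic gradient descent; the gradient of the squared error is
    obtained by backpropagation (trivial here, since the scorer is linear)."""
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            grad = (X[i] @ w - y[i]) * X[i]  # dL/dw for 0.5 * (pred - y)^2
            w = w - lr * grad
    return w

# Stage 1: train a "first evaluation model" on generic image-quality data
# (a stand-in for TID2013 images paired with mean human opinion scores).
true_w = rng.normal(size=8)
X_tid = rng.normal(size=(200, 8))
w = sgd_train(np.zeros(8), X_tid, X_tid @ true_w)

# Stage 2: fine-tune on poster features with manual scores (the second
# sample data), starting from the stage-1 weights rather than from scratch.
X_poster = rng.normal(size=(50, 8))
y_poster = X_poster @ true_w + 0.1  # posters scored on a slightly shifted standard
w = sgd_train(w, X_poster, y_poster, epochs=100)

mse = float(np.mean((X_poster @ w - y_poster) ** 2))  # small after fine-tuning
```

With a real VGGNet the inner loop would be supplied by a deep learning framework, but the shape of the procedure — pretrain, then continue SGD on the domain-specific sample data — is the same.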
In one embodiment, the method for obtaining the poster evaluation model includes:
S304, invoking the weight parameters of each layer of an image evaluation model already trained on the same VGGNet architecture;
s305, initializing each layer of a fresh VGGNet model with those weight parameters to obtain a second evaluation model;
s306, taking sample data consisting of the existing posters and the manual scores associated with those posters as third sample data;
s307, inputting the third sample data into the second evaluation model for training to obtain the poster evaluation model.
As described in the above steps, transfer learning is adopted: the weight parameters of each layer of an evaluation model already trained on the same VGGNet architecture are retrieved and used as the initial weight parameters of the not-yet-trained VGGNet model in the application. Because the retrieved model is already trained, the pretraining step can be omitted and the second evaluation model obtained directly. Sample data consisting of existing posters and their associated manual scores is then taken as third sample data and input into the second evaluation model for training to obtain the poster evaluation model. The training uses stochastic gradient descent, optimizing the parameters of each layer of the model by backpropagation. Further, the third sample data can be split into a training set and a test set: training is performed on the training set, testing on the test set, and the model that passes the test is used as the poster evaluation model.
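The transfer-learning initialization in these steps amounts to a per-layer weight copy from the trained model into a fresh model of the same architecture. The sketch below uses a list of weight matrices as a stand-in for the VGGNet layers; every name here is an illustrative assumption, not the application's actual code.

```python
import numpy as np

def make_model(layer_sizes, rng):
    """A fresh (randomly initialized) stack of per-layer weight matrices,
    standing in for the layers of a VGGNet model."""
    return [rng.normal(size=(a, b)) for a, b in zip(layer_sizes, layer_sizes[1:])]

def initialize_from(trained_model):
    """Use the trained model's per-layer weights as the new model's initial
    weights instead of random initialization (the transfer-learning step)."""
    return [w.copy() for w in trained_model]

rng = np.random.default_rng(1)
sizes = [8, 16, 4, 1]
trained_image_model = make_model(sizes, rng)       # stands in for the trained image evaluation model
second_model = initialize_from(trained_image_model)  # second evaluation model, pre-initialized
```

Only fine-tuning on the third sample data then remains; the copies are independent, so fine-tuning the second model leaves the original trained model untouched.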
In one embodiment, the step S5 of adding the two-dimensional code linked to the shared content of the user to the preliminary poster with the highest evaluation score to obtain a final poster includes:
s501, generating m×n×o copies of the preliminary poster with the highest evaluation score, each with a two-dimensional code added, wherein m is the number of color categories of the two-dimensional code, n is the number of shapes of the two-dimensional code, and o is the number of positions of the two-dimensional code in the poster;
s502, inputting the m×n×o preliminary posters into a preset two-dimensional code evaluation model trained based on a convolutional neural network model for evaluation; the two-dimensional code evaluation model is trained on sample data consisting of existing posters with two-dimensional codes added and the manual scores associated with those posters;
and S503, taking the preliminary poster with the two-dimensional code added that has the highest evaluation score as the final poster.
As described in the above steps, the final poster is obtained. This embodiment determines which two-dimensional code is most suitable for a specific poster: the candidates are evaluated in a two-dimensional code evaluation model trained based on a convolutional neural network model, the resulting scores reflecting how suitable each two-dimensional code is. Different preliminary posters with two-dimensional codes are generated according to the color, shape and position of the code. For example, with m=7, n=2, o=3, 7×2×3=42 posters are generated; the 42 posters are input into the two-dimensional code evaluation model to obtain evaluation scores, and the highest-scoring poster is taken as the final poster. A VGGNet model from the convolutional neural network family can be used. The model training process specifically includes: taking sample data consisting of existing posters with two-dimensional codes added and their associated manual scores as a training set, and inputting it into a VGGNet model for training to obtain the two-dimensional code evaluation model.
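The m×n×o enumeration in this example is a Cartesian product over code attributes. In the sketch below the concrete color, shape, and position values are illustrative assumptions; only the counts (m=7, n=2, o=3) come from the example above.

```python
from itertools import product

colors = ["black", "blue", "red", "green", "purple", "orange", "brown"]  # m = 7 (assumed palette)
shapes = ["square", "round"]                                             # n = 2 (assumed shapes)
positions = ["bottom-left", "bottom-right", "center"]                    # o = 3 (assumed positions)

# Every combination of two-dimensional code color, shape, and position,
# each to be applied to the highest-scoring preliminary poster:
variants = [
    {"color": c, "shape": s, "position": p}
    for c, s, p in product(colors, shapes, positions)
]
print(len(variants))  # 42, i.e. 7 x 2 x 3 candidate posters
```

Each of the 42 variants would then be rendered onto the poster and scored by the two-dimensional code evaluation model.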
In one embodiment, the method for obtaining the two-dimensional code evaluation model includes:
s5021, training a third evaluation model based on a VGGNet model, taking the images in the open-source image quality assessment database TID2013 and the mean human opinion scores associated with those images as fourth sample data;
s5022, taking sample data consisting of posters with two-dimensional codes added and the manual scores associated with those posters as fifth sample data;
s5023, inputting the fifth sample data into the third evaluation model for training to obtain the two-dimensional code evaluation model.
The two-dimensional code evaluation model is obtained through the above steps. This embodiment trains a third evaluation model with a VGGNet model from the convolutional neural network family; the VGGNet model may be, for example, a VGG19, VGG16, or VGG-F model. The open-source image quality assessment database TID2013 is primarily used to assess how well image quality evaluation models match mean human opinion scores (MOS); the third evaluation model is trained on this data. Sample data consisting of existing posters with two-dimensional codes added and their associated manual scores is then taken as fifth sample data and input into the third evaluation model for training to obtain the two-dimensional code evaluation model. The training uses stochastic gradient descent, optimizing the parameters of each layer of the model by backpropagation. Furthermore, the sample data can be split into a training set and a test set: training is performed on the training set, testing on the test set, and the model that passes the test is used as the two-dimensional code evaluation model.
In one embodiment, the method for obtaining the two-dimensional code evaluation model includes:
s5024, invoking the weight parameters of each layer of an evaluation model already trained on the same VGGNet architecture;
s5025, initializing each layer of a fresh VGGNet model with those weight parameters to obtain a fourth evaluation model;
s5026, taking sample data consisting of a poster with a two-dimensional code added and the manual score associated with that poster as sixth sample data;
s5027, inputting the sixth sample data into the fourth evaluation model for training to obtain the two-dimensional code evaluation model.
As described in the above steps, transfer learning is adopted: the weight parameters of each layer of an evaluation model already trained on the same VGGNet architecture are retrieved and used as the initial weight parameters of the not-yet-trained VGGNet model in the application. Because a trained model is available, the pretraining process is omitted and the fourth evaluation model obtained directly. Sample data consisting of existing posters with two-dimensional codes added and their associated manual scores is then taken as sixth sample data and input into the fourth evaluation model for training to obtain the two-dimensional code evaluation model. The training uses stochastic gradient descent, optimizing the parameters of each layer of the model by backpropagation. Further, the sixth sample data can be split into a training set and a test set: training is performed on the training set, testing on the test set, and the model that passes the test is used as the two-dimensional code evaluation model.
In one embodiment, the step S3 of inputting the plurality of preliminary posters into a preset poster evaluation model trained based on a convolutional neural network model for operation includes:
s31, invoking a poster evaluation model corresponding to the poster category according to the poster category input by the user in advance;
s32, inputting the plurality of preliminary posters into a preset poster evaluation model corresponding to the poster category for operation.
As described in the above steps, the plurality of preliminary posters are input into a preset poster evaluation model trained based on a convolutional neural network model. Invoking the poster evaluation model corresponding to the poster category makes the evaluation score more accurate. The user can provide the poster category together with the unstructured data. For example, if the poster category input by the user is car insurance, the car insurance poster evaluation model is invoked, avoiding the influence of non-car-insurance poster data on the evaluation score (car insurance posters and non-car-insurance posters emphasize different points, so the applicable evaluation standards also differ; a poster evaluation model trained on car insurance poster data therefore improves the accuracy of the evaluation score).
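The category-specific lookup can be sketched as a simple registry dispatch. The category names, the string stand-ins for loaded CNN models, and the fallback to a general model are all illustrative assumptions; the application itself only specifies that the model matching the user-supplied category is invoked.

```python
def get_evaluation_model(category, registry):
    """Look up the poster evaluation model trained for this poster category,
    falling back to a general model when the category is unknown (assumed
    behavior, not specified by the application)."""
    return registry.get(category, registry["general"])

registry = {
    "car_insurance": "model_car",        # stand-ins for loaded CNN evaluation models
    "non_car_insurance": "model_noncar",
    "general": "model_general",
}

print(get_evaluation_model("car_insurance", registry))  # model_car
```

The selected model is then applied to all preliminary posters generated from the user's unstructured data.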
According to the deep learning-based poster generation method, a poster evaluation model trained with a deep learning convolutional neural network model is used to screen out the highest-scoring poster, and a two-dimensional code is added to that poster, achieving the technical effect of increasing the number of times the two-dimensional code is scanned.
Referring to fig. 2, an embodiment of the present application provides a deep learning-based poster generating apparatus, including:
an unstructured data acquisition unit 10 for acquiring unstructured data provided by a user for generating a poster;
a preliminary poster generation unit 20, configured to nest the unstructured data into a plurality of preset poster templates to generate a plurality of preliminary posters;
the preliminary poster evaluation unit 30 is configured to input the plurality of preliminary posters into a preset poster evaluation model that is trained based on a convolutional neural network model for operation; the poster evaluation model is trained based on sample data consisting of existing posters and manual scores associated with the existing posters;
an evaluation score output unit 40 for outputting evaluation scores of the plurality of preliminary posters;
and the two-dimensional code adding unit 50 is used for adding the two-dimensional code linked to the user's shared content to the preliminary poster with the highest evaluation score to obtain a final poster.
As described in the above unit 10, unstructured data provided by the user for generating a poster is obtained. The unstructured data relates to the linked content the user wants to share: if the user wants to share a product, the unstructured data may be a picture of the product; if the user wants to share a store, it may be a picture of the store. A poster can then be generated by combining structured data (i.e., a poster template) with the unstructured data.
As described in the above unit 20, the unstructured data is fitted into a plurality of preset poster templates to generate a plurality of preliminary posters. The poster template is structured data. Designers design templates of different styles for different poster categories, for example one set of templates for car insurance and another set for non-car insurance. A plurality of preliminary posters is thereby obtained.
As described in the above unit 30, the plurality of preliminary posters are input into a preset poster evaluation model trained based on a convolutional neural network model for evaluation; the poster evaluation model is trained on sample data consisting of existing posters and the manual scores associated with those posters. The convolutional neural network model is a deep learning model such as GoogleNet, Xception, ResNet or VGGNet, preferably a VGGNet model such as VGG19, VGG16 or VGG-F. The manual score is, for example, a manual rating of the overall impression of the poster; if, say, the colors of the poster template clash badly with the unstructured data, the manual score is naturally low. The model training process specifically includes: taking sample data consisting of existing posters and their associated manual scores as a training set, and inputting it into a VGGNet model for training to obtain the poster evaluation model.
As described in the above unit 40, the evaluation scores of the plurality of preliminary posters are output. The overall impression of each preliminary poster can be judged from its evaluation score, revealing which poster template suits the unstructured data.
As described in the above unit 50, the two-dimensional code linking to the user's shared content is added to the preliminary poster with the highest evaluation score to obtain the final poster. The preliminary poster with the best overall appearance thus attracts attention, increasing the likelihood that recipients follow the two-dimensional code link to the user's shared content.
In one embodiment, the apparatus includes a poster evaluation model acquisition unit, including:
the first evaluation model training subunit is used for training a first evaluation model by adopting a VGGNET model and taking an image in an open-source image quality evaluation database TID2013 and an average human perception score associated with the image as first sample data;
a second sample data obtaining subunit, configured to use, as second sample data, sample data that is formed by the existing poster and a manual score associated with the existing poster;
And the poster evaluation model obtaining subunit is used for inputting the second sample data into the first evaluation model for training to obtain the poster evaluation model.
As described above, acquisition of the poster evaluation model is achieved. This embodiment trains the first evaluation model with a VGGNET model from the convolutional neural network family, for example a VGG19, VGG16, or VGG-F model. The open-source image quality assessment database TID2013 is primarily used to assess how well image quality evaluation models match mean opinion scores (MOS) of human perception; training on it yields the first evaluation model. Sample data consisting of the existing posters and their associated manual scores is then taken as second sample data and input into the first evaluation model for training, obtaining the poster evaluation model. Training uses stochastic gradient descent, and the parameters of each layer of the model are optimized by backpropagation. Furthermore, the sample data can be split into a training set and a test set: the model is trained on the training set, tested on the test set, and the model that passes the test is used as the poster evaluation model.
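The training procedure just described, stochastic gradient descent with backpropagation plus a train/test split, can be illustrated on a deliberately tiny stand-in model. The one-weight linear model and synthetic (feature, score) pairs below are illustrative assumptions, not the VGG network or the patent's sample data; only the per-sample update and the held-out test mirror the text.

```python
import random

# Synthetic (feature, manual-score) pairs standing in for poster sample data.
random.seed(0)
samples = [(i / 20, 2.0 * (i / 20)) for i in range(20)]
random.shuffle(samples)
train, test = samples[:15], samples[15:]   # train/test split as described

w, lr = 0.0, 0.1
for epoch in range(50):
    for x, y in train:                     # stochastic: update per sample
        grad = 2 * (w * x - y) * x         # backpropagated gradient of (wx - y)^2
        w -= lr * grad

# Only a model that passes the held-out test would be accepted.
test_error = sum((w * x - y) ** 2 for x, y in test) / len(test)
```

In the real system the single weight `w` is replaced by the per-layer parameters of the VGGNET model, but the update rule has the same shape.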
In one embodiment, the apparatus includes a poster evaluation model acquisition unit, including:
the weight parameter calling subunit is used for calling the per-layer weight parameters of an image evaluation model already trained with the VGGNET model;
the second evaluation model obtaining subunit is configured to initialize the weight parameters of each layer of the VGGNET model to those per-layer weight parameters, so as to obtain a second evaluation model;
a third sample data obtaining subunit, configured to use, as third sample data, sample data that is formed by the existing poster and a manual score associated with the existing poster;
and the poster evaluation model obtaining subunit is used for inputting the third sample data into the second evaluation model for training to obtain the poster evaluation model.
As described above, transfer learning is used: the per-layer weight parameters of an evaluation model already trained on the same VGGNET architecture are called and used as the initial weight parameters of the not-yet-trained VGGNET model. Because the second evaluation model inherits trained weights, a training-from-scratch step is omitted and the second evaluation model is obtained directly. Sample data consisting of the existing posters and their associated manual scores is then taken as third sample data and input into the second evaluation model for training, obtaining the poster evaluation model. Training uses stochastic gradient descent, and the parameters of each layer of the model are optimized by backpropagation. Further, the third sample data can be split into a training set and a test set: the model is trained on the training set, tested on the test set, and the model that passes the test is used as the poster evaluation model.
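The transfer-learning step above amounts to copying the per-layer weights of a trained model into an untrained model of the same architecture before fine-tuning. The sketch below represents weights as plain per-layer lists purely for illustration; a real implementation would copy a framework's state dict, and the layer names are invented.

```python
import copy

def initialize_from_trained(trained_weights):
    """Use a trained model's per-layer weights as the new model's initial weights."""
    # Deep-copy so that fine-tuning the new model never mutates the source model.
    return copy.deepcopy(trained_weights)

# Hypothetical per-layer weights of an already-trained evaluation model.
trained = {"conv1": [0.2, -0.1], "conv2": [0.05, 0.3], "fc": [1.0]}

second_model_weights = initialize_from_trained(trained)
second_model_weights["fc"][0] = 0.9   # fine-tuning updates only the copy
```

The deep copy is the important design choice: with a shallow copy, fine-tuning the second model would silently corrupt the trained source model.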
In one embodiment, the two-dimensional code adding unit 50 includes:
the two-dimensional code poster generation subunit is used for generating m × n × o versions of the preliminary poster with the highest evaluation score, each with a two-dimensional code added, wherein m is the number of two-dimensional code color categories, n is the number of two-dimensional code shapes, and o is the number of candidate positions of the two-dimensional code in the poster;
the two-dimensional code evaluation model operation subunit is used for inputting the m × n × o preliminary posters into a preset two-dimensional code evaluation model trained based on a convolutional neural network model for operation; the two-dimensional code evaluation model is trained based on sample data consisting of existing posters with two-dimensional codes added and the manual scores associated with them;
and the final poster obtaining subunit is used for taking the two-dimensional-code-bearing preliminary poster with the highest evaluation score as the final poster.
As described above, obtaining the final poster is realized. This embodiment determines which two-dimensional code is most suitable for a specific poster: the candidate posters are run through a two-dimensional code evaluation model trained based on a convolutional neural network model, and the resulting scores reflect how well each two-dimensional code fits. Different two-dimensional-code-bearing preliminary posters are generated according to the color, shape, and position of the two-dimensional code. For example, with m = 7, n = 2, and o = 3, 7 × 2 × 3 = 42 posters are generated; the 42 posters are input into the two-dimensional code evaluation model to obtain their evaluation scores, and the highest-scoring poster is taken as the final poster. The VGGNET model can be used as the convolutional neural network. Training the model specifically includes: taking sample data consisting of existing posters with two-dimensional codes added and their associated manual scores as a training set, and inputting it into the VGGNET model for operation to obtain the two-dimensional code evaluation model.
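The m × n × o enumeration is a Cartesian product over color, shape, and position. The sketch below uses the worked example's sizes (m = 7, n = 2, o = 3); the concrete option lists and the `score_variant` stub are illustrative assumptions, since the patent does not name specific colors, shapes, or positions.

```python
from itertools import product

# One candidate per combination of QR-code color, shape, and position,
# scored by a stand-in for the two-dimensional code evaluation model.
colors = ["red", "blue", "green", "black", "white", "gold", "silver"]   # m = 7
shapes = ["square", "round"]                                            # n = 2
positions = ["bottom-left", "bottom-right", "top-right"]                # o = 3

def score_variant(color, shape, position):
    """Hypothetical stand-in for the trained two-dimensional code model."""
    return colors.index(color) + shapes.index(shape) + positions.index(position)

variants = list(product(colors, shapes, positions))   # 7 * 2 * 3 = 42 candidates
best = max(variants, key=lambda v: score_variant(*v))
```

In the real system each variant is a rendered poster image and `score_variant` is a CNN forward pass; the enumerate-then-argmax structure is the same.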
In one embodiment, the apparatus includes a two-dimensional code evaluation model acquisition unit including:
the third evaluation model training subunit is configured to train a third evaluation model by using a VGGNET model, taking the images in the open-source image quality evaluation database TID2013 and the average human perception scores associated with them as fourth sample data;
the fifth sample data obtaining subunit is configured to take, as fifth sample data, sample data consisting of posters with two-dimensional codes added and the manual scores associated with them;
and the two-dimensional code evaluation model acquisition subunit is used for inputting the fifth sample data into the third evaluation model for training to obtain the two-dimensional code evaluation model.
As described above, the acquisition of the two-dimensional code evaluation model is realized. This embodiment trains the third evaluation model with a VGGNET model from the convolutional neural network family, for example a VGG19, VGG16, or VGG-F model. The open-source image quality assessment database TID2013 is primarily used to assess how well image quality evaluation models match mean opinion scores (MOS) of human perception; training on it yields the third evaluation model. Sample data consisting of posters with two-dimensional codes added and their associated manual scores is then taken as fifth sample data and input into the third evaluation model for training, obtaining the two-dimensional code evaluation model. Training uses stochastic gradient descent, and the parameters of each layer of the model are optimized by backpropagation. Furthermore, the sample data can be split into a training set and a test set: the model is trained on the training set, tested on the test set, and the model that passes the test is used as the two-dimensional code evaluation model.
In one embodiment, the apparatus includes a two-dimensional code evaluation model acquisition unit including:
the per-layer weight parameter calling subunit is used for calling the per-layer weight parameters of a trained evaluation model corresponding to the VGGNET model;
the weight parameter initializing subunit is configured to initialize the weight parameters of each layer of the VGGNET model to those per-layer weight parameters, so as to obtain a fourth evaluation model;
the sixth sample data obtaining subunit is configured to take, as sixth sample data, sample data consisting of posters with two-dimensional codes added and the manual scores associated with them;
and the two-dimensional code evaluation model acquisition subunit is used for inputting the sixth sample data into the fourth evaluation model for training to obtain the two-dimensional code evaluation model.
As described above, transfer learning is used: the per-layer weight parameters of an evaluation model already trained on the same VGGNET architecture are called and used as the initial weight parameters of the not-yet-trained VGGNET model. Because a trained model is available, a training-from-scratch step is omitted and the fourth evaluation model is obtained directly. Sample data consisting of posters with two-dimensional codes added and their associated manual scores is then taken as sixth sample data and input into the fourth evaluation model for training, obtaining the two-dimensional code evaluation model. Training uses stochastic gradient descent, and the parameters of each layer of the model are optimized by backpropagation. Further, the sixth sample data can be split into a training set and a test set: the model is trained on the training set, tested on the test set, and the model that passes the test is used as the two-dimensional code evaluation model.
In one embodiment, the preliminary poster evaluation unit 30 includes:
the poster evaluation model calling subunit is used for calling a poster evaluation model corresponding to the poster category according to the poster category input by the user in advance;
and the operation subunit is used for inputting the plurality of preliminary posters into a preset poster evaluation model corresponding to the poster category for operation.
As described above, inputting the plurality of preliminary posters into a preset poster evaluation model trained based on a convolutional neural network model for operation is realized. Invoking the poster evaluation model corresponding to the poster category makes the evaluation score more accurate. The user can provide the poster category together with the unstructured data. For example, if the user inputs the category "car insurance", the car-insurance poster evaluation model is called, which prevents non-car-insurance poster data from influencing the evaluation score: car-insurance and non-car-insurance posters emphasize different points, so the applicable evaluation standards differ, and a poster evaluation model generated from car-insurance poster data therefore improves the accuracy of the evaluation score.
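The category dispatch described above is a simple lookup keyed by the poster category the user supplied with the unstructured data. The registry class and its contents below are illustrative assumptions; the patent only requires that each category map to its own trained evaluation model.

```python
# Minimal sketch of category-specific model selection: each poster category
# is served by its own evaluation model, so car-insurance posters are never
# scored by a non-car-insurance model. Model identifiers are hypothetical.

class PosterModelRegistry:
    def __init__(self):
        self._models = {}

    def register(self, category, model):
        self._models[category] = model

    def get(self, category):
        if category not in self._models:
            raise KeyError(f"no evaluation model for category: {category}")
        return self._models[category]

registry = PosterModelRegistry()
registry.register("car_insurance", "vgg_car_insurance_v1")
registry.register("non_car_insurance", "vgg_non_car_v1")

model = registry.get("car_insurance")   # category supplied by the user
```

Raising on an unknown category, rather than silently falling back to a default model, keeps a mislabeled poster from being scored against the wrong standard.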
According to the deep-learning-based poster generation device, the poster evaluation model trained with a deep-learning convolutional neural network model is used to screen out the highest-scoring poster, and a two-dimensional code is added to that poster, thereby achieving the technical effect of increasing the number of times the two-dimensional code is scanned.
Referring to fig. 3, an embodiment of the present application further provides a computer device, which may be a server and whose internal structure may be as shown in the drawing. The computer device includes a processor, a memory, a network interface, and a database connected by a system bus, wherein the processor is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, computer programs, and a database. The internal memory provides an environment for the operating system and the computer programs in the non-volatile storage medium to run. The database of the computer device is used for storing data used by the deep-learning-based poster generation method. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program, when executed by the processor, implements the deep-learning-based poster generation method.
The processor executes the deep-learning-based poster generation method, which comprises the following steps: obtaining unstructured data provided by a user and used for generating a poster; nesting the unstructured data into a plurality of preset poster templates to generate a plurality of preliminary posters; inputting the plurality of preliminary posters into a preset poster evaluation model trained based on a convolutional neural network model for operation, the poster evaluation model being trained on sample data consisting of existing posters and the manual scores associated with them; outputting evaluation scores of the plurality of preliminary posters; and adding a two-dimensional code linked to the user's shared content to the preliminary poster with the highest evaluation score to obtain a final poster.
In one embodiment, the method for obtaining the poster evaluation model includes: training a first evaluation model by adopting a VGGNET model and taking an image in an open-source image quality evaluation database TID2013 and an average human perception score associated with the image as first sample data; taking sample data consisting of the existing poster and the manual score associated with the existing poster as second sample data; and inputting the second sample data into the first evaluation model for training to obtain the poster evaluation model.
In one embodiment, the method for obtaining the poster evaluation model includes: invoking weight parameters of each layer of the trained image evaluation model by adopting the VGGNET model; initializing the weight parameters of each layer into weight parameters of each layer of the VGGNET model to obtain a second evaluation model; taking sample data consisting of the existing poster and the manual score associated with the existing poster as third sample data; and inputting the third sample data into the second evaluation model for training to obtain the poster evaluation model.
In one embodiment, the step of adding a two-dimensional code linked to the user's shared content to the preliminary poster with the highest evaluation score to obtain a final poster includes: generating m × n × o versions of the highest-scoring preliminary poster, each with a two-dimensional code added, wherein m is the number of two-dimensional code color categories, n is the number of two-dimensional code shapes, and o is the number of candidate positions of the two-dimensional code in the poster; inputting the m × n × o preliminary posters into a preset two-dimensional code evaluation model trained based on a convolutional neural network model for operation, the two-dimensional code evaluation model being trained on sample data consisting of existing posters with two-dimensional codes added and the manual scores associated with them; and outputting the evaluation scores of the m × n × o two-dimensional-code-bearing preliminary posters and taking the one with the highest evaluation score as the final poster.
In one embodiment, the method for obtaining the two-dimensional code evaluation model includes: training a third evaluation model by adopting a VGGNET model, taking the images in the open-source image quality evaluation database TID2013 and the average human perception scores associated with them as fourth sample data; taking sample data consisting of posters with two-dimensional codes added and the manual scores associated with them as fifth sample data; and inputting the fifth sample data into the third evaluation model for training to obtain the two-dimensional code evaluation model.
In one embodiment, the method for obtaining the two-dimensional code evaluation model includes: invoking the per-layer weight parameters of a trained evaluation model corresponding to the VGGNET model; initializing the weight parameters of each layer of the VGGNET model to those weight parameters to obtain a fourth evaluation model; taking sample data consisting of posters with two-dimensional codes added and the manual scores associated with them as sixth sample data; and inputting the sixth sample data into the fourth evaluation model for training to obtain the two-dimensional code evaluation model.
In one embodiment, the step of inputting the plurality of preliminary posters into a preset poster evaluation model trained based on a convolutional neural network model for operation includes: according to the category of the poster input by the user in advance, invoking a poster evaluation model corresponding to the category of the poster; and inputting the plurality of preliminary posters into a preset poster evaluation model corresponding to the poster category for operation.
It will be appreciated by persons skilled in the art that the structures shown in the drawings are only block diagrams of portions of structures that may be associated with the aspects of the application and are not intended to limit the scope of the computer apparatus to which the aspects of the application may be applied.
The computer device of the present application uses a poster evaluation model trained with a deep-learning convolutional neural network model to screen out the highest-scoring poster and adds a two-dimensional code to it, thereby achieving the technical effect of increasing the number of times the two-dimensional code is scanned.
An embodiment of the present application also provides a computer-readable storage medium on which a computer program is stored which, when executed by a processor, implements a deep-learning-based poster generation method comprising the steps of: obtaining unstructured data provided by a user and used for generating a poster; nesting the unstructured data into a plurality of preset poster templates to generate a plurality of preliminary posters; inputting the plurality of preliminary posters into a preset poster evaluation model trained based on a convolutional neural network model for operation, the poster evaluation model being trained on sample data consisting of existing posters and the manual scores associated with them; outputting evaluation scores of the plurality of preliminary posters; and adding a two-dimensional code linked to the user's shared content to the preliminary poster with the highest evaluation score to obtain a final poster.
In one embodiment, the method for obtaining the poster evaluation model includes: training a first evaluation model by adopting a VGGNET model and taking an image in an open-source image quality evaluation database TID2013 and an average human perception score associated with the image as first sample data; taking sample data consisting of the existing poster and the manual score associated with the existing poster as second sample data; and inputting the second sample data into the first evaluation model for training to obtain the poster evaluation model.
In one embodiment, the method for obtaining the poster evaluation model includes: invoking weight parameters of each layer of the trained image evaluation model by adopting the VGGNET model; initializing the weight parameters of each layer into weight parameters of each layer of the VGGNET model to obtain a second evaluation model; taking sample data consisting of the existing poster and the manual score associated with the existing poster as third sample data; and inputting the third sample data into the second evaluation model for training to obtain the poster evaluation model.
In one embodiment, the step of adding a two-dimensional code linked to the user's shared content to the preliminary poster with the highest evaluation score to obtain a final poster includes: generating m × n × o versions of the highest-scoring preliminary poster, each with a two-dimensional code added, wherein m is the number of two-dimensional code color categories, n is the number of two-dimensional code shapes, and o is the number of candidate positions of the two-dimensional code in the poster; inputting the m × n × o preliminary posters into a preset two-dimensional code evaluation model trained based on a convolutional neural network model for operation, the two-dimensional code evaluation model being trained on sample data consisting of existing posters with two-dimensional codes added and the manual scores associated with them; and outputting the evaluation scores of the m × n × o two-dimensional-code-bearing preliminary posters and taking the one with the highest evaluation score as the final poster.
In one embodiment, the method for obtaining the two-dimensional code evaluation model includes: training a third evaluation model by adopting a VGGNET model, taking the images in the open-source image quality evaluation database TID2013 and the average human perception scores associated with them as fourth sample data; taking sample data consisting of posters with two-dimensional codes added and the manual scores associated with them as fifth sample data; and inputting the fifth sample data into the third evaluation model for training to obtain the two-dimensional code evaluation model.
In one embodiment, the method for obtaining the two-dimensional code evaluation model includes: invoking the per-layer weight parameters of a trained evaluation model corresponding to the VGGNET model; initializing the weight parameters of each layer of the VGGNET model to those weight parameters to obtain a fourth evaluation model; taking sample data consisting of posters with two-dimensional codes added and the manual scores associated with them as sixth sample data; and inputting the sixth sample data into the fourth evaluation model for training to obtain the two-dimensional code evaluation model.
In one embodiment, the step of inputting the plurality of preliminary posters into a preset poster evaluation model trained based on a convolutional neural network model for operation includes: according to the category of the poster input by the user in advance, invoking a poster evaluation model corresponding to the category of the poster; and inputting the plurality of preliminary posters into a preset poster evaluation model corresponding to the poster category for operation.
The computer-readable storage medium of the present application uses a poster evaluation model trained with a deep-learning convolutional neural network model to screen out the highest-scoring poster and adds a two-dimensional code to it, thereby achieving the technical effect of increasing the number of times the two-dimensional code is scanned.
Those skilled in the art will appreciate that implementing all or part of the above-described methods may be accomplished by way of a computer program stored on a non-transitory computer-readable storage medium which, when executed, may comprise the steps of the method embodiments described above. Any reference to memory, storage, a database, or another medium provided by the present application and used in the embodiments may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchronous link DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM), among others.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, apparatus, article, or method that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, apparatus, article, or method. Without further limitation, an element introduced by the phrase "comprising a …" does not exclude the presence of other like elements in the process, apparatus, article, or method that comprises the element.
The foregoing description is only of the preferred embodiments of the present application and is not intended to limit the scope of the application, and all equivalent structures or equivalent processes using the descriptions and drawings of the present application or directly or indirectly applied to other related technical fields are included in the scope of the application.

Claims (9)

1. The deep learning-based poster generation method is characterized by comprising the following steps of:
obtaining unstructured data provided by a user and used for generating a poster;
nesting the unstructured data into a plurality of preset poster templates to generate a plurality of preliminary posters;
Inputting the plurality of preliminary posters into a preset poster evaluation model which is trained based on a convolutional neural network model for operation; the poster evaluation model is trained based on sample data consisting of existing posters and manual scores associated with the existing posters;
outputting evaluation scores of the plurality of preliminary posters;
adding a two-dimensional code linked to the shared content of the user into the preliminary poster with the highest evaluation score to obtain a final poster;
and adding a two-dimensional code linked to the shared content of the user into the preliminary poster with the highest evaluation score to obtain a final poster, wherein the step comprises the following steps:
generating m × n × o preliminary posters with the highest evaluation score of the two-dimensional codes, wherein m is the number of color categories of the two-dimensional codes, n is the number of shapes of the two-dimensional codes, and o is the number of positions of the two-dimensional codes in the posters;
inputting the m × n × o preliminary posters into a preset two-dimensional code evaluation model which is trained based on a convolutional neural network model for operation; the two-dimensional code evaluation model is trained based on sample data consisting of the existing poster added with the two-dimensional code and the manual score associated with the existing poster added with the two-dimensional code;
and outputting the evaluation scores of the m × n × o preliminary posters added with the two-dimensional codes, and taking the preliminary poster added with the two-dimensional codes with the highest evaluation score as the final poster.
2. The deep learning-based poster generation method according to claim 1, wherein said poster evaluation model acquisition method comprises:
training a first evaluation model by adopting a VGGNET model and taking an image in an open-source image quality evaluation database TID2013 and an average human perception score associated with the image as first sample data;
taking sample data consisting of the existing poster and the manual score associated with the existing poster as second sample data;
and inputting the second sample data into the first evaluation model for training to obtain the poster evaluation model.
3. The deep learning-based poster generation method according to claim 1, wherein said poster evaluation model acquisition method comprises:
invoking weight parameters of each layer of the trained image evaluation model by adopting the VGGNET model;
initializing the weight parameters of each layer into weight parameters of each layer of the VGGNET model to obtain a second evaluation model;
Taking sample data consisting of the existing poster and the manual score associated with the existing poster as third sample data;
and inputting the third sample data into the second evaluation model for training to obtain the poster evaluation model.
4. The deep learning-based poster generation method according to claim 1, wherein the two-dimensional code evaluation model acquisition method comprises:
training a third evaluation model by adopting a VGGNET model and taking an image in an open-source image quality evaluation database TID2013 and an average human perception score associated with the image as fourth sample data;
the method comprises the steps that sample data consisting of a poster added with a two-dimensional code and a manual score associated with the poster added with the two-dimensional code are used as fifth sample data;
and inputting the fifth sample data into the third evaluation model for training to obtain the two-dimensional code evaluation model.
5. The deep learning-based poster generation method according to claim 1, wherein the two-dimensional code evaluation model acquisition method comprises:
invoking weight parameters of each layer of the trained evaluation model corresponding to the VGGNET model;
Initializing the weight parameters of each layer into weight parameters of each layer of the VGGNET model to obtain a fourth evaluation model;
the method comprises the steps that sample data consisting of a poster added with a two-dimensional code and a manual score associated with the poster added with the two-dimensional code are used as sixth sample data;
and inputting the sixth sample data into the fourth evaluation model for training, and obtaining the two-dimensional code evaluation model.
6. The deep learning-based poster generation method according to claim 1, wherein said step of inputting said plurality of preliminary posters into a preset poster evaluation model trained based on a convolutional neural network model for operation comprises:
invoking, according to the poster category entered by the user in advance, the poster evaluation model corresponding to that category;
and inputting the plurality of preliminary posters into the preset poster evaluation model corresponding to the poster category for operation.
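The per-category dispatch in claim 6 amounts to a lookup from poster category to a trained model, followed by scoring each preliminary poster with that model. A sketch with hypothetical categories and a trivial deterministic scorer standing in for the CNN:

```python
def make_scorer(bias):
    """Stand-in for a trained CNN evaluation model; scores a poster."""
    return lambda poster: bias + 0.1 * len(poster)

# Hypothetical: one trained poster evaluation model per poster category.
MODELS_BY_CATEGORY = {
    "finance": make_scorer(0.5),
    "retail": make_scorer(0.3),
}

def score_preliminary_posters(category, posters):
    model = MODELS_BY_CATEGORY[category]  # invoke the model for this category
    return [model(p) for p in posters]

scores = score_preliminary_posters("finance", ["poster-a", "poster-bb"])
```

Keeping one model per category lets each scorer learn category-specific aesthetics instead of averaging across all poster styles.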
7. A deep learning-based poster generation apparatus, comprising:
an unstructured data acquisition unit, configured to acquire unstructured data provided by a user for generating a poster;
a preliminary poster generation unit, configured to fit the unstructured data into a plurality of preset poster templates to generate a plurality of preliminary posters;
a preliminary poster evaluation unit, configured to input the plurality of preliminary posters into a preset poster evaluation model trained based on a convolutional neural network model for operation; the poster evaluation model being trained on sample data consisting of existing posters and the human-assigned scores associated with those posters;
an evaluation score output unit, configured to output the evaluation scores of the plurality of preliminary posters;
and a two-dimensional code adding unit, configured to add a two-dimensional code linking to the user's shared content to the preliminary poster with the highest evaluation score, to obtain a final poster;
wherein the two-dimensional code adding unit comprises:
a two-dimensional code poster generation subunit, configured to generate m × n × o variants of the highest-scoring preliminary poster with the two-dimensional code added, where m is the number of two-dimensional code color types, n is the number of two-dimensional code shapes, and o is the number of candidate positions of the two-dimensional code in the poster;
a two-dimensional code evaluation model operation subunit, configured to input the m × n × o preliminary posters into a preset two-dimensional code evaluation model trained based on a convolutional neural network model for operation; the two-dimensional code evaluation model being trained on sample data consisting of existing posters with two-dimensional codes added and the human-assigned scores associated with those posters;
and a final poster obtaining subunit, configured to take the preliminary poster with the highest two-dimensional code evaluation score as the final poster.
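The subunits of the two-dimensional code adding unit enumerate every (color, shape, position) combination — m × n × o variants — score each one, and keep the best. A sketch with hypothetical option lists and an arbitrary deterministic scorer standing in for the two-dimensional code evaluation model:

```python
from itertools import product

COLORS = ["black", "blue", "red"]            # m = 3 color types
SHAPES = ["square", "round"]                 # n = 2 shapes
POSITIONS = ["bottom-left", "bottom-right"]  # o = 2 positions in the poster

def qr_score(variant):
    """Stand-in for the CNN-based two-dimensional code evaluation model."""
    color, shape, position = variant
    return len(color) + len(shape) + len(position)  # arbitrary deterministic score

variants = list(product(COLORS, SHAPES, POSITIONS))  # m x n x o variants
best = max(variants, key=qr_score)                   # final poster's QR settings
```

Exhaustive enumeration is feasible here because m, n, and o are small; the evaluation model, not a human, ranks the combinations.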
8. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor, when executing the computer program, implements the steps of the method of any one of claims 1 to 6.
9. A computer-readable storage medium on which a computer program is stored, characterized in that the computer program, when executed by a processor, implements the steps of the method of any one of claims 1 to 6.
CN201811556085.6A 2018-12-19 2018-12-19 Deep learning-based poster generation method and device and computer equipment Active CN109815465B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811556085.6A CN109815465B (en) 2018-12-19 2018-12-19 Deep learning-based poster generation method and device and computer equipment


Publications (2)

Publication Number Publication Date
CN109815465A (en) 2019-05-28
CN109815465B (en) 2023-11-17

Family

ID=66602232

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811556085.6A Active CN109815465B (en) 2018-12-19 2018-12-19 Deep learning-based poster generation method and device and computer equipment

Country Status (1)

Country Link
CN (1) CN109815465B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110737963B (en) * 2019-12-20 2020-03-31 广东博智林机器人有限公司 Poster element layout method, system and computer readable storage medium
CN111161381A (en) * 2019-12-31 2020-05-15 广东博智林机器人有限公司 Poster template generation method and device, electronic equipment and storage medium
CN112465088A (en) * 2020-12-07 2021-03-09 合肥维天运通信息科技股份有限公司 Two-dimensional code position generation method
CN113010711B (en) * 2021-04-01 2022-04-29 杭州初灵数据科技有限公司 Method and system for automatically generating movie poster based on deep learning
CN113536006B (en) * 2021-06-25 2023-06-13 北京百度网讯科技有限公司 Method, apparatus, device, storage medium and computer product for generating picture
CN113869960B (en) * 2021-10-15 2022-06-21 创优数字科技(广东)有限公司 Poster generation method and device, storage medium and computer equipment

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101359360A (en) * 2008-07-31 2009-02-04 刘旭 Graphics context fused electronic ticket coding/decoding method
WO2015001637A1 (en) * 2013-07-03 2015-01-08 A・Tコミュニケーションズ株式会社 Authentication server, authentication system, authentication method, and program
CN107330715A (en) * 2017-05-31 2017-11-07 北京京东尚科信息技术有限公司 The method and apparatus for selecting display advertising material
CN107945175A (en) * 2017-12-12 2018-04-20 百度在线网络技术(北京)有限公司 Evaluation method, device, server and the storage medium of image
CN108269250A (en) * 2017-12-27 2018-07-10 武汉烽火众智数字技术有限责任公司 Method and apparatus based on convolutional neural networks assessment quality of human face image
CN108520193A (en) * 2018-03-27 2018-09-11 康体佳智能科技(深圳)有限公司 Quick Response Code identifying system based on neural network and recognition methods
CN108985414A (en) * 2018-08-28 2018-12-11 深圳春沐源控股有限公司 A kind of sharing method and relevant apparatus of merchandise news



Similar Documents

Publication Publication Date Title
CN109815465B (en) Deep learning-based poster generation method and device and computer equipment
US9734255B2 (en) Ubiquitous personalized learning evaluation network using 2D barcodes
CN108351871B (en) General translator
Zhang Knowledge adoption in online communities of practice
CN111079056A (en) Method, device, computer equipment and storage medium for extracting user portrait
US20180268307A1 (en) Analysis device, analysis method, and computer readable storage medium
US20140370480A1 (en) Storage medium, apparatus, and method for information processing
CN110378986B (en) Problem demonstration animation generation method and device, electronic equipment and storage medium
CN110018823B (en) Processing method and system, and generating method and system of interactive application program
CN114461871B (en) Recommendation model training method, object recommendation device and storage medium
US20180018321A1 (en) Avoiding sentiment model overfitting in a machine language model
CN111475628B (en) Session data processing method, apparatus, computer device and storage medium
CN107609487B (en) User head portrait generation method and device
CN109388759B (en) Webpage interface construction method and system and data processing method
CN111352623B (en) Page generation method and device
JP2009116519A (en) Personal history development device
Tham et al. The ethics of experimental research employing intrusive technologies in tourism: A collaborative ethnography perspective
Gao et al. Online features of qzone weblog for critical peer feedback to facilitate business english writing
Ngwadla An operational framework for equity in the 2015 Agreement
JP7329293B1 (en) Information processing device, method, program, and system
KR102111658B1 (en) Social marketing method for providing business support service
CN115191002A (en) Matching system, matching method, and matching program
CN111489419A (en) Poster generation method and system
Ollman Dialectics and world politics
JP2021015549A (en) Information processing method and information processing device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant