CN116611131A - Automatic generation method, device, medium and equipment for packaging graphics - Google Patents

Automatic generation method, device, medium and equipment for packaging graphics

Info

Publication number
CN116611131A
CN116611131A
Authority
CN
China
Prior art keywords
design
model
packaging
package
data
Prior art date
Legal status
Granted
Application number
CN202310817194.3A
Other languages
Chinese (zh)
Other versions
CN116611131B (en)
Inventor
陈彦
郝晓伟
Current Assignee
Dajia Zhihe Beijing Network Technology Co ltd
Original Assignee
Dajia Zhihe Beijing Network Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Dajia Zhihe Beijing Network Technology Co ltd filed Critical Dajia Zhihe Beijing Network Technology Co ltd
Priority to CN202310817194.3A priority Critical patent/CN116611131B/en
Publication of CN116611131A publication Critical patent/CN116611131A/en
Application granted granted Critical
Publication of CN116611131B publication Critical patent/CN116611131B/en
Legal status: Active


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F30/00Computer-aided design [CAD]
    • G06F30/10Geometric CAD
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02PCLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30Computing systems specially adapted for manufacturing

Abstract

The invention relates to a method, an apparatus, a device, and a medium for automatically generating packaging graphics, and belongs to the technical field of packaging design. The technical solution mainly comprises the following steps: analyzing user input with a packaging language model to obtain design requirements, the design requirements comprising a packaging type, a packaged product type, or a design style, and obtaining a packaging model according to the packaging type; generating a design element pattern from the packaged product type and the design style with an image generation model; retrieving a corresponding design template from a template library according to the design style; generating a packaging design pattern from the design element pattern and the design template; and combining the packaging design pattern with the packaging model to generate the packaging graphic.

Description

Automatic generation method, device, medium and equipment for packaging graphics
Technical Field
The invention belongs to the technical field of packaging design, and in particular relates to a method, an apparatus, a medium, and a device for automatically generating packaging graphics.
Background
As market competition intensifies, the packaging design of products is increasingly valued by both enterprises and consumers.
Traditional packaging design is time-consuming and costly in manpower and materials, and misunderstandings in the communication between designer and customer leave a gap between the delivered design and the customer's actual needs. A method that improves design efficiency while meeting customer requirements therefore has significant practical value.
The invention aims to improve the efficiency of producing packaging design images.
Disclosure of Invention
In view of the above analysis, the embodiments of the present invention aim to provide a method, an apparatus, a medium, and a device for automatically generating packaging graphics, so as to solve the problem of low packaging design efficiency in the prior art.
An embodiment of a first aspect of the present invention provides a method for automatically generating a packaging graphic, including the steps of:
obtaining design requirements by analyzing user input with a packaging language model, wherein the design requirements comprise a packaging type, a packaged product type, or a design style, and obtaining a packaging model according to the packaging type;
generating a design element pattern according to the packaged product type and the design style based on an image generation model;
calling a corresponding design template from a template library according to the design style;
generating a package design pattern according to the design element pattern and the design template;
Combining the package design pattern with the package model to generate the package graphic;
the training method of the image generation model comprises the following steps:
acquiring package design image data and performing data expansion by a data enhancement method to obtain an image dataset;
performing third preprocessing on the image data in the image data set to enable the image data to meet the input requirement of the neural network;
classifying the image data after the third preprocessing, and extracting the characteristics of each category through a convolutional neural network;
training a plurality of embedded models according to the classified image data so that each embedded model respectively learns the characteristics of different design styles, wherein each embedded model comprises StyleGAN;
training a generative model according to the image data so that the generative model can perform optimization and adjustment when generating an image, wherein the generative model comprises a Hypernetwork, LoRA, or a VAE;
the embedded models and the generating models which respectively have different styles are fused to train and generate the image generating model, and the image generating model is used for generating a design element image according to user description.
In some embodiments, the training method of the packaging language model comprises:
acquiring package design term data and a pre-training language model;
performing first preprocessing on the package design term data, wherein the first preprocessing comprises removing HTML labels and special characters, performing data cleaning and removing stop words;
word segmentation is carried out on the first preprocessed package design term data so as to extract keywords, phrases or industry terms in the package design term data;
adding the keywords, phrases and industry terms into a vocabulary of the pre-trained language model after de-duplication;
obtaining a custom package design dataset comprising performing a second preprocessing of package design industry data such that the package design industry data meets an input format of the pre-trained language model, thereby forming the custom package design dataset;
and fine tuning the pre-training language model based on the custom package design data set according to the selected loss function and the optimizer so as to update the network weight of the pre-training language model and word vectors corresponding to vocabulary in the vocabulary, thereby obtaining the package language model.
In some embodiments, the word segmentation process includes word segmentation of the package design term data with a text processing tool to obtain word segmentation results, the text processing tool including jieba word segmentation or THULAC;
the keyword extraction method comprises extracting keywords from the word segmentation result based on a BERT-based TextRank algorithm or a BERT keyword extraction library, wherein the BERT keyword extraction library comprises Bert-extraction-keywords;
extracting the phrase and the industry term comprises the step of performing part-of-speech analysis on the word segmentation result through a part-of-speech tagging tool, and extracting the phrase and the industry term containing actual meaning through combining words with different parts of speech, wherein the part-of-speech tagging tool comprises jieba part-of-speech tagging or LTP.
In some embodiments, further comprising: establishing a box-type library, wherein the packaging model comprises a standard cutting die and a label, the label comprises types, sizes, materials, application range descriptions or manufacturing processes, and the types comprise boxes, bags, bottles, cases, or cans; the box-type library further comprises a three-dimensional modeling of the packaging model and a two-dimensional plan view of the standard cutting die;
the step of obtaining the package model according to the package type comprises matching according to the package type and the label.
In some embodiments, the template library comprises a number of design templates collected in advance, the design templates comprising one or more combinations of text typesetting, overall layout, color, or font;
the method further comprises classifying style categories of the design templates through image clustering and style migration algorithms;
the step of calling the corresponding design templates from the template library according to the design styles comprises matching according to the design styles and the style categories.
In some embodiments, the packaging graphic comprises a packaging die-cut layout generated by combining the packaging design pattern with the two-dimensional plan view of the standard cutting die;
the packaging graphic further comprises a three-dimensional rendering generated from the three-dimensional modeling of the packaging model and the packaging design pattern, wherein the three-dimensional rendering is generated through real-time ray tracing.
In some embodiments, further comprising editing the packaging graphic by an editing module;
the editing module comprises a graphic display unit for displaying a three-dimensional rendering and the packaging die-cut layout of the design scheme;
the editing module comprises a layout editing unit, wherein the layout editing unit is used for modifying the position relation of each design element or adding or deleting design elements, and the design elements at least comprise text elements or pattern elements;
The editing module further comprises a text editing unit, wherein the text editing unit is used for modifying and adding text, and the text editing unit is in communication connection with the packaging language model to generate a text file according to user input;
the editing module further includes a pattern editing unit communicatively coupled to the image generation model to generate pattern elements based on user input.
An embodiment of a second aspect of the present invention provides an automatic packaging graphic generating device, including:
the demand acquisition module is used for obtaining design requirements by analyzing user input with a packaging language model, wherein the design requirements comprise a packaging type, a packaged product type, or a design style, and the packaging model is obtained according to the packaging type;
an image generation module for generating a design element pattern according to the packaged product type and the design style based on an image generation model;
the template acquisition module is used for calling a corresponding design template from a template library according to the design style;
a first synthesis module for generating a package design pattern according to the design element pattern and the design template;
a second synthesis module that combines the packaging design pattern with the packaging model to generate the packaging graphic;
The training method of the image generation model comprises the following steps:
acquiring package design image data and performing data expansion by a data enhancement method to obtain an image dataset;
performing third preprocessing on the image data in the image data set to enable the image data to meet the input requirement of the neural network;
classifying the image data after the third preprocessing, and extracting the characteristics of each category through a convolutional neural network;
training a plurality of embedded models according to the classified image data so that each embedded model respectively learns the characteristics of different design styles, wherein each embedded model comprises StyleGAN;
training a generative model according to the image data so that the generative model can perform optimization and adjustment when generating an image, wherein the generative model comprises a Hypernetwork, LoRA, or a VAE;
the embedded models and the generating models which respectively have different styles are fused to train and generate the image generating model, and the image generating model is used for generating a design element image according to user description.
An embodiment of a third aspect of the present invention provides an electronic device, including a memory and a processor, the memory storing a computer program that, when executed by the processor, implements the method for automatically generating a packaging graphic according to any of the embodiments above.
An embodiment of a fourth aspect of the present invention provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the method for automatically generating a packaging graphic according to any of the embodiments above.
The embodiment of the invention realizes automatic generation of the image aiming at the packaging design field, can refine the requirements through the description of the user, and automatically invokes and fuses the contents of the database.
Drawings
In order to more clearly illustrate the embodiments of the present description or the technical solutions in the prior art, the drawings required for describing the embodiments or the prior art are briefly introduced below. The drawings described below are obviously only some of the embodiments of the present description, and a person of ordinary skill in the art can derive other drawings from them without inventive effort.
FIG. 1 is a schematic flow chart of a method for automatically generating a packaging graphic according to an embodiment of the first aspect of the present invention;
FIG. 2 is a schematic diagram of an automatic package graphics generating apparatus according to a second embodiment of the present invention;
fig. 3 is a schematic diagram of an electronic device architecture according to an embodiment of a third aspect of the present invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are some embodiments of the present invention, but not all embodiments of the present invention. It should be noted that embodiments and features of embodiments in the present disclosure may be combined, separated, interchanged, and/or rearranged with one another without conflict. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. Furthermore, when the terms "comprises" and/or "comprising," and variations thereof, are used in the present specification, the presence of stated features, integers, steps, operations, elements, components, and/or groups thereof is described, but the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof is not precluded. It is also noted that, as used herein, the terms "substantially," "about," and other similar terms are used as approximation terms and not as degree terms, and as such, are used to explain the inherent deviations of measured, calculated, and/or provided values that would be recognized by one of ordinary skill in the art.
The following provides an embodiment of the first aspect of the present invention with a specific embodiment of a method for automatically generating a package graphic. Referring to fig. 1, an embodiment of the first aspect of the present invention provides a method for automatically generating a packaging graphic, including the following steps:
the method comprises the steps of obtaining design requirements according to user input analysis based on a packaging language model, wherein the design requirements comprise packaging types, packaged product types or design styles, and obtaining a packaging model according to the packaging types.
In some embodiments, the training method of the packaging language model comprises:
acquiring package design term data and a pre-training language model;
the packaging language model adopts LLM (Large Language Model) language model technology, which is a leading edge technology in the current natural language processing field, and can enable the model to predict the next word or sentence through training the model so as to realize understanding of the natural language. LLM language model techniques can more accurately understand natural language input than traditional rule-based natural language processing techniques. In the invention, the LLM language model technology can help the system to more accurately analyze the natural language input of the client and understand the requirements of the client, thereby generating the design proposal effect which meets the requirements of the client.
Another advantage of LLM language model technology is that unsupervised pre-training can be performed. This means that by pretraining a lot of unlabeled data, the LLM language model can learn more language knowledge, improving understanding ability of natural language. In the invention, through carrying out unsupervised pre-training on a large number of package design related texts, the LLM language model can better understand natural language input related to package design and generate design proposal effects more in line with customer requirements.
In addition, the LLM language model technology can analyze and mine the data input by the clients, and support is provided for continuous optimization and updating of the system. By analyzing the large amount of data entered by the customer, the LLM language model can discover some hidden rules and patterns. For example, the LLM language model may discover some types of similar package designs, thereby providing more options for customers. Meanwhile, the LLM language model can also find out the change and trend of some customer demands, and support is provided for continuous optimization and updating of the system.
The LLM language model technology is a very promising natural language processing technology, can help a system to more accurately understand natural language input of a client, generate a design scheme effect which meets the requirements of the client, and provide support for continuous optimization and updating of the system.
Preferably, in some embodiments, the training method of the packaging language model includes:
package design language data and a pre-trained language model are obtained.
Specifically, the embodiment of the invention provides a vertical-domain language model for the packaging design industry, used to realize human-computer interaction, design requirement prediction, and design case generation in that industry. A pre-trained deep-learning language model is fine-tuned with specialized data from the packaging design field so that it understands packaging design terminology and concepts. A pre-trained language model suitable for the invention is first selected. The pre-trained model is a deep-learning Transformer architecture that performs well on natural language processing tasks, either a generative pre-trained Transformer (e.g., the OpenAI GPT series) or a bidirectional Transformer (e.g., the BERT series). In some embodiments, an already trained model may be used directly as the pre-trained language model, or a pre-trained language model may be trained by the following method.
To summarize the training process of the pre-trained language model: first, a large amount of relevant data is collected according to the characteristics of the task; the selected Transformer model is then trained on this dataset; the trained model is run on a test set to check whether its performance meets expectations and to decide on a hyperparameter optimization strategy; finally, the fine-tuned model is used for the target inference application.
The training process of the pre-training model comprises the following steps:
data collection, first, a large amount of data related to the present invention is collected, including text, images, etc. To construct a high quality dataset, data may be collected from multiple sources, such as industry forums, design blogs, academic papers, and the like.
Data preprocessing: preprocessing the collected data, including removing irrelevant elements, converting picture formats, word segmentation, labeling and the like. The purpose of the data preprocessing is to convert the raw data into a format suitable for input by the neural network.
Selecting a Transformer model: a generative pre-trained Transformer model suitable for the present invention, such as the OpenAI GPT family, is selected. These models perform well on natural language processing tasks and facilitate implementation of the invention.
Model training: the preprocessed dataset is divided into a training set and a validation set. The training set is fed into the selected Transformer model for multiple rounds of training. During training, hyperparameters such as the learning rate, as well as model parameters, can be adjusted in time to optimize model performance.
Model verification: the trained model is validated using the validation set, and its performance is evaluated with indicators such as the loss value and accuracy on that set. If the verification results are poor, the process returns to the hyperparameter adjustment step to further optimize the model.
After training of the pre-trained language model is completed, the model needs to be fine-tuned; fine-tuning can begin once the model's performance on the validation set meets expectations. Fine-tuning is typically accomplished by continuing to train the model on targeted data, such as the specialized design-related data collected in the present invention. This makes the model better suited to the specific task and improves the performance of the final inference application.
Model test: after fine tuning of the model, the model is finally tested using the test set that was set aside before. And evaluating the performance of the model on various indexes, and determining whether the model meets the requirements of actual application scenes.
Model deployment: the trained and fine-tuned model is deployed to the actual application environment, such as an API, an embedded system, and the like. At this time, the model can be used for generating package design description, solving design related problems and other tasks, and the aim of the invention is achieved.
Specifically, the embodiment of the invention selects the existing pre-training language model, and after determining the pre-training language model, text data closely related to the packaging design industry is collected first, which is called packaging design term data in the embodiment. Such data includes, but is not limited to, package design forums, blogs, courses, industry articles, and the like. Data is crawled from sources such as websites, social platforms, online forums, and the like through web crawler technology and API technology.
The packaging design term data then undergo a first preprocessing, which includes removing HTML tags and special characters, cleaning the data, and removing stop words. For data cleaning, the same data may be collected from multiple sources during acquisition and erroneous records deleted by comparison and verification, a method known as "data fusion", so that the data are more accurate and reliable. The general data cleaning process is: 1. collect the data; 2. organize the data (fill in missing values, normalize formats, etc.); 3. verify the data (length checks, value-range checks, consistency checks, etc.); 4. filter the data, keeping the correct records according to business requirements; 5. convert the data (extraction, normalization, and similar transformations). For text data, common cleaning operations are deleting extra whitespace and line breaks, correcting spelling errors, normalizing case, and removing punctuation. For numeric data, common operations are removing outliers, interpolating missing values, and calibrating data of different scales. After cleaning, the data are verified to ensure their quality, for example by comparing against the original sources to check that no new errors were introduced, by sampling and inspecting the cleaned data for accuracy, or by having professionals review part of the cleaning results.
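A minimal sketch of this first preprocessing, using only the Python standard library; the stop-word set and the sample text are illustrative and not part of the patent.

```python
import html
import re

def first_preprocess(raw_text: str, stopwords: set) -> str:
    """Remove HTML tags and special characters, normalize whitespace, and drop stop words."""
    text = html.unescape(raw_text)
    text = re.sub(r"<[^>]+>", " ", text)                 # strip HTML tags
    text = re.sub(r"[^\w\s\u4e00-\u9fff]", " ", text)    # strip special characters, keep CJK
    text = re.sub(r"\s+", " ", text).strip()             # collapse whitespace and line breaks
    return " ".join(t for t in text.split() if t not in stopwords)

# Example with a toy stop-word set
print(first_preprocess("<p>Eco-friendly   carton &amp; label design!</p>", {"and", "the"}))
```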
And performing word segmentation processing on the first preprocessed package design term data to extract keywords, phrases or industry terms in the package design term data.
In some embodiments, the word segmentation process includes word segmentation of the package design term data using a text processing tool to obtain word segmentation results, the text processing tool including jieba word segmentation or THULAC.
Preferably, the stop words are removed from the word segmentation result, and common stop words in the text, such as 'and' are removed, so that the words with practical meaning are left, and the subsequent keyword extraction is facilitated.
The keyword extraction method comprises extracting the keywords from the word segmentation result based on a BERT-based TextRank algorithm or a BERT keyword extraction library, wherein the BERT keyword extraction library comprises Bert-extraction-keywords.
Extracting the phrase and the industry term comprises the step of performing part-of-speech analysis on the word segmentation result through a part-of-speech tagging tool, and extracting the phrase and the industry term containing actual meaning through combining words with different parts of speech, wherein the part-of-speech tagging tool comprises jieba part-of-speech tagging or LTP.
It should be appreciated that keywords, phrases, and industry terms constitute the key vocabulary of the packaging design field and better express the emphasis and topics of packaging design language, which helps the model understand user input. Common extraction methods are: statistical methods such as TF-IDF that extract high-frequency words and phrases; semantic methods that use the relationships between words to extract meaningful words and phrases; and expert-knowledge methods in which domain experts extract keywords from the source material.
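The segmentation, keyword-extraction, and part-of-speech steps can be sketched with the jieba toolkit named above; the built-in TextRank extractor stands in for the BERT-based extractors, and the sample sentence is illustrative.

```python
import jieba
import jieba.analyse
import jieba.posseg as pseg

text = "环保瓦楞纸箱采用简约风格的包装设计"  # illustrative packaging-design sentence

tokens = jieba.lcut(text)                                          # word segmentation
keywords = jieba.analyse.textrank(text, topK=5, withWeight=False)  # keyword extraction
tagged = [(w.word, w.flag) for w in pseg.cut(text)]                # part-of-speech tagging

# Keep noun/verb words as candidates for phrases and industry terms
candidates = [w for w, flag in tagged if flag.startswith(("n", "v"))]
print(tokens, keywords, candidates)
```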
The keywords, phrases and industry terms are de-duplicated and then added to the vocabulary of the pre-trained language model.
Specifically, the extracted keywords, phrases, and industry terms are aggregated to create a vocabulary. These vocabularies can be ordered and de-duplicated, guaranteeing uniqueness and accuracy of the vocabulary. The vocabulary in the newly created vocabulary is added to the vocabulary of the pre-trained language model. In this way, the pre-trained language model can better identify and understand the related terms of the industry when processing the related tasks of the package design, and the application performance of the model in the field is improved.
Through the process, word segmentation can be effectively carried out on the text data, keywords, phrases and industry terms are extracted, and then the terms are added into a vocabulary of the pre-training model, so that more accurate industry field information is provided for subsequent tasks.
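The vocabulary-extension step can be illustrated with the Hugging Face transformers library (an assumed toolkit; the patent names none). Newly added tokens receive randomly initialized embedding rows that are refined during the later fine-tuning.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # stand-in for the GPT-style pre-trained model described above
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# De-duplicated packaging-design keywords, phrases, and industry terms (illustrative)
domain_terms = sorted({"die-cut layout", "corrugated board", "flexographic printing"})

num_added = tokenizer.add_tokens(domain_terms)   # extend the model vocabulary
model.resize_token_embeddings(len(tokenizer))    # allocate embedding rows for the new terms
print(f"added {num_added} domain tokens")
```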
Fine-tuning of the pre-trained language model with the collected specialized data is then required. The fine-tuning process involves importing the pre-trained language model weights and gradually optimizing the model using a custom dataset and loss function for the packaging design industry. During optimization, different hyperparameters can be adjusted to achieve optimal performance. The fine-tuning process includes:
Obtaining a custom package design dataset comprising performing a second preprocessing of package design industry data such that the package design industry data meets an input format of the pre-trained language model, thereby forming the custom package design dataset.
And fine tuning the pre-training language model based on the custom package design data set according to the selected loss function and the optimizer so as to update the network weight of the pre-training language model and word vectors corresponding to vocabulary in the vocabulary, thereby obtaining the package language model.
The pre-trained language model weights are the numerical values of the connections between neurons in each layer of the trained neural network. These weights are obtained by training on a large amount of data and give the model the ability to represent the input data effectively and learn tasks from it. In a pre-trained language model, the weights typically consist of two parts: word embedding weights and Transformer network weights. Word embedding weights map each word in the text data to a fixed-length vector (commonly called a word vector); these vectors capture semantic relationships between words, for example, similar words lie closer together in the vector space. The word embedding weights of a pre-trained language model are obtained by training on a large amount of data and have good semantic expressive power. The Transformer network weights comprise multiple layers of self-attention mechanisms and position-wise feed-forward networks; they are adjusted continually during training to learn the complex relationships and structure of the input text. The Transformer network weights of a pre-trained language model have already learned a certain degree of text representation and can be used directly for some natural language processing tasks.
During the fine tuning process, the pre-trained language model weights are optimized using collected specialized data, i.e., package design industry data (e.g., accumulated design data or network resources). This includes:
importing pre-trained language model weights: the weights of the trained pre-trained language model (e.g., GPT-series model weights) are imported into the custom model as its initial weights.
Constructing the custom packaging design dataset: the collected packaging design industry data are preprocessed according to the model's input requirements to build the custom dataset.
Setting a loss function and an optimizer: a loss function (e.g., cross entropy loss, mean square error loss, etc.) and an optimizer (e.g., adam, SGD, etc.) are selected for the model to guide the model's optimization process.
Fine tuning the model: the custom package design dataset is fed into the model, and the loss value is calculated. In the optimization process, the weight of the model is adjusted according to the loss value. The model performs better on the custom package design dataset through iterative training rounds (Epochs).
Hyperparameter adjustment: during fine-tuning, different hyperparameters, such as the learning rate and weight decay, may be adjusted to achieve optimal performance. Hyperparameters can be selected by methods such as grid search or random search.
After the fine tuning process is completed, the pre-trained language model will have the ability to better understand and handle the package design industry tasks.
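A minimal fine-tuning loop consistent with the steps above, sketched in PyTorch; the dataset object, batch size, and hyperparameter values are placeholders, and each batch is assumed to hold input_ids and attention_mask tensors.

```python
import torch
from torch.utils.data import DataLoader

def fine_tune(model, train_dataset, epochs=3, lr=5e-5, weight_decay=0.01, device="cuda"):
    """Fine-tune a pre-trained causal language model on the custom packaging-design dataset."""
    model.to(device).train()
    optimizer = torch.optim.AdamW(model.parameters(), lr=lr, weight_decay=weight_decay)
    loader = DataLoader(train_dataset, batch_size=8, shuffle=True)
    for epoch in range(epochs):
        total = 0.0
        for batch in loader:
            batch = {k: v.to(device) for k, v in batch.items()}
            # Causal LMs from the transformers library return the loss when labels are given
            loss = model(**batch, labels=batch["input_ids"]).loss
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
            total += loss.item()
        print(f"epoch {epoch + 1}: mean loss {total / len(loader):.4f}")
    return model
```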
Preferably, during the fine tuning process, the custom package design dataset is divided into a training set and a verification set. The verification set is used to evaluate the performance of the model on the package design professional task. And performing performance optimization on the model by combining evaluation indexes such as accuracy, recall rate and F1. During the fine tuning process, the data set is divided into training and validation sets in order to evaluate the performance of the model on the package design professional task and avoid overfitting. The method specifically comprises the following steps:
dividing the data set: first, the collected professional dataset was randomly divided into training and validation sets at a ratio of 80% to 20%. The training set is used for model training and updating model weights. The validation set is used to evaluate the performance of the model on the package design task during the training process.
Prevent overfitting: by evaluating model performance over a validation set, we can see if the model is overfitted to training data. Overfitting means that the model performs well on training data but poorly on new data. By setting a validation set, we can track the performance of the model on the new data (validation set) and stop training when the model starts to over-fit.
Evaluation index: in order to measure the performance of the model on the packing design professional task, the accuracy, recall, F1 and other evaluation indexes can be used. The accuracy measures the proportion of the correct result of the model prediction to the total predicted result; the recall rate is measured by the proportion of the correctly predicted result of the model to the true positive example; the F1 score is a harmonic average value of the accuracy and the recall rate, and the accuracy and the recall rate can be comprehensively considered. These evaluation indexes help us to more fully understand the performance of the model.
Performance optimization: in the training process, according to the accuracy rate, recall rate, F1 and other index conditions on the verification set, the model can be subjected to performance optimization. This includes adjusting super-parameters such as learning rate, weight decay, etc., and trying different model structures, loss functions, etc. The optimization objective is to achieve better performance of the model on the task of packaging design expertise.
Through the steps, the training set and the verification set can be effectively utilized, and the performance of the model on the packaging design task can be estimated and optimized. Models that perform well on the validation set are expected to perform well in practical applications as well.
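The 80/20 split and the accuracy / recall / F1 evaluation can be sketched with scikit-learn; the samples, labels, and the trivial predictor below are toy stand-ins for the packaging-design data and the fine-tuned model.

```python
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, recall_score, f1_score

# Toy stand-ins for packaging-design samples and their style labels
samples = [[i, i % 3] for i in range(100)]
labels = [i % 3 for i in range(100)]

# 80% training / 20% validation split
train_x, val_x, train_y, val_y = train_test_split(
    samples, labels, test_size=0.2, random_state=42, shuffle=True)

# A trained model would predict on the validation split; a trivial rule stands in here
val_pred = [x[1] for x in val_x]

print("accuracy:", accuracy_score(val_y, val_pred))
print("recall  :", recall_score(val_y, val_pred, average="macro"))
print("F1      :", f1_score(val_y, val_pred, average="macro"))
```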
After model tuning is completed, the model is deployed to a server using container technology (e.g., Docker). Deployment may also use cloud services (e.g., AWS, Google Cloud, Azure). To facilitate client calls, an API interface is created, and the language model is embedded into actual application scenarios such as a Web platform or a mobile application.
In addition, data are periodically re-collected and the model re-fine-tuned to keep up with changes in the packaging design field. The actual use of the model is monitored, feedback is collected, and the model is optimized and updated as required, thereby realizing model monitoring and updating.
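One possible shape of the API wrapper mentioned above, sketched with FastAPI (an assumed choice; the patent only states that an API interface is created). The parse_requirements function is a hypothetical stand-in for the call into the fine-tuned packaging language model.

```python
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="Packaging language model API (sketch)")

class DesignQuery(BaseModel):
    description: str  # free-text requirement, e.g. "a gift box for fresh grapes"

def parse_requirements(description: str) -> dict:
    # Hypothetical stand-in for invoking the fine-tuned packaging language model
    return {"packaging_type": "box", "product_type": "fruit", "design_style": "fresh"}

@app.post("/design-requirements")
def design_requirements(query: DesignQuery) -> dict:
    """Return the structured design requirements parsed from the user input."""
    return parse_requirements(query.description)
```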
In some embodiments, further comprising: establishing a box-type library, wherein the packaging model comprises a standard cutting die and a label, the label comprises types, sizes, materials, application range descriptions or manufacturing processes, and the types comprise boxes, bags, bottles, cases, or cans; the box-type library further comprises a three-dimensional modeling of the packaging model and a two-dimensional plan view of the standard cutting die. Preferably, the label also indicates the packaged products for which the packaging type is suitable and the applicable design templates, and the embodiment matches against the label content when the packaging model is invoked.
The step of obtaining the package model according to the package type comprises matching according to the package type and the label.
The box-type library, also called the packaging model library, is a pre-established collection of basic models of various packaging types in different sizes and shapes, including boxes, bags, bottles, cases, cans, and the like; each model has a three-dimensional modeling and a corresponding unfolded die-cut layout. In the present invention, computer-aided design (CAD) software is used for the three-dimensional modeling, and a database-based model library management system is designed so that the models are easily accessible and reusable throughout the system.
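A sketch of how the box-type library and its label-based matching could be represented; the dataclass fields mirror the labels listed above, and the two entries and file paths are purely illustrative.

```python
from dataclasses import dataclass, field

@dataclass
class PackagingModel:
    name: str
    package_type: str                       # box, bag, bottle, case, can ...
    size: str
    material: str
    tags: set = field(default_factory=set)  # application range, process, suitable products
    die_line: str = ""                      # path to the 2-D die-cut plan
    mesh: str = ""                          # path to the 3-D model

BOX_LIBRARY = [
    PackagingModel("carry-handle carton", "box", "200x150x100 mm", "corrugated board",
                   {"fruit", "retail", "carry-handle"}, "dies/carton.png", "meshes/carton.obj"),
    PackagingModel("stand-up pouch", "bag", "180x260 mm", "laminated film",
                   {"snacks", "resealable"}, "dies/pouch.png", "meshes/pouch.obj"),
]

def match_packaging(package_type: str, required_tags: set) -> list:
    """Match by package type first, then rank candidates by label/tag overlap."""
    candidates = [m for m in BOX_LIBRARY if m.package_type == package_type]
    return sorted(candidates, key=lambda m: len(m.tags & required_tags), reverse=True)

print([m.name for m in match_packaging("box", {"fruit", "carry-handle"})])
```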
In some embodiments, the method further comprises automatically inferring the packaging types a user may need from the user's registration information and automatically suggesting descriptive words. For example, the registration information may indicate that the user is a fruit retail chain brand; the system analyzes the brand category using internet data, generates recommended descriptive words, and matches the user with packaging type recommendations suitable for packaging fruit.
Through the above process, the invention realizes a vertical domain language model for the packaging design industry. The packaging language model has the understanding of the technical terms and concepts of packaging design, and can effectively complete the tasks of man-machine interaction, design demand prediction, design case generation and the like. The invention provides a practical and high-performance intelligent assistant for package design, which is beneficial to improving the working efficiency of the package design industry.
In some embodiments, further comprising conducting multiple query interactions with the user based on the packaging language model, obtaining design requirements from the user input analysis, comprising:
obtaining a first design requirement according to the user input analysis;
generating a plurality of alternatives according to the first design requirement;
acquiring a second design requirement according to the selected alternative scheme and the modification of the alternative scheme;
The user input includes the registration information, a user selection of the query guidance tag, a user selection of the alternative, or a user requirement description.
Through guided multi-round conversations, the customer is helped to quickly find a suitable packaging type. Based on the trained data model, a packaging box whose type, standard, and size fit the user's requirements is matched.
For example, when the user asks for a packaging box that can hold fresh grapes, the system recommends a carry-handle box type that is convenient to carry, extracts the key element words "fresh grapes", generates a grape pattern, and derives the theme phrase "sweet grapes" from the characteristics of grapes. The design is automatically applied to the two-dimensional plan view of the packaging box's standard cutting die using the rich set of preset templates and rendered into a 3D packaging effect in real time. An introduction to the intermediate design solution, including material characteristics and advantages, is generated by the packaging language model. The packaging effect can thus be generated automatically from the customer's requirements by overlaying the pattern automatically produced by the image generation model with text rendered from the extracted user requirements.
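A thin sketch of the multi-round refinement loop: each turn would normally be parsed by the packaging language model and its structured output merged into the running design requirement until the user confirms. All names here are placeholders.

```python
def refine_requirements(turns: list) -> dict:
    """Merge successive user turns into one design requirement (illustrative only)."""
    requirements = {}
    for turn in turns:
        # In the real system each turn is parsed by the packaging language model;
        # here the turns are already structured for clarity.
        requirements.update({k: v for k, v in turn.items() if v})
    return requirements

turns = [
    {"packaging_type": "box", "product_type": "fresh grapes", "design_style": ""},
    {"design_style": "fresh and natural", "alternative_chosen": "carry-handle carton"},
]
print(refine_requirements(turns))
```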
A design element pattern is generated from the packaged product type and the design style based on an image generation model.
And calling a corresponding design template from a template library according to the design style.
In some embodiments, the template library comprises a number of design templates collected in advance, the design templates comprising one or more combinations of text typesetting, overall layout, color, or font; preferably, the template library further comprises a packaging model for which the design template is applicable, and after the packaging model is determined, the proper design template is automatically matched through the information when the design template is called.
The method further comprises classifying style categories of the design templates through image clustering and style migration algorithms;
the step of calling the corresponding design templates from the template library according to the design styles comprises matching according to the design styles and the style categories.
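Style classification of the template library by clustering can be sketched as follows; random vectors stand in for per-template image features (which would come from a pre-trained CNN), and K-Means stands in for the clustering/style-migration step.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
template_features = rng.normal(size=(40, 512))   # one feature vector per design template

kmeans = KMeans(n_clusters=5, n_init=10, random_state=0).fit(template_features)
style_category = kmeans.labels_                   # style category id of each template

def match_template(style_feature: np.ndarray) -> int:
    """Return the index of a template whose style cluster matches the requested style."""
    cluster = int(kmeans.predict(style_feature.reshape(1, -1))[0])
    return int(np.flatnonzero(style_category == cluster)[0])

print(match_template(rng.normal(size=512)))
```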
Package design image data is acquired and data augmentation is performed by a data enhancement method to obtain an image dataset.
In particular, a large amount of package design image data needs to be collected first, which should cover a wide variety of styles and types of designs. The data set can be further expanded by using proper data enhancement means (such as rotation, scaling and the like) to improve the generalization capability of the model.
Third preprocessing is then performed on the image data in the image dataset so that the image data meet the input requirements of the neural network. The third preprocessing includes operations such as scaling, cropping, and normalization, so that the image data are suitable for being fed into the neural network for training.
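The augmentation (rotation, scaling) and the third preprocessing (resizing, cropping, normalization) can be expressed as torchvision pipelines; the image sizes and normalization statistics below are common defaults, not values taken from the patent.

```python
from torchvision import transforms

# Data augmentation used to expand the packaging-design image dataset
augment = transforms.Compose([
    transforms.RandomRotation(degrees=15),
    transforms.RandomResizedCrop(256, scale=(0.8, 1.0)),   # random scaling + crop
    transforms.RandomHorizontalFlip(),
])

# "Third preprocessing": make images fit the neural-network input requirements
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])
```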
Then, the image data after the third preprocessing are classified, and the features of each category are extracted through a convolutional neural network. Grouping is performed according to commonly used pattern classifications and styles. Next, the features of each category, including information such as size, color, and shape, are extracted. This can be achieved with a pre-trained convolutional neural network (e.g., ResNet or VGG).
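Category features can be extracted with a pre-trained CNN as suggested above; a ResNet-50 with its classifier head removed is one common choice (assuming torchvision 0.13 or later for the weights API).

```python
import torch
from torchvision import models

# Pre-trained ResNet-50 as a fixed feature extractor (a VGG would work the same way)
backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()   # drop the ImageNet classification head
backbone.eval()

@torch.no_grad()
def extract_features(image_batch: torch.Tensor) -> torch.Tensor:
    """(N, 3, 224, 224) preprocessed images -> (N, 2048) pooled feature vectors."""
    return backbone(image_batch)

print(extract_features(torch.randn(4, 3, 224, 224)).shape)   # torch.Size([4, 2048])
```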
A plurality of embedded models are trained on the classified image data so that each embedded model learns the characteristics of a different design style; the embedded models comprise StyleGAN. Embedded models for the various styles are trained separately from the classified data. The purpose of these models is to learn the characteristics of different design styles, which can be achieved using a generative adversarial network (GAN) such as StyleGAN.
A generative model is then trained on the image data so that it can be optimized and adjusted when generating an image; the generative model comprises a Hypernetwork, LoRA, or a VAE. Generative models such as a Hypernetwork, LoRA, or a VAE need to be trained so that they can be optimized and adjusted when generating images. These models can preserve the main features of the input image during generation while making local adjustments to improve the final result.
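Of the generative models named above, the VAE is the simplest to sketch; below is a minimal PyTorch variational autoencoder with arbitrarily chosen sizes, intended only to illustrate the reconstruct-while-adjusting behaviour described in the text.

```python
import torch
import torch.nn as nn

class TinyVAE(nn.Module):
    """Minimal VAE: encode an image vector, sample a latent, decode it back."""
    def __init__(self, in_dim=3 * 64 * 64, latent_dim=128):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 512), nn.ReLU())
        self.to_mu = nn.Linear(512, latent_dim)
        self.to_logvar = nn.Linear(512, latent_dim)
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 512), nn.ReLU(),
                                     nn.Linear(512, in_dim), nn.Sigmoid())

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)   # reparameterization trick
        return self.decoder(z), mu, logvar

def vae_loss(recon, x, mu, logvar):
    recon_loss = nn.functional.mse_loss(recon, x, reduction="sum")   # keep main features
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())     # regularize the latent
    return recon_loss + kl

x = torch.rand(8, 3 * 64 * 64)
recon, mu, logvar = TinyVAE()(x)
print(vae_loss(recon, x, mu, logvar).item())
```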
The embedded models and the generating models which respectively have different styles are fused to train and generate the image generating model, and the image generating model is used for generating a design element image according to user description. So that it can simultaneously process various design elements, such as characters, animals, plants, products, etc.
The trained image generation model is evaluated on test data by analyzing the quality and creativity of the packaging design images it generates, and it is adjusted and optimized promptly where problems are found. The model then receives the design requirements and reference patterns provided by the user and generates creative packaging designs that meet the user's needs.
Then, a package design pattern is generated from the design element pattern and the design template.
Finally, the package design pattern is combined with the package model to generate the package graphic.
In some embodiments, the packaging graphic comprises a packaging die-cut layout generated by combining the packaging design pattern with the two-dimensional plan view of the standard cutting die;
the packaging graphic further comprises a three-dimensional rendering generated from the three-dimensional modeling of the packaging model and the packaging design pattern, wherein the three-dimensional rendering is generated through real-time ray tracing.
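Combining the generated packaging design pattern with the two-dimensional die-cut plan can be sketched with Pillow; the file paths and the panel rectangle are illustrative, since in the system they would come from the box-type library.

```python
from PIL import Image

def compose_dieline(dieline_path: str, pattern_path: str, panel_box: tuple) -> Image.Image:
    """Paste the generated design pattern onto one panel of the 2-D die-cut plan."""
    dieline = Image.open(dieline_path).convert("RGBA")
    pattern = Image.open(pattern_path).convert("RGBA")
    left, top, right, bottom = panel_box                      # panel rectangle on the die line
    pattern = pattern.resize((right - left, bottom - top))
    dieline.alpha_composite(pattern, dest=(left, top))
    return dieline

# Illustrative paths and coordinates (in practice supplied by the box-type library)
layout = compose_dieline("dies/carton.png", "out/grape_pattern.png", (100, 80, 500, 380))
layout.save("out/packaging_dieline.png")
```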
In some embodiments, further comprising editing the packaging graphic by an editing module;
the editing module comprises a graphic display unit for displaying a three-dimensional rendering and the packaging die-cut layout of the design scheme;
the editing module comprises a layout editing unit, wherein the layout editing unit is used for modifying the position relation of each design element or adding or deleting design elements, and the design elements at least comprise text elements or pattern elements;
the editing module further comprises a text editing unit, wherein the text editing unit is used for modifying and adding text, and the text editing unit is in communication connection with the packaging language model to generate a text file according to user input;
the editing module further includes a pattern editing unit communicatively coupled to the image generation model to generate pattern elements based on user input.
An embodiment of a second aspect of the present invention provides an automatic packaging graphic generating device, including:
the demand acquisition module is used for obtaining design requirements by analyzing user input with a packaging language model, wherein the design requirements comprise a packaging type, a packaged product type, or a design style, and the packaging model is obtained according to the packaging type;
An image generation module for generating a design element pattern according to the packaged product type and the design style based on an image generation model;
the template acquisition module is used for calling a corresponding design template from a template library according to the design style;
a first synthesis module for generating a package design pattern according to the design element pattern and the design template;
a second synthesis module that combines the packaging design pattern with the packaging model to generate the packaging graphic;
the training method of the image generation model comprises the following steps:
acquiring package design image data and performing data expansion by a data enhancement method to obtain an image dataset;
performing third preprocessing on the image data in the image data set to enable the image data to meet the input requirement of the neural network;
classifying the image data after the third preprocessing, and extracting the characteristics of each category through a convolutional neural network;
training a plurality of embedded models according to the classified image data so that each embedded model respectively learns the characteristics of different design styles, wherein each embedded model comprises StyleGAN;
training a generative model according to the image data so that the generative model can perform optimization and adjustment when generating an image, wherein the generative model comprises a Hypernetwork, LoRA, or a VAE;
The embedded models and the generating models which respectively have different styles are fused to train and generate the image generating model, and the image generating model is used for generating a design element image according to user description.
An embodiment of a third aspect of the present invention provides an electronic device, as shown in fig. 3, including a memory and a processor, where the memory stores a computer program which, when executed by the processor, implements the method for automatically generating a packaging graphic according to any of the embodiments above.
An embodiment of a fourth aspect of the present invention provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the method for automatically generating a packaging graphic according to any of the embodiments above.
Computer-readable storage media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media (transmission media), such as modulated data signals and carrier waves.
Those of skill would further appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both, and that the various illustrative elements and steps are described above generally in terms of function in order to clearly illustrate the interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied in hardware, in a software module executed by a processor, or in a combination of the two. The software modules may be disposed in Random Access Memory (RAM), memory, read Only Memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
The foregoing description is merely of specific embodiments of the invention and is not intended to limit the scope of the invention; any modifications, equivalent substitutions, improvements, and the like made within the spirit and principles of the invention are intended to be included within the scope of the invention.

Claims (10)

1. An automatic packaging graphic generation method, comprising:
obtaining design requirements by analyzing user input with a packaging language model, wherein the design requirements comprise a packaging type, a packaged product type, or a design style, and obtaining a packaging model according to the packaging type;
generating a design element pattern according to the packaged product type and the design style based on an image generation model;
calling a corresponding design template from a template library according to the design style;
generating a package design pattern according to the design element pattern and the design template;
combining the package design pattern with the package model to generate the package graphic;
the training method of the image generation model comprises the following steps:
acquiring package design image data and performing data expansion by a data enhancement method to obtain an image dataset;
performing third preprocessing on the image data in the image data set to enable the image data to meet the input requirement of the neural network;
classifying the image data after the third preprocessing, and extracting the characteristics of each category through a convolutional neural network;
training a plurality of embedded models according to the classified image data so that each embedded model respectively learns the characteristics of different design styles, wherein each embedded model comprises StyleGAN;
training a generative model according to the image data so that the generative model can perform optimization and adjustment when generating an image, wherein the generative model comprises a Hypernetwork, LoRA, or a VAE;
the embedded models and the generating models which respectively have different styles are fused to train and generate the image generating model, and the image generating model is used for generating a design element image according to user description.
2. The automatic packaging graphic generation method according to claim 1, wherein: the training method of the packaging language model comprises the following steps:
acquiring package design term data and a pre-training language model;
performing first preprocessing on the package design term data, wherein the first preprocessing comprises removing HTML labels and special characters, performing data cleaning and removing stop words;
word segmentation is carried out on the first preprocessed package design term data so as to extract keywords, phrases or industry terms in the package design term data;
adding the keywords, phrases and industry terms into a vocabulary of the pre-trained language model after de-duplication;
obtaining a custom package design dataset comprising performing a second preprocessing of package design industry data such that the package design industry data meets an input format of the pre-trained language model, thereby forming the custom package design dataset;
And fine tuning the pre-training language model based on the custom package design data set according to the selected loss function and the optimizer so as to update the network weight of the pre-training language model and word vectors corresponding to vocabulary in the vocabulary, thereby obtaining the package language model.
3. The automatic packaging graphic generating method according to claim 2, wherein: the word segmentation processing comprises word segmentation processing of the package design term data by adopting a text processing tool to obtain word segmentation results, wherein the text processing tool comprises jieba word segmentation or THULAC;
the keyword extraction method comprises extracting keywords from the word segmentation result based on a BERT-based TextRank algorithm or a BERT keyword extraction library, wherein the BERT keyword extraction library comprises Bert-extraction-keywords;
extracting the phrase and the industry term comprises the step of performing part-of-speech analysis on the word segmentation result through a part-of-speech tagging tool, and extracting the phrase and the industry term containing actual meaning through combining words with different parts of speech, wherein the part-of-speech tagging tool comprises jieba part-of-speech tagging or LTP.
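A minimal sketch of the segmentation and extraction steps with jieba (one of the tools named above) is given below; the sample sentence and the part-of-speech prefixes used to form phrases are assumptions, and a BERT-based keyword extractor could stand in for the TextRank call.

```python
# Hypothetical sketch: word segmentation, TextRank keyword extraction and POS-based
# phrase extraction with jieba. Input text and POS heuristics are invented examples.
import jieba
import jieba.analyse
import jieba.posseg as pseg

text = "环保瓦楞纸礼盒包装，采用烫金工艺和极简设计风格"   # invented input sentence

tokens = list(jieba.cut(text))                             # word segmentation
keywords = jieba.analyse.textrank(text, topK=5)            # built-in TextRank keywords

# Phrase / industry-term extraction: combine adjacent content words
# (nouns, verbs, adjectives) into candidate phrases.
phrases, buffer = [], []
for token in pseg.cut(text):
    if token.flag.startswith(("n", "v", "a")):
        buffer.append(token.word)
    else:
        if len(buffer) > 1:
            phrases.append("".join(buffer))
        buffer = []
if len(buffer) > 1:
    phrases.append("".join(buffer))

print(tokens, keywords, phrases)
```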
4. The automatic packaging graphic generation method according to claim 1, further comprising: establishing a box-type library, wherein the packaging model comprises a standard cutting die and a label, the label comprises a type, a size, a material, an application range description or a manufacturing process, and the type comprises a box, a bag, a bottle, a carton or a can; the box-type library further comprises a three-dimensional model of the packaging model and a two-dimensional plan view of the standard cutting die;
the step of obtaining the packaging model according to the packaging type comprises matching according to the packaging type and the label.
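As a sketch of how such a box-type library lookup might be organised (the dataclass fields, file paths and sample entries are hypothetical, not taken from the patent):

```python
# Hypothetical sketch: a tiny box-type library and a lookup that matches a packaging
# model by type and by label attributes. Fields, sizes and file paths are invented.
from dataclasses import dataclass

@dataclass
class PackageModel:
    kind: str          # box, bag, bottle, carton or can
    size: str
    material: str
    application: str
    process: str
    dieline_2d: str    # two-dimensional plan view of the standard cutting die
    model_3d: str      # three-dimensional model file

BOX_LIBRARY = [
    PackageModel("box", "200x120x80mm", "corrugated board", "food gift box",
                 "offset printing + hot stamping", "dies/box_200.svg", "models/box_200.obj"),
    PackageModel("bottle", "500ml", "glass", "beverage", "screen printing",
                 "dies/label_500.svg", "models/bottle_500.obj"),
]

def match_package_model(package_type: str, **labels: str) -> list[PackageModel]:
    """Return library entries whose type matches and whose labels contain the given values."""
    hits = [m for m in BOX_LIBRARY if m.kind == package_type]
    for key, value in labels.items():
        hits = [m for m in hits if value in getattr(m, key, "")]
    return hits

print(match_package_model("box", material="corrugated"))
```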
5. The automatic packaging graphic generation method according to claim 1, wherein:
the template library comprises a plurality of design templates collected in advance, wherein the design templates comprise one or more of text typesetting, overall layout, colors or fonts;
the method further comprises classifying the design templates into style categories through image clustering and style transfer algorithms;
the step of calling the corresponding design template from the template library according to the design style comprises matching according to the design style and the style categories.
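One way the style-category matching could be prototyped is sketched below, assuming CNN feature vectors for the templates are already available; the feature dimension, cluster count and template identifiers are placeholders, and the style transfer part of the claim is not shown.

```python
# Hypothetical sketch: cluster design-template feature vectors into style categories,
# then match a requested style to a category. Random vectors stand in for CNN features.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import normalize

template_ids = ["tpl_001", "tpl_002", "tpl_003", "tpl_004"]
features = normalize(np.random.rand(len(template_ids), 512))

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(features)
style_category = dict(zip(template_ids, kmeans.labels_))

def templates_for_style(style_feature: np.ndarray) -> list[str]:
    """Return the templates in the style category nearest to the requested style."""
    category = int(kmeans.predict(normalize(style_feature.reshape(1, -1)))[0])
    return [tpl for tpl, cat in style_category.items() if cat == category]

print(templates_for_style(np.random.rand(512)))
```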
6. The automatic packaging graphic generation method according to claim 4, wherein: the package graphic comprises a package cutter layout generated by combining the package design pattern with the two-dimensional plan view of the standard cutting die;
the package graphic further comprises a three-dimensional rendering image generated from the three-dimensional model of the packaging model and the package design pattern, wherein the three-dimensional rendering image is generated through a real-time ray tracing technique.
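A simple Pillow-based sketch of combining a design pattern with the two-dimensional plan view to obtain the package cutter layout is shown below; the file names and panel rectangle are hypothetical, and the ray-traced three-dimensional rendering is omitted.

```python
# Hypothetical sketch: composite the package design pattern onto the dieline plan of the
# standard cutting die. File names and the front-panel rectangle are invented.
from PIL import Image

dieline = Image.open("dies/box_200.png").convert("RGBA")          # 2D plan of the cutting die
design = Image.open("output/package_design_pattern.png").convert("RGBA")

panel_box = (120, 80, 520, 380)                                   # invented front-panel rectangle
panel_size = (panel_box[2] - panel_box[0], panel_box[3] - panel_box[1])
design_resized = design.resize(panel_size)

cutter_layout = dieline.copy()
cutter_layout.alpha_composite(design_resized, dest=panel_box[:2]) # paste design onto the panel
cutter_layout.save("output/package_cutter_layout.png")
```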
7. The automatic packaging graphic generation method according to claim 6, wherein: the method further comprises editing the package graphic through an editing module;
the editing module comprises a graphic display unit, and the graphic display unit displays the three-dimensional rendering image and the package cutter layout of the design scheme;
the editing module further comprises a layout editing unit, wherein the layout editing unit is used for modifying the positional relationship of the design elements or adding and deleting design elements, and the design elements at least comprise text elements or pattern elements;
the editing module further comprises a text editing unit, wherein the text editing unit is used for modifying and adding text, and the text editing unit is communicatively connected to the packaging language model to generate text content according to user input;
the editing module further comprises a pattern editing unit, wherein the pattern editing unit is communicatively connected to the image generation model to generate pattern elements according to user input.
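For a rough idea of the layout editing unit's data model (purely illustrative; the element fields and sample values are invented):

```python
# Hypothetical sketch: an in-memory representation of design elements whose positions
# can be modified, with elements added or deleted, as the layout editing unit describes.
from dataclasses import dataclass

@dataclass
class DesignElement:
    kind: str          # "text" or "pattern"
    content: str       # text string or pattern image path
    x: float
    y: float

class LayoutEditor:
    def __init__(self) -> None:
        self.elements: list[DesignElement] = []

    def add(self, element: DesignElement) -> None:
        self.elements.append(element)

    def move(self, index: int, dx: float, dy: float) -> None:
        self.elements[index].x += dx
        self.elements[index].y += dy

    def remove(self, index: int) -> None:
        del self.elements[index]

editor = LayoutEditor()
editor.add(DesignElement("text", "GREEN TEA GIFT BOX", 40.0, 20.0))
editor.add(DesignElement("pattern", "output/leaf_motif.png", 10.0, 60.0))
editor.move(0, 5.0, 0.0)   # nudge the title text to the right
```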
8. An automatic packaging graphic generation apparatus, comprising:
a requirement acquisition module for parsing user input based on a packaging language model to obtain design requirements, wherein the design requirements comprise a packaging type, a packaged product type or a design style, and for obtaining a packaging model according to the packaging type;
an image generation module for generating a design element pattern based on an image generation model according to the packaged product type and the design style;
a template acquisition module for calling a corresponding design template from a template library according to the design style;
a first synthesis module for generating a package design pattern according to the design element pattern and the design template;
a second synthesis module for combining the package design pattern with the packaging model to generate the package graphic;
the training method of the image generation model comprises the following steps:
acquiring package design image data and performing data expansion by a data enhancement method to obtain an image dataset;
performing third preprocessing on the image data in the image data set to enable the image data to meet the input requirement of the neural network;
classifying the image data after the third preprocessing, and extracting the characteristics of each category through a convolutional neural network;
training a plurality of embedding models on the classified image data so that each embedding model learns the characteristics of a different design style, wherein the embedding models comprise StyleGAN;
training a generative model on the image data so that the generative model can perform optimization adjustment when generating an image, wherein the generative model comprises a Hypernetwork, LoRA or a VAE;
fusing the embedding models of the respective design styles with the generative model to train the image generation model, wherein the image generation model is used for generating a design element image according to a user description.
9. An electronic device comprising a memory and a processor, the memory storing a computer program which, when executed by the processor, implements the automatic packaging graphic generation method according to any one of claims 1 to 7.
10. A computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the automatic packaging graphic generation method according to any one of claims 1 to 7.
CN202310817194.3A 2023-07-05 2023-07-05 Automatic generation method, device, medium and equipment for packaging graphics Active CN116611131B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310817194.3A CN116611131B (en) 2023-07-05 2023-07-05 Automatic generation method, device, medium and equipment for packaging graphics

Publications (2)

Publication Number Publication Date
CN116611131A true CN116611131A (en) 2023-08-18
CN116611131B CN116611131B (en) 2023-12-26

Family

ID=87674928

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310817194.3A Active CN116611131B (en) 2023-07-05 2023-07-05 Automatic generation method, device, medium and equipment for packaging graphics

Country Status (1)

Country Link
CN (1) CN116611131B (en)

Patent Citations (28)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050030578A1 (en) * 2003-08-07 2005-02-10 Hewlett-Packard Development Company, L.P. Method of performing automated packaging and managing workflow in a commercial printing environment
US20050122543A1 (en) * 2003-12-05 2005-06-09 Eric Walker System and method for custom color design
US20130178969A1 (en) * 2010-10-20 2013-07-11 Tetra Laval Holdings & Finance S.A. Marking of packaged consumer products
CN109543359A (en) * 2019-01-18 2019-03-29 李燕清 A kind of artificial intelligence packaging design method and system based on Internet of Things big data
US20210012199A1 (en) * 2019-07-04 2021-01-14 Zhejiang University Address information feature extraction method based on deep neural network model
WO2021137942A1 (en) * 2019-12-31 2021-07-08 Microsoft Technology Licensing, Llc Pattern generation
US20220004809A1 (en) * 2020-07-01 2022-01-06 Wipro Limited Method and system for generating user driven adaptive object visualizations using generative adversarial network models
CN112862569A (en) * 2021-03-04 2021-05-28 上海交通大学 Product appearance style evaluation method and system based on image and text multi-modal data
CN113139220A (en) * 2021-05-12 2021-07-20 深圳市行识未来科技有限公司 Intelligent package design system based on Internet of things big data
CN113255079A (en) * 2021-06-01 2021-08-13 焦作大学 Artificial intelligence-based package design method and device
CN113255022A (en) * 2021-06-10 2021-08-13 浙江大胜达包装股份有限公司 Corrugated paper structure design method and system based on demand import model
CN113722783A (en) * 2021-07-08 2021-11-30 浙江海阔人工智能科技有限公司 User-oriented intelligent garment design system and method based on deep learning model
JP2023012228A (en) * 2021-07-13 2023-01-25 凸版印刷株式会社 Package design support method and program
US20230038240A1 (en) * 2021-08-03 2023-02-09 The Procter & Gamble Company Three-dimensional (3d) image modeling systems and methods for automatically generating photorealistic, virtual 3d packaging and product models from 2d imaging assets and dimensional data
CN114020954A (en) * 2021-09-10 2022-02-08 广西师范大学 Personalized image description method for embodying user intention and style
KR20230040538A (en) * 2021-09-16 2023-03-23 프로피앤피 유한회사 Production service system and method for exclusive packaging paper for packaging
KR102360561B1 (en) * 2021-10-01 2022-02-09 브이아이코리아 주식회사 Package design providing system using artificial intelligence and operation method thereof
CN113642262A (en) * 2021-10-15 2021-11-12 南通宝田包装科技有限公司 Toothpaste package appearance auxiliary design method based on artificial intelligence
CN113642566A (en) * 2021-10-15 2021-11-12 南通宝田包装科技有限公司 Medicine package design method based on artificial intelligence and big data
WO2023155460A1 (en) * 2022-02-16 2023-08-24 南京邮电大学 Reinforcement learning-based emotional image description method and system
CN115017561A (en) * 2022-04-20 2022-09-06 深圳市渠印包装技术有限公司 Method and system for generating 3D design drawing, terminal device and storage medium
CN114972848A (en) * 2022-05-10 2022-08-30 中国石油大学(华东) Image semantic understanding and text generation based on fine-grained visual information control network
CN115186312A (en) * 2022-05-27 2022-10-14 浙江省送变电工程有限公司 Method for automatically generating cable pipe-burying section view through CAD
CN115496510A (en) * 2022-09-26 2022-12-20 西藏辰云信息技术有限公司 Food circulation tracing method based on transaction
CN116150826A (en) * 2022-09-27 2023-05-23 重庆鹪鹩茶文化传播有限公司 Intelligent packaging design system
CN115587398A (en) * 2022-10-11 2023-01-10 优包(北京)科技有限公司 Intelligent packaging design, proofing and customization integrated method and system
CN115525955A (en) * 2022-10-18 2022-12-27 成都建筑材料工业设计研究院有限公司 Intelligent generation method of digital design product with special structure
CN116219799A (en) * 2022-12-13 2023-06-06 大家智合(北京)网络科技股份有限公司 Pulp molding packaging material with bouquet and preparation process and application thereof

Non-Patent Citations (9)

* Cited by examiner, † Cited by third party
Title
刘乘, 卢杰: "Design and Development of an Intelligent CAD System for Cushioning Packaging", Packaging Engineering, no. 05 *
卜如飞, 罗晓欢: "Research on Packaging Graphic Design for Chongqing Specialty Condiments", Design, no. 03 *
张新昌, 冯建华, 周防国: "Parameterization of Packaging Carton Graphics Based on the CAXA Electronic Drawing Board", Packaging Engineering, no. 04 *
李美满: "Application of Computer Image Processing Technology in Tea Packaging Design", Fujian Tea, no. 07 *
胡志才, 柯胜海: "Research on the Application and Design Forms of Intelligent Color-Changing Materials in Packaging", Packaging Engineering, no. 09 *
赵虹, 段俐敏: "Discussion on Product Packaging Design Based on Packaging Models", Journal of Chifeng University (Natural Science Edition), no. 07 *
郑芳蕾: "Reflections on Packaging Form Design under the Interaction Concept", Northwest Fine Arts, no. 03 *
魏坤: "Construction of a Case-Based Reasoning Packaging Design System", Modern Electronics Technique, no. 20 *
黄宗元, 仲梁维: "Research on a Product Packaging Data Management Information System", Precision Manufacturing & Automation, no. 02 *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117475210A (en) * 2023-10-27 2024-01-30 广州睿狐科技有限公司 Random image generation method and system for API debugging

Also Published As

Publication number Publication date
CN116611131B (en) 2023-12-26

Similar Documents

Publication Publication Date Title
US11593458B2 (en) System for time-efficient assignment of data to ontological classes
CN116541911B (en) Packaging design system based on artificial intelligence
CN111858954B (en) Task-oriented text-generated image network model
CN110597735B (en) Software defect prediction method for open-source software defect feature deep learning
CN109284363A (en) A kind of answering method, device, electronic equipment and storage medium
CN105631479A (en) Imbalance-learning-based depth convolution network image marking method and apparatus
CN107368614A (en) Image search method and device based on deep learning
CN107291723A (en) The method and apparatus of web page text classification, the method and apparatus of web page text identification
CN104572965A (en) Search-by-image system based on convolutional neural network
CN106844632A (en) Based on the product review sensibility classification method and device that improve SVMs
CN107357793A (en) Information recommendation method and device
CN107205016A (en) The search method of internet of things equipment
CN116611131B (en) Automatic generation method, device, medium and equipment for packaging graphics
CN104778186A (en) Method and system for hanging commodity object to standard product unit (SPU)
CN114419642A (en) Method, device and system for extracting key value pair information in document image
CN111046170A (en) Method and apparatus for outputting information
CN109947928A (en) A kind of retrieval type artificial intelligence question and answer robot development approach
CN114443899A (en) Video classification method, device, equipment and medium
CN109726331A (en) The method, apparatus and computer-readable medium of object preference prediction
CN112015902A (en) Least-order text classification method under metric-based meta-learning framework
CN114547307A (en) Text vector model training method, text matching method, device and equipment
CN116522912A (en) Training method, device, medium and equipment for package design language model
CN113869609A (en) Method and system for predicting confidence of frequent subgraph of root cause analysis
CN116842263A (en) Training processing method and device for intelligent question-answering financial advisor model
CN114969511A (en) Content recommendation method, device and medium based on fragments

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant