CN109102009B - Automatic packaging box design method based on generative adversarial network - Google Patents

Automatic packaging box design method based on generative adversarial network

Info

Publication number
CN109102009B
CN109102009B (application CN201810843372.9A)
Authority
CN
China
Prior art keywords
layer, convolution, pixels, convolutional, size
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810843372.9A
Other languages
Chinese (zh)
Other versions
CN109102009A (en)
Inventor
陈万军
蔺广逢
范凤梅
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xian University of Technology
Original Assignee
Xian University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xian University of Technology
Priority to CN201810843372.9A
Publication of CN109102009A
Application granted
Publication of CN109102009B
Legal status: Active

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/21 - Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/045 - Combinations of networks

Abstract

The invention provides an automatic packaging box design method based on a generative adversarial network, which comprises the following specific steps: collecting packaging boxes of a certain type of commodity; placing each packaging box in an image acquisition environment with a white background and acquiring sample images; preprocessing the acquired images; constructing a deep convolutional generative adversarial network model for automatic packaging box design; training the deep convolutional generative adversarial network model for automatic packaging box design; and generating new packaging box style images with the trained network. The invention applies the deep convolutional generative adversarial network from the current field of artificial intelligence to the design of commodity packaging boxes, so that a computer can automatically generate new packaging box design pictures, greatly improving the intelligence and efficiency of packaging box design.

Description

Automatic packaging box design method based on generative adversarial network
Technical Field
The invention relates to the field of image processing and packaging box design methods, in particular to an automatic packaging box design method based on a generative adversarial network.
Background
Before commodities are put on sale, professional designers customize exquisite and distinctive packaging boxes for them in order to increase consumers' liking and desire to purchase. Although a good packaging box can greatly increase the sales value of a commodity, it undoubtedly also increases the sales cost, because designing a packaging box that catches the public eye requires designers in specialized fields and a great deal of time and effort. If this heavy and highly specialized design task could be completed automatically by a computer, design efficiency would be greatly improved, commodity costs would be reduced, and the competitiveness and market share of the commodity would be further improved.
Currently, with the development of artificial intelligence technology, generative models represented by generative adversarial networks have received great attention in both academia and industry. The basic idea of a generative adversarial network comes from the two-player zero-sum game of game theory: the network consists of a generator sub-network and a discriminator sub-network and is trained by adversarial learning so as to estimate the underlying distribution of the data samples and generate new samples. Exploiting this property, the invention designs a deep convolutional generative adversarial network that enables a computer to automatically generate new packaging box style images from image data of product packaging boxes, making packaging box design more fashionable, intelligent, efficient and creative.
Disclosure of Invention
The invention aims to overcome the defect of the prior art that commodity packaging box design relies entirely on human designers, and provides an automatic packaging box design method based on a generative adversarial network.
In order to achieve this purpose, the method comprises the following specific steps:
step 1, collecting training sample images;
step 2, preprocessing each sample image to form a training data set:
step 2.1, cropping the image into a square image while keeping the packaging box intact;
step 2.2, scaling the cropped image to a resolution of 64X64X3 pixels;
step 3, constructing a deep convolutional generative adversarial network model for automatic packaging box design;
step 4, training the deep convolutional generative adversarial network model for automatic packaging box design;
and step 5, generating new packaging box style images with the trained deep convolutional generative adversarial network model for automatic packaging box design, thereby completing the automatic packaging box design.
The present invention is further characterized as follows.
The training sample images are acquired as follows: collect packaging boxes of a certain type of commodity and place them in an image acquisition environment with a white background to acquire sample images; the packaging box in each sample image is located at the center of the image, and every packaging box keeps the same orientation during acquisition.
the deep convolution generation countermeasure network for automatic design of the packing box in the step 3 is composed of a 6-layer generator sub-network and a 6-layer discriminator sub-network, and the network structure of the generator sub-network is as follows:
the 0 th layer is an input layer p1, and the input data is 100-dimensional noise data generated by a normal distribution function;
the layer 1 is a transposed convolution layer a1, the size of a convolution kernel is 4X4 pixels, the number of the convolution kernels is 512, the convolution sliding step length in the horizontal direction and the vertical direction is 1 pixel, and 512 feature maps are output after data normalization processing in Batch and a ReLU activation function;
the layer 2 is a transposed convolution layer b1, the size of a convolution kernel is 4X4 pixels, the number of convolution kernels is 256, the convolution sliding step lengths in the horizontal direction and the vertical direction are both 2 pixels, the filling size is 1 pixel, and after data normalization processing in Batch and a ReLU activation function, 256 feature maps are output;
the 3 rd layer is a transposed convolution layer c1, the size of a convolution kernel is 4X4 pixels, the number of convolution kernels is 128, the convolution sliding step lengths in the horizontal direction and the vertical direction are both 2 pixels, the filling size is 1 pixel, and after data normalization processing in Batch and a ReLU activation function, 128 feature maps are output;
the 4 th layer is a transposed convolution layer d1, the size of a convolution kernel is 4X4 pixels, the number of convolution kernels is 64, the convolution sliding step lengths in the horizontal direction and the vertical direction are both 2 pixels, the filling size is 1 pixel, and after data normalization processing in Batch and a ReLU activation function, 64 feature maps are output;
the 5 th layer is a transposed convolution layer e1, the size of a convolution kernel is 4X4 pixels, the number of convolution kernels is 3, the convolution sliding steps in the horizontal direction and the vertical direction are 2 pixels respectively, the filling size is 1 pixel, and a color image with the size of 64X64X3 is output after a Tanh activation function.
The network structure of the discriminator sub-network in step 3 is as follows:
layer 0 is the input layer p2, and the input data is either a sample image from the training data set or a fake sample image produced by the generator network;
layer 1 is a convolution layer a2 with 64 convolution kernels of size 4X4 pixels and a stride of 2 pixels in both the horizontal and vertical directions; after a LeakyReLU activation function it outputs 64 feature maps;
layer 2 is a convolution layer b2 with 128 convolution kernels of size 4X4 pixels, a stride of 2 pixels in both directions and a padding of 1 pixel; after a LeakyReLU activation function it outputs 128 feature maps;
layer 3 is a convolution layer c2 with 256 convolution kernels of size 4X4 pixels, a stride of 2 pixels in both directions and a padding of 1 pixel; after a LeakyReLU activation function it outputs 256 feature maps;
layer 4 is a convolution layer d2 with 512 convolution kernels of size 4X4 pixels, a stride of 2 pixels in both directions and a padding of 1 pixel; after a LeakyReLU activation function it outputs 512 feature maps;
layer 5 is a convolution layer e2 with 1 convolution kernel of size 4X4 pixels and a stride of 1 pixel in both directions; after a Sigmoid activation function it outputs the discrimination result indicating whether the input is a real sample or a fake sample image produced by the generator.
The training of the deep convolutional generative adversarial network for automatic packaging box design in step 4 comprises the following steps:
step 1, using the training data set, perform unsupervised training of the discriminator sub-network of the deep convolutional generative adversarial network for automatic packaging box design;
step 2, using noise data drawn from a normal distribution, perform unsupervised training of the generator sub-network of the deep convolutional generative adversarial network for automatic packaging box design, then feed the output images of the generator sub-network into the discriminator sub-network and train the discriminator sub-network again;
and step 3, alternately and iteratively train the discriminator sub-network of step 1 and the generator sub-network of step 2 for 800-1200 iterations to obtain the trained deep convolutional generative adversarial network for automatic packaging box design.
The invention has the following beneficial effects: the disclosed automatic packaging box design method based on a generative adversarial network makes full use of generative adversarial network technology from the current field of artificial intelligence to solve the problem of automatic commodity packaging box design in the packaging field, and provides a deep convolutional generative adversarial network for automatically generating product packaging box style pictures. Once the network has been trained to convergence, it can automatically generate original product packaging style images in a variety of styles without any manual intervention, greatly shortening the design cycle and reducing the design cost of packaging boxes.
Drawings
FIG. 1 is a flow chart of the automatic packaging box design method based on a generative adversarial network according to the present invention.
Detailed Description
The present invention will be described in detail below with reference to the accompanying drawing and specific embodiments.
The invention relates to an automatic packaging box design method based on a generative adversarial network; a flow chart is shown in FIG. 1, and the method is implemented according to the following steps:
step 1, collecting training sample images
And collecting a packaging box of a certain type of commodity, and placing the packaging box in an image collection environment with a white background for collecting a sample image. The packs in the sample image should be centered in the image while maintaining each pack in the same orientation as it was acquired.
Step 2, preprocessing each sample image to form a training data set
Step 2.1, crop the image into a square image while keeping the packaging box intact;
Step 2.2, scale the cropped image to a resolution of 64X64X3 pixels, as illustrated by the sketch below.
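As an illustration only, the following minimal Python sketch performs this preprocessing; it assumes the Pillow library and hypothetical folder names (raw_samples, training_set) that are not specified in the patent.

    # Preprocessing sketch: center-crop each acquired image to a square, then scale to 64x64x3.
    # Assumptions: Pillow is installed; folder names are illustrative only.
    from pathlib import Path
    from PIL import Image

    raw_dir = Path("raw_samples")    # acquired sample images (hypothetical folder)
    out_dir = Path("training_set")   # preprocessed 64x64 training images (hypothetical folder)
    out_dir.mkdir(exist_ok=True)

    for path in raw_dir.glob("*.jpg"):
        img = Image.open(path).convert("RGB")
        w, h = img.size
        side = min(w, h)                                       # largest centered square
        left, top = (w - side) // 2, (h - side) // 2
        img = img.crop((left, top, left + side, top + side))   # square crop keeping the centered box intact
        img = img.resize((64, 64))                             # scale to 64x64 pixels, 3 color channels
        img.save(out_dir / path.name)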
Step 3, constructing the deep convolutional generative adversarial network model for automatic packaging box design
The deep convolutional generative adversarial network for automatic packaging box design consists of a 6-layer generator sub-network and a 6-layer discriminator sub-network.
The network structure and parameters of the generator sub-network are as follows (a code sketch follows the layer list):
layer 0 is the input layer p1, and the input data is 100-dimensional noise drawn from a normal distribution;
layer 1 is a transposed convolution layer a1 with 512 convolution kernels of size 4X4 pixels and a stride of 1 pixel in both the horizontal and vertical directions; after batch normalization and a ReLU activation function it outputs 512 feature maps;
layer 2 is a transposed convolution layer b1 with 256 convolution kernels of size 4X4 pixels, a stride of 2 pixels in both the horizontal and vertical directions and a padding of 1 pixel; after batch normalization and a ReLU activation function it outputs 256 feature maps;
layer 3 is a transposed convolution layer c1 with 128 convolution kernels of size 4X4 pixels, a stride of 2 pixels in both directions and a padding of 1 pixel; after batch normalization and a ReLU activation function it outputs 128 feature maps;
layer 4 is a transposed convolution layer d1 with 64 convolution kernels of size 4X4 pixels, a stride of 2 pixels in both directions and a padding of 1 pixel; after batch normalization and a ReLU activation function it outputs 64 feature maps;
layer 5 is a transposed convolution layer e1 with 3 convolution kernels of size 4X4 pixels, a stride of 2 pixels in both directions and a padding of 1 pixel; after a Tanh activation function it outputs a color image of size 64X64X3.
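This layer specification matches the standard DCGAN generator for 64X64 output. The following PyTorch sketch is given only as an illustration; PyTorch itself, the class name PackageBoxGenerator and the use of bias-free convolutions are assumptions not stated in the patent, and a padding of 0 in layer a1 is inferred from the 1X1 input it receives.

    # Generator sketch (assumptions: PyTorch; class and variable names are illustrative only).
    import torch
    import torch.nn as nn

    class PackageBoxGenerator(nn.Module):
        """Maps 100-dimensional normal noise to a 64x64x3 color image (layers a1-e1)."""
        def __init__(self, z_dim: int = 100):
            super().__init__()
            self.net = nn.Sequential(
                # a1: 100 -> 512 feature maps, 4x4 kernel, stride 1, padding 0 (1x1 -> 4x4)
                nn.ConvTranspose2d(z_dim, 512, 4, stride=1, padding=0, bias=False),
                nn.BatchNorm2d(512), nn.ReLU(inplace=True),
                # b1: 512 -> 256, 4x4 kernel, stride 2, padding 1 (4x4 -> 8x8)
                nn.ConvTranspose2d(512, 256, 4, stride=2, padding=1, bias=False),
                nn.BatchNorm2d(256), nn.ReLU(inplace=True),
                # c1: 256 -> 128 (8x8 -> 16x16)
                nn.ConvTranspose2d(256, 128, 4, stride=2, padding=1, bias=False),
                nn.BatchNorm2d(128), nn.ReLU(inplace=True),
                # d1: 128 -> 64 (16x16 -> 32x32)
                nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1, bias=False),
                nn.BatchNorm2d(64), nn.ReLU(inplace=True),
                # e1: 64 -> 3 color channels (32x32 -> 64x64), Tanh output in [-1, 1]
                nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1, bias=False),
                nn.Tanh(),
            )

        def forward(self, z: torch.Tensor) -> torch.Tensor:
            # z has shape (batch, 100); reshape to (batch, 100, 1, 1) before the transposed convolutions
            return self.net(z.view(z.size(0), -1, 1, 1))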
The network structure and parameters of the discriminator sub-network are as follows (a code sketch follows the layer list):
layer 0 is the input layer p2, and the input data is either a sample image from the training data set or a fake sample image produced by the generator network;
layer 1 is a convolution layer a2 with 64 convolution kernels of size 4X4 pixels and a stride of 2 pixels in both the horizontal and vertical directions; after a LeakyReLU activation function it outputs 64 feature maps;
layer 2 is a convolution layer b2 with 128 convolution kernels of size 4X4 pixels, a stride of 2 pixels in both directions and a padding of 1 pixel; after a LeakyReLU activation function it outputs 128 feature maps;
layer 3 is a convolution layer c2 with 256 convolution kernels of size 4X4 pixels, a stride of 2 pixels in both directions and a padding of 1 pixel; after a LeakyReLU activation function it outputs 256 feature maps;
layer 4 is a convolution layer d2 with 512 convolution kernels of size 4X4 pixels, a stride of 2 pixels in both directions and a padding of 1 pixel; after a LeakyReLU activation function it outputs 512 feature maps;
layer 5 is a convolution layer e2 with 1 convolution kernel of size 4X4 pixels and a stride of 1 pixel in both directions; after a Sigmoid activation function it outputs the discrimination result indicating whether the input is a real sample or a fake sample image produced by the generator.
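A matching PyTorch sketch of the discriminator is shown below, again as an illustration only; the class name PackageBoxDiscriminator is hypothetical, the LeakyReLU slope of 0.2 follows common DCGAN practice rather than the patent text, and a padding of 1 pixel is assumed for layer a2 so that the feature map sizes shrink 64, 32, 16, 8, 4, 1.

    # Discriminator sketch (assumptions: PyTorch; class name, LeakyReLU slope and a2 padding are illustrative).
    import torch
    import torch.nn as nn

    class PackageBoxDiscriminator(nn.Module):
        """Scores a 64x64x3 image as real (training sample) or fake (generator output), per layers a2-e2."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                # a2: 3 -> 64 feature maps, 4x4 kernel, stride 2 (64x64 -> 32x32; padding 1 assumed)
                nn.Conv2d(3, 64, 4, stride=2, padding=1, bias=False),
                nn.LeakyReLU(0.2, inplace=True),
                # b2: 64 -> 128 (32x32 -> 16x16)
                nn.Conv2d(64, 128, 4, stride=2, padding=1, bias=False),
                nn.LeakyReLU(0.2, inplace=True),
                # c2: 128 -> 256 (16x16 -> 8x8)
                nn.Conv2d(128, 256, 4, stride=2, padding=1, bias=False),
                nn.LeakyReLU(0.2, inplace=True),
                # d2: 256 -> 512 (8x8 -> 4x4)
                nn.Conv2d(256, 512, 4, stride=2, padding=1, bias=False),
                nn.LeakyReLU(0.2, inplace=True),
                # e2: 512 -> 1, 4x4 kernel, stride 1, padding 0 (4x4 -> 1x1), Sigmoid probability of "real"
                nn.Conv2d(512, 1, 4, stride=1, padding=0, bias=False),
                nn.Sigmoid(),
            )

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # Returns one probability per image that the input is a real training sample
            return self.net(x).view(-1)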
Step 4, training the deep convolutional generative adversarial network model for automatic packaging box design
The network model is trained as follows (a sketch of the training loop is given after the list):
first, using the training data set, perform unsupervised training of the discriminator sub-network of the deep convolutional generative adversarial network for automatic packaging box design;
second, using noise data drawn from a normal distribution, perform unsupervised training of the generator sub-network of the deep convolutional generative adversarial network for automatic packaging box design, then feed the output images of the generator sub-network into the discriminator sub-network and train the discriminator sub-network again;
third, alternately and iteratively train the discriminator sub-network of the first step and the generator sub-network of the second step for 800-1200 iterations, finally obtaining the trained deep convolutional generative adversarial network for automatic packaging box design.
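For illustration only, a minimal sketch of this alternating training loop is given below, using the generator and discriminator modules sketched above; binary cross-entropy loss, the Adam optimizer, the learning rate and the data-loading details are assumptions that the patent does not specify.

    # Alternating adversarial training sketch (assumptions: PyTorch, BCE loss, Adam; hyperparameters are illustrative).
    import torch
    import torch.nn as nn

    def train(generator, discriminator, dataloader, iterations=1000, device="cpu"):
        """Alternately trains the discriminator and the generator for the given number of iterations."""
        g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4, betas=(0.5, 0.999))
        d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4, betas=(0.5, 0.999))
        bce = nn.BCELoss()
        data_iter = iter(dataloader)            # dataloader is assumed to yield batches of 64x64x3 images

        for step in range(iterations):          # e.g. 800-1200 iterations, as in step 4
            try:
                real = next(data_iter).to(device)
            except StopIteration:
                data_iter = iter(dataloader)
                real = next(data_iter).to(device)
            batch = real.size(0)
            noise = torch.randn(batch, 100, device=device)   # 100-dimensional normal noise

            # 1) Train the discriminator on real samples and on detached generator outputs
            fake = generator(noise).detach()
            d_loss = bce(discriminator(real), torch.ones(batch, device=device)) + \
                     bce(discriminator(fake), torch.zeros(batch, device=device))
            d_opt.zero_grad(); d_loss.backward(); d_opt.step()

            # 2) Train the generator so that its outputs are classified as real
            fake = generator(noise)
            g_loss = bce(discriminator(fake), torch.ones(batch, device=device))
            g_opt.zero_grad(); g_loss.backward(); g_opt.step()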
Step 5, generating new packaging box style images with the trained deep convolutional generative adversarial network model for automatic packaging box design, thereby completing the automatic packaging box design.
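Once training has converged, step 5 amounts to sampling fresh noise vectors and passing them through the trained generator; a minimal sketch follows, assuming the modules above and torchvision for saving the images (the output file name is hypothetical).

    # Generation sketch: sample noise and save a grid of new packaging box style images.
    import torch
    from torchvision.utils import save_image

    generator.eval()
    with torch.no_grad():
        noise = torch.randn(16, 100)              # 16 new 100-dimensional noise vectors
        designs = generator(noise)                # 16 images of shape 3x64x64, values in [-1, 1]
        save_image(designs, "new_box_designs.png", normalize=True)   # hypothetical output file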
The automatic packaging box design method based on a generative adversarial network fits the underlying style distribution of commodity packaging box images with a generative adversarial network, and the computer automatically generates new packaging box style images, so that product packaging box design is completed more efficiently and intelligently, the production cycle of the commodity is shortened and its production cost is reduced.

Claims (3)

1. An automatic packaging box design method based on a generative adversarial network, characterized by comprising the following steps:
step 1, collecting training sample images;
step 2, preprocessing each sample image to form a training data set:
step 2.1, cropping the image into a square image while keeping the packaging box intact;
step 2.2, scaling the cropped image to a resolution of 64X64X3 pixels;
step 3, constructing a deep convolutional generative adversarial network model for automatic packaging box design:
the deep convolutional generative adversarial network for automatic packaging box design consists of a 6-layer generator sub-network and a 6-layer discriminator sub-network, and the network structure of the generator sub-network is as follows:
layer 0 is the input layer p1, and the input data is 100-dimensional noise drawn from a normal distribution;
layer 1 is a transposed convolution layer a1 with 512 convolution kernels of size 4X4 pixels and a stride of 1 pixel in both the horizontal and vertical directions; after batch normalization and a ReLU activation function it outputs 512 feature maps;
layer 2 is a transposed convolution layer b1 with 256 convolution kernels of size 4X4 pixels, a stride of 2 pixels in both the horizontal and vertical directions and a padding of 1 pixel; after batch normalization and a ReLU activation function it outputs 256 feature maps;
layer 3 is a transposed convolution layer c1 with 128 convolution kernels of size 4X4 pixels, a stride of 2 pixels in both directions and a padding of 1 pixel; after batch normalization and a ReLU activation function it outputs 128 feature maps;
layer 4 is a transposed convolution layer d1 with 64 convolution kernels of size 4X4 pixels, a stride of 2 pixels in both directions and a padding of 1 pixel; after batch normalization and a ReLU activation function it outputs 64 feature maps;
layer 5 is a transposed convolution layer e1 with 3 convolution kernels of size 4X4 pixels, a stride of 2 pixels in both directions and a padding of 1 pixel; after a Tanh activation function it outputs a color image of size 64X64X3;
the network structure of the discriminator sub-network is as follows:
layer 0 is the input layer p2, and the input data is either a sample image from the training data set or a fake sample image produced by the generator network;
layer 1 is a convolution layer a2 with 64 convolution kernels of size 4X4 pixels and a stride of 2 pixels in both the horizontal and vertical directions; after a LeakyReLU activation function it outputs 64 feature maps;
layer 2 is a convolution layer b2 with 128 convolution kernels of size 4X4 pixels, a stride of 2 pixels in both directions and a padding of 1 pixel; after a LeakyReLU activation function it outputs 128 feature maps;
layer 3 is a convolution layer c2 with 256 convolution kernels of size 4X4 pixels, a stride of 2 pixels in both directions and a padding of 1 pixel; after a LeakyReLU activation function it outputs 256 feature maps;
layer 4 is a convolution layer d2 with 512 convolution kernels of size 4X4 pixels, a stride of 2 pixels in both directions and a padding of 1 pixel; after a LeakyReLU activation function it outputs 512 feature maps;
layer 5 is a convolution layer e2 with 1 convolution kernel of size 4X4 pixels and a stride of 1 pixel in both directions; after a Sigmoid activation function it outputs the discrimination result indicating whether the input is a real sample or a fake sample image produced by the generator;
step 4, training the deep convolutional generative adversarial network model for automatic packaging box design;
and step 5, generating new packaging box style images with the trained deep convolutional generative adversarial network model for automatic packaging box design, thereby completing the automatic packaging box design.
2. The automatic packaging box design method based on a generative adversarial network according to claim 1, wherein the training sample images are acquired as follows: collect packaging boxes of a certain type of commodity and place them in an image acquisition environment with a white background to acquire sample images; the packaging box in each sample image is located at the center of the image, and every packaging box keeps the same orientation during acquisition.
3. The automatic packaging box design method based on a generative adversarial network according to claim 1, wherein the training of the deep convolutional generative adversarial network for automatic packaging box design in step 4 comprises the following steps:
step 1, using the training data set, perform unsupervised training of the discriminator sub-network of the deep convolutional generative adversarial network for automatic packaging box design;
step 2, using noise data drawn from a normal distribution, perform unsupervised training of the generator sub-network of the deep convolutional generative adversarial network for automatic packaging box design, then feed the output images of the generator sub-network into the discriminator sub-network and train the discriminator sub-network again;
and step 3, alternately and iteratively train the discriminator sub-network of step 1 and the generator sub-network of step 2 for 800-1200 iterations, finally obtaining the trained deep convolutional generative adversarial network for automatic packaging box design.
CN201810843372.9A 2018-07-27 2018-07-27 Automatic packaging box design method based on generative adversarial network Active CN109102009B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810843372.9A CN109102009B (en) 2018-07-27 2018-07-27 Automatic packaging box design method based on generative adversarial network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810843372.9A CN109102009B (en) 2018-07-27 2018-07-27 Automatic packaging box design method based on generative adversarial network

Publications (2)

Publication Number Publication Date
CN109102009A CN109102009A (en) 2018-12-28
CN109102009B 2021-11-16

Family

ID=64847710

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810843372.9A Active CN109102009B (en) 2018-07-27 2018-07-27 Automatic packaging box design method based on generative adversarial network

Country Status (1)

Country Link
CN (1) CN109102009B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111026899A (en) * 2019-12-11 2020-04-17 兰州理工大学 Product generation method based on deep learning
CN111597977A (en) * 2020-05-14 2020-08-28 公安部第三研究所 Method for automatically generating iris biometric feature images based on a deep convolutional generative adversarial network

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102867093A (en) * 2012-09-18 2013-01-09 中国标准化研究院 Food moderate packaging design method
CN106981094A (en) * 2015-10-16 2017-07-25 达索系统公司 Computer-implemented method for designing a manufacturable garment
CN108205816A (en) * 2016-12-19 2018-06-26 北京市商汤科技开发有限公司 Image rendering method, device and system

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180165845A1 (en) * 2016-12-09 2018-06-14 Free Construction Sp. Z o.o. Method of Analysis of Visualised Data

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102867093A (en) * 2012-09-18 2013-01-09 中国标准化研究院 Food moderate packaging design method
CN106981094A (en) * 2015-10-16 2017-07-25 达索系统公司 Computer-implemented method for designing a manufacturable garment
CN108205816A (en) * 2016-12-19 2018-06-26 北京市商汤科技开发有限公司 Image rendering method, device and system

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
"A Survey of Shape Similarity Assessment Algorithms for Product Design and Manufacturing Applications"; Antonio Cardone et al.; The American Society of Mechanical Engineers; 2003-06-30; Vol. 3, No. 2; pp. 109-118 *
"Research on Intelligent Packaging Design" (《智能化包装设计研究》); 苏靓; China Excellent Master's Theses Full-text Database, Engineering Science and Technology II; 2014-08-15 (No. 8); pp. C028-45 *

Also Published As

Publication number Publication date
CN109102009A (en) 2018-12-28

Similar Documents

Publication Publication Date Title
Cheng et al. S3cnet: A sparse semantic scene completion network for lidar point clouds
Garcia-Garcia et al. A review on deep learning techniques applied to semantic segmentation
CN107273502B (en) Image geographic labeling method based on spatial cognitive learning
CN109711481A (en) Neural network, correlation technique, medium and equipment for the identification of paintings multi-tag
CN112465111A (en) Three-dimensional voxel image segmentation method based on knowledge distillation and countertraining
CN110084249A (en) The image significance detection method paid attention to based on pyramid feature
CN107103113A (en) Towards the Automation Design method, device and the optimization method of neural network processor
Wu et al. Constructing 3D CSG models from 3D raw point clouds
CN109102009B (en) Automatic packaging box design method based on generation countermeasure network
Zhiheng et al. PyramNet: Point cloud pyramid attention network and graph embedding module for classification and segmentation
AU2018226403B2 (en) Brush stroke generation with deep neural networks
CN104036242A (en) Object recognition method based on convolutional restricted Boltzmann machine combining Centering Trick
Aiwan et al. Image spam filtering using convolutional neural networks
CN116385902A (en) Remote sensing big data processing method, system and cloud platform
CN110111365B (en) Training method and device based on deep learning and target tracking method and device
CN113888505B (en) Natural scene text detection method based on semantic segmentation
Samani et al. Visual object recognition in indoor environments using topologically persistent features
Ouadiay et al. Simultaneous object detection and localization using convolutional neural networks
Wang et al. CLAST: Contrastive learning for arbitrary style transfer
CN117115404A (en) Method, device, computer equipment and storage medium for three-dimensional virtual scene adjustment
CN113657375B (en) Bottled object text detection method based on 3D point cloud
US20230038240A1 (en) Three-dimensional (3d) image modeling systems and methods for automatically generating photorealistic, virtual 3d packaging and product models from 2d imaging assets and dimensional data
Nishino et al. A synthesized 3DCG contents generator using IEC framework
CN112634399B (en) Closed curve generation method and device, electronic equipment and readable storage medium
Thasarathan et al. Artist-guided semiautomatic animation colorization

Legal Events

Code Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant