CN109840924B - Method for rapid generation of product images based on series-connected adversarial networks - Google Patents
- Publication number: CN109840924B (application CN201811621565.6A)
- Authority: CN (China)
- Prior art keywords: network, image, black, texture pattern, generation
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Abstract
The invention relates to a method for rapid generation of product images based on series-connected adversarial networks. Texture-structure inference and color inference are performed by two separate models; the two models are then connected in series to form the overall network structure, and generation of the product image is completed through discriminant functions. This network structure decomposes the complexity of the problem, reduces training difficulty, and improves the quality of the generated results.
Description
Technical Field
The invention relates to the technical field of digital image generation and processing, and in particular to a method for rapid generation of product images based on series-connected adversarial networks.
Background
In computer-aided design, existing commercial software generally provides photorealistic rendering so that designers can check whether the appearance of a product they design meets customer needs. However, building a geometric model of the desired scene is complicated and time-consuming for the designer. Current commercial three-dimensional modeling packages, such as Autodesk's 3ds Max, Maya and AutoCAD, are geometric modeling tools developed for professionals. They all require designers to build scene models step by step from very detailed, precisely defined geometry. At the heart of these tools is the exact definition of the three-dimensional coordinates of every component in the scene and of the geometric relationships between components. The whole process is complex and cumbersome, and the designer must expend a great deal of effort to accommodate this unnatural way of designing. In the initial stage of design, the design concept is not yet mature and must be repeatedly modified and discussed, so this precise, detail-oriented modeling style becomes increasingly cumbersome and inefficient. At the beginning of design, designers have many things to consider, with large amounts of repetition and uncertainty, so a more flexible sketch-based representation is needed.
To maximize designers' creativity and improve their efficiency, designers should not be hindered by complicated operations or forced to define the numerical coordinates of every detail of a product with high precision. What is needed is a method that can generate a shaded product image directly from a hand-drawn sketch. The traditional approach is photorealistic image rendering, which includes physical modeling of geometry and of illumination as well as image-based rendering, among other methods. When designing a new scene, these methods are time-consuming in building the geometric model and defining surface properties, and are therefore unsuitable for the early stages of design. In that stage, after drawing a sketch, a designer often wants to see the effect of attaching a texture block to it; that is, the designer wants to create a product image from a sketch carrying a texture block. Prior image generation methods based on generative adversarial networks have two main shortcomings. First, some do not support user interaction: the user cannot paste texture blocks onto the sketch as desired, and the texture structure and color of the generated image are determined entirely by the network. Second, those that do support user-supplied texture blocks are complex to train, and the texture structure of the generated image differs markedly from that of the texture block supplied by the user.
Disclosure of Invention
The invention aims to overcome these defects and provides a method for rapid generation of product images based on series-connected adversarial networks, in which texture-structure inference and color inference are performed by two separate models; the two models are then connected in series to form the overall network structure, and generation of the product image is completed through discriminant functions. This network structure decomposes the complexity of the problem, reduces training difficulty, and improves the quality of the generated results.
The invention achieves this aim through the following technical scheme. A method for rapid generation of product images based on series-connected adversarial networks comprises the following steps:
(1) Construct a black-and-white texture pattern generation network and a black-and-white texture pattern discrimination network using Python and TensorFlow; the two networks together form a black-and-white texture pattern generative adversarial network;
(2) Train the black-and-white texture pattern generative adversarial network on pre-prepared training data to obtain a black-and-white texture pattern generation model;
(3) Construct an image generation network and an image discrimination network using Python and TensorFlow; the two networks together form an image generative adversarial network;
(4) Train the image generative adversarial network to obtain an image generation model;
(5) Connect the black-and-white texture pattern generation model and the image generation model in series to obtain a product image generation system, with which product images are rapidly generated.
Preferably, the construction and objective functions of the black-and-white texture pattern generative adversarial network are as follows:
1) The black-and-white texture pattern generative adversarial network comprises a generation network G_p(y) and a discrimination network D_p(x, y). The generation network G_p(y) consists of convolutional layers and deconvolutional layers; its input is a sketch outline together with a randomly cropped, adaptive-threshold-binarized texture block, and its output is a bag image in the adaptive-threshold-binarized style;
the black-and-white texture pattern discrimination network D_p(x, y) consists of convolutional layers; the last convolutional layer is flattened into a vector, which is converted into a scalar by a sigmoid function;
2) Construct the objective function of the generation network: J_p(G) = E[Φ((1 − D_p(G_p(y), y)), l_0)];
where y denotes the conditions. For the black-and-white texture pattern generation network there are two input conditions: first, the outline sketch; second, a randomly cropped, adaptive-threshold-binarized texture pattern. Φ is the cross-entropy cost function and l_0 denotes an expected output value;
3) Construct the objective function of the discrimination network as follows:
F_p(D) = −E[Φ(D_p(x, y), l_1)] − E[Φ((1 − D_p(G_p(y), y)), l_0)];
4) The two constructed networks together form the black-and-white texture pattern generative adversarial network.
Preferably, the training data of step (2) is preprocessed as follows:
(i) Collect bag image data and uniformly convert the collected pictures to a white background at a resolution of 256 × 256;
(ii) Convert the bag image data set into a sketch data set using an edge detection algorithm;
(iii) Convert the bag image data set into black-and-white binary images by adaptive-threshold binarization;
(iv) Randomly crop the adaptive-threshold-binarized black-and-white textures as training data.
Preferably, the training method for the black-and-white texture pattern generative adversarial network comprises the following steps:
(2.1) Input the sketch outline and the randomly cropped black-and-white texture block into the generation network, which generates a picture in the adaptive-threshold-binarized style;
(2.2) Input the picture generated by the generation network, the sketch outline and the randomly cropped texture pattern into the discrimination network, which outputs a probability value representing the likelihood that the input sample comes from the training data set;
(2.3) Input the adaptive-threshold-binarized original image, the sketch outline and the randomly cropped texture pattern into the discrimination network, which outputs a probability value representing the likelihood that the input sample comes from the training data set;
(2.4) Compute the adversarial loss of the discrimination network from the outputs of steps (2.2) and (2.3);
(2.5) Compute the adversarial loss of the generation network from the output of step (2.2);
(2.6) Repeat steps (2.1) to (2.5), minimizing the objective functions of the generation network and of the discrimination network, until the stop condition is reached; the black-and-white texture pattern generation model is thereby obtained.
Preferably, the construction and objective functions of the image generative adversarial network are as follows:
(I) The image generative adversarial network comprises an image generation network G_i(y) and an image discrimination network D_i(x, y). The image generation network consists of convolutional layers and deconvolutional layers. Its input has two parts: first, a picture in the adaptive-threshold-binarized style; second, a color scheme picture. Its output is a color image with three RGB channels;
the image discrimination network consists of convolutional layers and has the same structure as the black-and-white texture pattern discrimination network. Its input has three parts: first, a training sample picture or a picture produced by the image generation network; second, a picture in the adaptive-threshold-binarized style; third, a color scheme picture. Its output is a probability representing the likelihood that the input sample comes from the original data set;
(II) Construct the objective function of the image generation network: J_i(G) = E[Φ((1 − D_i(G_i(y), y)), l_0)];
where y denotes the conditions. For the image generation network there are two input conditions: first, a bag picture in the adaptive-threshold-binarized style; second, a color scheme picture;
(III) Construct the objective function of the image discrimination network as follows:
F_i(D) = −E[Φ(D_i(x, y), l_1)] − E[Φ((1 − D_i(G_i(y), y)), l_0)];
(IV) The two constructed networks together form the image generative adversarial network.
Preferably, the training method for the image generative adversarial network is as follows:
(4.1) Input the adaptive-threshold-binarized picture and the color scheme picture into the image generation network, which outputs a color image with three RGB channels;
(4.2) Input the color image produced by the generation network, the texture pattern in the adaptive-threshold-binarized style and the color scheme picture into the discrimination network, which outputs the probability that the input sample comes from the training sample set;
(4.3) Input a picture from the color training sample set, the texture pattern in the adaptive-threshold-binarized style and the color scheme picture into the discrimination network, which outputs the probability that the input sample comes from the training sample set;
(4.4) Compute the adversarial loss of the discrimination network from the outputs of steps (4.2) and (4.3);
(4.5) Compute the adversarial loss of the generation network from the output of step (4.2);
(4.6) Repeat steps (4.1) to (4.5), minimizing the objective functions of the generation network and of the discrimination network, until a stop condition is reached; the image generation model is thereby obtained.
The invention has the following beneficial effects. Two networks are connected in series to generate the product image: the first is a black-and-white texture pattern prediction network that takes an outer contour and a black-and-white texture block as input, and the second is a rendering network that takes a sketch carrying the black-and-white texture pattern and a color scheme as input. The method allows a user to paste texture blocks onto a sketch to indicate the texture and color of the image the network should generate; the network can then produce a satisfactory image that meets the user's requirements in contour, texture pattern and color. This solves the problem that the texture structure and color of the generated image could not previously be controlled by the user.
Drawings
FIG. 1 is a schematic flow chart of an image generation method of the present invention;
FIG. 2 is a schematic diagram of a black and white texture pattern generation network according to the present invention;
FIG. 3 is a schematic diagram of a black and white texture pattern discrimination network according to the present invention;
FIG. 4 is a schematic diagram of an image generation network architecture of the present invention.
Detailed Description
The invention will be further described with reference to a specific example, but the scope of the invention is not limited thereto.
Embodiment: the product image generation process is divided into two stages. The first stage generates a complete black-and-white image from a black-and-white texture block and a sketch outline; the style of this black-and-white image is the same as that of an adaptive-threshold-binarized image. The second stage generates a color image from the black-and-white image produced in the first stage and a color scheme. At design time, the designer's input is divided into two parts: the sketch outline and the texture block. The whole generation process passes through two generation models: a black-and-white texture pattern generation model (GuessPattern Generator) and an image generation model (paintGAN Generator). The input of the black-and-white texture pattern generation model has two parts: the black-and-white sketch outline, and the black-and-white texture pattern obtained by adaptive-threshold binarization of the color texture block. The output of this network is a black-and-white image carrying the black-and-white texture pattern. The input of the image generation model likewise has two parts: the sketch with the complete black-and-white texture pattern, and a color scheme image, which mainly provides color information to the network. The whole image generation flow is shown in fig. 1.
A method for rapid generation of product images based on series-connected adversarial networks comprises the following steps:
(1) Construct a black-and-white texture pattern generation network and a black-and-white texture pattern discrimination network using Python and TensorFlow; the two networks together form a black-and-white texture pattern generative adversarial network.
The construction and objective functions of the black-and-white texture pattern generative adversarial network are as follows:
(1.1) The black-and-white texture pattern generative adversarial network comprises a generation network G_p(y) and a discrimination network D_p(x, y). The generation network G_p(y) consists of convolutional layers and deconvolutional layers; its structure is shown in fig. 2. The network's input is a sketch outline together with a randomly cropped, adaptive-threshold-binarized texture block, and its output is a bag image in the adaptive-threshold-binarized style.
The black-and-white texture pattern discrimination network D_p(x, y) consists of convolutional layers; the last convolutional layer is flattened into a vector, which is converted into a scalar by a sigmoid function. The network structure is shown in fig. 3.
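The flatten-and-sigmoid head of the discrimination network can be sketched in plain Python. This is an illustrative toy: the 2 × 2 feature map, the weights and the helper names are made up, not taken from the patent.

```python
import math

def sigmoid(x):
    """Squash a real number into (0, 1) -- the discriminator's probability."""
    return 1.0 / (1.0 + math.exp(-x))

def discriminator_head(feature_map, weights, bias):
    """Flatten the last convolutional feature map into a vector,
    reduce it to a scalar with a linear layer, then apply a sigmoid."""
    flat = [v for row in feature_map for v in row]            # flatten to a vector
    score = sum(w * v for w, v in zip(weights, flat)) + bias  # vector -> scalar
    return sigmoid(score)                                     # scalar -> probability

# Toy 2x2 feature map and made-up weights:
p = discriminator_head([[0.5, -1.0], [2.0, 0.0]], [0.1, 0.2, 0.3, 0.4], 0.0)
```

The returned value always lies strictly between 0 and 1, matching its interpretation as the probability that the input comes from the training set.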
(1.2) Construct the objective function of the generation network: J_p(G) = E[Φ((1 − D_p(G_p(y), y)), l_0)];
where y denotes the conditions. For the black-and-white texture pattern generation network there are two input conditions: first, the outline sketch; second, a randomly cropped, adaptive-threshold-binarized texture pattern. Φ is the cross-entropy cost function;
(1.3) Construct the objective function of the discrimination network as follows:
F_p(D) = −E[Φ(D_p(x, y), l_1)] − E[Φ((1 − D_p(G_p(y), y)), l_0)];
(1.4) The two constructed networks together form the black-and-white texture pattern generative adversarial network. The network adopts a U-shaped structure of convolutional and deconvolutional layers. To make the back-propagation algorithm reach an optimal solution more easily, the invention also concatenates each convolutional layer with its peer deconvolutional layer. To give the generation model the ability to infer texture structure and to complete the outer contour, a feature loss is introduced into the black-and-white texture pattern generation model.
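Read literally, the objective functions J_p(G) and F_p(D) are batch averages of a cross-entropy cost Φ applied to discriminator scores. The following plain-Python sketch is one possible reading, under the assumption that l_0 = 0 and l_1 = 1 are the expected output values for generated and real samples; the function names are illustrative.

```python
import math

def phi(a, label):
    """Binary cross-entropy cost Phi(a, l); `a` is a probability in (0, 1)."""
    eps = 1e-12  # guard against log(0)
    return -(label * math.log(a + eps) + (1 - label) * math.log(1 - a + eps))

def generator_objective(d_fake, l0=0.0):
    """J_p(G) = E[Phi(1 - D_p(G_p(y), y), l_0)], averaged over a batch of
    discriminator scores `d_fake` on generated samples."""
    return sum(phi(1.0 - d, l0) for d in d_fake) / len(d_fake)

def discriminator_objective(d_real, d_fake, l0=0.0, l1=1.0):
    """F_p(D) = -E[Phi(D_p(x, y), l_1)] - E[Phi(1 - D_p(G_p(y), y), l_0)]."""
    term_real = sum(phi(d, l1) for d in d_real) / len(d_real)
    term_fake = sum(phi(1.0 - d, l0) for d in d_fake) / len(d_fake)
    return -term_real - term_fake
```

Note that as the discriminator scores on generated samples approach 1 (the generator fools the discriminator), J_p(G) approaches 0, consistent with the generator minimizing its objective.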
(2) Train the black-and-white texture pattern generative adversarial network on pre-prepared training data to obtain a black-and-white texture pattern generation model.
The training data comprises three parts: first, the sketch outlines; second, the randomly cropped texture blocks; third, the adaptive-threshold-binarized ground truth. The sketch outlines are generated with an edge detection algorithm. So that the model learns to infer texture structure, only part of the texture is shown to the generation network during training, i.e. a small patch is randomly cropped from the full texture picture as network input. The adaptive-threshold-binarized ground truth is generated with an adaptive-threshold binarization algorithm. The training data is preprocessed as follows:
(i) Collect bag image data and uniformly convert the collected pictures to a white background at a resolution of 256 × 256;
(ii) Convert the bag image data set into a sketch data set using an edge detection algorithm;
(iii) Convert the bag image data set into black-and-white binary images by adaptive-threshold binarization;
(iv) Randomly crop the adaptive-threshold-binarized black-and-white textures as training data.
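Steps (iii) and (iv) can be illustrated with a minimal pure-Python sketch of mean-based adaptive-threshold binarization and random patch cropping. The window size, the comparison rule and the function names are assumptions; the patent does not specify which adaptive-threshold algorithm it uses.

```python
import random

def adaptive_threshold_binarize(img, window=3, c=0.0):
    """Binarize a grayscale grid by comparing each pixel with the mean of
    its local window, so the threshold adapts to local brightness.
    Returns a grid of 0/255 values (step iii, as a sketch)."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    r = window // 2
    for y in range(h):
        for x in range(w):
            vals = [img[ny][nx]
                    for ny in range(max(0, y - r), min(h, y + r + 1))
                    for nx in range(max(0, x - r), min(w, x + r + 1))]
            local_mean = sum(vals) / len(vals)
            out[y][x] = 255 if img[y][x] > local_mean - c else 0
    return out

def random_texture_crop(img, size, rng=random):
    """Step (iv): randomly cut a size x size patch from a binarized texture."""
    y = rng.randrange(len(img) - size + 1)
    x = rng.randrange(len(img[0]) - size + 1)
    return [row[x:x + size] for row in img[y:y + size]]

# Tiny 4x4 grayscale example: a dark region with one bright column.
bw = adaptive_threshold_binarize([[10, 10, 10, 200],
                                  [10, 10, 10, 200],
                                  [10, 10, 10, 200],
                                  [10, 10, 10, 200]])
patch = random_texture_crop(bw, 2)
```

On real 256 × 256 data this per-pixel loop would be slow; it is meant only to make the local-mean thresholding rule concrete.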
The training method for the black-and-white texture pattern generative adversarial network comprises the following steps:
(2.1) Input the sketch outline and the randomly cropped black-and-white texture block into the generation network, which generates a picture in the adaptive-threshold-binarized style;
(2.2) Input the picture generated by the generation network, the sketch outline and the randomly cropped texture pattern into the discrimination network, which outputs a probability value representing the likelihood that the input sample comes from the training data set;
(2.3) Input the adaptive-threshold-binarized original image, the sketch outline and the randomly cropped texture pattern into the discrimination network, which outputs a probability value representing the likelihood that the input sample comes from the training data set;
(2.4) Compute the adversarial loss of the discrimination network from the outputs of steps (2.2) and (2.3);
(2.5) Compute the adversarial loss of the generation network from the output of step (2.2);
The loss of the generation network consists of three parts: the adversarial loss, the feature loss and the L1 loss. The total generation network loss is L_G = W_ADV · L_ADV + W_F · L_F + W_L1 · L_L1, where L_ADV, L_F and L_L1 are the adversarial loss, the feature loss and the L1 loss respectively, and W_ADV, W_F and W_L1 are their weights. The adversarial loss makes the generated image take on the adaptive-threshold-binarized style; the feature loss gives the generation network semantic and texture inference ability; the L1 loss makes the adversarial training more stable.
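The weighted combination of the three generator losses can be sketched as follows. The L1 helper and the default weight values are illustrative assumptions: the patent names the weights W_ADV, W_F and W_L1 but does not publish their values.

```python
def l1_loss(generated, target):
    """Mean absolute pixel difference between a generated grid and a target grid."""
    flat_g = [v for row in generated for v in row]
    flat_t = [v for row in target for v in row]
    return sum(abs(g - t) for g, t in zip(flat_g, flat_t)) / len(flat_g)

def generator_total_loss(l_adv, l_feat, l_l1,
                         w_adv=1.0, w_feat=10.0, w_l1=100.0):
    """L_G = W_ADV * L_ADV + W_F * L_F + W_L1 * L_L1.
    The default weights here are made-up placeholders, not patent values."""
    return w_adv * l_adv + w_feat * l_feat + w_l1 * l_l1
```

A relatively large L1 weight (as in the placeholder defaults) is a common choice in conditional image-to-image GANs to keep training stable, which matches the stabilizing role the text assigns to the L1 term.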
(2.6) Repeat steps (2.1) to (2.5), minimizing the objective functions of the generation network and of the discrimination network, until the stop condition is reached; the black-and-white texture pattern generation model is thereby obtained.
(3) Construct an image generation network and an image discrimination network using Python and TensorFlow; the two networks together form an image generative adversarial network.
The construction and objective functions of the image generative adversarial network are as follows:
(I) The image generative adversarial network comprises an image generation network G_i(y) and an image discrimination network D_i(x, y). The image generation network consists of convolutional layers and deconvolutional layers. Its input has two parts: first, an adaptive-threshold-binarized picture, obtained by applying adaptive-threshold binarization to the original data set; second, a color scheme picture, obtained by blurring the original data set. Its output is a color image with three RGB channels. The input of the image discrimination network has three parts: first, the image output by the generation network or a training sample image; second, the adaptive-threshold-binarized picture; third, the color scheme picture. The discrimination network outputs a probability value representing the likelihood that the input picture comes from the training set; the generation network uses this probability value to compute its adversarial error. The structure of the image generation network is shown in fig. 4. Layers whose names begin with the letter e are convolutional layers; each has a 4 × 4 convolution kernel with stride 2, and the number of output feature maps doubles layer by layer. Layers whose names begin with the letter d are deconvolutional layers; each has a 4 × 4 deconvolution kernel, and the size of the output feature map doubles layer by layer. The image generation network finally outputs a picture with three RGB channels.
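The layer-by-layer size progression described for fig. 4 can be traced with a small helper. The initial channel count of 64 and the 512-channel cap are assumptions commonly used in U-shaped generators of this kind, not values stated in the patent.

```python
def encoder_decoder_shapes(size=256, channels=64, depth=8):
    """Trace (spatial size, feature-map count) through the U-shaped generator:
    each 4x4 stride-2 convolution halves the spatial size and (up to an
    assumed 512 cap) doubles the channel count; each deconvolution doubles
    the spatial size back on the way up."""
    enc = []
    s, ch = size, channels
    for _ in range(depth):
        s //= 2                   # 4x4 conv, stride 2 -> half the spatial size
        enc.append((s, ch))
        ch = min(ch * 2, 512)     # feature maps double layer by layer (capped)
    # Each deconvolution layer doubles the spatial size again:
    dec = [(shape * 2, c) for shape, c in reversed(enc)]
    return enc, dec

enc, dec = encoder_decoder_shapes()
```

Starting from 256 × 256, eight stride-2 convolutions reach a 1 × 1 bottleneck, and the mirrored deconvolutions return to 256 × 256, where the final layer would emit the three RGB channels.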
The image discrimination network consists of convolutional layers and has the same structure as the black-and-white texture pattern discrimination network. Its input has three parts: first, a training sample picture or a picture produced by the image generation network; second, a picture in the adaptive-threshold-binarized style; third, a color scheme picture. Its output is a probability representing the likelihood that the input sample comes from the original data set.
(II) Construct the objective function of the image generation network: J_i(G) = E[Φ((1 − D_i(G_i(y), y)), l_0)];
where y denotes the conditions. For the image generation network there are two input conditions: first, a bag picture in the adaptive-threshold-binarized style; second, a color scheme picture;
(III) Construct the objective function of the image discrimination network as follows:
F_i(D) = −E[Φ(D_i(x, y), l_1)] − E[Φ((1 − D_i(G_i(y), y)), l_0)];
(IV) The two constructed networks together form the image generative adversarial network.
(4) Train the image generative adversarial network to obtain an image generation model.
The training method for the image generative adversarial network is as follows:
(4.1) Input the adaptive-threshold-binarized picture and the color scheme picture into the image generation network, which outputs a color image with three RGB channels;
(4.2) Input the color image produced by the generation network, the texture pattern in the adaptive-threshold-binarized style and the color scheme picture into the discrimination network, which outputs the probability that the input sample comes from the training sample set;
(4.3) Input a picture from the color training sample set, the texture pattern in the adaptive-threshold-binarized style and the color scheme picture into the discrimination network, which outputs the probability that the input sample comes from the training sample set;
(4.4) Compute the adversarial loss of the discrimination network from the outputs of steps (4.2) and (4.3);
(4.5) Compute the adversarial loss of the generation network from the output of step (4.2);
(4.6) Repeat steps (4.1) to (4.5), minimizing the objective functions of the generation network and of the discrimination network, until a stop condition is reached; the image generation model is thereby obtained.
(5) Connect the black-and-white texture pattern generation model and the image generation model in series to obtain a product image generation system, with which product images are rapidly generated.
The image generation system is formed by connecting the two generation models in series, and the image generation process is as follows. When designing a product, the designer first draws a sketch and then pastes a texture block onto it; that is, the designer provides two inputs: first, the outline sketch; second, the texture block. The generation system passes the outline sketch unchanged to the black-and-white texture pattern generation network. The texture block is adaptive-threshold-binarized by an algorithm, and the binarized texture block serves as the second input of the black-and-white texture pattern generation model, which outputs an adaptive-threshold-binarized bag picture. The generation system then takes the color texture block provided by the designer as the color input of the image generation model, which finally outputs the bag product image. The texture structure of the generated image is consistent with the structure of the texture block given by the designer, and its color is consistent with the color of that texture block.
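The inference flow just described can be sketched as a simple function chain. The callables are stand-ins for the binarization algorithm and the two trained models; all names here are illustrative, not from the patent.

```python
def generate_product_image(sketch_outline, color_texture_block,
                           binarize, pattern_model, image_model):
    """Series pipeline: binarize the designer's texture block, run stage 1
    (black-and-white texture pattern model), then feed its output plus the
    color texture block into stage 2 (image generation model)."""
    bw_block = binarize(color_texture_block)            # adaptive-threshold binarization
    bw_image = pattern_model(sketch_outline, bw_block)  # stage 1: black-and-white pattern
    return image_model(bw_image, color_texture_block)   # stage 2: colored product image

# Toy stand-ins that just trace the data flow through the two stages:
result = generate_product_image(
    "sketch", "texture",
    binarize=lambda t: f"bw({t})",
    pattern_model=lambda s, b: f"pattern({s},{b})",
    image_model=lambda i, c: f"image({i},{c})",
)
```

Keeping the two stages behind plain callables mirrors the patent's series connection: either model can be retrained or swapped without changing the pipeline.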
While the invention has been described in connection with specific embodiments thereof, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the invention as defined by the appended claims.
Claims (3)
1. A method for rapid generation of product images based on series-connected adversarial networks, characterized by comprising the following steps:
(1) Construct a black-and-white texture pattern generation network and a black-and-white texture pattern discrimination network using Python and TensorFlow; the two networks together form a black-and-white texture pattern generative adversarial network, whose construction and objective functions are as follows:
1) The black-and-white texture pattern generative adversarial network comprises a generation network G_p(y) and a discrimination network D_p(x, y). The generation network G_p(y) consists of convolutional layers and deconvolutional layers; its input is a sketch outline together with a randomly cropped, adaptive-threshold-binarized texture block, and its output is a bag image in the adaptive-threshold-binarized style;
the black-and-white texture pattern discrimination network D_p(x, y) consists of convolutional layers; the last convolutional layer is flattened into a vector, which is converted into a scalar by a sigmoid function;
2) Construct the objective function of the generation network: J_p(G) = E[Φ((1 − D_p(G_p(y), y)), l_0)];
where y denotes the conditions; for the black-and-white texture pattern generation network there are two input conditions: first, the outline sketch; second, a randomly cropped, adaptive-threshold-binarized texture pattern; Φ is the cross-entropy cost function and l_0 denotes the expected output value of the discrimination network;
3) Construct the objective function of the discrimination network as follows:
F_p(D) = −E[Φ(D_p(x, y), l_1)] − E[Φ((1 − D_p(G_p(y), y)), l_0)];
4) The two constructed networks together form the black-and-white texture pattern generative adversarial network;
(2) Train the black-and-white texture pattern generative adversarial network on pre-prepared training data to obtain a black-and-white texture pattern generation model, wherein the training data is preprocessed as follows:
(i) Collect bag image data and uniformly convert the collected pictures to a white background at a resolution of 256 × 256;
(ii) Convert the bag image data set into a sketch data set using an edge detection algorithm;
(iii) Convert the bag image data set into black-and-white binary images by adaptive-threshold binarization;
(iv) Randomly crop the adaptive-threshold-binarized black-and-white textures as training data;
(3) Constructing an image generation network and an image discrimination network using Python and TensorFlow, the two networks together forming an image generation countermeasure network; the construction and objective functions of the image generation countermeasure network are as follows:
(I) The image generation countermeasure network comprises the image generation network G_i(y) and the image discrimination network D_i(x, y); the image generation network consists of convolution layers and deconvolution layers; its input has two parts: first, a picture in the adaptive-threshold binarization style; second, a color scheme picture; its output is a color image comprising the three RGB channels;
The image discrimination network consists of convolution layers and has the same network structure as the black-and-white texture pattern discrimination network; its input comprises three parts: first, a training sample picture or a picture produced by the image generation network; second, a picture in the adaptive-threshold binarization style; third, a color scheme picture; its output is a probability representing the likelihood that the input sample comes from the original data set;
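The claim specifies only that the generator stacks convolution layers (encoder) and deconvolution layers (decoder); the concrete layer shapes are not given. The walk-through below therefore assumes a typical stride-2, kernel-4, pad-1 encoder-decoder over the 256 × 256 inputs; all sizes are illustrative assumptions, not the patent's stated architecture:

```python
def conv_out(size, kernel=4, stride=2, pad=1):
    """Spatial size after one stride-2 convolution (encoder stage)."""
    return (size + 2 * pad - kernel) // stride + 1

def deconv_out(size, kernel=4, stride=2, pad=1):
    """Spatial size after the matching transposed convolution (decoder stage)."""
    return (size - 1) * stride - 2 * pad + kernel

size, trace = 256, [256]
for _ in range(8):          # encoder: halve 256 down to a 1×1 bottleneck
    size = conv_out(size)
    trace.append(size)
for _ in range(8):          # decoder: mirror back up to 256
    size = deconv_out(size)
    trace.append(size)

print(trace[:9])  # [256, 128, 64, 32, 16, 8, 4, 2, 1]
print(trace[-1])  # 256: the output matches the RGB image resolution
```

With these hyperparameters each deconvolution exactly inverts the spatial halving of the corresponding convolution, which is why the decoder recovers the full 256 × 256 resolution of the conditional inputs.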
(II) Constructing the objective function of the image generation network: J_i(G) = E[Φ((1 - D_i(G_i(y), y)), l_0)]; wherein y represents a condition, and the image generation network takes two input conditions: first, a package picture in the adaptive-threshold binarization style; second, a color scheme picture;
(III) Constructing the objective function of the image discrimination network as follows:
F_i(D) = -E[Φ(D_i(x, y), l_1)] - E[Φ((1 - D_i(G_i(y), y)), l_0)];
(IV) The two constructed networks together form the image generation countermeasure network;
(4) Training the image generation countermeasure network to obtain an image generation model;
(5) Connecting the black-and-white texture pattern generation model and the image generation model in series to obtain a product image generation system, by which product images are rapidly generated.
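The series connection of step (5) is a simple pipeline: the texture model maps (contour sketch, texture block) to a binarized texture image, and the image model maps (binarized image, color scheme) to an RGB product image. A minimal structural sketch with stand-in models follows; the stand-ins only tile and colorize, whereas the real stages are the trained convolution/deconvolution generators:

```python
import numpy as np

def texture_generator(sketch, texture_patch):
    """Stand-in for the trained texture model G_p:
    (contour sketch, texture block) -> binarized 256×256 texture image."""
    tiled = np.tile(texture_patch,
                    (sketch.shape[0] // texture_patch.shape[0],
                     sketch.shape[1] // texture_patch.shape[1]))
    return np.where(sketch > 0, tiled, 0).astype(np.uint8)

def image_generator(bw_texture, color_scheme):
    """Stand-in for the trained image model G_i:
    (binarized texture image, RGB color scheme) -> RGB product image."""
    mask = (bw_texture > 0)[..., None]              # H×W×1 foreground mask
    return (mask * color_scheme).astype(np.uint8)   # colorize the foreground

def product_image_pipeline(sketch, texture_patch, color_scheme):
    """Series connection: the texture model's output feeds the image model."""
    bw = texture_generator(sketch, texture_patch)
    return image_generator(bw, color_scheme)

sketch = np.ones((256, 256), dtype=np.uint8)                          # toy contour mask
patch = np.tile(np.array([[0, 255], [255, 0]], np.uint8), (32, 32))   # 64×64 checker
colors = np.full((256, 256, 3), (200, 30, 30), np.uint8)              # toy color scheme
rgb = product_image_pipeline(sketch, patch, colors)
print(rgb.shape)  # (256, 256, 3)
```

Decomposing the task this way is the point of the claim: the first stage only has to reason about texture structure in black and white, and the second stage only about color, which is what reduces the training difficulty relative to a single end-to-end network.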
2. The method for rapidly generating a product image based on a series countermeasure network as claimed in claim 1, wherein the training method for the black-and-white texture pattern generation countermeasure network comprises the following steps:
(2.1) Inputting the sketch contour and the randomly cropped black-and-white texture block into the generation network, which generates a picture in the adaptive-threshold binarization style;
(2.2) Inputting the picture generated by the generation network, the sketch contour, and the randomly cropped texture pattern into the discrimination network, which outputs a probability value representing the likelihood that the input sample comes from the training data set;
(2.3) Inputting the adaptive-threshold binarized original image, the sketch contour, and the randomly cropped texture pattern into the discrimination network, which outputs a probability value representing the likelihood that the input sample comes from the training data set;
(2.4) Calculating the countermeasure loss of the discrimination network using the outputs of steps (2.2) and (2.3);
(2.5) Calculating the countermeasure loss of the generation network using the output of step (2.2);
(2.6) Repeating steps (2.1) to (2.5), minimizing the objective functions of the generation network and the discrimination network until a stop condition is reached, thereby obtaining the black-and-white texture pattern generation model.
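The alternating scheme of steps (2.1)–(2.6) can be illustrated end-to-end on a one-dimensional toy problem: a scalar generator trained against a logistic discriminator with manually derived gradients. Everything here (the 1-D data distribution, the parameter forms, the learning rate, the fixed iteration count) is an illustrative assumption; the patent's networks are convolutional and trained with TensorFlow:

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))

# Toy stand-ins: generator G(z) = a*z + c, discriminator D(x) = σ(w*x + b).
# Real "data" is drawn from N(3, 1); the generator starts around 0.
a, c = 1.0, 0.0          # generator parameters
w, b = 0.0, 0.0          # discriminator parameters
lr, batch = 0.03, 64

for step in range(1500):
    z = rng.standard_normal(batch)
    x_real = 3.0 + rng.standard_normal(batch)
    x_fake = a * z + c                    # (2.1) generate samples

    # (2.2)/(2.3) discriminator outputs on generated and real samples
    s_fake = sigmoid(w * x_fake + b)
    s_real = sigmoid(w * x_real + b)

    # (2.4) discriminator loss -E[log D(x)] - E[log(1-D(G(z)))]; manual grads
    gw = np.mean(-(1 - s_real) * x_real + s_fake * x_fake)
    gb = np.mean(-(1 - s_real) + s_fake)
    w -= lr * gw
    b -= lr * gb

    # (2.5) generator loss -E[log D(G(z))] (non-saturating); manual grads
    s_fake = sigmoid(w * (a * z + c) + b)
    ga = np.mean(-(1 - s_fake) * w * z)
    gc = np.mean(-(1 - s_fake) * w)
    a -= lr * ga
    c -= lr * gc

print(round(c, 2))  # the generator's offset drifts toward the data mean of 3
```

Each iteration performs one discriminator update on real and generated samples (steps 2.2–2.4) followed by one generator update (step 2.5), repeated until the stop condition, here a fixed iteration count, as in step (2.6).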
3. The method for rapidly generating a product image based on a series countermeasure network as claimed in claim 1, wherein the training method for the image generation countermeasure network comprises the following steps:
(4.1) Inputting the adaptive-threshold binarized picture and the color scheme picture into the image generation network, which outputs a color image comprising the three RGB channels;
(4.2) Inputting the color image generated by the generation network, the texture pattern in the adaptive-threshold binarization style, and the color scheme picture into the discrimination network, which outputs the probability that the input sample comes from the training sample set;
(4.3) Inputting pictures from the color training sample set, together with the texture patterns in the adaptive-threshold binarization style and the color scheme pictures, into the discrimination network, which outputs the probability that the input samples come from the training sample set;
(4.4) Calculating the countermeasure loss of the discrimination network using the outputs of steps (4.2) and (4.3);
(4.5) Calculating the countermeasure loss of the generation network using the output of step (4.2);
(4.6) Repeating steps (4.1) to (4.5), minimizing the objective functions of the generation network and the discrimination network until a stop condition is reached, thereby obtaining the image generation model.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811621565.6A CN109840924B (en) | 2018-12-28 | 2018-12-28 | Product image rapid generation method based on series countermeasure network |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109840924A CN109840924A (en) | 2019-06-04 |
CN109840924B true CN109840924B (en) | 2023-03-28 |
Family
ID=66883440
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811621565.6A Active CN109840924B (en) | 2018-12-28 | 2018-12-28 | Product image rapid generation method based on series countermeasure network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109840924B (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112132167B (en) * | 2019-06-24 | 2024-04-16 | 商汤集团有限公司 | Image generation and neural network training method, device, equipment and medium |
CN111160529B (en) * | 2019-12-28 | 2023-06-20 | 天津大学 | Training sample generation method in target pose measurement based on convolutional neural network |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108564611A (en) * | 2018-03-09 | 2018-09-21 | 天津大学 | A kind of monocular image depth estimation method generating confrontation network based on condition |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10192321B2 (en) * | 2017-01-18 | 2019-01-29 | Adobe Inc. | Multi-style texture synthesis |
Also Published As
Publication number | Publication date |
---|---|
CN109840924A (en) | 2019-06-04 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||