CN114359269A - Virtual food box defect generation method and system based on neural network - Google Patents



Publication number
CN114359269A
Authority
CN
China
Prior art keywords: defect, image, food box, sample, training
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210221276.7A
Other languages
Chinese (zh)
Inventor
李晋芳
何明桐
苏健聪
郑泽胜
李博
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Yidao Intelligent Information Technology Co ltd
Guangdong University of Technology
Original Assignee
Guangzhou Yidao Intelligent Information Technology Co ltd
Guangdong University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Yidao Intelligent Information Technology Co ltd, Guangdong University of Technology filed Critical Guangzhou Yidao Intelligent Information Technology Co ltd
Priority to CN202210221276.7A
Publication of CN114359269A



Abstract

The application relates to the technical field of defect image sample generation, and in particular to a neural-network-based method and system for generating defects on a virtual food box. The method comprises the following steps: acquiring image data through a pre-trained generative adversarial network (GAN); screening the image data to obtain a first sample and a second sample; acquiring an original map, obtained by feeding the simulated defect image output by the GAN into an image segmentation network model and extracting features; and converting the original map into a normal map and outputting the normal map onto a preselected area of the three-dimensional virtual food box, thereby generating defects on the virtual food box. The GAN generates the image data, from which the first and second samples are screened to train the image segmentation network and the GAN respectively; the image segmentation network model can then output a large number of simulated defect images, producing a large food box defect data set and alleviating the shortage of defect model samples.

Description

Virtual food box defect generation method and system based on neural network
Technical Field
The application relates to the technical field of defect image sample generation, and in particular to a neural-network-based virtual food box defect generation method and system.
Background
Defect images are commonly used in industrial simulation. Industrial simulation creates a virtual counterpart of a physical industrial process: each module of the physical process is converted into data and integrated into a virtual system, in which every task and workflow of the industrial operation is simulated, realized, and interacted with.
In recent years, with the rapid development of the industrial Internet of Things, the fidelity required of models in industrial simulation has steadily increased. However, high-precision defect model samples obtained by conventional methods still suffer from many problems. For example:
Manual modeling involves an enormous workload and yields only a narrow range of defect types, failing to capture the diversity and randomness of defects in model samples. Manual modeling is also inefficient: drawing irregular defects in particular is time-consuming and labor-intensive, the results are often unsatisfactory, and the large demand for defect model samples cannot be met.
Disclosure of Invention
In order to solve, or at least partially solve, the above technical problem, the present application provides a neural-network-based virtual food box defect generation method, comprising the following steps:
acquiring image data through a pre-trained generative adversarial network (GAN);
screening the image data to obtain a first sample and a second sample;
wherein the first sample is used to train an image segmentation network, yielding an image segmentation network model;
and the second sample is used to iteratively train the GAN;
acquiring an original map;
wherein the original map is obtained by feeding the simulated defect image output by the GAN into the image segmentation network model and extracting features;
and converting the original map into a normal map and outputting the normal map onto a preselected area of the three-dimensional virtual food box, thereby generating defects on the virtual food box.
The GAN generates the image data, from which the first and second samples are screened to train the image segmentation network and the GAN respectively; the image segmentation network model can output a large number of simulated defect images, producing a large food box defect data set and alleviating the shortage of defect model samples;
furthermore, converting the original map into a normal map and outputting it onto a preselected area of the three-dimensional virtual food box avoids the loss of model realism, and hence of usability, that would otherwise result from a large difference in image backgrounds.
Optionally, the pre-training process of the GAN includes:
encoding the real defect images of the food box together with their defect type labels, inputting them into the discriminator for training, and updating the discriminator through a loss function;
inputting random noise z and a random defect type label into the generator, which generates a random simulated defect image and assigns it the same simulated defect type label as the input; then inputting the simulated defect image and its label into the discriminator for training and updating the discriminator through the loss function.
A GAN trained on real defect samples can produce a large number of usable simulated defect images of different types from a specified defect type and input random noise. This improves the efficiency of acquiring usable defect samples; the generated defect textures closely resemble real defects yet remain random, matching how defect samples arise in reality.
Optionally, the pre-training process of the GAN further includes:
training the generator: after random noise z and a random defect type label are input into the generator, the generator produces a random image and assigns it the same defect type label as the input; the generated image and its label are then used for generator training, and finally the generator's weights are updated by back-propagating the loss.
Optionally, the first sample comprises the screened simulated defect image data set that meets the preset requirements, together with real defect images;
and the second sample comprises simulated defect images that do not meet the preset requirements.
Optionally, screening the image data comprises the following steps:
acquiring an RGB color feature map from the real defect images and the simulated defect images;
converting the RGB color feature map into a gray-scale map;
obtaining a hash sequence from the differences between adjacent pixels of the gray-scale map;
obtaining, from the hash sequence, a similarity value between the simulated defect image and the real defect image of the food box;
when the similarity value is smaller than a preset similarity threshold, the preset requirement is met; otherwise, it is not.
Optionally, obtaining the hash sequence specifically comprises:
first traversing the gray-scale map pixels in position order and, within each row, comparing each pixel with the one before it;
outputting the digit 1 when the later pixel in the row is larger than the earlier one, and the digit 0 otherwise, thereby forming the hash sequence.
The hash sequence based on the convolutional feature map quickly and accurately assesses the quality of the generated images, ensuring effective training of the segmentation network, improving the segmentation precision of simulated defect regions, and avoiding the loss of model realism, and hence of usability, caused by large image background differences.
Optionally, acquiring the RGB color feature map comprises:
feeding the real defect images and the simulated defect images into an existing convolutional neural network, which processes them to generate the RGB color feature map.
Optionally, obtaining the original map comprises:
first, the GAN model outputs a simulated defect image; the image segmentation network model receives it and generates a color mask for the simulated defect image; the model then binarizes the color mask and extracts an instance picture of the defect mask region by overlaying the mask on the original image; finally, the segmented defect instance picture is saved as the original map.
The normal map conversion technique makes the defect model of the three-dimensional virtual food box more realistic and more precise, meeting rendering requirements.
The present application further provides a virtual food box defect generation system based on a neural network, comprising:
the first acquisition module, configured to acquire image data through a pre-trained generative adversarial network (GAN);
the screening module, configured to screen the image data to obtain a first sample and a second sample;
wherein the first sample is used to train an image segmentation network, yielding an image segmentation network model;
and the second sample is used to iteratively train the GAN;
the second acquisition module, configured to acquire the original map;
wherein the original map is obtained by feeding the simulated defect image output by the GAN into the image segmentation network model and extracting features;
and the mapping module, configured to convert the original map into a normal map and output it onto a preselected area of the three-dimensional virtual food box, thereby generating defects on the virtual food box.
The present application further provides an electronic device comprising a memory and a processor; the memory is configured to store one or more computer instructions which, when executed by the processor, implement any of the above neural-network-based virtual food box defect generation methods.
Advantageous effects:
1. In the neural-network-based virtual food box defect generation method of the present application, the GAN generates image data, from which the first and second samples are screened to train the image segmentation network and the GAN respectively; the image segmentation network outputs a large number of simulated defect images, producing a large food box defect data set and alleviating the shortage of defect model samples;
in addition, converting the original map into a normal map and outputting it onto a preselected area of the three-dimensional virtual food box avoids the loss of model realism, and hence of usability, caused by large image background differences.
2. The method can efficiently generate diverse, irregular simulated defect images, while the normal map conversion technique makes the defect model of the three-dimensional virtual food box more realistic and more precise, meeting rendering requirements. At the same time, the hash sequence based on the convolutional feature map quickly and accurately assesses the quality of generated images, ensuring effective training of the segmentation network and improving the segmentation precision of simulated defect regions, so that large image background differences do not degrade the model's realism or usability.
3. A GAN trained on real defect samples can produce a large number of usable simulated defect images of different types from a specified defect type and input random noise, improving the efficiency of acquiring usable defect samples; the generated defect textures closely resemble real defects yet remain random, matching how defect samples arise in reality.
4. Unlike the direct image scaling used in the traditional difference hash algorithm, the method reduces image dimensionality and avoids large-scale computation while retaining positional feature information and fully accounting for the diversity and variation of fine features in the generated images.
5. The screened simulated defect images usable for training are mixed with real defect samples to train the image segmentation network, giving the network sufficient training samples and giving the defect image segmentation network a degree of generalization and robustness in segmenting the simulated defect images produced by the generator. Since GANs generally do not require many real training samples, the method yields an efficient image segmentation network even without a large training set.
6. After the image segmentation network extracts the original defect map from the simulated defect sample generated by the GAN, the map is converted into a normal map and applied to the virtual food box. The simulated defects are more realistic and meet the rendering requirements of the model in a virtual environment. Compared with traditionally hand-drawing a defective three-dimensional food box model, this avoids a huge workload and improves efficiency.
Drawings
For a clearer description of the embodiments of the present application, the relevant drawings are briefly introduced below. It should be understood that the following drawings illustrate only some embodiments of the present application, and that a person skilled in the art may derive from them many other technical features and relationships not mentioned herein.
Fig. 1 is a schematic flowchart of a virtual food box defect generation method based on a neural network according to an embodiment of the present disclosure.
Fig. 2 is a schematic structural diagram of a virtual food box defect generating system based on a neural network according to an embodiment of the present application.
Fig. 3 is a schematic specific flow chart of a virtual food box defect generation method based on a neural network according to an embodiment of the present disclosure.
Fig. 4 is a block diagram of an electronic device according to an embodiment of the present disclosure.
Fig. 5 is a schematic block diagram of a computer system suitable for implementing a method according to a first embodiment of the disclosure.
Detailed Description
In order to make the technical solutions better understood by those skilled in the art, the technical solutions in the exemplary embodiments of the present application will be clearly and completely described below with reference to the drawings in the exemplary embodiments of the present application.
Some of the flows described in the specification, claims, and drawings of this application contain operations that appear in a particular order, but it should be clearly understood that these operations may be executed out of the order in which they appear herein or in parallel. Operation numbers such as 101 and 102 merely distinguish different operations; the numbers themselves do not imply any execution order. Additionally, the flows may include more or fewer operations, which may be executed sequentially or in parallel. The labels "first", "second", and so on herein distinguish different messages, devices, modules, etc.; they do not imply a sequence, nor do they require "first" and "second" objects to be of different types.
The technical solutions in the exemplary embodiments of the present application will be clearly and completely described below with reference to the drawings in the exemplary embodiments of the present application, and it is obvious that the described exemplary embodiments are only a part of the embodiments of the present application, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
In the technical solution provided by the embodiments of the application, image data is obtained through a pre-trained generative adversarial network (GAN); the image data is screened to obtain a first sample and a second sample, wherein the first sample trains an image segmentation network to obtain an image segmentation network model and the second sample iteratively trains the GAN; the simulated defect image output by the GAN is obtained, together with the defect region extracted by the image segmentation network model from that image; and the defect region is converted into a normal map, which is output onto a preselected area of the three-dimensional virtual food box to generate defects on the virtual food box. The image segmentation network model can output a large number of simulated defect images, so a large food box defect data set can be generated, alleviating the shortage of defect model samples.
The following description will be made in more detail with reference to specific embodiments.
Embodiment one
Referring to fig. 1 and 3, fig. 1 is a schematic flow chart of a virtual food box defect generating method based on a neural network according to an embodiment of the present disclosure, and fig. 3 is a schematic specific flow chart of the virtual food box defect generating method based on the neural network according to the embodiment of the present disclosure. The generation method is applied to a server and comprises the following steps:
s1, obtaining image data through a generated confrontation network model obtained through pre-training;
in step S1, a countermeasure network model is generated by modifying an existing countermeasure network model. The improvement mainly comprises: the generator and the discriminator in the confrontation network can be trained by feeding the surface defect image dataset of the real food box, so as to obtain a model of the confrontation network.
The defect image data set is acquired as follows: the real food box defect images are fixed at a size of 416 × 416; the number of images can be chosen according to actual conditions and is preferably 180 in this embodiment. The images cover 6 known defect types: box deformation, box cracks, box scratches, box stains, box holes, and box breakage.
In addition, the applicant has found that in recent years, with the development of artificial intelligence, neural networks have performed extremely well in computer vision. People have begun to generate models or defect images directly with GANs, but the conventional approach has a problem: there is no clear criterion for judging the defect images produced by a trained GAN, and the generated images are of uneven quality. The applicant therefore first trains the generative network model; the specific training process may include the following steps:
s11, training a discriminator: and encoding the real defect image and the defect type label of the food box, inputting the encoded real defect image and the defect type label into a discriminator for training, and updating the discriminator through a loss function.
In order to make the distribution of the generated defect samples as close as possible to the distribution of the real defect samples, the loss function is shown in formula (1):
$$\min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{data}(x)}\left[\log D(x \mid y)\right] + \mathbb{E}_{z \sim p_z(z)}\left[\log\left(1 - D(G(z \mid y))\right)\right] \tag{1}$$
where x represents the real data, y is the data label, z represents random noise, G represents the output of the generator, V represents the objective function, D represents the output of the discriminator, $\mathbb{E}$ is the expectation over the corresponding distribution, $p_z(z)$ is the noise distribution, $p_{data}(x)$ is the real sample distribution, and $\min_G \max_D V(D, G)$ represents the loss function.
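As an illustration (not part of the patent), the value function in formula (1) can be estimated numerically from discriminator outputs; the sketch below assumes sigmoid-style discriminator scores in (0, 1).

```python
import numpy as np

def gan_value(d_real, d_fake):
    """Empirical estimate of V(D, G) = E[log D(x|y)] + E[log(1 - D(G(z|y)))].

    d_real: discriminator outputs on real (image, label) pairs, in (0, 1).
    d_fake: discriminator outputs on generated pairs, in (0, 1).
    """
    d_real = np.asarray(d_real, dtype=float)
    d_fake = np.asarray(d_fake, dtype=float)
    return np.mean(np.log(d_real)) + np.mean(np.log(1.0 - d_fake))

# An undecided discriminator (all outputs 0.5) gives V = 2 * log(0.5).
v = gan_value([0.5, 0.5], [0.5, 0.5])
```

The discriminator is trained to maximize this value, while the generator is trained to minimize it.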
S12, training the discriminator again: random noise z and a random defect type label are input into the generator, which generates a random simulated defect image and assigns it the same simulated defect type label as the input; the simulated defect image and its label are input into the discriminator for training, the discriminator is updated through the loss function, and the process returns to step S11 to continue training until the discriminator can distinguish real from fake (i.e., real images from generated ones).
In addition, the training process further includes:
S13, training the generator: random noise z and a random defect type label are input into the generator, which generates a random defect image and assigns it the same defect type label as the input; the defect image and its label are then used for generator training, and the generator's weights are updated by back-propagating the loss.
It should be emphasized that, in this embodiment of the application, step S11 builds on a modified version of an existing adversarial network; the specific modules of that network are not described in detail here.
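The training rounds in steps S11-S13 can be sketched with PyTorch. This is a minimal, illustrative conditional GAN, not the patent's architecture: the layer sizes, optimizers, label embeddings, and the flattened 32 × 32 image size are all assumptions.

```python
import torch
import torch.nn as nn

n_classes, z_dim, img_dim = 6, 64, 32 * 32  # 6 defect types, as in the text

class G(nn.Module):
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(n_classes, n_classes)
        self.net = nn.Sequential(nn.Linear(z_dim + n_classes, 128), nn.ReLU(),
                                 nn.Linear(128, img_dim), nn.Tanh())
    def forward(self, z, y):
        # condition the generator on the encoded defect-type label
        return self.net(torch.cat([z, self.emb(y)], dim=1))

class D(nn.Module):
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(n_classes, n_classes)
        self.net = nn.Sequential(nn.Linear(img_dim + n_classes, 128), nn.ReLU(),
                                 nn.Linear(128, 1), nn.Sigmoid())
    def forward(self, x, y):
        return self.net(torch.cat([x, self.emb(y)], dim=1))

g, d = G(), D()
opt_g = torch.optim.Adam(g.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(d.parameters(), lr=2e-4)
bce = nn.BCELoss()

real = torch.rand(8, img_dim) * 2 - 1       # stand-in for real defect images
labels = torch.randint(0, n_classes, (8,))  # encoded defect-type labels

# S11/S12: update the discriminator on real and generated samples
z = torch.randn(8, z_dim)
fake = g(z, labels)
loss_d = bce(d(real, labels), torch.ones(8, 1)) + \
         bce(d(fake.detach(), labels), torch.zeros(8, 1))
opt_d.zero_grad(); loss_d.backward(); opt_d.step()

# S13: update the generator by back-propagating through the discriminator
loss_g = bce(d(g(torch.randn(8, z_dim), labels), labels), torch.ones(8, 1))
opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```

In practice these two updates alternate until, as the text puts it, the discriminator can distinguish real images from generated ones.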
S2, screening the image data to obtain a first sample and a second sample;
wherein the first sample is used for training an image segmentation network to obtain an image segmentation network model;
the second sample is used to iteratively train the generative countermeasure network.
The applicant has found that current defect map backgrounds differ substantially from real sample backgrounds, so the defect fidelity is low and fusion is difficult.
To solve this problem, in the method of the present application the first sample may comprise the screened simulated defect image data set meeting the preset requirements together with real defect images; training the image segmentation network on this mixture of simulated and real defect images makes the network more stable, ensures effective training of the segmentation network, improves the segmentation precision of simulated defect regions, and avoids the loss of model realism, and hence of usability, caused by large image background differences.
The second sample may comprise the simulated defect images that do not meet the preset requirements, used for iterative training of the GAN.
Adding the images that fail the preset requirements to the GAN's training sample sequence recycles the defect samples.
In addition, the specific screening steps are as follows:
S21, acquiring the RGB color feature map.
The real defect images and the simulated defect images generated in step S1 are fed into an existing convolutional neural network, which produces an RGB color feature map of size 32 × 32; the convolution kernel in the network is set to 13 × 13 with a stride of 13;
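A 13 × 13 convolution with stride 13 maps a 416 × 416 image to a 32 × 32 feature map, since 416 / 13 = 32. The sketch below is a hedged stand-in for the patent's trained network: a uniform averaging kernel takes the place of learned convolution weights.

```python
import numpy as np

def conv_feature_map(img, k=13):
    """Stride-k, k x k convolution per channel, with an averaging kernel
    standing in for learned weights: (416, 416, 3) -> (32, 32, 3)."""
    h, w, c = img.shape
    assert h % k == 0 and w % k == 0  # the 416/13 grid must tile exactly
    return img.reshape(h // k, k, w // k, k, c).mean(axis=(1, 3))

img = np.random.rand(416, 416, 3)
fm = conv_feature_map(img)  # shape (32, 32, 3)
```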
S22, converting the RGB color feature map into a 256-level gray-scale map;
S23, obtaining the hash sequence from the differences between adjacent pixels of the gray-scale map.
Obtaining the hash sequence specifically comprises:
first traversing the gray-scale map pixels in position order and, within each row, comparing each pixel with the one before it;
outputting the digit 1 when the later pixel in the row is larger than the earlier one, and the digit 0 otherwise, thereby forming the hash sequence.
With the foregoing scheme, a hash sequence of size 1024 consisting of 0 and 1 can be generated from the 32 × 32 RGB color feature map.
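Steps S22-S23 can be sketched in pure Python (an illustrative reading, not the patent's code). Note that comparing adjacent pixels strictly within each row of a 32 × 32 map yields 32 × 31 = 992 bits, so the 1024-bit figure above presumably counts the comparisons slightly differently.

```python
def difference_hash(gray):
    """Row-wise difference hash: emit 1 where the later pixel in a row
    is larger than the one before it, else 0."""
    bits = []
    for row in gray:
        for left, right in zip(row, row[1:]):
            bits.append(1 if right > left else 0)
    return bits

h = difference_hash([[10, 20, 15],
                     [30, 30, 40]])  # [1, 0, 0, 1]
```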
S24, acquiring a similarity value between the simulated defect image of the food box and the real defect image based on the obtained hash sequence;
when the similarity value is smaller than a preset similarity threshold value, the preset requirement is met; otherwise, the preset requirements are not met.
In step S24, the number of differing bits between the hash sequence of the simulated defect image and the hash sequence of each real defect image is counted, and the similarity threshold is then set according to the preset range of Hamming distances between similar images.
A smaller Hamming distance means the two pictures are more similar. When the Hamming distance is below the similarity threshold, the simulated defect image is first filtered and then added to the training sample data set for the image segmentation network model; otherwise, it is output to the GAN's training sample sequence for continued iterative training.
In this embodiment of the application, the similarity threshold is 5. The preset range, usually the general range of Hamming distances between similar pictures, can be determined according to the actual situation.
It will be appreciated that the Hamming distance, used in error-control coding for data transmission, is the number of positions at which two equal-length words differ; d(x, y) denotes the Hamming distance between two words x and y. XOR-ing the two strings and counting the number of 1s gives the Hamming distance.
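The comparison in step S24 can be sketched as follows. The threshold of 5 comes from the text above; treating one sufficiently close real image as enough to pass is an assumption, since the text does not say whether any or all real images must match.

```python
def hamming_distance(h1, h2):
    """Number of positions at which two equal-length bit sequences differ."""
    assert len(h1) == len(h2)
    return sum(b1 != b2 for b1, b2 in zip(h1, h2))

SIMILARITY_THRESHOLD = 5  # from the embodiment above

def meets_preset_requirement(sim_hash, real_hashes):
    """Assumed rule: the simulated image passes the screening if it is
    close enough to at least one real defect image."""
    return any(hamming_distance(sim_hash, r) < SIMILARITY_THRESHOLD
               for r in real_hashes)
```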
S3, obtaining the original map,
which is produced by feeding the simulated defect image output by the GAN into the image segmentation network model and extracting features.
Specifically, in step S3 the image segmentation network model adopts an encoding-decoding structure: the encoding stage extracts features from the image through a convolutional neural network, and the decoding stage remaps the features to each pixel of the up-sampled image through a transposed convolutional neural network; feature layers of the same scale in the encoder and decoder are fused by concatenation along the channel dimension, and finally each pixel is classified by a softmax classifier.
The loss function used in the image segmentation network model is calculated from the Dice similarity coefficient as follows:
$$Dice = \frac{2\,|A \cap B|}{|A| + |B|} \tag{2}$$
$$DiceLoss = 1 - Dice \tag{3}$$
where A is the set of predicted values, B is the set of real values, Dice is the similarity coefficient, and DiceLoss is the loss function calculated from the similarity coefficient.
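Formulas (2) and (3) can be written directly over binary masks. The sketch below treats A and B as {0, 1} arrays; the small smoothing term is a common numerical safeguard, not something stated in the text.

```python
import numpy as np

def dice_loss(pred, target, smooth=1e-7):
    """DiceLoss = 1 - 2|A intersect B| / (|A| + |B|) over binary masks."""
    pred = np.asarray(pred, dtype=float)
    target = np.asarray(target, dtype=float)
    inter = np.sum(pred * target)  # |A intersect B| for {0,1} masks
    dice = (2.0 * inter + smooth) / (np.sum(pred) + np.sum(target) + smooth)
    return 1.0 - dice
```

A perfect prediction gives a loss near 0; fully disjoint masks give a loss near 1.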
In step S3, the GAN first outputs a simulated defect image; the image segmentation network model receives it and generates a color mask for the simulated defect image; the model then binarizes the color mask and extracts an instance picture of the defect mask region by overlaying the mask on the original image; finally, the segmented defect instance picture is saved as an original map.
With this scheme, a large number of original maps of simulated defects can be obtained.
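The binarize-and-overlay step can be sketched with NumPy. This is an illustrative reading of the text; the zero threshold on the color mask is an assumption.

```python
import numpy as np

def extract_defect_instance(original, color_mask, thresh=0):
    """Binarize the segmentation color mask, then overlay it on the
    original image so that only the defect region survives."""
    binary = (color_mask.sum(axis=-1) > thresh).astype(original.dtype)
    return original * binary[..., None]

img = np.full((4, 4, 3), 200, dtype=np.uint8)
mask = np.zeros((4, 4, 3), dtype=np.uint8)
mask[1:3, 1:3] = 255                        # segmentation colored the defect here
patch = extract_defect_instance(img, mask)  # non-zero only inside the mask
```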
To make the simulated defect images more realistic and meet the rendering requirements of the model in a virtual environment, the method of the application further comprises:
S4, converting the original map into a normal map and outputting it onto a preselected area of the three-dimensional virtual food box, completing the defect generation for the virtual food box.
The pre-established three-dimensional model of the virtual food box is loaded into three-dimensional rendering software, reasonable regions where different defect types may occur are delimited, and the normal map of each simulated defect is pasted at a random position within the region of the model corresponding to its defect type.
Specifically, the original map can be converted by the preset generation tool Shadermap, which easily turns an ordinary 2D image into a realistic, stereoscopic normal map of the simulated defect carrying Z-axis information. The three-dimensional model of the virtual food box and the simulated-defect normal map are then imported into the three-dimensional rendering software 3DMax, the normal map is assigned the same material as the food box, and it is pasted, according to its simulated defect type label, at a random position within the reasonable defect region of the three-dimensional virtual food box. This completes the defect generation for the virtual food box.
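Shadermap's height-to-normal conversion can be approximated from image gradients. The sketch below is a generic gradient-based stand-in, not Shadermap's actual algorithm.

```python
import numpy as np

def height_to_normal(height, strength=1.0):
    """Approximate a tangent-space normal map from a height map: the
    surface normal of z = h(x, y) is proportional to (-dh/dx, -dh/dy, 1)."""
    dy, dx = np.gradient(height.astype(float))  # axis 0 = rows (y), axis 1 = x
    n = np.stack([-dx * strength, -dy * strength, np.ones_like(dx)], axis=-1)
    n /= np.linalg.norm(n, axis=-1, keepdims=True)
    # map components from [-1, 1] to the usual [0, 255] normal-map encoding
    return np.rint((n * 0.5 + 0.5) * 255).astype(np.uint8)

flat = height_to_normal(np.zeros((8, 8)))  # flat surface -> uniform normals
```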
Second embodiment
Referring to fig. 2, fig. 2 is a schematic structural diagram of a virtual food box defect generating system based on a neural network according to an embodiment of the present application, where the system includes:
a first obtaining module 10, configured to obtain image data through a pre-trained generative adversarial network;
the generation of the confrontation network model is achieved by improving the existing confrontation network model. The improvement mainly comprises: the generator and the discriminator in the confrontation network can be trained by feeding the surface defect image dataset of the real food box, so as to obtain a model of the confrontation network.
The defect image dataset is acquired as follows: the defect images of real food boxes are fixed to a size of 416 × 416; the number of defect images can be chosen according to the actual situation, preferably 180 in this embodiment of the application, covering 6 types of known defects in total.
The specific training process may comprise the steps of:
training the discriminator: the real defect images of the food box and their defect type labels are encoded and input into the discriminator for training, and the discriminator is updated through a loss function.
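The patent does not specify how the defect type labels are encoded; one-hot encoding is the common choice for conditioning a discriminator and generator on a class label, and can be sketched as follows (the function name and class count of 6 follow this embodiment):

```python
import numpy as np

def one_hot_labels(labels, num_classes=6):
    """Encode integer defect type labels (0..5 for the 6 known
    defect types) as one-hot vectors for network conditioning."""
    eye = np.eye(num_classes, dtype=np.float32)
    return eye[np.asarray(labels)]

# Three sample images with defect types 0, 3, and 5.
codes = one_hot_labels([0, 3, 5])
```

Each row is a length-6 vector with a single 1 at the label's index, suitable for concatenating with the image features or the noise vector z.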
In order to make the distribution of the generated defect samples as close as possible to the distribution of the real defect samples, the loss function is shown in formula (1):

$$\min_G \max_D V(D,G) = \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}\big[\log D(x \mid y)\big] + \mathbb{E}_{z \sim p_z(z)}\big[\log\big(1 - D(G(z \mid y) \mid y)\big)\big] \qquad (1)$$

where $x$ represents the real data, $y$ is the data label, $z$ represents the random noise, $G$ represents the output of the generator, $V$ represents the objective function, $D$ represents the output of the discriminator, $\mathbb{E}_{(\cdot)}$ is the expectation over a distribution, $p_z(z)$ is the noise distribution, $p_{\mathrm{data}}(x)$ is the real sample distribution, and $V(D,G)$ denotes the loss function.
The discriminator is then trained again: the random noise z and a random defect type label are input into the generator, the generator produces a random simulated defect image and at the same time assigns it a simulated defect type label identical to the input label, the simulated defect image and its label are input into the discriminator for training, the discriminator is updated through the loss function, and the previous step is repeated until the discriminator is able to distinguish true from false (i.e., real images from generated images).
In addition, the training process also comprises:
training the generator: after the random noise z and a random defect type label are input into the generator, the generator produces a random defect image and assigns it the same defect type label as the input; the generated defect image and its label are then fed through the discriminator, and finally the weights of the generator are updated through loss back-propagation.
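The value that the discriminator tries to maximize (and the generator, through the second term, to minimize) can be computed as a sanity check. This is only an illustration of formula (1) on plain arrays, not the patent's training code; the function name and `eps` clipping are assumptions:

```python
import numpy as np

def discriminator_objective(d_real, d_fake, eps=1e-12):
    """Value of the conditional-GAN objective of formula (1):
    E[log D(x|y)] + E[log(1 - D(G(z|y)|y))].
    d_real / d_fake are discriminator outputs in (0, 1)."""
    d_real = np.clip(d_real, eps, 1 - eps)
    d_fake = np.clip(d_fake, eps, 1 - eps)
    return np.mean(np.log(d_real)) + np.mean(np.log(1 - d_fake))

# A confused discriminator (0.5 everywhere) gives V = 2*log(0.5);
# better discrimination (real -> 1, fake -> 0) drives V toward 0.
confused = discriminator_objective(np.array([0.5]), np.array([0.5]))
better = discriminator_objective(np.array([0.9]), np.array([0.1]))
```

Updating the discriminator raises this value, while the generator's back-propagation pushes `d_fake` upward, which is exactly the adversarial dynamic described above.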
The screening module 20 is configured to screen the image data to obtain a first sample and a second sample;
the first sample is used for training an image segmentation network to obtain an image segmentation network model;
the second sample is used for iteratively training the generative adversarial network;
the specific screening steps are as follows:
and acquiring an RGB color feature map.
The real defect images and the simulated defect images generated in step S1 are fed into an existing convolutional neural network, which processes them into an RGB color feature map of size 32 × 32; the convolution kernel in the convolutional neural network is set to 13 × 13 with a stride of 13.
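The geometry works out exactly: a 416 × 416 input with a 13 × 13 kernel at stride 13 yields a 32 × 32 feature map. The sketch below checks the arithmetic with a non-overlapping patch average as a stand-in for one learned convolution channel (the real network uses learned kernels; only the output shape is being illustrated):

```python
import numpy as np

def conv_output_size(n, kernel, stride):
    """Valid-convolution output size: floor((n - kernel) / stride) + 1."""
    return (n - kernel) // stride + 1

# 416x416 input, 13x13 kernel, stride 13 -> 32x32 per channel.
side = conv_output_size(416, kernel=13, stride=13)

def patch_means(channel, k=13):
    """Stand-in for one conv channel: non-overlapping k x k patch
    averages (checks the geometry, not the learned filter)."""
    h, w = channel.shape
    return channel[:h // k * k, :w // k * k].reshape(
        h // k, k, w // k, k).mean(axis=(1, 3))

feat = patch_means(np.ones((416, 416)))
```

Because 416 = 32 × 13, the stride-13 kernel tiles the image with no remainder, which is presumably why these values were chosen together with the 416 × 416 resize.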
converting the RGB color feature map into a 256-level gray-scale map;
and acquiring a hash sequence according to the difference values of adjacent pixels of the gray-scale map.
Wherein, acquiring the hash sequence specifically comprises:
first, the gray-scale image pixels are traversed sequentially in pixel-position order, and each pixel in a row is compared with the pixel before it;
when the later pixel in a row is larger than the earlier pixel, the number 1 is output; otherwise the number 0 is output, forming a hash sequence.
With the foregoing scheme, a 32 × 32 image produces a 1024-bit hash sequence of 0s and 1s.
Based on the hash sequences, a similarity value between the simulated defect image and the real defect image of the food box is obtained;
when the similarity value is smaller than a preset similarity threshold value, the preset requirement is met; otherwise, the preset requirements are not met.
The number of differing bits between the hash sequence of the simulated defect image and that of each real defect image is counted by traversal, and the result is compared against the similarity threshold, which is preset according to the usual range of Hamming distances between similar images.
Here the similarity threshold is 5. The preset range, usually the general range of Hamming distances between similar pictures, can be determined according to the actual situation.
It will be appreciated that the Hamming distance, a concept used in error-control coding for data transmission, is the number of corresponding bits at which two (equal-length) words differ; d(x, y) denotes the Hamming distance between words x and y. XOR the two bit strings and count the number of 1s; that count is the Hamming distance.
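The difference-hash comparison can be sketched as follows. Note that the embodiment quotes a 1024-bit sequence for a 32 × 32 map but does not spell out the traversal convention at row boundaries; this per-row variant (a common dHash form) yields 32 × 31 = 992 bits for such a map, so the exact bit count here is an assumption:

```python
import numpy as np

def difference_hash(gray):
    """Row-wise difference hash: emit 1 when a pixel is brighter than
    the pixel before it in the row, 0 otherwise (one bit per pair)."""
    bits = (gray[:, 1:] > gray[:, :-1]).astype(np.uint8)
    return bits.ravel()

def hamming_distance(h1, h2):
    """Number of differing bits between two equal-length hash sequences."""
    return int(np.count_nonzero(h1 != h2))

# Two tiny 2x3 "gray-scale maps": identical first row, reversed second.
a = np.array([[10, 20, 30], [30, 20, 10]], dtype=np.uint8)
b = np.array([[10, 20, 30], [10, 20, 30]], dtype=np.uint8)
ha, hb = difference_hash(a), difference_hash(b)
similar = hamming_distance(ha, hb) < 5   # threshold from this embodiment
```

Images whose Hamming distance falls below the threshold of 5 would be kept as the first sample; those above it go to the second sample for further GAN training.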
A second obtaining module 30, configured to obtain an original map;
the original map is obtained by the image segmentation network model receiving the simulated defect image output by the generative adversarial network and extracting its features;
First, the generative adversarial network outputs a simulated defect image; the image segmentation network model receives it and generates a color mask of the simulated defect image, then binarizes the color mask, extracts an instance picture of the defect mask region by overlaying the mask on the original image, and finally saves the segmented defect instance picture as an original map.
And the mapping module 40 is used for converting the original map into a normal map and outputting the normal map to a preselected area in the three-dimensional virtual food box, realizing the defect generation of the virtual food box.
The original map is processed by the generation tool ShaderMap, which easily converts an ordinary 2D image into a realistic, stereoscopic simulated-defect normal map carrying Z-axis information. The three-dimensional model of the virtual food box and the simulated-defect normal map are then imported together into the three-dimensional rendering software 3ds Max, the defect normal map is assigned the same material as the food box, and it is pasted at a random position within the reasonable defect region of the three-dimensional virtual food box according to the simulated defect type label. The defect generation of the virtual food box is thus completed.
Third embodiment
Fig. 4 shows a block diagram of an electronic device according to an embodiment of the present application.
The foregoing embodiments describe a neural network based virtual food box defect generation method and system, which may be integrated into an electronic device in one possible design. As shown in fig. 4, the electronic device 500 may include a processor 501 and a memory 502.
The memory 502 is used to store a program that supports the processor in executing the virtual food box defect generation method in any of the above embodiments, and the processor 501 is configured to execute the program stored in the memory 502.
The memory 502 is used to store one or more computer instructions, which are executed by the processor 501 to implement the steps of:
acquiring image data through a pre-trained generative adversarial network;
the pre-training process of the generative adversarial network includes:
encoding the real defect image and the defect type label of the food box, inputting the encoded real defect image and the defect type label into a discriminator for training, and updating the discriminator through a loss function;
inputting random noise z and random defect type labels into a generator, generating a random simulated defect image by the generator, simultaneously designating a simulated defect type label which is the same as input data by the generator, inputting the simulated defect image and the simulated defect type label into the discriminator for training, and updating the discriminator through the loss function;
the pre-training process of the generative adversarial network further comprises:
training the generator: after the random noise z and a random defect type label are input into the generator, the generator produces a random image and assigns it the same defect type label as the input; the generated random image and its label are then fed through the discriminator, and finally the weights of the generator are updated through loss back-propagation.
Screening the image data to obtain a first sample and a second sample;
the first sample is used for training an image segmentation network to obtain an image segmentation network model;
the second sample is used for iteratively training the generative adversarial network;
the first sample comprises a simulated defect image data set and a real defect image which are screened out and meet the preset requirements;
the second sample comprises a simulated defect image that does not meet preset requirements.
The screening the image data comprises the steps of:
acquiring an RGB color feature map based on the real defect image and the simulated defect image;
converting the RGB color feature map into a gray-scale map;
acquiring a hash sequence according to the difference values of adjacent pixels of the gray-scale map;
based on the hash sequences, obtaining a similarity value between the simulated defect image and the real defect image of the food box;
when the similarity value is smaller than a preset similarity threshold value, the preset requirement is met; otherwise, the preset requirements are not met.
The obtaining the hash sequence specifically includes:
first, the gray-scale image pixels are traversed sequentially in pixel-position order, and each pixel in a row is compared with the pixel before it;
when the later pixel in a row is larger than the earlier pixel, the number 1 is output; otherwise the number 0 is output, forming a hash sequence.
The similarity threshold is 5.
Acquiring an original map;
the original map is obtained by the image segmentation network model receiving the simulated defect image output by the generative adversarial network and extracting its features;
and converting the original map into a normal map, and outputting the normal map to a preselected area in the three-dimensional virtual food box to realize generation of a food box defect dataset.
The obtaining of the original map comprises:
First, the generative adversarial network model outputs a simulated defect image; the image segmentation network model receives it and generates a color mask of the simulated defect image, then binarizes the color mask, extracts an instance picture of the defect mask region by overlaying the mask on the original image, and finally saves the segmented defect instance picture as an original map.
Fig. 5 is a schematic block diagram of a computer system suitable for implementing a neural network-based virtual food box defect generation method according to an embodiment of the present application.
As shown in fig. 5, the computer system 600 includes a processor (CPU, GPU, FPGA, etc.) 601, which can perform part or all of the processing in the embodiments shown in the above-described drawings, according to a program stored in a read-only memory (ROM) 602 or a program loaded from a storage section 608 into a random access memory (RAM) 603. Various programs and data necessary for the operation of the system 600 are also stored in the RAM 603. The processor 601, the ROM 602, and the RAM 603 are connected to each other via a bus 604. An input/output (I/O) interface 605 is also connected to the bus 604.
The following components are connected to the I/O interface 605: an input portion 606 including a keyboard, a mouse, and the like; an output portion 607 including a display such as a cathode ray tube (CRT) or liquid crystal display (LCD), and a speaker; a storage section 608 including a hard disk and the like; and a communication section 609 including a network interface card such as a LAN card or a modem. The communication section 609 performs communication processing via a network such as the Internet. A drive 610 is also connected to the I/O interface 605 as needed. A removable medium 611, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 610 as necessary, so that a computer program read from it can be installed into the storage section 608 as needed.
In particular, according to embodiments of the present application, the method described above with reference to the figures may be implemented as a computer software program. For example, embodiments of the present application include a computer program product comprising a computer program tangibly embodied on a computer-readable medium, the computer program comprising program code for performing the methods shown in the figures. In such embodiments, the computer program may be downloaded and installed from a network through the communication section 609, and/or installed from the removable medium 611.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowcharts or block diagrams may represent a module, a program segment, or a portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units or modules described in the embodiments of the present application may be implemented by software or hardware. The units or modules described may also be provided in a processor, and the names of the units or modules do not in some cases constitute a limitation of the units or modules themselves.
As another aspect, the present application also provides a computer-readable storage medium, which may be the computer-readable storage medium included in the node in the above embodiment; or it may be a separate computer readable storage medium not incorporated into the device. The computer readable storage medium stores one or more programs for use by one or more processors in performing the methods described herein.
The above description is only a preferred embodiment of the application and is illustrative of the principles of the technology employed. It will be appreciated by a person skilled in the art that the scope of the invention as referred to in the present application is not limited to the embodiments with a specific combination of the above-mentioned features, but also covers other embodiments with any combination of the above-mentioned features or their equivalents without departing from the inventive concept. For example, the above features may be replaced with (but not limited to) features having similar functions disclosed in the present application.

Claims (10)

1. A virtual food box defect generation method based on a neural network is characterized by comprising the following steps:
acquiring image data through a generative adversarial network obtained through pre-training;
screening the image data to obtain a first sample and a second sample;
the first sample is used for training an image segmentation network to obtain an image segmentation network model;
the second sample is used for iteratively training the generative adversarial network;
acquiring an original map;
the original map is obtained by the image segmentation network model receiving the simulated defect image output by the generative adversarial network and extracting its features;
and converting the original map into a normal map, and outputting the normal map to a preselected area in the three-dimensional virtual food box to realize defect generation of the virtual food box.
2. The method of claim 1, wherein the pre-training process of the generative adversarial network comprises:
encoding the real defect image and the defect type label of the food box, inputting the encoded real defect image and the defect type label into a discriminator for training, and updating the discriminator through a loss function;
inputting random noise z and random defect type labels into a generator, generating a random simulated defect image by the generator, simultaneously designating the simulated defect type labels which are the same as input data by the generator, inputting the simulated defect image and the simulated defect type labels into the discriminator for training, and updating the discriminator through the loss function.
3. The neural network-based virtual food box defect generation method of claim 2, wherein the pre-training process of the generative adversarial network further comprises:
training the generator: after the random noise z and a random defect type label are input into the generator, the generator produces a random image and assigns it the same defect type label as the input; the generated random image and its label are then fed through the discriminator, and finally the weights of the generator are updated through loss back-propagation.
4. The method of claim 1, wherein the first sample comprises a simulated defect image dataset and a real defect image that are screened to meet preset requirements;
the second sample comprises a simulated defect image that does not meet preset requirements.
5. The neural network-based virtual food box defect generation method of any one of claims 3 or 4, wherein the screening the image data comprises the steps of:
acquiring an RGB color feature map based on the real defect image and the simulated defect image;
converting the RGB color feature map into a gray-scale map;
acquiring a hash sequence according to the difference values of adjacent pixels of the gray-scale map;
based on the hash sequences, obtaining a similarity value between the simulated defect image and the real defect image of the food box;
when the similarity value is smaller than a preset similarity threshold value, the preset requirement is met; otherwise, the preset requirements are not met.
6. The virtual food box defect generation method based on the neural network as claimed in claim 5, wherein the obtaining the hash sequence specifically comprises:
first, traversing the gray-scale image pixels sequentially in pixel-position order, and comparing each pixel in a row with the pixel before it;
when the later pixel in a row is larger than the earlier pixel, outputting the number 1; otherwise outputting the number 0, forming a hash sequence.
7. The neural network-based virtual food box defect generation method of claim 5, wherein said obtaining an RGB color feature map comprises:
feeding the real defect image and the simulated defect image into an existing convolutional neural network, and processing them through the convolutional neural network to generate an RGB color feature map.
8. The method of claim 1, wherein the obtaining the original map comprises:
First, the generative adversarial network model outputs a simulated defect image; the image segmentation network model receives it and generates a color mask of the simulated defect image, then binarizes the color mask, extracts an instance picture of the defect mask region by overlaying the mask on the original image, and finally saves the segmented defect instance picture as an original map.
9. A neural network-based virtual food box defect generation system, comprising:
the first acquisition module is used for acquiring image data through a pre-trained generative adversarial network;
the screening module is used for screening the image data to obtain a first sample and a second sample;
the first sample is used for training an image segmentation network to obtain an image segmentation network model;
the second sample is used for iteratively training the generative adversarial network;
the second acquisition module is used for acquiring the original map;
the original map is obtained by the image segmentation network model receiving the simulated defect image output by the generative adversarial network and extracting its features;
and the mapping module is used for converting the original map into a normal map and outputting the normal map to a preselected area in the three-dimensional virtual food box, realizing the defect generation of the virtual food box.
10. An electronic device comprising a memory and a processor; the memory is configured to store one or more computer instructions, wherein the one or more computer instructions are executed by the processor to implement the method of any of claims 1-8.
CN202210221276.7A 2022-03-09 2022-03-09 Virtual food box defect generation method and system based on neural network Pending CN114359269A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210221276.7A CN114359269A (en) 2022-03-09 2022-03-09 Virtual food box defect generation method and system based on neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210221276.7A CN114359269A (en) 2022-03-09 2022-03-09 Virtual food box defect generation method and system based on neural network

Publications (1)

Publication Number Publication Date
CN114359269A true CN114359269A (en) 2022-04-15

Family

ID=81095049

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210221276.7A Pending CN114359269A (en) 2022-03-09 2022-03-09 Virtual food box defect generation method and system based on neural network

Country Status (1)

Country Link
CN (1) CN114359269A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114529689A (en) * 2022-04-24 2022-05-24 广州易道智慧信息科技有限公司 Ceramic cup defect sample amplification method and system based on antagonistic neural network
CN115661156A (en) * 2022-12-28 2023-01-31 成都数联云算科技有限公司 Image generation method, image generation device, storage medium, equipment and computer program product

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104463954A (en) * 2014-11-14 2015-03-25 无锡梵天信息技术股份有限公司 Three-dimensional image surface detail simulation method and system
CN110796174A (en) * 2019-09-29 2020-02-14 郑州金惠计算机系统工程有限公司 Multi-type virtual sample generation method and device, electronic equipment and storage medium
CN111832570A (en) * 2020-07-02 2020-10-27 北京工业大学 Image semantic segmentation model training method and system
CN112505065A (en) * 2020-12-28 2021-03-16 上海工程技术大学 Method for detecting surface defects of large part by indoor unmanned aerial vehicle
CN112717391A (en) * 2021-01-21 2021-04-30 腾讯科技(深圳)有限公司 Role name display method, device, equipment and medium for virtual role
CN112990335A (en) * 2021-03-31 2021-06-18 江苏方天电力技术有限公司 Intelligent recognition self-learning training method and system for power grid unmanned aerial vehicle inspection image defects
CN113436169A (en) * 2021-06-25 2021-09-24 东北大学 Industrial equipment surface crack detection method and system based on semi-supervised semantic segmentation
CN113920096A (en) * 2021-10-14 2022-01-11 广东工业大学 Method for detecting metal packaging defects of integrated circuit
CN114119607A (en) * 2022-01-20 2022-03-01 广州易道智慧信息科技有限公司 Wine bottle defect sample generation method and system based on deep neural network

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
LI, MING: "Face pose estimation method for virtual try-on systems", JOURNAL OF COMPUTER-AIDED DESIGN & COMPUTER GRAPHICS *
LEI, JUN ET AL.: "Three-dimensional reconstruction of the knee joint based on CT image sequences", SCIENCE TECHNOLOGY AND ENGINEERING *

Similar Documents

Publication Publication Date Title
CN108229479B (en) Training method and device of semantic segmentation model, electronic equipment and storage medium
CN108875935B (en) Natural image target material visual characteristic mapping method based on generation countermeasure network
CN107368845B (en) Optimized candidate region-based Faster R-CNN target detection method
CN112183501B (en) Depth counterfeit image detection method and device
CN110379020B (en) Laser point cloud coloring method and device based on generation countermeasure network
CN114359269A (en) Virtual food box defect generation method and system based on neural network
CN109936745B (en) Method and system for improving decompression of raw video data
CN112884758B (en) Defect insulator sample generation method and system based on style migration method
CN107506792B (en) Semi-supervised salient object detection method
CA3137297C (en) Adaptive convolutions in neural networks
CN111986125A (en) Method for multi-target task instance segmentation
US10922852B2 (en) Oil painting stroke simulation using neural network
CN110298898B (en) Method for changing color of automobile image body and algorithm structure thereof
CN109003287A (en) Image partition method based on improved adaptive GA-IAGA
CN114596290A (en) Defect detection method, defect detection device, storage medium, and program product
CN108520532B (en) Method and device for identifying motion direction of object in video
CN113240790A (en) Steel rail defect image generation method based on 3D model and point cloud processing
CN115294392B (en) Visible light remote sensing image cloud removal method and system based on network model generation
CN111402422A (en) Three-dimensional surface reconstruction method and device and electronic equipment
CN116485892A (en) Six-degree-of-freedom pose estimation method for weak texture object
Liu Literature Review on Image Restoration
CN112002019B (en) Method for simulating character shadow based on MR mixed reality
CN114897884A (en) No-reference screen content image quality evaluation method based on multi-scale edge feature fusion
CN114092494A (en) Brain MR image segmentation method based on superpixel and full convolution neural network
CN114066788A (en) Balanced instance segmentation data synthesis method

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20220415

RJ01 Rejection of invention patent application after publication