CN114419297A - 3D target camouflage generation method based on background style migration - Google Patents
- Publication number: CN114419297A (application number CN202210069011.XA)
- Authority: CN (China)
- Prior art keywords: target, camouflage, style, scene image, background
- Prior art date: 2022-01-21
- Legal status: Granted
Classifications
- G06T19/20—Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
- G06N3/045—Combinations of networks
- G06N3/08—Learning methods
- G06T15/04—Texture mapping
- G06T17/205—Re-meshing
- G06T2219/2024—Style variation
Abstract
A 3D target camouflage generation method based on background style migration relates to the fields of computer vision and deep learning and comprises the following steps: selecting a 3D target model and the scene image in which it is to be concealed; selecting the position area the 3D target model will occupy in the scene image after the camouflage pattern is rendered, and extracting the background image of that area; representing the 3D target model as a polygonal mesh; constructing a style migration neural network, a style feature extraction network and a neural network renderer; migrating the background style extracted from the scene image onto the 3D target model with the style migration neural network to generate a camouflage pattern; rendering the camouflage pattern onto the surface of the 3D target model with the neural network renderer; fusing the rendered 3D target model into the scene image; and verifying the effectiveness of the camouflage pattern with a target detection network. The method ensures the continuity of the camouflage pattern across the target's surface, improves the target's camouflage effect, and produces camouflage patterns of high quality and strong resistance to reconnaissance.
Description
Technical Field
The invention relates to the technical field of computer vision and deep learning, in particular to a 3D target camouflage generation method based on background style migration.
Background
With the rapid development of science and technology, target detection technology has advanced considerably, and ensuring that important targets are not detected by such algorithms has become increasingly important. Camouflage painting makes the surface of a target consistent with its surroundings by spraying a camouflage pattern onto it, reducing the visual difference between the target and the background, thereby lowering the probability that the target is detected by reconnaissance and protecting the target.
Early camouflage designs mainly referenced stripe and blocky-spot patterns, which are relatively regular with smooth edges; because the actual background of the target was not considered, the results lacked visual layering, and the designs were made on two-dimensional images (Tenxu. Research on digital camouflage generation based on generative adversarial networks [D]. Southwest University of Science and Technology, 2021. DOI: 10.27415/d.cnki.gxngc.2021.000342). To verify the effectiveness of a camouflage pattern, the pattern is usually sprayed onto the target's surface and the degree and effect of concealment against the environment are then tested; discontinuities in the pattern appear at seams on the target's surface, degrading the camouflage effect. In addition, this verification method is time-consuming and labor-intensive, and its results are often unsatisfactory.
Therefore, how to rapidly generate high-quality, reconnaissance-resistant camouflage for different background environments has become the key problem of modern camouflage generation.
Disclosure of Invention
The invention aims to provide a 3D target camouflage generation method based on background style migration, which camouflages a 3D target according to the characteristics of the background in a scene image. The generated camouflage pattern is highly correlated with the background's characteristics, so the 3D target can be hidden in the scene image and its camouflage effect is improved.
The technical scheme adopted by the invention for solving the technical problem is as follows:
the invention discloses a 3D target camouflage generation method based on background style migration, which comprises the following steps:
(1) selecting a 3D target model onto which a camouflage pattern is to be rendered and the scene image in which the 3D target model is to be hidden after the camouflage pattern is rendered;
(2) selecting the position area the 3D target model will occupy in the scene image after the camouflage pattern is rendered, and extracting the background image of that position area;
(3) representing the 3D target model as a polygonal mesh;
(4) constructing a style feature extraction network and a neural network renderer;
(5) migrating the background style extracted from the scene image to the 3D target model using the style feature extraction network, i.e., generating a camouflage pattern from the scene image to serve as the texture pattern of the 3D target model;
(6) rendering the camouflage pattern generated from the scene image to the surface of the 3D target model by utilizing a neural network renderer to finish camouflage of the 3D target model;
(7) fusing the obtained 3D target model after the camouflage pattern is rendered into a scene image;
(8) and verifying the effectiveness of the camouflage pattern in the scene image by using the trained target detection network.
Further, in step (4), the style feature extraction network is specifically a pre-trained VGG-16 convolutional neural network, whose conv1_2, conv2_3, conv3_3 and conv4_3 layers are used as style feature extractors.
Further, in step (4), the neural network renderer is specifically the Neural 3D Mesh Renderer.
Further, the step (5) specifically comprises the following steps:
1) constructing a target loss function L = λ_c·L_c + λ_s·L_s, wherein L_c and L_s are respectively the content loss function and the style loss function, and λ_c and λ_s are respectively the content loss weight and the style loss weight;
2) training a style migration neural network according to the target loss function, so that the style migration neural network can migrate the background style extracted from the scene image into the 3D target model and generate a camouflage pattern from the scene image as the texture pattern of the 3D target model.
Further, the expression of the content loss function L_c is as follows:
L_c(m, m_c) = Σ_i ||v_i − v̂_i||²
wherein m_c is the target 3D mesh, m is the target 3D mesh after style migration, and v_i and v̂_i respectively denote a vertex of the rendered 3D target model after background style migration and the corresponding vertex of the original 3D target model.
Further, the expression of the style loss function L_s is as follows:
L_s = ||M(f_s(x)) − M(f_s(x_s))||²
wherein x and x_s are respectively the texture pattern of the rendered 3D target model after background style migration and the background style feature map extracted from the scene image, M(·) converts the feature vectors into a Gram matrix, and f_s(·) is the style feature extraction network.
Further, in step 2), three-dimensional points are projected onto a two-dimensional plane by a projection transformation operation; based on the coordinates of the two-dimensional vertices, colors are extracted from each face of the 3D target model by sampling, generating the final two-dimensional image and completing the rasterization operation.
Furthermore, while the projection transformation is differentiable, the rasterization operation is a discrete function whose gradient is zero almost everywhere; by blurring the edge regions of the 3D target model's faces, the pixel colors change continuously, so that gradient values are generated and the gradient of the error function can be propagated to train the style migration neural network.
Further, the step (6) and the step (7) specifically comprise the following steps:
constructing an Adam optimizer to train the neural network and setting the network parameters: the initial learning rate is set to 0.1 and 1000 training iterations are performed; after the iterations finish, the 3D target model with the rendered camouflage pattern is obtained;
fusing the obtained 3D target model after the camouflage pattern rendering into a scene image by using the following formula:
I' = s·I_bac + (1 − s)·x
wherein I_bac is the scene image, x is the 3D target model after the camouflage pattern is rendered, s is the image fusion factor, and I' is the image obtained by fusing the rendered 3D target model into the scene image.
The invention has the beneficial effects that:
the early camouflage design mainly refers to the patterns of stripes and block-shaped spots, does not consider the actual background of a target, and causes lack of layering in vision, and the traditional camouflage design is designed in a two-dimensional image, and when the camouflage pattern is sprayed on the surface of the target, the phenomenon of discontinuity of the camouflage pattern occurs at the surface connection. In order to solve the problem, the invention provides a 3D target camouflage generation method based on background style migration, which mainly comprises the steps of selecting a 3D target model of a camouflage pattern to be rendered and a scene image hidden after the 3D target model renders the camouflage pattern; then selecting a position area in the scene image after the 3D target model renders the camouflage pattern, and extracting a background image of the position area; representing the 3D object model by a polygonal mesh mode; constructing a style migration neural network, a style feature extraction network and a neural network renderer; migrating the background style extracted from the scene image to the 3D target model by using a style migration neural network, namely generating a camouflage pattern from the scene image as a texture pattern of the 3D target model; rendering the camouflage pattern generated from the scene image to the surface of the 3D target model by utilizing a neural network renderer to finish camouflage of the 3D target model; fusing the obtained 3D target model after the camouflage pattern is rendered into a scene image; and finally, verifying the effectiveness of the camouflage pattern in the scene image by using the trained target detection network.
Compared with existing camouflage patterns based on stripes and blocky spots, the 3D target camouflage generation method based on background style migration achieves a better target camouflage effect. By selecting the position area the rendered 3D target model will occupy in the scene image and extracting that area's background image, the method fully accounts for the visual influence of the target's actual background, so the final camouflage exhibits visual layering and the target's camouflage effect is improved.
In addition, to ensure the continuity of the camouflage pattern on the target, the image style migration neural network is used to generate the camouflage pattern, which is rendered onto the 3D target model so that its effect can be evaluated visually; the rendered 3D target model is then fused into the scene image, and the trained target detection network verifies the effectiveness of the camouflage pattern there. The method allows the real display effect of the camouflage pattern on the 3D target model to be observed directly, guarantees the continuity of the pattern on the target, and verifies the effectiveness of the generated pattern in the scene with a target detection algorithm; the procedure is simple, convenient, time-saving and labor-saving.
The camouflage pattern generated by the 3D target camouflage generation method based on background style migration makes full use of the background's characteristics, ensures the continuity of the pattern at seams on the target's surface, and is of high quality with strong resistance to reconnaissance.
Drawings
Fig. 1 is a flowchart of a 3D target camouflage generation method based on background style migration according to the present invention.
FIG. 2 is the scene image in which the 3D target model is to be concealed after the camouflage pattern is rendered.
FIG. 3 is a schematic diagram of a 3D object model of a camouflage pattern to be rendered. In the figure, a is a right side view of the 3D object model, b is a left side view of the 3D object model, and c is a front view of the 3D object model.
FIG. 4 is a schematic diagram of a 3D object model after a camouflage pattern is rendered. In the figure, a is a right side view of the 3D object model after the camouflage pattern is rendered, b is a left side view of the 3D object model after the camouflage pattern is rendered, and c is a front view of the 3D object model after the camouflage pattern is rendered.
Fig. 5 is an effect diagram of the 3D object model after the camouflage pattern is rendered and fused into the scene image.
FIG. 6 shows the target detection results.
Fig. 7 is a specific flowchart of a 3D target camouflage generation method based on background style migration according to the present invention.
Detailed Description
As shown in fig. 1 and 7, a 3D target camouflage generation method based on background style migration of the present invention mainly includes the following steps:
(1) selecting a 3D target model onto which a camouflage pattern is to be rendered and the scene image in which the 3D target model is to be hidden after the camouflage pattern is rendered;
(2) selecting the position area the 3D target model will occupy in the scene image after the camouflage pattern is rendered, and extracting the background image of that position area;
(3) representing the 3D target model as a polygonal mesh;
(4) constructing a style feature extraction network and a neural network renderer to complete style migration from the 2D image to the 3D target;
(5) migrating the background style extracted from the scene image to the 3D target model using the style feature extraction network, i.e., generating a camouflage pattern from the scene image to serve as the texture pattern of the 3D target model;
(6) rendering the camouflage pattern generated from the scene image to the surface of the 3D target model by utilizing a neural network renderer to finish camouflage of the 3D target model;
(7) fusing the obtained 3D target model after the camouflage pattern is rendered into a scene image;
(8) and verifying the effectiveness of the camouflage pattern in the scene image by using the trained target detection network.
The present invention will be described in further detail with reference to examples and drawings, but the present invention is not limited thereto.
As shown in fig. 1 and 7, a 3D target camouflage generation method based on background style migration of the present invention specifically includes the following steps:
s1, first, a 3D object model of the camouflage pattern to be rendered is selected, the 3D object model being as shown in fig. 3.
S2, selecting the scene image to be hidden after the 3D object model renders the camouflage pattern, wherein the scene image is as shown in FIG. 2.
And S3, selecting a position area in the scene image after the 3D object model renders the camouflage pattern, wherein the position area is shown as a rectangular frame area in FIG. 2.
And S4, extracting the background image of the position area in the scene image after the 3D object model renders the camouflage pattern.
S5, the 3D target model is represented as a polygonal mesh, i.e., a collection of the vertices, edges and faces that make up the 3D object; this representation makes the three-dimensional target model easy to render.
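For illustration only, a minimal sketch of this vertex/face representation in PyTorch; the tetrahedron below is a stand-in for the actual 3D target model:

```python
import torch

# A polygonal (triangle) mesh is a vertex array plus a face-index array.
# vertices: (V, 3) float tensor of 3D coordinates.
# faces:    (F, 3) long tensor; each row indexes the three vertices of a triangle.
vertices = torch.tensor([[0.0, 0.0, 0.0],
                         [1.0, 0.0, 0.0],
                         [0.0, 1.0, 0.0],
                         [0.0, 0.0, 1.0]])
faces = torch.tensor([[0, 1, 2],
                      [0, 1, 3],
                      [0, 2, 3],
                      [1, 2, 3]])  # a tetrahedron: 4 vertices, 4 triangular faces
```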
S6, a style feature extraction network and a neural network renderer are constructed to complete style migration from the 2D image to the 3D target.
The style feature extraction network is specifically a pre-trained VGG-16 convolutional neural network, whose conv1_2, conv2_3, conv3_3 and conv4_3 layers are used as style feature extractors.
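A minimal sketch of such a multi-layer extractor, assuming torchvision's pre-trained VGG-16. The layer indices are an assumption about how the named layers map onto torchvision's `features` module (standard VGG-16 has no conv2_3 layer, so conv2_2 is used in its place here):

```python
import torch
import torchvision

# Pre-trained VGG-16; only the convolutional trunk is needed.
vgg = torchvision.models.vgg16(
    weights=torchvision.models.VGG16_Weights.DEFAULT).features.eval()

# Positions of the chosen conv activations inside vgg (assumed mapping).
STYLE_LAYERS = {2: "conv1_2", 7: "conv2_2", 14: "conv3_3", 21: "conv4_3"}

def extract_style_features(image: torch.Tensor) -> list[torch.Tensor]:
    """Run `image` (1, 3, H, W) through VGG-16 and collect the style activations."""
    features, x = [], image
    for idx, layer in enumerate(vgg):
        x = layer(x)
        if idx in STYLE_LAYERS:
            features.append(x)
    return features
```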
The neural network renderer is specifically the Neural 3D Mesh Renderer.
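A sketch of instantiating such a renderer, assuming the public PyTorch port of Neural 3D Mesh Renderer (the `neural_renderer` package); the image size and camera placement values are illustrative assumptions, not values specified by the patent:

```python
import neural_renderer as nr

# Differentiable renderer from Kato et al., "Neural 3D Mesh Renderer" (CVPR 2018).
# camera_mode='look_at' places a camera that looks toward the object's origin.
renderer = nr.Renderer(camera_mode='look_at', image_size=256)
renderer.eye = nr.get_points_from_angles(distance=2.7, elevation=15.0, azimuth=90.0)
```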
S7, the background style extracted from the scene image is migrated into the 3D target model using the style migration neural network, i.e., a camouflage pattern is generated from the scene image as the texture pattern of the 3D target model.
The method specifically comprises the following steps:
s7.1 constructing a target loss function L ═ lambdacLc+λsLsWherein L isc、LsRespectively a content loss function and a style loss function, Lc、λsRespectively a content loss weight and a style loss weight.
Constructing the content loss function L_c, whose expression is as follows:
L_c(m, m_c) = Σ_i ||v_i − v̂_i||²
wherein m_c is the target 3D mesh, m is the target 3D mesh after style migration, and v_i and v̂_i respectively denote a vertex of the rendered 3D target model after background style migration and the corresponding vertex of the original 3D target model. The main role of the content loss is to ensure that the shape of the 3D target model is consistent before and after background style migration.
Constructing the style loss function L_s, whose expression is as follows:
L_s = ||M(f_s(x)) − M(f_s(x_s))||²
wherein x and x_s are respectively the texture pattern of the rendered 3D target model after background style migration and the background style feature map extracted from the scene image, M(·) converts the feature vectors into a Gram matrix, and f_s(·) is the style feature extraction network constructed in step S6.
S7.2, training the style migration neural network according to the constructed target loss function, so that the background style extracted from the scene image can be migrated into the 3D target model, generating a camouflage pattern from the scene image as the texture pattern of the 3D target model.
Rendering the background style extracted from the scene image into the 3D object model requires first projecting three-dimensional points onto a two-dimensional plane, which requires a series of projective transformation operations. Subsequently, based on the coordinates of the two-dimensional vertices, colors can be extracted from the respective faces of the 3D object model by sampling, thereby generating a final two-dimensional image, a process referred to as rasterization.
The projection transformations themselves are differentiable, but rasterization is a discrete operation whose gradient is zero in most cases, so by itself it cannot propagate the gradient of the error function when training the style migration neural network. Blurring the edge regions of the 3D target model's faces makes the pixel colors change continuously, producing gradient values, and the gradient of the error function is propagated through them to train the style migration neural network.
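To make the projection step concrete, a minimal pinhole projection of camera-space vertices onto the image plane, written so it stays differentiable in PyTorch; the simple camera model and focal length are illustrative assumptions, and the edge-blurring rasterization itself is handled inside the neural renderer:

```python
import torch

def project_vertices(vertices: torch.Tensor, focal: float = 1.0) -> torch.Tensor:
    """Pinhole projection of (V, 3) camera-space points to (V, 2) image coordinates.

    Every operation here is differentiable, so gradients flow from 2D positions
    back to the 3D vertices; only rasterization (assigning colors to discrete
    pixels) needs the edge-blurring trick to become differentiable.
    """
    x, y, z = vertices[:, 0], vertices[:, 1], vertices[:, 2]
    return torch.stack((focal * x / z, focal * y / z), dim=1)
```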
S8, the camouflage pattern generated from the scene image is rendered onto the surface of the 3D target model using the neural network renderer, completing the camouflage of the 3D target model.
The method specifically comprises the following steps:
Firstly, an Adam optimizer is constructed to train the neural network, and the network parameters are set as follows: the initial learning rate is 0.1, β1 is 0.9 and β2 is 0.999, and 1000 training iterations are performed in total; after the iterations complete, the 3D target model with the rendered camouflage pattern is obtained, as shown in fig. 4.
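A sketch of that optimization loop with the stated hyper-parameters; the texture tensor and the stand-in objective are placeholders for the renderer output and the combined loss L = λ_c·L_c + λ_s·L_s sketched earlier:

```python
import torch

# Stand-ins for the real texture variable and loss terms sketched above.
texture = torch.rand(1, 3, 64, 64, requires_grad=True)
target = torch.zeros_like(texture)

optimizer = torch.optim.Adam([texture], lr=0.1, betas=(0.9, 0.999))
for step in range(1000):
    optimizer.zero_grad()
    # In the full pipeline: render the mesh with `texture`, extract VGG
    # features, and form loss = lambda_c * L_c + lambda_s * L_s.
    loss = torch.nn.functional.mse_loss(texture, target)  # stand-in objective
    loss.backward()
    optimizer.step()
```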
S9, the 3D target model rendered with the camouflage pattern obtained in step S8 is fused into the scene image using the following formula, thereby hiding the 3D target model. The specific processing is as follows:
I' = s·I_bac + (1 − s)·x
wherein I' is the image obtained by fusing the rendered 3D target model into the scene image, I_bac is the scene image, x is the 3D target model after the camouflage pattern is rendered, and s is the image fusion factor. The result of the image fusion is shown in fig. 5; the 3D target model with the rendered camouflage pattern is located in the rectangular frame area of fig. 5.
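The fusion formula is plain alpha blending; a NumPy sketch follows, where the random arrays and the fusion factor value of 0.5 are stand-ins, since the patent does not fix a value for s:

```python
import numpy as np

def fuse(scene: np.ndarray, rendered: np.ndarray, s: float = 0.5) -> np.ndarray:
    """I' = s * I_bac + (1 - s) * x, with both images as float arrays in [0, 1]."""
    return s * scene + (1.0 - s) * rendered

scene = np.random.rand(256, 256, 3)      # stand-in scene image I_bac
rendered = np.random.rand(256, 256, 3)   # stand-in rendered target x
fused = fuse(scene, rendered, s=0.5)
```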
S10, verifying the performance of the 3D target camouflage generation method based on the background style migration by using the trained target detection network.
The method specifically comprises the following steps:
A YOLOv5 target detector is used as the target detection network and trained on target data that do not contain rendered camouflage patterns, so that it can recognize such targets in a scene image; the image obtained in step S9 by fusing the camouflage-rendered 3D target model into the scene image is then used as the input, and the target detection result is obtained.
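A sketch of that verification step, assuming the public ultralytics/yolov5 torch.hub interface; a pretrained checkpoint and a hypothetical path to the fused image stand in for the custom-trained detector and data described in the text:

```python
import torch

# Load a YOLOv5 detector via torch.hub (pretrained weights here; the patent
# instead trains its own detector on target data without camouflage).
model = torch.hub.load('ultralytics/yolov5', 'yolov5s', pretrained=True)

results = model('fused_scene.jpg')  # hypothetical path to the fused image I'
results.print()                     # if the camouflage works, the target is missed
```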
The result is shown in fig. 6, where the rectangular frame region is the detected target. The experiment shows that the camouflage pattern generated by the 3D target camouflage generation method based on background style migration achieves a good camouflage effect.
The foregoing is only a preferred embodiment of the present invention. It should be noted that those skilled in the art can make various modifications and refinements without departing from the principle of the present invention, and these modifications and refinements should also be regarded as falling within the protection scope of the present invention.
Claims (9)
1. A 3D target camouflage generation method based on background style migration, characterized by comprising the following steps:
(1) selecting a 3D target model onto which a camouflage pattern is to be rendered and the scene image in which the 3D target model is to be hidden after the camouflage pattern is rendered;
(2) selecting the position area the 3D target model will occupy in the scene image after the camouflage pattern is rendered, and extracting the background image of that position area;
(3) representing the 3D target model as a polygonal mesh;
(4) constructing a style feature extraction network and a neural network renderer;
(5) migrating the background style extracted from the scene image to the 3D target model using the style feature extraction network, i.e., generating a camouflage pattern from the scene image to serve as the texture pattern of the 3D target model;
(6) rendering the camouflage pattern generated from the scene image to the surface of the 3D target model by utilizing a neural network renderer to finish camouflage of the 3D target model;
(7) fusing the obtained 3D target model after the camouflage pattern is rendered into a scene image;
(8) and verifying the effectiveness of the camouflage pattern in the scene image by using the trained target detection network.
2. The 3D target camouflage generation method based on background style migration according to claim 1, wherein in step (4) the style feature extraction network is specifically a pre-trained VGG-16 convolutional neural network, whose conv1_2, conv2_3, conv3_3 and conv4_3 layers are used as style feature extractors.
3. The 3D target camouflage generation method based on background style migration according to claim 1, wherein in step (4) the neural network renderer is specifically the Neural 3D Mesh Renderer.
4. The 3D target camouflage generation method based on background style migration according to claim 1, wherein the step (5) specifically comprises the following steps:
1) constructing a target loss function L = λ_c·L_c + λ_s·L_s, wherein L_c and L_s are respectively the content loss function and the style loss function, and λ_c and λ_s are respectively the content loss weight and the style loss weight;
2) training a style migration neural network according to the target loss function, so that the style migration neural network can migrate the background style extracted from the scene image into the 3D target model and generate a camouflage pattern from the scene image as the texture pattern of the 3D target model.
5. The 3D target camouflage generation method based on background style migration according to claim 4, wherein the expression of the content loss function L_c is as follows:
L_c(m, m_c) = Σ_i ||v_i − v̂_i||²
wherein m_c is the target 3D mesh, m is the target 3D mesh after style migration, and v_i and v̂_i respectively denote a vertex of the rendered 3D target model after background style migration and the corresponding vertex of the original 3D target model.
6. The 3D target camouflage generation method based on background style migration according to claim 5, wherein the expression of the style loss function L_s is as follows:
L_s = ||M(f_s(x)) − M(f_s(x_s))||²
wherein x and x_s are respectively the texture pattern of the rendered 3D target model after background style migration and the background style feature map extracted from the scene image, M(·) converts the feature vectors into a Gram matrix, and f_s(·) is the style feature extraction network.
7. The 3D target camouflage generation method based on background style migration according to claim 6, wherein in step 2), three-dimensional points are projected onto a two-dimensional plane by a projection transformation operation; based on the coordinates of the two-dimensional vertices, colors are extracted from each face of the 3D target model by sampling, generating the final two-dimensional image and completing the rasterization operation.
8. The 3D target camouflage generation method based on background style migration according to claim 7, wherein, while the projection transformation is differentiable, the rasterization operation is a discrete function whose gradient is zero almost everywhere; blurring the edges of the 3D target model's faces makes the pixel colors change continuously, thereby generating gradient values, and the gradient of the error function is propagated through them to train the style migration neural network.
9. The 3D target camouflage generation method based on background style migration according to claim 1, wherein the step (6) and the step (7) specifically comprise the following steps:
constructing an Adam optimizer to train the neural network and setting the network parameters: the initial learning rate is set to 0.1 and 1000 training iterations are performed; after the iterations finish, the 3D target model with the rendered camouflage pattern is obtained;
fusing the obtained 3D target model after the camouflage pattern rendering into a scene image by using the following formula:
I' = s·I_bac + (1 − s)·x
wherein I_bac is the scene image, x is the 3D target model after the camouflage pattern is rendered, s is the image fusion factor, and I' is the image obtained by fusing the rendered 3D target model into the scene image.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210069011.XA CN114419297B (en) | 2022-01-21 | 2022-01-21 | 3D target camouflage generation method based on background style migration |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210069011.XA CN114419297B (en) | 2022-01-21 | 2022-01-21 | 3D target camouflage generation method based on background style migration |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114419297A true CN114419297A (en) | 2022-04-29 |
CN114419297B CN114419297B (en) | 2024-10-15 |
Family
ID=81275993
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210069011.XA Active CN114419297B (en) | 2022-01-21 | 2022-01-21 | 3D target camouflage generation method based on background style migration |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114419297B (en) |
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2020165557A1 (en) * | 2019-02-14 | 2020-08-20 | Huawei Technologies Co., Ltd. | 3d face reconstruction system and method |
CN110533741A (en) * | 2019-08-08 | 2019-12-03 | 天津工业大学 | A kind of camouflage pattern design method rapidly adapting to battlefield variation |
CN111541887A (en) * | 2020-05-21 | 2020-08-14 | 北京航空航天大学 | Naked eye 3D visual camouflage system |
Non-Patent Citations (1)
Title |
---|
Jia Bowen; Wang Shigang; Li Tianshu; Zhang Lizhong; Zhao Yan: "Sparse acquisition integral imaging system", Journal of Harbin Institute of Technology, no. 05, 25 April 2018 (2018-04-25) *
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115631091A (en) * | 2022-12-23 | 2023-01-20 | 南方科技大学 | Selective style migration method and terminal |
CN116418961A (en) * | 2023-06-09 | 2023-07-11 | 深圳臻像科技有限公司 | Light field display method and system based on three-dimensional scene stylization |
CN116418961B (en) * | 2023-06-09 | 2023-08-22 | 深圳臻像科技有限公司 | Light field display method and system based on three-dimensional scene stylization |
CN117078790A (en) * | 2023-10-13 | 2023-11-17 | 腾讯科技(深圳)有限公司 | Image generation method, device, computer equipment and storage medium |
CN117078790B (en) * | 2023-10-13 | 2024-03-29 | 腾讯科技(深圳)有限公司 | Image generation method, device, computer equipment and storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN114419297B (en) | 2024-10-15 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109003325B (en) | Three-dimensional reconstruction method, medium, device and computing equipment | |
CN114419297A (en) | 3D target camouflage generation method based on background style migration | |
CN110223370B (en) | Method for generating complete human texture map from single-view picture | |
CN102592275B (en) | Virtual viewpoint rendering method | |
CN103473806B (en) | A kind of clothes 3 D model construction method based on single image | |
US6417850B1 (en) | Depth painting for 3-D rendering applications | |
US20170161948A1 (en) | System and method for three-dimensional garment mesh deformation and layering for garment fit visualization | |
CN111968238A (en) | Human body color three-dimensional reconstruction method based on dynamic fusion algorithm | |
US20150178988A1 (en) | Method and a system for generating a realistic 3d reconstruction model for an object or being | |
CN102074020B (en) | Method for performing multi-body depth recovery and segmentation on video | |
CN114998515B (en) | 3D human body self-supervision reconstruction method based on multi-view image | |
CN101303772A (en) | Method for modeling non-linear three-dimensional human face based on single sheet image | |
CN107730587B (en) | Rapid three-dimensional interactive modeling method based on pictures | |
CN111462030A (en) | Multi-image fused stereoscopic set vision new angle construction drawing method | |
CN108830776A (en) | The visible entity watermark copyright anti-counterfeiting mark method of three-dimensional towards 3D printing model | |
CN105261062B (en) | A kind of personage's segmentation modeling method | |
Hudon et al. | Deep normal estimation for automatic shading of hand-drawn characters | |
Kawai et al. | Diminished reality for AR marker hiding based on image inpainting with reflection of luminance changes | |
CN111127658A (en) | Point cloud reconstruction-based feature-preserving curved surface reconstruction method for triangular mesh curved surface | |
CN115984441A (en) | Method for rapidly reconstructing textured three-dimensional model based on nerve shader | |
EP3980975B1 (en) | Method of inferring microdetail on skin animation | |
CN116863101A (en) | Reconstruction model geometry and texture optimization method based on self-adaptive grid subdivision | |
CN114049423B (en) | Automatic realistic three-dimensional model texture mapping method | |
CN104091318B (en) | A kind of synthetic method of Chinese Sign Language video transition frame | |
Pagés et al. | Automatic system for virtual human reconstruction with 3D mesh multi-texturing and facial enhancement |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant |