CN115984439A - Three-dimensional adversarial texture generation method and device for a camouflaged target - Google Patents

Info

Publication number
CN115984439A
CN115984439A (application CN202211722317.7A)
Authority
CN
China
Prior art keywords
target
texture
image
loss
background
Prior art date: 2022-12-30
Legal status
Pending
Application number
CN202211722317.7A
Other languages
Chinese (zh)
Inventor
王岳环
陈文
Current Assignee
Huazhong University of Science and Technology
Original Assignee
Huazhong University of Science and Technology
Priority date: 2022-12-30
Filing date: 2022-12-30
Publication date: 2023-04-18
Application filed by Huazhong University of Science and Technology
Priority to CN202211722317.7A
Publication of CN115984439A
Legal status: Pending

Landscapes

  • Image Generation (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a method and a device for generating a three-dimensional adversarial texture for a camouflaged target, belonging to the technical field of computer vision and image processing. The method comprises the following steps. S1: input a 3D model of the target to be camouflaged, an initial adversarial texture, a Face_ID file and camera sampling parameters into a neural network renderer to generate a target foreground image. S2: select an environment background and apply a view-angle transformation to it. S3: fuse the foreground and background using a target mask extracted from the target foreground image, compute the smoothness loss, texture loss and attack loss, weight them, and back-propagate to update the adversarial texture. S4: input the updated adversarial texture into the neural network renderer and select another group of camera sampling parameters to obtain a target foreground image under a new view angle; repeat S2 and S3 for the new foreground image until training is complete, yielding the final adversarial texture pattern. The camouflage pattern generated by the method effectively prevents the target from being detected.

Description

Three-dimensional adversarial texture generation method and device for a camouflaged target
Technical Field
The invention belongs to the technical field of computer vision and image processing, and particularly relates to a three-dimensional adversarial texture generation method and device for a camouflaged target.
Background
The development of deep learning has greatly improved the detection and recognition of imaged targets, and has likewise increased the difficulty of camouflaging or concealing important targets. One important method of target camouflage is camouflage painting, which reduces the visual difference between a target and its environment by painting the target with patterns similar to the background in color, texture and brightness, thereby protecting the camouflaged target from detection by an adversary's detection algorithm.
Conventional camouflage design selects the dominant colors and their proportions from several typical battlefield backgrounds, arranges patches according to specific stripe patterns, and then colors the patches according to those dominant colors and proportions. However, different targets have specific shapes, and a scout observing from different view angles and distances obtains different target appearances. Conventional camouflage pattern design does not consider these factors and can hardly resist increasingly accurate target detection algorithms, which is detrimental to target camouflage in complex backgrounds.
Therefore, generating a camouflage pattern that adapts to various imaging conditions for a specific target is important for target camouflage.
Disclosure of Invention
In view of the above defects or improvement requirements of the prior art, the invention provides a three-dimensional adversarial texture generation method and device for a camouflaged target, aiming to solve the technical problem that existing camouflaged targets are easily detected. The target to be camouflaged is modeled in 3D, a neural network renderer acquires images of it from different view angles and distances, and, with the target's appearance and the various imaging conditions during reconnaissance fully considered, deep learning is used to train the texture appearance required for camouflage, so that the generated camouflage pattern effectively prevents the target from being detected.
To achieve the above object, according to one aspect of the present invention, there is provided a three-dimensional adversarial texture generation method for a camouflaged target, comprising:
S1: inputting a 3D model corresponding to the target to be camouflaged, a random initial adversarial texture, a Face_ID file and camera sampling parameters into a neural network renderer to generate a target foreground image under a specific view angle, wherein the Face_ID file is generated from the painting area of the target to be camouflaged;
S2: selecting one image from the background image set as the environment background and applying a view-angle transformation to obtain a target background image, the target background image being aligned with the target foreground image in imaging view angle and imaging distance;
S3: extracting a target mask O and an edge mask E from the target foreground image; computing the smoothness loss L_smooth weighted by the edge mask E; fusing the target foreground image and the target background image using the target mask O to obtain a target fusion image, and computing the texture loss L_texture between the target background image and the target fusion image; feeding the target fusion image into a target detector and computing the attack loss L_attack from the output class confidences; and weighting the smoothness loss L_smooth, the texture loss L_texture and the attack loss L_attack, then back-propagating to obtain gradient information and update the adversarial texture;
S4: inputting the updated adversarial texture into the neural network renderer and selecting another group of parameters from the camera sampling parameters to obtain the target foreground image under a new view angle; and repeating S2 and S3 for the target foreground image under the new view angle until training is complete, thereby obtaining the final adversarial texture pattern.
In one embodiment, the camera sampling parameters include spatial position information and angular orientation information used to generate the target foreground image at a specific view angle.
In one embodiment, S2 includes: selecting one image from the background image set as the environment background and applying a view-angle transformation, so that the transformed target background image is aligned with the target foreground image rendered by the neural renderer in imaging view angle and imaging distance, facilitating the subsequent foreground-background fusion.
In one embodiment, extracting a target mask O and an edge mask E from the target foreground image in S3 includes:
converting the target foreground image to grayscale to obtain a grayscale image, performing binary segmentation on the grayscale image to obtain the target mask O, and feathering the target mask O;
performing Canny edge extraction on the grayscale image to obtain an edge image, and dilating the edge image to obtain the edge mask E, so as to distinguish edge areas from non-edge areas.
In one embodiment, the smoothness loss L_smooth is:
L_smooth = S1 · Σ_{(i,j)∈E} [ (p_{i,j} − p_{i+1,j})² + (p_{i,j} − p_{i,j+1})² ] + S2 · Σ_{(i,j)∉E} [ (p_{i,j} − p_{i+1,j})² + (p_{i,j} − p_{i,j+1})² ]
where p_{i,j} denotes the pixel value at position (i, j) in the target foreground image, E denotes the set of pixel positions inside the edge mask E, L_smooth constrains the differences between adjacent pixels in the generated target fusion image, and S1 and S2 are the weighting factors of the smoothness loss.
In one embodiment, the texture loss L_texture is:
L_texture = (1 / Size) · Σ_{(i,j)} ( p^merge_{i,j} − p^background_{i,j} )²
where p^merge_{i,j} denotes the pixel value at position (i, j) in the target fusion image, p^background_{i,j} denotes the pixel value at position (i, j) in the target background image, and Size denotes the size of the target background image; L_texture constrains the distance between the generated target fusion image and the target background image.
In one embodiment, the target detector employs the single-stage detector YOLOv5 and the two-stage detector Faster RCNN; the attack loss L_attack is:
L_attack = (1 / N) · Σ_{i=1..N} max( Score_i^c(I_merge) − t, 0 )
where N denotes the number of candidate boxes output by the target detector, c denotes the target class to be camouflaged, Score_i^c(I_merge) denotes the probability, output after the target fusion image is fed into the target detector, that the i-th candidate box belongs to the class to be camouflaged, and t denotes the probability threshold.
According to another aspect of the present invention, there is provided a three-dimensional adversarial texture generation device for a camouflaged target, configured to perform the above three-dimensional adversarial texture generation method, comprising:
a generation module for inputting a 3D model corresponding to the target to be camouflaged, a random initial adversarial texture, a Face_ID file and camera sampling parameters into the neural network renderer to generate a target foreground image under a specific view angle, wherein the Face_ID file is generated from the painting area of the target to be camouflaged;
a transformation module for selecting one background image from the background image set as the environment background and applying a view-angle transformation to obtain a target background image, the target background image being aligned with the target foreground image in imaging view angle and imaging distance;
a training module for extracting a target mask O and an edge mask E from the target foreground image; computing the smoothness loss L_smooth weighted by the edge mask E; fusing the target foreground image and the target background image using the target mask O to obtain a target fusion image, and computing the texture loss L_texture between the target background image and the target fusion image; feeding the target fusion image into a target detector and computing the attack loss L_attack from the output class confidences; and weighting the smoothness loss L_smooth, the texture loss L_texture and the attack loss L_attack, then back-propagating to obtain gradient information and update the adversarial texture;
an updating module for inputting the updated adversarial texture into the neural network renderer and selecting another group of parameters from the camera sampling parameters to obtain the target foreground image under a new view angle, and repeating S2 and S3 for the target foreground image under the new view angle until training is complete, thereby obtaining the final adversarial texture pattern.
According to another aspect of the present invention, there is provided an electronic device comprising a memory storing a computer program and a processor implementing the steps of the above method when executing the computer program.
According to another aspect of the invention, a computer-readable storage medium is provided, on which a computer program is stored which, when being executed by a processor, carries out the steps of the above-mentioned method.
In general, compared with the prior art, the above technical solution contemplated by the present invention can achieve the following beneficial effects:
the method comprises the steps of coating a confrontation texture on a 3D model by using a neural network renderer and rendering the confrontation texture into a two-dimensional target foreground image; acquiring a target mask O and an edge mask E according to the target foreground image; weighted calculation of smoothing loss L using edge mask E smooth (ii) a Fusing the target foreground image and the target background image by using the target mask O, and calculating the texture loss L of the fused image and the background image texture Then sending the target fusion graph into a target detector, and designing attack loss L according to the output class confidence coefficient of the target detector attack (ii) a And weighting the loss function, then performing backward propagation to obtain gradient information, and updating the confrontation texture information of the 3D model according to the gradient information. And performing iterative training for multiple times to obtain a target texture pattern, wherein the target texture pattern can be sprayed on the modeled target to achieve the purpose of camouflage. According to the method, the target to be disguised is modeled in a 3D mode, the neural network renderer is used for obtaining images of the target rendered from different visual angles and distances, under the condition that the appearance condition of the target to be disguised and various imaging conditions during reconnaissance are fully considered, the texture appearance information required by deep learning and training target disguising is used, the generated disguised pattern can effectively prevent the target from being detected, and therefore the important target is protected.
Drawings
FIG. 1 is a flow chart of the three-dimensional adversarial texture generation method for a camouflaged target of the present invention;
FIG. 2 is a schematic diagram of performing 3D modeling of the camouflaged target and framing the painting area to generate the Face_ID file according to the present invention;
FIG. 3 is a schematic diagram of the environment parameters used for camera sampling in the present invention;
FIG. 4 shows a three-dimensional adversarial texture for a tank generated by the present invention;
FIG. 5 is an example of the tank three-dimensional adversarial texture generated by the present invention in a real background;
FIG. 6 shows a ship three-dimensional adversarial texture generated by the present invention and an example of it in a real background.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention. In addition, the technical features involved in the embodiments of the present invention described below may be combined with each other as long as they do not conflict with each other.
As shown in FIG. 1, the present invention provides a three-dimensional adversarial texture generation method for a camouflaged target, comprising:
S1: inputting a 3D model corresponding to the target to be camouflaged, a random initial adversarial texture, a Face_ID file and camera sampling parameters into a neural network renderer to generate a target foreground image under a specific view angle, wherein the Face_ID file is generated from the painting area of the target to be camouflaged;
S2: selecting one image from the background image set as the environment background and applying a view-angle transformation to obtain a target background image, the target background image being aligned with the target foreground image in imaging view angle and imaging distance;
S3: extracting a target mask O and an edge mask E from the target foreground image; computing the smoothness loss L_smooth weighted by the edge mask E; fusing the target foreground image and the target background image using the target mask O to obtain a target fusion image, and computing the texture loss L_texture between the target background image and the target fusion image; feeding the target fusion image into a target detector and computing the attack loss L_attack from the output class confidences; and weighting the smoothness loss L_smooth, the texture loss L_texture and the attack loss L_attack, then back-propagating to obtain gradient information and update the adversarial texture;
S4: inputting the updated adversarial texture into the neural network renderer and selecting another group of parameters from the camera sampling parameters to obtain the target foreground image under a new view angle; and repeating S2 and S3 for the target foreground image under the new view angle until training is complete, thereby obtaining the final adversarial texture pattern.
Specifically, the method comprises the following steps:
(1) Performing 3D modeling of the appearance of the target to be camouflaged, framing the area to be painted to generate the Face_ID file, and randomly initializing the adversarial texture of the model, i.e., the object to be optimized during training; and setting the environment parameters (the positions and view-angle orientations of the camouflaged target and the camera) used when the camera images the target;
(2) Collecting background image data carrying imaging parameters, and applying a view-angle transformation to the background using the image's imaging parameters and the camera sampling parameters preset in step (1), so that the background is aligned with the target foreground rendered by the neural renderer in imaging view angle and imaging distance;
(3) Rendering the adversarial texture onto the 3D model with the neural network renderer, and acquiring a target foreground image at a specific view angle according to the camera sampling environment parameters; extracting the target mask O and the edge mask E from the foreground image; computing the smoothness loss L_smooth weighted by the edge mask E; fusing the target foreground and the environment background using the target mask O, and computing the texture loss L_texture between the background image and the fused image; feeding the fused image into a target detector and designing the attack loss L_attack from the class confidences output by the detector; and weighting the obtained smoothness loss L_smooth, texture loss L_texture and attack loss L_attack, then back-propagating, so that the optimizer updates the adversarial texture of the 3D model according to the gradient information and truncates the updated texture so that its color space actually exists in the physical world;
(4) Iterating steps (1) to (3) multiple times with the updated adversarial texture until training is complete, obtaining the final adversarial texture pattern of the 3D model.
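For orientation, the following minimal Python sketch shows how steps (1)-(4) fit together; the three callables stand for the per-step operations detailed in the embodiments below, and every name here is illustrative rather than part of the invention.

```python
import random
from typing import Callable, Sequence

def train_adversarial_texture(render_foreground: Callable[[dict], object],        # step (1)/(3): render one viewpoint
                              align_background: Callable[[object, dict], object],  # step (2): view-aligned background
                              optimize_step: Callable[[object, object], None],     # step (3): losses + texture update
                              camera_params: Sequence[dict],
                              backgrounds: Sequence[object],
                              epochs: int) -> None:
    """Iterate steps (1)-(3) over all sampled viewpoints for several epochs (step (4))."""
    for _ in range(epochs):
        for params in camera_params:
            fg = render_foreground(params)                          # foreground at this viewpoint
            bg = align_background(random.choice(backgrounds), params)
            optimize_step(fg, bg)                                   # losses, back-propagation, texture update
```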
In one embodiment, as shown in FIG. 2, 3D modeling software (e.g., 3ds MAX) is used to model the 3D appearance of the target to be camouflaged, and the modeling software is used to select the area to be painted (e.g., the vehicle body), while areas that cannot be painted (e.g., wheels, tracks) need not be framed; a script is then written to export the face indices of the selected area into a txt file, so as to distinguish the painted area from the unpainted area. The adversarial texture of the 3D model is randomly initialized; this texture is the object optimized during training.
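As an illustration, the Face_ID file can simply hold one face index per line; the helper names and exact file layout below are assumptions, since the patent only specifies that the face indices of the painted area are exported to a txt file.

```python
def save_face_ids(path: str, face_ids: list[int]) -> None:
    """Write the face indices of the paintable area, one integer per line."""
    with open(path, "w", encoding="utf-8") as f:
        f.writelines(f"{idx}\n" for idx in sorted(set(face_ids)))

def load_face_ids(path: str) -> set[int]:
    """Read the paintable-face indices back; faces outside this set keep a fixed texture."""
    with open(path, "r", encoding="utf-8") as f:
        return {int(line) for line in f if line.strip()}
```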
In one embodiment, as shown in FIG. 3, the preset camera sampling parameters C mainly contain the spatial position information (x, y, z) and the angular orientation information (pitch, yaw, roll) of the target to be camouflaged and of the camera. The environment parameters are set as follows: the spatial positions (x, y, z) of the camera and the target mainly manifest as differences in distance, for which 4 groups of distances are set (10 m, 15 m, 20 m and 25 m); the angular orientation of the target only needs to be fixed at one random value, after which the angle information of the camera is set: 4 camera pitch angles (22.5°, 45°, 67.5° and 90°), 8 yaw angles (0°, 45°, 90°, 135°, 180°, 225°, 270° and 315°), and 1 roll angle (0°). By setting different spatial positions and angular orientations, images of the camouflaged target seen from various view angles and distances are acquired, improving the effectiveness and robustness of the camouflage pattern.
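This sampling grid (4 distances × 4 pitches × 8 yaws × 1 roll = 128 viewpoints) can be enumerated directly; a sketch using the values stated in this embodiment:

```python
import itertools

DISTANCES = [10.0, 15.0, 20.0, 25.0]   # metres
PITCHES = [22.5, 45.0, 67.5, 90.0]     # degrees
YAWS = [0.0, 45.0, 90.0, 135.0, 180.0, 225.0, 270.0, 315.0]
ROLLS = [0.0]

CAMERA_PARAMS = [
    {"distance": d, "pitch": p, "yaw": y, "roll": r}
    for d, p, y, r in itertools.product(DISTANCES, PITCHES, YAWS, ROLLS)
]
assert len(CAMERA_PARAMS) == 128       # 4 x 4 x 8 x 1 viewpoints
```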
In one embodiment, the collected background image data may be images of a typical background or of the real-time background, and each image should carry the imaging parameters recorded at shooting time: the imaging height, pitch angle, roll angle and yaw angle of the camera. A view-angle transformation is applied to the background so that it is aligned with the target foreground rendered by the neural renderer in imaging view angle and imaging distance, allowing the two to be fused subsequently.
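How the perspective mapping is derived from these imaging parameters is scene-specific; the sketch below shows only the warping step with OpenCV, assuming four corresponding ground points have already been computed from the photo's imaging parameters and the preset camera sampling parameters.

```python
import cv2
import numpy as np

def align_background(background: np.ndarray,
                     src_pts: np.ndarray,          # 4 reference points in the photo
                     dst_pts: np.ndarray,          # the same points under the render viewpoint
                     out_size: tuple[int, int]) -> np.ndarray:
    """Warp a background photo so that its view angle and scale match the rendered foreground."""
    H = cv2.getPerspectiveTransform(src_pts.astype(np.float32),
                                    dst_pts.astype(np.float32))
    return cv2.warpPerspective(background, H, out_size)
```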
In one embodiment, a neural network renderer is used to render the foreground image of the camouflaged target. A renderer based on the Neural 3D Mesh Renderer is adopted; using the camera sampling environment parameters, the adversarial texture is rendered onto the 3D model, yielding a target image at a specific view angle and imaging distance.
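A sketch of the rendering call under the assumption of the PyTorch port of the Neural 3D Mesh Renderer (`neural_renderer`); call signatures vary between ports and should be checked against the installed version, and the file name, image size and texture resolution here are illustrative.

```python
import torch
import neural_renderer as nr   # PyTorch port of the Neural 3D Mesh Renderer

vertices, faces = nr.load_obj("target.obj")                  # (V, 3), (F, 3)
vertices, faces = vertices[None].cuda(), faces[None].cuda()  # batch dim; this port rasterizes on the GPU

# One learnable texture cube per face -- the adversarial texture to optimize
# (faces outside the Face_ID set would be frozen to their original color).
textures = torch.rand(1, faces.shape[1], 4, 4, 4, 3, device="cuda", requires_grad=True)

renderer = nr.Renderer(camera_mode="look_at", image_size=608)
renderer.eye = nr.get_points_from_angles(15.0, 45.0, 90.0)   # distance, elevation, azimuth

# In this port the forward pass returns (RGB, depth, silhouette).
images, depth, silhouettes = renderer(vertices, faces, torch.sigmoid(textures))
foreground = images                                          # (1, 3, H, W) target foreground
```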
First, the foreground image is converted to grayscale, and the grayscale image is binary-segmented to obtain the target mask O; Canny edge extraction is then performed on the grayscale image, and the edge image is dilated to obtain a thicker edge mask E.
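A sketch of this mask-extraction step with OpenCV (including the feathering of O described in the embodiment above); the threshold, feather radius and dilation kernel are illustrative values.

```python
import cv2
import numpy as np

def extract_masks(foreground: np.ndarray,
                  feather: int = 5,
                  edge_dilate: int = 3) -> tuple[np.ndarray, np.ndarray]:
    """Return the feathered target mask O (float, 0..1) and the dilated boolean
    edge mask E from a rendered foreground image (H, W, 3, uint8), assuming the
    renderer leaves empty pixels black."""
    gray = cv2.cvtColor(foreground, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, 1, 255, cv2.THRESH_BINARY)            # target vs. empty background
    k = 2 * feather + 1
    O = cv2.GaussianBlur(binary, (k, k), 0).astype(np.float32) / 255.0    # feathered mask
    edges = cv2.Canny(gray, 50, 150)
    E = cv2.dilate(edges, np.ones((edge_dilate, edge_dilate), np.uint8)) > 0
    return O, E
```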
Specifically, the smoothness loss L_smooth is computed weighted by the edge mask E, with the expression:
L_smooth = S1 · Σ_{(i,j)∈E} [ (p_{i,j} − p_{i+1,j})² + (p_{i,j} − p_{i,j+1})² ] + S2 · Σ_{(i,j)∉E} [ (p_{i,j} − p_{i+1,j})² + (p_{i,j} − p_{i,j+1})² ]
where p_{i,j} denotes the pixel value at position (i, j) and E denotes the set of pixel positions inside the edge mask E. L_smooth constrains the differences between adjacent pixels of the generated camouflage pattern from becoming too large: in non-edge areas the generated pattern only needs to be smooth and free of abrupt changes, while in edge areas, since an object's shape plays an important role in recognition, increasing the smoothness there destroys the target's edge features. In the present invention, the smoothness-loss weighting factor S1 is set to 0.9 and S2 to 0.1.
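A PyTorch sketch of this edge-weighted smoothness term; assigning S1 to the edge band and S2 to the non-edge region follows the reasoning above and is an assumption of this sketch.

```python
import torch

def smooth_loss(image: torch.Tensor,        # (3, H, W) fused image, values in [0, 1]
                edge_mask: torch.Tensor,    # (H, W) boolean mask E (True on the dilated edge band)
                s1: float = 0.9,            # weight on the edge band (assumed assignment)
                s2: float = 0.1) -> torch.Tensor:
    """Edge-weighted total-variation smoothness loss L_smooth."""
    dh = (image[:, 1:, :-1] - image[:, :-1, :-1]).pow(2).sum(dim=0)   # vertical neighbor differences
    dw = (image[:, :-1, 1:] - image[:, :-1, :-1]).pow(2).sum(dim=0)   # horizontal neighbor differences
    tv = dh + dw                                                      # (H-1, W-1)
    e = edge_mask[:-1, :-1].float()
    weights = s1 * e + s2 * (1.0 - e)
    return (weights * tv).sum()
```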
Specifically, the target mask O is used to fuse the foreground and the background, with the expression:
I_merge = (1 − O) × I_background + O × I_render
where I_background denotes the background image and I_render denotes the rendered foreground image.
Specifically, the texture loss L_texture is computed from the background image and the fused image, with the expression:
L_texture = (1 / Size) · Σ_{(i,j)} ( p^merge_{i,j} − p^background_{i,j} )²
where p^merge_{i,j} denotes the pixel value at position (i, j) in the fused image, p^background_{i,j} denotes the pixel value at position (i, j) in the background image, and Size denotes the size of the image. L_texture constrains the distance between the generated camouflage pattern and the background, so that the texture and color of the camouflage pattern are as similar as possible to the surroundings of the camouflaged target.
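A PyTorch sketch combining the fusion equation with the texture loss; the mean over all pixels stands in for the 1/Size normalization.

```python
import torch

def fuse_and_texture_loss(foreground: torch.Tensor,   # I_render, (3, H, W), values in [0, 1]
                          background: torch.Tensor,   # I_background, (3, H, W)
                          mask: torch.Tensor          # feathered target mask O, (1, H, W)
                          ) -> tuple[torch.Tensor, torch.Tensor]:
    """Compute I_merge = (1 - O) * I_background + O * I_render and
    L_texture = mean squared pixel distance between I_merge and the background."""
    fused = (1.0 - mask) * background + mask * foreground
    l_texture = (fused - background).pow(2).mean()
    return fused, l_texture
```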
Specifically, the fused image is fed into a target detector. Two typical detection algorithms are selected: the single-stage detector YOLOv5 and the two-stage detector Faster RCNN. The attack loss L_attack is designed from the class probabilities of the candidate boxes output by the detectors, with the expression:
L_attack = (1 / N) · Σ_{i=1..N} max( Score_i^c(I_merge) − t, 0 )
where N denotes the number of candidate boxes output by the detector, c denotes the object class to be camouflaged, Score_i^c(I_merge) denotes the probability, after the fused image is fed into the detector, that the i-th output candidate box belongs to the class to be camouflaged, and t denotes the probability threshold. In the present invention, the threshold t of YOLOv5 is set to 0.4 and that of Faster RCNN to 0.3.
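A PyTorch sketch of this attack term; the detector-specific extraction of per-box class scores is omitted, and averaging over the N boxes is an assumption of this sketch.

```python
import torch

def attack_loss(class_scores: torch.Tensor, t: float = 0.4) -> torch.Tensor:
    """class_scores: (N,) probabilities that each candidate box belongs to the
    class c to be camouflaged (t = 0.4 for YOLOv5, t = 0.3 for Faster RCNN).
    Boxes already below the threshold contribute nothing, so optimization
    concentrates on detections that still exceed it."""
    return torch.clamp(class_scores - t, min=0.0).mean()
```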
Specifically, the smoothness loss L_smooth, the texture loss L_texture and the attack loss L_attack are weighted and back-propagated, and the optimizer updates the adversarial texture pattern of the target model according to the gradient information. The weighted loss function is:
L = L_attack + λ · L_smooth + β · L_texture
where λ and β are balance factors that can be set as required and L is the final total loss; both λ and β are set to 0.0005. The optimizer is Adam with an initial learning rate of 0.01, and the appearance camouflage pattern of the target model is updated by back-propagation.
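A sketch of one optimization step with this weighted total loss and the truncation into a physically printable color range; `adv_texture` is assumed to be the learnable texture tensor from the rendering step.

```python
import torch

def update_texture(adv_texture: torch.Tensor,
                   optimizer: torch.optim.Optimizer,
                   l_attack: torch.Tensor,
                   l_smooth: torch.Tensor,
                   l_texture: torch.Tensor,
                   lam: float = 5e-4,
                   beta: float = 5e-4) -> float:
    """One back-propagation step of L = L_attack + lam * L_smooth + beta * L_texture,
    followed by truncating the texture so that its colors exist in the physical world."""
    loss = l_attack + lam * l_smooth + beta * l_texture
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    with torch.no_grad():
        adv_texture.clamp_(0.0, 1.0)   # keep the color space printable
    return float(loss.detach())

# optimizer = torch.optim.Adam([adv_texture], lr=0.01)   # settings from this embodiment
```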
Specifically, FIG. 4 shows a tank three-dimensional adversarial texture generated by the present invention; FIG. 5 shows an example of the generated tank texture in a real background; and FIG. 6 shows a ship three-dimensional adversarial texture generated by the present invention together with an example of it in a real background.
The adversarial texture is continuously optimized over multiple iterations using the updated texture information; after all epochs of training are finished, the final three-dimensional adversarial texture is obtained.
It will be understood by those skilled in the art that the foregoing is only a preferred embodiment of the present invention, and is not intended to limit the invention, and that any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the scope of the present invention.

Claims (10)

1. A three-dimensional adversarial texture generation method for a camouflaged target, characterized by comprising:
S1: inputting a 3D model corresponding to the target to be camouflaged, a random initial adversarial texture, a Face_ID file and camera sampling parameters into a neural network renderer to generate a target foreground image under a specific view angle, wherein the Face_ID file is generated from the painting area of the target to be camouflaged;
S2: selecting one image from the background image set as the environment background and applying a view-angle transformation to obtain a target background image, the target background image being aligned with the target foreground image in imaging view angle and imaging distance;
S3: extracting a target mask O and an edge mask E from the target foreground image; computing the smoothness loss L_smooth weighted by the edge mask E; fusing the target foreground image and the target background image using the target mask O to obtain a target fusion image, and computing the texture loss L_texture between the target background image and the target fusion image; feeding the target fusion image into a target detector and computing the attack loss L_attack from the output class confidences; and weighting the smoothness loss L_smooth, the texture loss L_texture and the attack loss L_attack, then back-propagating to obtain gradient information and update the adversarial texture;
S4: inputting the updated adversarial texture into the neural network renderer and selecting another group of parameters from the camera sampling parameters to obtain the target foreground image under a new view angle; and repeating S2 and S3 for the target foreground image under the new view angle until training is complete, thereby obtaining the final adversarial texture pattern.
2. The three-dimensional adversarial texture generation method for a camouflaged target according to claim 1, wherein the camera sampling parameters comprise spatial position information and angular orientation information used to generate the target foreground image at a specific view angle.
3. The three-dimensional adversarial texture generation method for a camouflaged target according to claim 1, wherein S2 comprises:
selecting one background image from the background image set as the environment background and applying a view-angle transformation, so that the transformed target background image is aligned with the target foreground image rendered by the neural renderer in imaging view angle and imaging distance, facilitating the subsequent foreground-background fusion.
4. The three-dimensional adversarial texture generation method for a camouflaged target according to claim 1, wherein extracting a target mask O and an edge mask E from the target foreground image in S3 comprises:
converting the target foreground image to grayscale to obtain a grayscale image, performing binary segmentation on the grayscale image to obtain the target mask O, and feathering the target mask O;
performing Canny edge extraction on the grayscale image to obtain an edge image, and dilating the edge image to obtain the edge mask E, so as to distinguish edge areas from non-edge areas.
5. The three-dimensional adversarial texture generation method for a camouflaged target according to claim 4, wherein the smoothness loss L_smooth is:
L_smooth = S1 · Σ_{(i,j)∈E} [ (p_{i,j} − p_{i+1,j})² + (p_{i,j} − p_{i,j+1})² ] + S2 · Σ_{(i,j)∉E} [ (p_{i,j} − p_{i+1,j})² + (p_{i,j} − p_{i,j+1})² ]
where p_{i,j} denotes the pixel value at position (i, j) in the target foreground image, E denotes the set of pixel positions inside the edge mask E, L_smooth constrains the differences between adjacent pixels in the generated target fusion image, and S1 and S2 are the weighting factors of the smoothness loss.
6. The three-dimensional adversarial texture generation method for a camouflaged target according to claim 4, wherein the texture loss L_texture is:
L_texture = (1 / Size) · Σ_{(i,j)} ( p^merge_{i,j} − p^background_{i,j} )²
where p^merge_{i,j} denotes the pixel value at position (i, j) in the target fusion image, p^background_{i,j} denotes the pixel value at position (i, j) in the target background image, and Size denotes the size of the target background image; L_texture constrains the distance between the generated target fusion image and the target background image.
7. The three-dimensional adversarial texture generation method for a camouflaged target according to claim 4, wherein the target detector employs the single-stage detector YOLOv5 and the two-stage detector Faster RCNN, and the attack loss L_attack is:
L_attack = (1 / N) · Σ_{i=1..N} max( Score_i^c(I_merge) − t, 0 )
where N denotes the number of candidate boxes output by the target detector, c denotes the target class to be camouflaged, Score_i^c(I_merge) denotes the probability, output after the target fusion image is fed into the target detector, that the i-th candidate box belongs to the class to be camouflaged, and t denotes the probability threshold.
8. A three-dimensional adversarial texture generation device for a camouflaged target, characterized in that it performs the three-dimensional adversarial texture generation method for a camouflaged target according to any one of claims 1 to 7, and comprises:
a generation module for inputting a 3D model corresponding to the target to be camouflaged, a random initial adversarial texture, a Face_ID file and camera sampling parameters into the neural network renderer to generate a target foreground image under a specific view angle, wherein the Face_ID file is generated from the painting area of the target to be camouflaged;
a transformation module for selecting one background image from the background image set as the environment background and applying a view-angle transformation to obtain a target background image, the target background image being aligned with the target foreground image in imaging view angle and imaging distance;
a training module for extracting a target mask O and an edge mask E from the target foreground image; computing the smoothness loss L_smooth weighted by the edge mask E; fusing the target foreground image and the target background image using the target mask O to obtain a target fusion image, and computing the texture loss L_texture between the target background image and the target fusion image; feeding the target fusion image into a target detector and computing the attack loss L_attack from the output class confidences; and weighting the smoothness loss L_smooth, the texture loss L_texture and the attack loss L_attack, then back-propagating to obtain gradient information and update the adversarial texture;
an updating module for inputting the updated adversarial texture into the neural network renderer and selecting another group of parameters from the camera sampling parameters to obtain the target foreground image under a new view angle, and repeating S2 and S3 for the target foreground image under the new view angle until training is complete, thereby obtaining the final adversarial texture pattern.
9. An electronic device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor implements the steps of the method of any one of claims 1 to 7 when executing the computer program.
10. A computer-readable storage medium on which a computer program is stored, characterized in that the computer program, when executed by a processor, implements the steps of the method of any one of claims 1 to 7.
CN202211722317.7A (filed 2022-12-30, priority 2022-12-30) — Three-dimensional adversarial texture generation method and device for a camouflaged target — Pending — published as CN115984439A

Priority Applications (1)

Application Number: CN202211722317.7A · Priority date: 2022-12-30 · Filing date: 2022-12-30 · Title: Three-dimensional adversarial texture generation method and device for a camouflaged target

Publications (1)

Publication Number: CN115984439A · Publication Date: 2023-04-18

Family

ID=85971989

Family Applications (1)

Application Number: CN202211722317.7A · Status: Pending · Title: Three-dimensional adversarial texture generation method and device for a camouflaged target

Country Status (1)

CN: CN115984439A

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116702634A (en) * 2023-08-08 2023-09-05 南京理工大学 Full-coverage concealed directional anti-attack method
CN116702634B (en) * 2023-08-08 2023-11-21 南京理工大学 Full-coverage concealed directional anti-attack method


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination