CN114511644A - Self-adaptive digital camouflage method based on deep learning - Google Patents

Self-adaptive digital camouflage method based on deep learning

Info

Publication number
CN114511644A
Authority
CN
China
Prior art keywords
convolution
camouflage
deep learning
frame
self
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210069098.0A
Other languages
Chinese (zh)
Other versions
CN114511644B (en)
Inventor
Zhang Jing (张静)
Huo Yibo (霍一博)
Du Xiaohui (杜晓辉)
Liu Juanxiu (刘娟秀)
Liu Lin (刘霖)
Liu Yong (刘永)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Electronic Science and Technology of China
Original Assignee
University of Electronic Science and Technology of China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Electronic Science and Technology of China
Priority to CN202210069098.0A
Publication of CN114511644A
Application granted
Publication of CN114511644B
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/20 Analysis of motion
    • G06T 7/90 Determination of colour characteristics
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10016 Video; Image sequence
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning
    • G06T 2207/20084 Artificial neural networks [ANN]
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G06N 3/048 Activation functions
    • G06N 3/08 Learning methods

Abstract

The invention discloses an adaptive digital camouflage method based on deep learning, belonging to the field of image processing. A camera or similar device continuously scans the surrounding background; the state of the background at the position of the target to be camouflaged is rapidly analyzed and computed from the spatial information and the relation between preceding and following video frames, and the target is camouflaged in real time at the pixel level, achieving a visual stealth effect. The technique camouflages both static and moving objects well, and is of great significance for nature exploration, covert reconnaissance, national security, and scientific research.

Description

Self-adaptive digital camouflage method based on deep learning
Technical Field
The invention belongs to the field of image processing, and particularly relates to a technique that computes the optimal camouflage colors for a target from the color information of its surroundings, so that the camouflaged target is strongly concealed.
Background
The traditional digital camouflage generation method involves cumbersome steps: it generally extracts the dominant background colors, extracts patches to generate a digital camouflage pattern, and finally fills the generated patches with color to complete the whole process. Although digital camouflage generated by the traditional method is widely used, the persistence-of-vision effect of the human eye means it cannot conceal an object well while the object is moving; adaptive digital camouflage makes up for this defect, so that the object remains well camouflaged in motion. The invention proposes an adaptive digital camouflage algorithm based on deep learning, realizing end-to-end generation of adaptive digital camouflage. The method uses a deep neural network to extract temporal and spatial information from reference frames, and achieves good camouflage while the camouflaged object is moving.
Disclosure of Invention
The invention aims to solve the problem that digital camouflage cannot keep a target well camouflaged while the target is moving, and provides a solution to this problem.
To achieve this, the technical scheme adopted by the invention is as follows. An adaptive digital camouflage method based on deep learning comprises the following steps:
Step 1: shoot a video of the environment around the current target to be camouflaged with a camera, obtaining the video images closest to the current time;
Step 2: generate mask information for the six frames of video images, such that the camouflaged target is black in the mask images and the environment information is white in the mask images;
Step 3: feed the color information from step 1 and the mask information from step 2 as input into the deep-learning-based adaptive camouflage network;
Step 4: process the input from step 3 with the pre-trained weights of the deep neural network to obtain the result of the deep-learning-based adaptive camouflage algorithm;
Step 5: extract from this result the camouflage color information of the current frame, then perform color camouflage using the color-changing metamaterial and flexible OLED of the target to be camouflaged;
Step 6: camouflage at this moment is now complete; repeat steps 1-5 until the camouflage task is finished;
the deep learning self-adaptive camouflage network comprises an encoder and a decoder, and has 17 layers in total; the encoder I has 12 layers, namely 6 layers of serial space-time gating convolution modules with different convolution kernels, 4 layers of serial hole convolution modules and 2 layers of serial space-time gating convolution modules; the decoder I has 5 layers which are a transposition convolution module, a gate control convolution module, a transposition convolution module, a gate control convolution module and a gate control convolution module in sequence;
the space-time gating convolution module is used for obtaining a result by combining gating convolution and space-time convolution simultaneously through continuous frame images input by the device; the gate-controlled convolution is to obtain a branch result by using convolution operation on the original input, then perform characteristic selection on the original input by using the same convolution mode and sigmoid activation function to obtain a result of another branch, and then multiply the results of the two branches to obtain the output of space-time gate-controlled convolution; the space-time convolution is that the front quarter of the number of channels in continuous multi-frame images is integrally moved backward by one frame, the vacant first frame is filled with 0, the next quarter is moved forward by one frame, the information of the vacant last frame is filled with zero, and the last remaining half part is kept unchanged; then, performing feature extraction on the processed continuous multi-frame images by using two-dimensional convolution, and then merging feature information;
the cavity convolution module is used for sampling input continuous frame images at equal intervals by using a convolution kernel;
and when the transposition convolution module is used, four sides of the input image are filled with zero, then the convolution kernel is rotated by 180 degrees, and then the convolution is carried out.
Further, the color information of the surrounding environment of sixty frames closest to the current time is obtained in the step 1, the video images of the sixty frames are sampled at equal intervals, the sampling interval is ten frames, finally, 6 frames of video images are obtained, and the 6 frames of video images are adopted for subsequent processing.
To meet the real-time requirement, the method replaces 3D convolution with space-time gated convolution, which effectively reduces the number of operation parameters. In the second half of the neural network, 2D transposed-convolution and gated-convolution modules decode and generate only the image of the current frame, which minimizes the parameter count of the network and shortens the run time. In experiments, the method runs at more than 12 frames per second on a computer with an AMD Ryzen 5 3600 CPU @ 3.60 GHz, an NVIDIA GeForce GTX 1660 Ti GPU, and 16 GB of RAM.
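As a rough illustration of the saving from swapping 3D convolution for temporal shift plus 2D convolution, the PyTorch comparison below counts the parameters of a single layer of each kind; the 64-channel width is an assumption for illustration, not a figure from the patent.

```python
import torch.nn as nn

conv3d = nn.Conv3d(64, 64, kernel_size=3, padding=1)  # plain 3D convolution
conv2d = nn.Conv2d(64, 64, kernel_size=3, padding=1)  # temporal shift adds no weights

print(sum(p.numel() for p in conv3d.parameters()))   # 110656
print(sum(p.numel() for p in conv2d.parameters()))   # 36928, about 3x fewer
```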
Drawings
FIG. 1 is a flow diagram of the adaptive digital camouflage technique.
FIG. 2 shows the structure of the deep learning network of the present invention.
FIG. 3 shows camouflage results from one experiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the drawings and examples, but the embodiments of the present invention are not limited thereto.
The algorithm flow chart of the invention is shown in FIG. 1. The adaptive digital camouflage method based on deep learning specifically comprises the following steps:
Step 1: shoot a video of the environment around the current target to be camouflaged with a camera, obtain the color information of the surroundings for the sixty frames closest to the current time, and sample these sixty frames at equal intervals of ten frames, finally obtaining six video frames;
Step 2: mask the six video frames so that the camouflaged target is black (pixel value 0) in the mask images and the environment is white (pixel value 255); a minimal code sketch of steps 1 and 2 follows step 6 below;
Step 3: feed the color information from step 1 and the mask information from step 2 as input into the deep-learning-based adaptive camouflage algorithm;
Step 4: process the input from step 3 with the pre-trained weights of the deep neural network to obtain the result of the deep-learning-based adaptive camouflage algorithm;
Step 5: extract from this result the camouflage color information of the current frame, then perform color camouflage using the color-changing metamaterial and flexible OLED of the target to be camouflaged.
Step 6: camouflage at this moment is now complete; repeat steps 1-5 until the camouflage task is finished.
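The following minimal Python sketch illustrates steps 1 and 2 (frame sampling and mask generation). The function names, the OpenCV-based I/O, and the assumption that the target's bounding box is already known are illustrative choices, not part of the patent.

```python
import cv2
import numpy as np

def sample_reference_frames(video_path, num_frames=6, interval=10):
    """Step 1: take the 60 frames nearest the current time and sample them
    at an interval of 10 frames, yielding 6 reference frames."""
    cap = cv2.VideoCapture(video_path)
    frames = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        frames.append(frame)
    cap.release()
    recent = frames[-num_frames * interval:]     # the 60 newest frames
    return recent[::interval][:num_frames]       # every 10th frame -> 6 frames

def make_mask(frame_shape, target_bbox):
    """Step 2: target region black (0), environment white (255).
    target_bbox = (x, y, w, h) is assumed to be known."""
    mask = np.full(frame_shape[:2], 255, dtype=np.uint8)
    x, y, w, h = target_bbox
    mask[y:y + h, x:x + w] = 0
    return mask
```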
The deep-learning-based adaptive camouflage algorithm used in step 3 is described in detail as follows:
a. Overall structure of the algorithm. The algorithm is an end-to-end convolutional neural network built mainly from hole convolution, space-time gated convolution, and transposed convolution. It adopts an encoder-decoder structure with 17 layers in total. The encoder has 12 layers: first 6 serial space-time gated convolution modules with different convolution kernels, then 4 serial hole convolution modules, and finally 2 serial space-time gated convolution modules. The decoder has 5 layers: a transposed convolution module, a gated convolution module, a transposed convolution module, a gated convolution module, and a gated convolution module.
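The following hypothetical PyTorch skeleton mirrors the 17-layer structure just described. The channel widths, kernel sizes, and strides are assumptions (the patent does not specify them; two encoder layers are strided here so that the decoder's two 2x upsamplings restore the input resolution), and SpatioTemporalGatedConv is the module sketched under part c below.

```python
import torch
import torch.nn as nn

class GatedConv2d(nn.Module):
    """Plain 2D gated convolution, used in the 5-layer decoder."""
    def __init__(self, in_ch, out_ch, k=3):
        super().__init__()
        self.feature = nn.Conv2d(in_ch, out_ch, k, padding=k // 2)
        self.gating = nn.Conv2d(in_ch, out_ch, k, padding=k // 2)

    def forward(self, x):
        return nn.functional.elu(self.feature(x)) * torch.sigmoid(self.gating(x))

class AdaptiveCamouflageNet(nn.Module):
    def __init__(self, in_ch=4, feat=64):          # RGB + mask = 4 channels
        super().__init__()
        # Encoder, 12 layers: 6 space-time gated convolutions with different
        # kernels, then 4 hole (dilated) convolutions, then 2 more
        # space-time gated convolutions.
        specs = [(7, 1), (5, 2), (5, 1), (3, 2), (3, 1), (3, 1)]  # (kernel, stride)
        self.st1 = nn.Sequential(*[
            SpatioTemporalGatedConv(in_ch if i == 0 else feat, feat, k, s)
            for i, (k, s) in enumerate(specs)])
        self.dilated = nn.Sequential(*[
            nn.Conv2d(feat, feat, 3, padding=d, dilation=d)
            for d in (2, 4, 8, 16)])
        self.st2 = nn.Sequential(
            SpatioTemporalGatedConv(feat, feat, 3),
            SpatioTemporalGatedConv(feat, feat, 3))
        # Decoder, 5 layers, 2D only: transpose, gated, transpose, gated,
        # gated; it generates the current frame alone.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(feat, feat, 4, stride=2, padding=1),
            GatedConv2d(feat, feat),
            nn.ConvTranspose2d(feat, feat, 4, stride=2, padding=1),
            GatedConv2d(feat, feat),
            GatedConv2d(feat, 3))                  # RGB camouflage colors

    def forward(self, x):                          # x: (N, T=6, C=4, H, W)
        x = self.st1(x)
        n, t, c, h, w = x.shape
        x = self.dilated(x.reshape(n * t, c, h, w)).reshape(n, t, c, h, w)
        x = self.st2(x)
        return self.decoder(x[:, -1])              # decode only the newest frame
```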
b. Introduction to hole convolution
Hole (dilated) convolution is a convolution method that samples the input matrix at equal intervals with its convolution kernel. It has two major benefits: first, it enlarges the receptive field without reducing spatial resolution; second, it captures multi-scale contextual information.
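A one-layer PyTorch illustration of both benefits: with dilation 2, a 3x3 kernel samples every other input position, covering a 5x5 receptive field with no extra parameters and, with matching padding, no loss of resolution. The channel and image sizes are arbitrary.

```python
import torch
import torch.nn as nn

dilated = nn.Conv2d(64, 64, kernel_size=3, padding=2, dilation=2)
y = dilated(torch.randn(1, 64, 32, 32))
print(y.shape)   # torch.Size([1, 64, 32, 32]) -- resolution preserved
```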
c. Introduction to space-time gated convolution
The space-time gated convolution consists of a space-time convolution part and a gating part. The space-time convolution shifts the first quarter of the channel information back by one frame, filling the first frame with zeros; shifts the next quarter forward by one frame, filling the last frame's information with zeros; keeps the remaining half unchanged; then performs feature extraction with two-dimensional convolution and merges the feature information. The gated convolution splits the input into two branches: one applies a convolution (here, the space-time convolution); the other applies the same convolution followed by a sigmoid activation function for feature selection; the results of the two branches are then multiplied to give the output of the space-time gated convolution:

$$\mathrm{Gating}_{y,x} = \sum\sum W_g * I$$
$$\mathrm{TsmFeature}_{y,x} = \sum\sum W_f * I$$
$$\mathrm{Output}_{y,x} = \phi(\mathrm{TsmFeature}_{y,x}) \odot \Gamma(\mathrm{Gating}_{y,x})$$

where $\Gamma$ is the sigmoid function, whose output marks each location as invalid (toward 0) or valid (toward 1); $\phi$ is an activation function such as ReLU or ELU; $W_f$ and $W_g$ denote two different space-time convolution kernels; and $I$ is the input.
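A minimal PyTorch sketch of the space-time gated convolution implementing the equations above; the temporal shift resembles the Temporal Shift Module (TSM). The (N, T, C, H, W) tensor layout, the ELU choice for the activation, and the stride option are assumptions.

```python
import torch
import torch.nn as nn

def temporal_shift(x):
    """Shift the first quarter of channels back one frame (frame 0
    zero-filled), the next quarter forward one frame (last frame
    zero-filled), and leave the remaining half of the channels untouched."""
    q = x.shape[2] // 4
    out = torch.zeros_like(x)
    out[:, 1:, :q] = x[:, :-1, :q]              # backward by one frame
    out[:, :-1, q:2 * q] = x[:, 1:, q:2 * q]    # forward by one frame
    out[:, :, 2 * q:] = x[:, :, 2 * q:]         # last half unchanged
    return out

class SpatioTemporalGatedConv(nn.Module):
    def __init__(self, in_ch, out_ch, kernel_size=3, stride=1):
        super().__init__()
        pad = kernel_size // 2
        self.feature = nn.Conv2d(in_ch, out_ch, kernel_size, stride, pad)  # W_f
        self.gating = nn.Conv2d(in_ch, out_ch, kernel_size, stride, pad)   # W_g

    def forward(self, x):                  # x: (N, T, C, H, W)
        x = temporal_shift(x)              # the space-time part
        n, t, c, h, w = x.shape
        x = x.reshape(n * t, c, h, w)      # fold time into the batch axis
        y = nn.functional.elu(self.feature(x)) * torch.sigmoid(self.gating(x))
        return y.reshape(n, t, *y.shape[1:])
```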
d. Introduction to transposed convolution
Transposed convolution is a special convolution whose output can be larger than its input; it is often used for image upsampling. It zero-fills the four sides of the input, rotates the convolution kernel by 180 degrees, and then performs an ordinary convolution to obtain the final output.
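A short PyTorch sketch: a strided transposed convolution doubling the spatial size, followed by a numerical check of the zero-pad-and-rotate equivalence described above (shown for stride 1). Channel counts and sizes are arbitrary.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# 2x upsampling with a strided transposed convolution.
up = nn.ConvTranspose2d(64, 64, kernel_size=4, stride=2, padding=1)
print(up(torch.randn(1, 64, 16, 16)).shape)   # torch.Size([1, 64, 32, 32])

# For stride 1, transposed convolution equals zero-filling the input on
# all four sides and convolving with the kernel rotated by 180 degrees.
k = torch.randn(1, 1, 3, 3)
x = torch.randn(1, 1, 8, 8)
a = F.conv_transpose2d(x, k)
b = F.conv2d(F.pad(x, (2, 2, 2, 2)), torch.flip(k, dims=[2, 3]))
print(torch.allclose(a, b, atol=1e-5))        # True
```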

Claims (2)

1. A self-adaptive digital camouflage method based on deep learning, comprising the following steps:
Step 1: shoot a video of the environment around the current target to be camouflaged with a camera, obtaining the video images closest to the current time;
Step 2: generate mask information for the six frames of video images, such that the camouflaged target is black in the mask images and the environment information is white in the mask images;
Step 3: feed the color information from step 1 and the mask information from step 2 as input into the deep-learning-based adaptive camouflage network;
Step 4: process the input from step 3 with the pre-trained weights of the deep neural network to obtain the result of the deep-learning-based adaptive camouflage algorithm;
Step 5: extract from this result the camouflage color information of the current frame, then perform color camouflage using the color-changing metamaterial and flexible OLED of the target to be camouflaged;
Step 6: camouflage at this moment is now complete; repeat steps 1-5 until the camouflage task is finished;
wherein the deep-learning adaptive camouflage network comprises an encoder and a decoder, with 17 layers in total; the encoder has 12 layers: 6 serial space-time gated convolution modules with different convolution kernels, 4 serial hole convolution modules, and 2 serial space-time gated convolution modules; the decoder has 5 layers, in order: a transposed convolution module, a gated convolution module, a transposed convolution module, a gated convolution module, and a gated convolution module;
the space-time gated convolution module processes the input consecutive frame images by combining gated convolution and space-time convolution; the gated convolution applies a convolution to the original input to obtain one branch result, applies the same convolution followed by a sigmoid activation function to the original input for feature selection to obtain the other branch result, and multiplies the two branch results to produce the output of the space-time gated convolution; the space-time convolution shifts the first quarter of the channels of the consecutive frames back by one frame, filling the vacated first frame with zeros, shifts the next quarter forward by one frame, filling the vacated last frame with zeros, and keeps the remaining half unchanged; two-dimensional convolution then extracts features from the processed consecutive frames, and the feature information is merged;
the hole convolution module samples the input consecutive frame images at equal intervals with its convolution kernel;
and the transposed convolution module zero-fills the four sides of the input image, rotates the convolution kernel by 180 degrees, and then performs the convolution.
2. The self-adaptive digital camouflage method based on deep learning of claim 1, wherein step 1 obtains the color information of the surroundings for the sixty frames closest to the current time and samples these sixty frames at equal intervals of ten frames, finally obtaining 6 video frames, which are used for subsequent processing.
CN202210069098.0A 2022-01-21 2022-01-21 Adaptive digital camouflage method based on deep learning Active CN114511644B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210069098.0A CN114511644B (en) 2022-01-21 2022-01-21 Adaptive digital camouflage method based on deep learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210069098.0A CN114511644B (en) 2022-01-21 2022-01-21 Adaptive digital camouflage method based on deep learning

Publications (2)

Publication Number Publication Date
CN114511644A true CN114511644A (en) 2022-05-17
CN114511644B CN114511644B (en) 2023-06-09

Family

ID=81549781

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210069098.0A Active CN114511644B (en) 2022-01-21 2022-01-21 Adaptive digital camouflage method based on deep learning

Country Status (1)

Country Link
CN (1) CN114511644B (en)

Patent Citations (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070171453A1 (en) * 2005-11-24 2007-07-26 Canon Kabushiki Kaisha Image processing apparatus, image processing method, program, and storage medium
CN104835104A (en) * 2014-02-07 2015-08-12 湖南第一师范学院 Digital image active camouflage scheme based on fractal
US20170336173A1 (en) * 2014-11-19 2017-11-23 Folium Optics Ltd Adaptive camouflage device, system, method and camouflage apparatus
CN104376236A (en) * 2014-12-02 2015-02-25 上海出版印刷高等专科学校 Scheme self-adaptive digital watermark embedding and extracting method based on camouflage technology
US10282628B1 (en) * 2016-04-08 2019-05-07 Joshua LaRue System for producing camouflage pattern based upon perception machine vision
US20190392583A1 (en) * 2018-06-21 2019-12-26 International Business Machines Corporation Segmenting irregular shapes in images using deep region growing
WO2020037965A1 (en) * 2018-08-21 2020-02-27 北京大学深圳研究生院 Method for multi-motion flow deep convolutional network model for video prediction
WO2020215236A1 (en) * 2019-04-24 2020-10-29 哈尔滨工业大学(深圳) Image semantic segmentation method and system
CN110245598A (en) * 2019-06-06 2019-09-17 北京瑞莱智慧科技有限公司 It fights sample generating method, device, medium and calculates equipment
CN110348319A (en) * 2019-06-18 2019-10-18 武汉大学 A kind of face method for anti-counterfeit merged based on face depth information and edge image
CN111292220A (en) * 2020-01-19 2020-06-16 西北工业大学 Target camouflage image generation method for target image recognition of synthetic aperture radar
CN111142081A (en) * 2020-04-07 2020-05-12 湖南赛博诺格电子科技有限公司 Radar self-adaptive camouflage method based on digital coding super surface
WO2021244079A1 (en) * 2020-06-02 2021-12-09 苏州科技大学 Method for detecting image target in smart home environment
CN112288647A (en) * 2020-10-13 2021-01-29 武汉大学 Remote sensing image cloud and shadow restoration method based on gating convolution
CN112446357A (en) * 2020-12-15 2021-03-05 电子科技大学 SAR automatic target recognition method based on capsule network
CN112733744A (en) * 2021-01-14 2021-04-30 北京航空航天大学 Camouflage object detection model based on edge cooperative supervision and multi-level constraint
CN112801895A (en) * 2021-01-15 2021-05-14 南京邮电大学 Two-stage attention mechanism-based GAN network image restoration algorithm
CN112801914A (en) * 2021-02-09 2021-05-14 北京工业大学 Two-stage image restoration method based on texture structure perception
CN113450561A (en) * 2021-05-06 2021-09-28 浙江工业大学 Traffic speed prediction method based on space-time graph convolution-generation countermeasure network
CN113538505A (en) * 2021-06-17 2021-10-22 杭州电子科技大学 Motion estimation system and method of single picture based on deep learning
CN113507625A (en) * 2021-06-22 2021-10-15 西安理工大学 Self-adaptive video restoration method
CN113298736A (en) * 2021-06-24 2021-08-24 河北工业大学 Face image restoration method based on face pattern

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
HOUDI XIAO et al., "Fast self-adaptive digital camouflage design method based on deep learning", vol. 10, no. 5, pages 1-12
QIAN C et al., "Deep-learning-enabled self-adaptive microwave cloak without human intervention", vol. 14, no. 6, pages 383-390
VAN DER BURG E et al., "Adaptive camouflage of moving target", pages 6-13
ZHOU Ying et al., "Simulation design of concealment strategies for camouflaged targets based on evolutionary computation", vol. 42, no. 9, pages 1987-1997
XU Changyong et al., "A survey of video information camouflage technology", pages 8-11
YANG Jianxi et al., "Road network travel speed prediction based on multi-period component spatio-temporal neural networks", vol. 21, no. 3, pages 112-119

Also Published As

Publication number Publication date
CN114511644B (en) 2023-06-09

Similar Documents

Publication Publication Date Title
Zhang et al. DCSR: Dilated convolutions for single image super-resolution
Liu et al. MLFcGAN: Multilevel feature fusion-based conditional GAN for underwater image color correction
EP4198875A1 (en) Image fusion method, and training method and apparatus for image fusion model
US8639056B2 (en) Contrast enhancement
CN108830913B (en) Semantic level line draft coloring method based on user color guidance
CN111798400A (en) Non-reference low-illumination image enhancement method and system based on generation countermeasure network
CN110889370B (en) System and method for synthesizing face by end-to-end side face based on condition generation countermeasure network
KR102509817B1 (en) Image processing method and apparatus, computer device, and storage medium
Anvari et al. Dehaze-GLCGAN: unpaired single image de-hazing via adversarial training
CN112001843A (en) Infrared image super-resolution reconstruction method based on deep learning
CN114677479A (en) Natural landscape multi-view three-dimensional reconstruction method based on deep learning
CN113902925A (en) Semantic segmentation method and system based on deep convolutional neural network
Virtusio et al. Interactive style transfer: Towards styling user-specified object
CN115035011A (en) Low-illumination image enhancement method for self-adaptive RetinexNet under fusion strategy
CN112435165B (en) Two-stage video super-resolution reconstruction method based on generation countermeasure network
CN114511644B (en) Adaptive digital camouflage method based on deep learning
CN116916114A (en) Speaker video generation method and device and electronic equipment
CN111861877A (en) Method and apparatus for video hyper-resolution
WO2021057091A1 (en) Viewpoint image processing method and related device
CN115512039A (en) 3D face construction method
Zhang et al. Research on image super-resolution reconstruction based on deep learning
CN116012501A (en) Image generation method based on style content self-adaptive normalized posture guidance
CN115883761A (en) Temporal video enhancement method
CN116266336A (en) Video super-resolution reconstruction method, device, computing equipment and storage medium
CN117635478B (en) Low-light image enhancement method based on spatial channel attention

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant