CN114429436A - Image migration method and system for reducing domain difference

Image migration method and system for reducing domain difference

Info

Publication number
CN114429436A
Authority
CN
China
Prior art keywords
image
migration
virtual
generator
auxiliary
Prior art date
Legal status
Pending
Application number
CN202210086618.9A
Other languages
Chinese (zh)
Inventor
高瑞
梅海艺
张道良
Current Assignee
Shandong University
Original Assignee
Shandong University
Priority date
Filing date
Publication date
Application filed by Shandong University
Priority to CN202210086618.9A
Publication of CN114429436A
Legal status: Pending

Classifications

    • G: PHYSICS
      • G06: COMPUTING; CALCULATING OR COUNTING
        • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
          • G06T 5/00: Image enhancement or restoration
            • G06T 5/50: Image enhancement or restoration using two or more images, e.g. averaging or subtraction
          • G06T 2207/00: Indexing scheme for image analysis or image enhancement
            • G06T 2207/10: Image acquisition modality
              • G06T 2207/10004: Still image; Photographic image
                • G06T 2207/10012: Stereo images
            • G06T 2207/20: Special algorithmic details
              • G06T 2207/20081: Training; Learning
              • G06T 2207/20084: Artificial neural networks [ANN]
              • G06T 2207/20212: Image combination
                • G06T 2207/20221: Image fusion; Image merging
        • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
          • G06N 3/00: Computing arrangements based on biological models
            • G06N 3/02: Neural networks
              • G06N 3/04: Architecture, e.g. interconnection topology
                • G06N 3/045: Combinations of networks
              • G06N 3/08: Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides an image migration method and system for reducing domain differences, comprising the following steps: acquiring a virtual image; generating a plurality of auxiliary maps of the virtual image with different shaders; and obtaining a migrated image from the plurality of auxiliary maps with the generator of a convolution-kernel-based generative adversarial network (GAN). The generator convolves each auxiliary map with its own convolution layer, adds the outputs of these convolution layers, and feeds the sum into a further convolution layer to obtain a fused feature map, from which the migrated image is obtained. The domain difference between the generated migrated image and real images is reduced, and the realism of the generated migrated image is improved.

Description

Image migration method and system for reducing domain difference
Technical Field
The invention belongs to the technical field of computer vision, and particularly relates to an image migration method and system for reducing domain difference.
Background
The statements in this section merely provide background information related to the present disclosure and may not necessarily constitute prior art.
With the continuous development and iteration of hardware such as the graphics processing unit (GPU), modern graphics techniques represented by real-time ray tracing have become widely known in recent years, and technology demonstrations of the NVIDIA RTX series of graphics cards and Unreal Engine 5 have further raised interest in computer graphics. Virtual images produced by high-quality rendering are almost indistinguishable from real images; lifelike film special effects, for example, can pass for reality. Advanced computer graphics has therefore entered the field of view of artificial intelligence researchers. Realistic virtual images can be used to assist the training of deep learning algorithms: an algorithm trained on a large amount of virtual data combined with a small amount of real data can approach, or even exceed, the performance of the same algorithm trained only on real data. Compared with collecting real data, generating virtual data is efficient and inexpensive: with the support of a high-performance graphics processor and a high-quality real-time rendering algorithm, tens of thousands of labelled virtual images can be generated within an hour, whereas collecting the same amount of real data, from finding a site, setting up acquisition equipment and capturing the data to annotating it, takes at least several weeks.
However, despite their many advantages, virtual images present a key problem: a domain gap exists between virtual images and real images. The training process of deep learning is data-dependent, and different tasks in different scenes often require different training data, which limits the generalization ability of deep learning; when a certain domain difference exists between the training set and the test set, accuracy drops sharply. This domain gap restricts the usability of virtual images.
Disclosure of Invention
To solve the above technical problems in the background art, the present invention provides an image migration method and system for reducing domain differences, which reduce the domain difference between the generated migrated image and real images and improve the realism of the generated migrated image.
In order to achieve the purpose, the invention adopts the following technical scheme:
a first aspect of the present invention provides an image migration method of reducing a domain difference, including:
acquiring a virtual image;
generating a plurality of auxiliary graphs of the virtual image by adopting different shaders based on the virtual image;
based on a plurality of auxiliary images of the virtual image, adopting a generator for generating a countermeasure network based on a convolution kernel to obtain a migration image;
the generator for generating the countermeasure network based on the convolution kernel adopts the convolution layers to convolve a plurality of auxiliary graphs respectively, the output of the convolution layers is subjected to addition operation and then input into one convolution layer to obtain a fusion characteristic graph, and a migration image is obtained based on the fusion characteristic graph.
Further, the training method of the convolution-kernel-based generative adversarial network comprises:
inputting a plurality of auxiliary maps of the virtual images in a training set into the generator of the convolution-kernel-based generative adversarial network, which outputs a migrated image;
inputting the migrated image and a real image from the training set into the discriminator of the convolution-kernel-based generative adversarial network; and
alternately updating the parameters of the generator and the discriminator.
Further, the parameters of the generator and the discriminator are alternately updated by a gradient descent method.
Further, the training set comprises a real image and a plurality of pieces of virtual data, each piece of virtual data comprising a virtual image and its plurality of auxiliary maps.
Further, the auxiliary maps comprise a color map, a depth map, a normal map and a semantic segmentation map.
Further, the virtual image is composed of a virtual scene and a three-dimensional object.
Further, the shaders are built with the shader editor of Blender.
A second aspect of the present invention provides an image migration system for reducing domain differences, comprising:
an image acquisition module configured to acquire a virtual image;
an auxiliary map generation module configured to generate a plurality of auxiliary maps of the virtual image by using different shaders;
an image migration module configured to obtain a migrated image from the plurality of auxiliary maps of the virtual image by using the generator of a convolution-kernel-based generative adversarial network;
wherein the generator of the convolution-kernel-based generative adversarial network convolves each of the auxiliary maps with its own convolution layer, adds the outputs of these convolution layers, and inputs the sum into a further convolution layer to obtain a fused feature map, and the migrated image is obtained from the fused feature map.
A third aspect of the present invention provides a computer-readable storage medium on which a computer program is stored, wherein the program, when executed by a processor, implements the steps of the image migration method for reducing domain differences described above.
A fourth aspect of the present invention provides a computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the program, implements the steps of the image migration method for reducing domain differences described above.
Compared with the prior art, the invention has the beneficial effects that:
the invention provides an image migration method for reducing domain differences, which can rapidly output a large number of virtual images in the form of virtual images in the face of different requirements and tasks, and uses a generation countermeasure network based on a convolution kernel to perform domain migration on the virtual images so as to reduce the domain differences between the virtual images and real images, so that a deep learning algorithm can be trained only through a large number of virtual images and forecast on the real images, the domain differences between the migrated images and the real images are greatly reduced, the feasibility of using the migrated images as deep learning training data is improved, and the landing efficiency of the deep learning in various scenes is further improved.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, are included to provide a further understanding of the invention; they illustrate exemplary embodiments of the invention and, together with the description, serve to explain the invention without limiting it.
Fig. 1 is a partial structural diagram of a generator according to a first embodiment of the present invention;
FIG. 2 is a diagram of a virtual image according to a first embodiment of the present invention;
FIG. 3 is a schematic diagram of a color map, a depth map, a normal map and a semantic segmentation map of a virtual image according to a first embodiment of the present invention;
fig. 4 is a diagram illustrating migration results according to the first embodiment of the present invention.
Detailed Description
The invention is further described with reference to the following figures and examples.
It is to be understood that the following detailed description is exemplary and is intended to provide further explanation of the invention as claimed. Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.
It is noted that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to limit exemplary embodiments according to the invention. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well unless the context clearly indicates otherwise, and it should be understood that the terms "comprises" and/or "comprising", when used in this specification, specify the presence of stated features, steps, operations, devices, components and/or combinations thereof.
Example one
This embodiment provides an image migration method for reducing domain differences, in which a rendering engine is used to generate labelled virtual images and a generative adversarial network (CycleGAN) is used to further reduce the domain difference. The method specifically comprises the following steps:
step 1, acquiring a virtual image constructed by a user.
Specifically, according to requirements and tasks, a reasonable virtual scene is built in a three-dimensional graphics engine, a virtual three-dimensional scene is formed by the three-dimensional scene and the three-dimensional objects, and the virtual three-dimensional scene is projected to a display plane to generate a virtual image as shown in fig. 2 after a virtual camera is placed. The three-dimensional graphics engine includes a rendering engine (renderer), three-dimensional scene editing, three-dimensional modeling, and the like, and software such as blend, C4D, and the like is a three-dimensional graphics engine.
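Step 1 can also be scripted. Below is a minimal sketch using Blender's Python API (bpy) that places a virtual camera and renders the virtual image; the resolution, camera pose and output path are illustrative assumptions, not values prescribed by this embodiment.

```python
import bpy

# Assumed illustrative setup: a Cycles scene that renders a single still image.
scene = bpy.context.scene
scene.render.engine = 'CYCLES'
scene.render.resolution_x = 1024
scene.render.resolution_y = 768

# Place a virtual camera (location/rotation are arbitrary example values)
# and make it the active camera of the scene.
bpy.ops.object.camera_add(location=(0.0, -6.0, 2.0), rotation=(1.2, 0.0, 0.0))
scene.camera = bpy.context.object

# Project the virtual three-dimensional scene onto the image plane and save the result.
scene.render.filepath = "//virtual_image.png"  # path relative to the .blend file (assumption)
bpy.ops.render.render(write_still=True)
```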
Step 2: based on the virtual three-dimensional scene, the renderer synchronously generates a plurality of auxiliary maps of the virtual image (a color map, a depth map, a normal map and a semantic segmentation map) using four different user-designed shaders.
Specifically, four different shaders are designed, and based on them the renderer generates the color map, depth map, normal map and semantic segmentation map shown in Fig. 3.
In practical applications, the color map, depth map, normal map and semantic segmentation map are used not only for the image migration task but also for tasks such as image semantic segmentation, object detection and classification, which require corresponding labels (for image semantic segmentation, the label of each pixel is the object class it belongs to; for object detection, the labels are the detection boxes; for classification, the label is the class of the image). The four shaders can be designed according to the required labels, so that the color map, depth map, normal map and semantic segmentation map shown in Fig. 3 are generated together with their labels.
The shaders are designed with the shader editor of Blender; the renderer is the Cycles renderer.
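The embodiment obtains the four auxiliary maps from hand-built shaders in Blender's shader editor. As an alternative way to produce equivalent outputs, the sketch below uses Cycles render passes and the compositor instead of custom shaders; pass and socket names ("Depth", "IndexOB", etc.) vary between Blender versions and are assumptions here, and each object's `pass_index` must be set beforehand for the index pass to serve as a semantic segmentation map.

```python
import bpy

scene = bpy.context.scene
view_layer = scene.view_layers[0]

# Enable the per-pixel data corresponding to the auxiliary maps.
view_layer.use_pass_z = True             # depth map
view_layer.use_pass_normal = True        # normal map
view_layer.use_pass_object_index = True  # per-object index, usable as a segmentation map
# The combined (color) pass is always rendered.

# Route the passes to image files through the compositor.
scene.use_nodes = True
tree = scene.node_tree
tree.nodes.clear()
rl = tree.nodes.new("CompositorNodeRLayers")
out = tree.nodes.new("CompositorNodeOutputFile")
out.base_path = "//aux_maps"  # illustrative output directory

# The default file slot of the output node is named "Image"; use it for the color map.
tree.links.new(rl.outputs["Image"], out.inputs["Image"])
for slot_name, socket_name in [("depth", "Depth"),
                               ("normal", "Normal"),
                               ("segmentation", "IndexOB")]:
    out.file_slots.new(slot_name)
    # Socket names depend on the Blender version; adjust if necessary.
    tree.links.new(rl.outputs[socket_name], out.inputs[slot_name])

bpy.ops.render.render(write_still=True)
```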
Step 3: obtain the migrated image from the color map, depth map, normal map and semantic segmentation map of the virtual image by using the generator of the convolution-kernel-based generative adversarial network. Specifically, as shown in Fig. 1, the generator convolves the color map, depth map, normal map and semantic segmentation map with four separate convolution layers (Conv), adds the outputs of the four convolution layers, and inputs the sum into a further convolution layer to obtain a fused feature map; the migrated image (the image migrated to the target domain) is obtained from the fused feature map.
Specifically, the input module of the generator of the generative adversarial network CycleGAN is replaced by the convolution block RConv shown in Fig. 1, giving the convolution-kernel-based generative adversarial network.
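For concreteness, the RConv input block described above can be sketched in PyTorch as follows. This is a minimal sketch rather than the patented network: the channel counts, kernel sizes and output width are illustrative assumptions, and the downstream CycleGAN-style generator body (residual blocks and decoder) is omitted.

```python
import torch
import torch.nn as nn

class RConv(nn.Module):
    """Input block sketched after the described RConv: one convolution per
    auxiliary map, element-wise addition of the four outputs, then one more
    convolution that produces the fused feature map."""

    def __init__(self, out_channels: int = 64):
        super().__init__()
        # Channel counts are assumptions: RGB color map, 1-channel depth map,
        # 3-channel normal map, 1-channel semantic segmentation map.
        self.conv_color = nn.Conv2d(3, out_channels, kernel_size=7, padding=3)
        self.conv_depth = nn.Conv2d(1, out_channels, kernel_size=7, padding=3)
        self.conv_normal = nn.Conv2d(3, out_channels, kernel_size=7, padding=3)
        self.conv_seg = nn.Conv2d(1, out_channels, kernel_size=7, padding=3)
        self.fuse = nn.Conv2d(out_channels, out_channels, kernel_size=3, padding=1)

    def forward(self, x_r, x_d, x_n, x_m):
        # Convolve each auxiliary map separately, add the results, then fuse.
        summed = (self.conv_color(x_r) + self.conv_depth(x_d)
                  + self.conv_normal(x_n) + self.conv_seg(x_m))
        return self.fuse(summed)


if __name__ == "__main__":
    block = RConv(out_channels=64)
    x_r = torch.randn(1, 3, 256, 256)  # color map
    x_d = torch.randn(1, 1, 256, 256)  # depth map
    x_n = torch.randn(1, 3, 256, 256)  # normal map
    x_m = torch.randn(1, 1, 256, 256)  # semantic segmentation map
    print(block(x_r, x_d, x_n, x_m).shape)  # torch.Size([1, 64, 256, 256])
```

The fused feature map would then be passed to the rest of a CycleGAN-style generator, which is not reproduced here.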
The training method of the convolution-kernel-based generative adversarial network is as follows:
a training set is acquired, comprising a real image and a plurality of pieces of virtual data, each piece of virtual data comprising a virtual image (source domain) together with its color map, depth map, normal map and semantic segmentation map;
the generator of the convolution-kernel-based generative adversarial network takes as input the color map, depth map, normal map and semantic segmentation map of a virtual image (source domain) from the training set and outputs a migrated image (an image migrated to the target domain); the migrated image and a real image from the training set (data of the target domain) are input into the discriminator of the convolution-kernel-based generative adversarial network; and the parameters of the generator and the discriminator are alternately updated using a gradient descent method.
After training is finished, only the generator is used to perform domain migration on rendered images, migrating virtual images towards the real-image domain. The domain difference between the generated migrated image and real images is reduced, and the realism of the generated migrated image is improved.
Because of the introduction of RConv, the identity mapping loss in the CycleGAN loss function also needs to be modified accordingly. Let the color map of the source domain be x_r, the depth map x_d, the normal map x_n and the semantic segmentation map x_m; let the generator be G with output G(x), where x = [x_r, x_d, x_n, x_m]; and let a real image (data of the target domain) be y. Like CycleGAN, the convolution-kernel-based generative adversarial network also contains an auxiliary generator F whose input is target-domain data and whose output is source-domain data; combined with the identity mapping loss, it keeps the content of the input and the output of generator G consistent (only the domain is migrated, the content is not changed). The invention modifies the identity mapping loss of CycleGAN to:
$L_{\mathrm{identity}}(G) = \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}\big[\,\lVert F(G(x)) - x_r \rVert_1\,\big]$
where p_data(x) denotes the distribution of the source-domain data. With the RConv block, the advantages of the virtual data can be exploited to the greatest extent, and the normal map, depth map and the like serve as supplementary information, so that the convolution-kernel-based generative adversarial network produces results that better conform to real images. The constructed virtual images and the real images of the target domain are input into the convolution-kernel-based generative adversarial network, and domain migration is performed on the virtual data; the results are shown in Fig. 4. Compared with the original CycleGAN generator (the lower-right image of Fig. 4), the generator that uses the RConv block as input produces less ghosting, and its content is closer to the original rendering. Using the migrated images as a data set to train a deep learning algorithm can improve the accuracy of the algorithm; compared with using unmigrated virtual data, the migrated data makes deep learning training more stable and yields better generalization.
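Expressed in code, the modified term is an L1 penalty between the auxiliary generator's reconstruction and the source-domain color map. A minimal sketch follows; the function name, the argument order and the `F_aux` interface are illustrative assumptions.

```python
from torch.nn.functional import l1_loss

def identity_loss(G, F_aux, x_r, x_d, x_n, x_m):
    """Modified identity mapping loss  L_identity(G) = E[ || F(G(x)) - x_r ||_1 ],
    where x = [x_r, x_d, x_n, x_m]. G maps the auxiliary maps to the target
    (real-image) domain; F_aux maps target-domain images back to source-domain
    color images, as in CycleGAN."""
    migrated = G(x_r, x_d, x_n, x_m)    # G(x): image migrated to the target domain
    reconstructed = F_aux(migrated)     # F(G(x)): mapped back to the source domain
    return l1_loss(reconstructed, x_r)  # batch mean approximates the expectation
```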
Example two
This embodiment provides an image migration system for reducing domain differences, which comprises the following modules:
an image acquisition module configured to acquire a virtual image;
an auxiliary map generation module configured to generate a plurality of auxiliary maps of the virtual image by using different shaders;
an image migration module configured to obtain a migrated image from the plurality of auxiliary maps of the virtual image by using the generator of a convolution-kernel-based generative adversarial network;
wherein the generator of the convolution-kernel-based generative adversarial network convolves each of the auxiliary maps with its own convolution layer, adds the outputs of these convolution layers, and inputs the sum into a further convolution layer to obtain a fused feature map, and the migrated image is obtained from the fused feature map.
It should be noted that the modules in this embodiment correspond one-to-one to the steps in the first embodiment and their specific implementation is the same, so it is not repeated here.
EXAMPLE III
The present embodiment provides a computer-readable storage medium on which a computer program is stored, wherein the program, when executed by a processor, implements the steps of the image migration method for reducing domain differences described in the first embodiment.
Example four
The present embodiment provides a computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the program, implements the steps of the image migration method for reducing domain differences described in the first embodiment.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of a hardware embodiment, a software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above may be implemented by a computer program, which may be stored in a computer readable storage medium and executed by a computer to implement the processes of the embodiments of the methods described above. The storage medium may be a magnetic disk, an optical disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), or the like.
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (10)

1. An image migration method for reducing domain differences, comprising:
acquiring a virtual image;
generating a plurality of auxiliary maps of the virtual image by using different shaders;
obtaining a migrated image from the plurality of auxiliary maps of the virtual image by using the generator of a convolution-kernel-based generative adversarial network;
wherein the generator of the convolution-kernel-based generative adversarial network convolves each of the auxiliary maps with its own convolution layer, adds the outputs of these convolution layers, and inputs the sum into a further convolution layer to obtain a fused feature map, and the migrated image is obtained from the fused feature map.
2. The image migration method for reducing domain differences according to claim 1, wherein the training method of the convolution-kernel-based generative adversarial network comprises:
inputting a plurality of auxiliary maps of the virtual images in a training set into the generator of the convolution-kernel-based generative adversarial network, which outputs a migrated image;
inputting the migrated image and a real image from the training set into the discriminator of the convolution-kernel-based generative adversarial network; and
alternately updating the parameters of the generator and the discriminator.
3. The image migration method for reducing domain differences according to claim 2, wherein the parameters of the generator and the discriminator are alternately updated by a gradient descent method.
4. The image migration method for reducing domain differences according to claim 2, wherein the training set comprises a real image and a plurality of pieces of virtual data, each piece of virtual data comprising a virtual image and its plurality of auxiliary maps.
5. The image migration method for reducing domain differences according to claim 1, wherein the auxiliary maps comprise a color map, a depth map, a normal map and a semantic segmentation map.
6. The image migration method for reducing domain differences according to claim 1, wherein the virtual image is composed of a virtual scene and a three-dimensional object.
7. The image migration method for reducing domain differences according to claim 1, wherein the shaders are obtained through the shader editor of Blender.
8. An image migration system for reducing domain differences, comprising:
an image acquisition module configured to acquire a virtual image;
an auxiliary map generation module configured to generate a plurality of auxiliary maps of the virtual image by using different shaders;
an image migration module configured to obtain a migrated image from the plurality of auxiliary maps of the virtual image by using the generator of a convolution-kernel-based generative adversarial network;
wherein the generator of the convolution-kernel-based generative adversarial network convolves each of the auxiliary maps with its own convolution layer, adds the outputs of these convolution layers, and inputs the sum into a further convolution layer to obtain a fused feature map, and the migrated image is obtained from the fused feature map.
9. A computer-readable storage medium on which a computer program is stored, wherein the program, when executed by a processor, implements the steps of the image migration method for reducing domain differences according to any one of claims 1 to 7.
10. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the program, implements the steps of the image migration method for reducing domain differences according to any one of claims 1 to 7.
CN202210086618.9A 2022-01-25 2022-01-25 Image migration method and system for reducing domain difference Pending CN114429436A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210086618.9A CN114429436A (en) 2022-01-25 2022-01-25 Image migration method and system for reducing domain difference

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210086618.9A CN114429436A (en) 2022-01-25 2022-01-25 Image migration method and system for reducing domain difference

Publications (1)

Publication Number Publication Date
CN114429436A 2022-05-03

Family

ID=81314049

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210086618.9A Pending CN114429436A (en) 2022-01-25 2022-01-25 Image migration method and system for reducing domain difference

Country Status (1)

Country Link
CN (1) CN114429436A (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110705499A (en) * 2019-10-12 2020-01-17 成都考拉悠然科技有限公司 Crowd counting method based on transfer learning
CN111445476A (en) * 2020-02-27 2020-07-24 上海交通大学 Monocular depth estimation method based on multi-mode unsupervised image content decoupling
CN111783610A (en) * 2020-06-23 2020-10-16 西北工业大学 Cross-domain crowd counting method based on de-entangled image migration
CN111738172A (en) * 2020-06-24 2020-10-02 中国科学院自动化研究所 Cross-domain target re-identification method based on feature counterstudy and self-similarity clustering
CN112508031A (en) * 2020-12-22 2021-03-16 北京航空航天大学 Unsupervised remote sensing image semantic segmentation method and model from virtual to reality
CN112819873A (en) * 2021-02-05 2021-05-18 四川大学 High-generalization cross-domain road scene semantic segmentation method and system
CN113744158A (en) * 2021-09-09 2021-12-03 讯飞智元信息科技有限公司 Image generation method and device, electronic equipment and storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
RICHTER S R: "Enhancing Photorealism Enhancement", arXiv, 10 May 2021 (2021-05-10), pages 1-16 *
王鹏淇; 孟令中; 董乾; 杨光; 师源; 薛云志: "ObjectGAN: 自动驾驶评估数据集构建" [ObjectGAN: constructing a dataset for autonomous driving evaluation], 测控技术 [Measurement & Control Technology], no. 08, 18 August 2020 (2020-08-18) *

Similar Documents

Publication Publication Date Title
CN110675489B (en) Image processing method, device, electronic equipment and storage medium
EP2080167B1 (en) System and method for recovering three-dimensional particle systems from two-dimensional images
JP5055214B2 (en) Image processing apparatus and image processing method
EP3533218B1 (en) Simulating depth of field
KR101670958B1 (en) Data processing method and apparatus in heterogeneous multi-core environment
CN112651881A (en) Image synthesis method, apparatus, device, storage medium, and program product
CN113808005A (en) Video-driving-based face pose migration method and device
CN105892681A (en) Processing method and device of virtual reality terminal and scene thereof
US20160093112A1 (en) Deep image identifiers
CN111353069A (en) Character scene video generation method, system, device and storage medium
Li et al. 2D amodal instance segmentation guided by 3D shape prior
US11217002B2 (en) Method for efficiently computing and specifying level sets for use in computer simulations, computer graphics and other purposes
CN114429436A (en) Image migration method and system for reducing domain difference
CN112002019B (en) Method for simulating character shadow based on MR mixed reality
Le Van et al. An effective RGB color selection for complex 3D object structure in scene graph systems
Li et al. Video vectorization via bipartite diffusion curves propagation and optimization
JP2666353B2 (en) High-speed video generation by ray tracing
CN116721194B (en) Face rendering method and device based on generation model
Liu et al. 3D Animation Graphic Enhancing Process Effect Simulation Analysis
DU et al. Terrain Edge Stitching Based On Least Squares Generative Adversarial Networks
CN112652059B (en) Mesh R-CNN model-based improved target detection and three-dimensional reconstruction method
Meng et al. Synthesizing Data for Autonomous Driving: Multi-Agent Reinforcement Learning Meets Augmented Reality
Yao et al. Neural Radiance Field-based Visual Rendering: A Comprehensive Review
CN103186648B (en) A kind of method and device of page graphic primitive overlap process
Zhan et al. High-Quality Graphic Rendering and Synthesis of Design Graphics Using Generating Adversarial Networks

Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination