CN108682041B - Method for performing multi-light-source rendering based on matrix row and column sampling and deep learning - Google Patents
- Publication number: CN108682041B
- Application number: CN201810320587.2A
- Authority
- CN
- China
- Prior art keywords
- matrix
- row
- image
- reduction matrix
- random reduction
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/005—General purpose rendering architectures
- G06T15/50—Lighting effects
Abstract
The invention discloses a method for rendering multiple light sources based on matrix row-and-column sampling and deep learning, comprising the following steps: step 1, establishing an illumination matrix from a three-dimensional scene; step 2, randomly extracting a number of rows from the illumination matrix to obtain a primary random reduction matrix; step 3, randomly extracting a number of rows from the primary random reduction matrix to obtain a secondary random reduction matrix; step 4, drawing a primary random reduction matrix image and a secondary random reduction matrix image for each of a set of different viewpoints; step 5, training a deep neural network model with the pairs of primary and secondary random reduction matrix images; and step 6, when drawing a high-realism image in real time, inputting a newly drawn secondary random reduction matrix image into the trained deep neural network model, whose output is the complete high-realism image. With the trained deep neural network model, the method performs multi-light-source rendering quickly and accurately.
Description
Technical Field
The invention relates to the field of computer image processing, in particular to a method for rendering multiple light sources based on matrix row-column sampling and deep learning.
Background
Rendering complex scenes with indirect lighting, high-dynamic-range environment lighting, and many direct light sources is challenging. Studies have shown that this class of problems can be recast as a many-light problem: all illumination is converted into a collection of point light sources, so that the rendering of indirect lighting becomes a single many-light problem. Directly rendering with thousands of point lights is obviously very difficult. The Lightcuts framework provides a scalable approach to the many-point-light problem, and with its visibility culling algorithm a CPU-based ray tracer can complete the computation within minutes.
In practical applications the light sources and the illuminated objects stand in a relative positional relationship, and rendering must be performed according to that relationship. In interactive settings, such as film production or a structural design process, rendering must respond in real time to changes in the relative positions of lights and objects, which entails a great amount of computation. Existing methods address this by preprocessing: rendering is completed in advance for the various positional configurations, and the results are read back directly during the interaction stage. This amortizes the total computation over the preprocessing stage. The approach, however, has two significant drawbacks: 1. a large amount of memory is occupied by the precomputed data; 2. in a scene using this method, only the light sources or only the objects may move. This greatly limits the method's range of application.
As hardware dedicated to processing images, the GPU provides built-in acceleration, including shadow-mapping support and programmable shaders, offering computational acceleration and parallel processing for graphics rendering. Rendering on the GPU effectively reduces CPU overhead and improves rendering efficiency and quality. The computation nevertheless remains time-consuming; the invention therefore uses a deep learning network to learn to complete partially drawn images, and thereby to produce the whole image.
Disclosure of Invention
The invention provides a method for rendering multiple light sources based on matrix row-column sampling and deep learning, which is used for converting a problem of rendering multiple light sources in a complex three-dimensional scene into a problem of training a deep neural network model, and can rapidly and accurately render multiple light sources by using the trained deep neural network model.
A method for multi-light source rendering based on matrix row and column sampling and deep learning comprises the following steps:
step 1, establishing an illumination matrix according to the three-dimensional scene, wherein each column represents all sampling points irradiated by one light source, and each row represents the illumination of all light sources on one sampling point;
step 2, randomly extracting a plurality of rows from the illumination matrix to obtain a primary random reduction matrix;
step 3, randomly extracting a plurality of rows from the primary random reduction matrix to obtain a secondary random reduction matrix;
step 4, respectively drawing a primary random reduction matrix image and a secondary random reduction matrix image aiming at different viewpoints;
step 5, training a deep neural network model by utilizing the primary random reduction matrix image and the secondary random reduction matrix image;
step 6, when drawing the high-realism image in real time, first drawing a secondary random reduction matrix image, then inputting it into the trained deep neural network model, and obtaining the complete high-realism image as output.
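The six steps can be sketched end to end on a toy illumination matrix; all sizes and variable names below are illustrative assumptions, not values from the invention:

```python
import numpy as np

rng = np.random.default_rng(0)

# Step 1: illumination matrix A (m sampling points x n light sources).
# Each column is one light's contribution to every sampling point.
m, n = 64, 32
A = rng.random((m, n)) * (rng.random((m, n)) > 0.6)  # many zeros: visibility

# Step 2: primary random reduction -- keep r random rows (sampling points).
r = 16
primary = A[rng.choice(m, size=r, replace=False)]

# Step 3: secondary random reduction -- keep a random subset of the
# primary matrix's rows again (a coarser sample of the same scene).
secondary = primary[rng.choice(r, size=r // 2, replace=False)]

# Steps 4-6 would render images from `primary`/`secondary` for many
# viewpoints and train a network mapping the sparse secondary image
# to the dense primary image.
print(A.shape, primary.shape, secondary.shape)
```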
When the primary random reduction matrix contains enough rows, its image can be regarded as a complete rendering of the whole three-dimensional scene. Once the deep neural network model is trained, feeding a secondary reduction matrix image into the trained model yields the corresponding primary random reduction matrix image, i.e. a complete rendering of the whole scene, which greatly improves drawing efficiency.
Preferably, in step 4, the specific process for drawing the primary random reduction matrix image is as follows:
step 4-a-1, use sampling clustering to divide the primary random reduction matrix into a number of clusters; in each cluster select one complete column as the representative and render that column per RGB color channel to obtain its complete illumination samples;
step 4-a-2, scale up the illumination samples of the representative column to obtain the sum of each cluster's illumination intensity on the RGB channels;
step 4-a-3, combine the illumination intensities of all clusters to obtain the multi-light-source rendering result.
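A minimal numpy sketch of steps 4-a-1 to 4-a-3, assuming the clustering is given and the representative is drawn with probability proportional to its column norm (the helper name and sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

def render_by_representatives(R, clusters, rng):
    """Estimate the sum of all columns of R from one representative per cluster.

    clusters: list of arrays of column indices. Within each cluster the
    representative column j is drawn with probability ||rho_j|| / s_k and
    scaled by s_k / ||rho_j||, which keeps the estimate unbiased.
    """
    total = np.zeros(R.shape[0])
    for cols in clusters:
        norms = np.linalg.norm(R[:, cols], axis=0)
        s_k = norms.sum()
        if s_k == 0.0:
            continue  # an all-dark cluster contributes nothing
        p = norms / s_k
        j = rng.choice(len(cols), p=p)
        total += (s_k / norms[j]) * R[:, cols[j]]
    return total

R = rng.random((8, 6))
clusters = [np.array([0, 1, 2]), np.array([3, 4, 5])]
est = render_by_representatives(R, clusters, rng)
print(est.shape)
```

With singleton clusters the estimate reduces to the exact column sum, which is a convenient sanity check.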
Preferably, the clustering of samples in step 4-a-1 comprises the steps of:
step 4-a-1-1, randomly select columns in the primary random reduction matrix as cluster centers, and divide each point into the cluster of its nearest cluster center, c being the total number of columns in the primary random reduction matrix;
step 4-a-1-2, for a given column of the primary random reduction matrix, preferentially and randomly select from among the columns far away from it; each time a column is selected, increase its weight by a fixed proportion, and repeat until the required number of columns has been selected;
step 4-a-1-3, take the columns with the larger weights as cluster centers, and divide the primary random reduction matrix into clusters according to the distance to the cluster centers.
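One plausible realization of this weighted selection is a k-means++-style sketch; the exact weighting schedule is not fully specified above, so the details here are assumptions:

```python
import numpy as np

def pair_dist2(x, y):
    # d(x, y)^2 = ||x||*||y|| - x.y, the cluster metric used by the method
    return np.linalg.norm(x) * np.linalg.norm(y) - x @ y

def pick_centers(R, k, rng):
    """k-means++-style center choice over the columns of R:
    far-away columns are preferentially selected as new centers."""
    n = R.shape[1]
    centers = [rng.integers(n)]
    for _ in range(k - 1):
        # distance of every column to its nearest already-chosen center
        d2 = np.array([min(pair_dist2(R[:, j], R[:, c]) for c in centers)
                       for j in range(n)])
        d2 = np.clip(d2, 0.0, None)  # guard tiny negative round-off
        if d2.sum() == 0.0:
            centers.append(rng.integers(n))
        else:
            centers.append(rng.choice(n, p=d2 / d2.sum()))
    return centers

def assign(R, centers):
    """Assign each column to the cluster of its nearest center."""
    return [int(np.argmin([pair_dist2(R[:, j], R[:, c]) for c in centers]))
            for j in range(R.shape[1])]

rng = np.random.default_rng(2)
R = rng.random((5, 12))
centers = pick_centers(R, 3, rng)
labels = assign(R, centers)
print(len(centers), len(labels))
```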
Preferably, in step 4, the specific process for drawing the secondary random reduction matrix image is as follows:
step 4-b-1, clustering the primary random reduction matrix according to a clustering factor, wherein the clustering factor is the number of rows of each cluster;
4-b-2, randomly selecting some clusters from the divided clusters, selecting a complete row as a representative in each randomly selected cluster, and rendering the row according to an RGB color channel to obtain complete illumination samples of the row;
step 4-b-3, scaling up the illumination samples of the representative row to obtain the sum of each cluster's illumination intensity on the RGB channels;
and 4-b-4, combining the illumination intensity of each cluster to obtain a multi-light-source rendering result.
Preferably, in step 5, the number of image pairs is not less than 10000.
Each viewpoint corresponds to one image pair consisting of the primary random reduction matrix image and the secondary random reduction matrix image; the number of image pairs, and hence the number of viewpoints, is not less than 10000.
The method for rendering multiple light sources based on matrix row-column sampling and deep learning provided by the invention converts the problem of rendering multiple light sources in a complex three-dimensional scene into the problem of training a deep neural network model, and can rapidly and accurately render multiple light sources by utilizing the trained deep neural network model.
Drawings
FIG. 1 is a schematic diagram of a deep neural network model according to the present invention.
Detailed Description
The method for performing multi-light source rendering based on illumination matrix row-column sampling and deep learning is described in detail below with reference to the accompanying drawings.
Step 1, establishing an illumination matrix with multiple light sources according to a scene, wherein each column in the illumination matrix represents all sampling points irradiated by one light source, and each row represents all light source irradiation on one sampling point.
For a multi-light-source scene with m sampling points and n light sources, summing the contributions of all light sources at every sampling point yields the complete scene rendering result. The problem is thus converted into one over an illumination matrix $A$ of size $m \times n$, where element $A_{ij}$ represents the contribution of light source $j$ at sampling point $i$; each element of $A$ is an RGB scalar, and accumulating all columns of $A$ gives the contribution of every light source over all sampling points.
Computing every element of the complete illumination matrix in this way has complexity $O(mn)$. If sampling point $i$ is not visible to light source $j$, then $A_{ij} = 0$; in practice the illumination matrix therefore contains a large number of zero elements, i.e. $A$ is low-rank. Randomly extracting $r$ rows from the illumination matrix forms a reduced illumination matrix; when $r$ is large enough, the reduced matrix contains sufficient information about the complete matrix, can be regarded as a sampled version of it, and rendering the reduced matrix yields the rendering effect of the complete matrix.
Writing $a_j$ for the $j$-th column of the illumination matrix $A$, the global multi-light-source rendering result $\Sigma_A$ can be expressed as

$$\Sigma_A = \sum_{j=1}^{n} a_j .$$
and 2, randomly extracting a plurality of rows from the illumination matrix to form a reduced illumination matrix, namely a primary random reduced illumination matrix.
Randomly select $r$ rows from the illumination matrix $A$ to form an $r \times n$ illumination matrix $R$, and let $\rho_j$ be the $j$-th column of $R$; the elements of $R$ are converted from RGB scalars to the 2-norm of the RGB triple. $\rho_j$ is called the once-randomly-reduced version of the complete column $a_j$, and the illumination matrix $R$ is a reduced version of the complete illumination matrix $A$.
By dividing the illumination matrix R into several parts (i.e. several clusters) and processing them separately, the computational complexity can be reduced to O(m + n). Processing each cluster separately and recombining them yields an approximation of the full result; since the clustering scheme determines the error of the final result, the clustering must be chosen according to error estimation so that the final error is minimized.
Step 3, respectively drawing a primary random reduction matrix image and a secondary random reduction matrix image aiming at different viewpoints, wherein the specific implementation details are as follows:
3.1, one-time rendering of random reduced matrix images
Randomly select 10000 viewpoints in the three-dimensional scene for drawing, and number the image drawn at each viewpoint $V_1, \dots, V_{10000}$. For each viewpoint image, the rendering process is as follows:
For the illumination matrix $A$ of size $m \times n$, the $n$ columns are divided into $l$ clusters $C_1, C_2, \dots, C_l$. The norm of a reduced column, $\|\rho_j\|$, is the intensity with which light source $j$ of cluster $C_k$ illuminates the entire image.
Define $s_k = \sum_{j \in C_k} \|\rho_j\|$ as the measured total light intensity of cluster $C_k$. Within $C_k$ a representative column $j$ is selected according to its percentage of the reduced-matrix norm, $p_j = \|\rho_j\| / s_k$ (generally a column carrying more than 50% of the norm is taken directly), and the illumination intensity of cluster $C_k$ is estimated as

$$X_k = \frac{s_k}{\|\rho_j\|}\, a_j ,$$

where $X_A = \sum_k X_k$ denotes the resulting estimate of the illumination intensity of the whole illumination matrix $A$. When all $\|\rho_j\| \neq 0$, the following equation can be obtained:

$$E[X_k] = \sum_{j \in C_k} p_j \, \frac{s_k}{\|\rho_j\|}\, a_j = \sum_{j \in C_k} a_j ,$$

from which it can be seen that $E[X_A]$ is in fact an unbiased estimate of $\Sigma_A$. Given $E[X_A] = \Sigma_A$, the error evaluation for the complete illumination matrix $A$ can be expressed as $E[\|X_A - \Sigma_A\|^2]$, and the most effective clustering method is the one that minimizes $E[\|X_A - \Sigma_A\|^2]$.
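The unbiasedness claim can be checked numerically: drawing the representative with probability proportional to its reduced-column norm and scaling by s_k/||rho_j||, the average of many independent estimates should converge to the true column sum (a toy sketch; sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)

def cluster_estimate(R, cols, rng):
    """One unbiased sample of the sum of the columns `cols` of R."""
    norms = np.linalg.norm(R[:, cols], axis=0)
    s_k = norms.sum()
    j = rng.choice(len(cols), p=norms / s_k)
    return (s_k / norms[j]) * R[:, cols[j]]

R = rng.random((4, 6)) + 0.1          # keep every column norm nonzero
cols = np.arange(6)
truth = R.sum(axis=1)
# Average many independent one-sample estimates of the column sum.
trials = np.mean([cluster_estimate(R, cols, rng) for _ in range(20000)],
                 axis=0)
print(np.max(np.abs(trials - truth)))  # small: the estimator is unbiased
```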
The illumination matrix $R$ is the $r \times n$ matrix formed by randomly selecting $r$ rows of $A$. Since the clusters are sampled independently, the expected error of $X_R$ is the sum of the per-cluster expected errors. Writing the random variable $X_R$ and its corresponding target value $\Sigma_R$ as $X$ and $X'$ respectively, $E[\|X_A - \Sigma_A\|^2]$ is approximated by the corresponding error on the reduced matrix, and the clustering is measured by

$$E[\|X_R - \Sigma_R\|^2] = \sum_{k=1}^{l} E\!\left[\|X'_k - \Sigma'_k\|^2\right],$$

where $X'_k$ and $\Sigma'_k$ are the estimate and the true column sum of cluster $C_k$ in $R$. The derivation above then gives, per cluster,

$$E\!\left[\|X'_k - \Sigma'_k\|^2\right] = 2 \sum_{\substack{i < j \\ i, j \in C_k}} \left( \|\rho_i\| \|\rho_j\| - \rho_i \cdot \rho_j \right).$$
The distance between any two vectors $x$ and $y$ is defined as

$$d(x, y)^2 = \|x\| \|y\| - x \cdot y = \|x\| \|y\| \left( 1 - \cos\theta \right),$$

where $\cos\theta = \dfrac{x \cdot y}{\|x\| \|y\|}$ is the cosine of the angle between $x$ and $y$; $d$ measures the difference between two light sources in the same cluster. The contribution of the illumination intensities of two light sources to the image can therefore be evaluated by the angle between them.
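The metric can be written down directly; identical columns are at distance zero and orthogonal columns at the largest distance for their norms (a small sketch):

```python
import numpy as np

def d2(x, y):
    """Cluster metric: d(x,y)^2 = ||x||*||y|| - x.y = ||x||*||y||*(1 - cos(theta))."""
    return float(np.linalg.norm(x) * np.linalg.norm(y) - x @ y)

x = np.array([1.0, 0.0])
y = np.array([0.0, 2.0])
# Identical vectors: cos(theta) = 1, distance 0.
# Orthogonal vectors: cos(theta) = 0, distance^2 = ||x||*||y||.
print(d2(x, x), d2(x, y))
```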
By the derivation above, the error induced by the clustering can be represented as

$$E[\|X_R - \Sigma_R\|^2] = 2 \sum_{k=1}^{l} \sum_{\substack{i < j \\ i, j \in C_k}} d(\rho_i, \rho_j)^2 ,$$

where, when per-pixel weights are used, $x$ denotes a pixel, $w$ the weight corresponding to that pixel, and $k$ the cluster index.
According to the error evaluation formula, clustering the primary random reduction matrix in two steps:
step 3-1, dividing the illumination matrix into illumination matrices using sampling clusteringAnd (C is the column number of the illumination matrix R), the specific method is as follows: random selectionThe points (here, the points are equivalent to the columns in the illumination matrix R) are taken as cluster center points, and the points closest to the cluster center points are classified into the clusters represented by the cluster center points according to the calculation formula of the distance d.
Define $\alpha_i$ as the sum of all costs (i.e. light intensities) incident on point $i$, and take $p_i$ as point $i$'s share of $\alpha_i$. For a point $i$, a point is selected at random, preferentially among the points farther from $i$; when point $i$ is selected its weight is set to $1/p_i$, and the weight increases by $1/p_i$ each further time it is selected. This process is iterated until the required number of points has been selected. The cluster center points are then determined from the weights (the points with larger weights become cluster centers), and all points are divided into clusters by their distance d to the cluster center points.
Step 3-2, complete the clustering with a top-down splitting method. Starting from the clustering of the previous step, each cluster is decomposed further as follows: project its points onto a random line (in the r-dimensional space) and find the best point at which to divide the line into two segments. In this way each cluster of the illumination matrix R is divided into two parts, i.e. the number of clusters doubles.
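Step 3-2 can be sketched as projecting a cluster's columns onto a random direction and choosing the split that minimizes the summed pairwise cost of the two halves; the candidate-split search below is an illustrative assumption:

```python
import numpy as np

def cost(R, cols):
    """Sum of pairwise d^2 within one cluster of column indices."""
    c = 0.0
    for a in range(len(cols)):
        for b in range(a + 1, len(cols)):
            x, y = R[:, cols[a]], R[:, cols[b]]
            c += np.linalg.norm(x) * np.linalg.norm(y) - x @ y
    return c

def split_cluster(R, cols, rng):
    """Split one cluster in two along a random projection line."""
    line = rng.standard_normal(R.shape[0])
    # Order the columns by their position along the random line.
    order = cols[np.argsort(R[:, cols].T @ line)]
    best = None
    for cut in range(1, len(order)):
        left, right = order[:cut], order[cut:]
        c = cost(R, left) + cost(R, right)
        if best is None or c < best[0]:
            best = (c, left, right)
    return best[1], best[2]

rng = np.random.default_rng(4)
R = rng.random((6, 10))
left, right = split_cluster(R, np.arange(10), rng)
print(len(left), len(right))
```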
Through the above steps, the illumination matrix R can be divided into a plurality of sections, and the final calculation result error is minimized.
Clustering the columns of the primary random reduction matrix in this way yields k clusters $C_1, C_2, \dots, C_k$. In each cluster, one complete column is selected as representative and rendered per RGB color channel, using a shader on the GPU, to obtain the complete illumination sample values of that column.
Let $\tilde a_j$ denote the representative column $\rho_j$ after shading; by the formula $X_k = \frac{s_k}{\|\rho_j\|}\, \tilde a_j$ each cluster representative is scaled up to give the sum of that cluster's illumination intensity on the RGB channels. Processing every cluster in this way yields the RGB illumination intensity of each cluster $C_1, C_2, \dots, C_k$.
Merging the k rendered clusters gives the sum of all columns of the primary random reduction matrix R, i.e. the multi-light-source rendering result of the original scene, completing the rendering.
3.2 quadratic stochastic reduction illumination matrix rendering
Similarly to the primary random reduced illumination matrix drawing, randomly select 10000 viewpoints in the three-dimensional scene for secondary random reduced illumination matrix drawing, and number the images drawn at each viewpoint $V'_1, \dots, V'_{10000}$. For each viewpoint image, the specific rendering method is as follows:
The primary random reduction matrix R is clustered by rows, with the division defined by a clustering factor: a clustering factor of 2 groups every two rows into a cluster, and a factor of 5 every 5 rows. Once clustering is complete, some clusters are randomly selected from the result for rendering; because only some clusters are selected, some pixels of the image are never computed and the drawn image is incomplete. Given the randomly selected clusters, the drawing method is the same as for the reduced illumination matrix above.
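The grouping by clustering factor and the random selection of clusters can be sketched as follows; function and parameter names are illustrative:

```python
import numpy as np

def secondary_rows(r, factor, keep_fraction, rng):
    """Return the row indices a secondary reduction would actually render.

    Rows are grouped into consecutive clusters of size `factor`; only a
    random `keep_fraction` of the clusters is kept, so the rows of the
    unselected clusters are simply never computed (missing pixels).
    """
    groups = [np.arange(start, min(start + factor, r))
              for start in range(0, r, factor)]
    n_keep = max(1, int(keep_fraction * len(groups)))
    chosen = rng.choice(len(groups), size=n_keep, replace=False)
    return np.concatenate([groups[i] for i in chosen])

rng = np.random.default_rng(5)
rows = secondary_rows(r=20, factor=5, keep_fraction=0.5, rng=rng)
print(sorted(int(i) for i in rows))  # rows of 2 of the 4 clusters
```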
Step 4: train a deep neural network model using the image pairs drawn from the primary and secondary reduced illumination matrices.
The deep neural network model is shown in FIG. 1, where X denotes an image generated from the primary reduced illumination matrix and Y an image generated from the secondary reduced illumination matrix; G and F are deep generative network models, each representing a conversion function, and $D_X$ and $D_Y$ are discriminative deep neural network models: $D_X$ learns the image features of the primary reduced illumination matrix, and $D_Y$ learns those of the secondary reduced illumination matrix.
10000 image pairs of the primary and secondary reduced illumination matrices are input for training, and training minimizes a loss function: an image drawn from the primary reduced illumination matrix is converted by the deep neural network model into an image drawn from the secondary reduced illumination matrix, then converted back into a primary-reduced image, and the loss is the difference between this reconstructed image and the input primary-reduced image. When the loss function reaches its minimum, the whole neural network is considered trained.
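The described objective is a cycle-consistency loss in the style of CycleGAN-type models. A numpy sketch with invertible linear maps standing in for the generators G and F (the real G and F are deep networks; everything named here is illustrative):

```python
import numpy as np

def cycle_loss(x, G, F):
    """L1 cycle-consistency: mean | F(G(x)) - x | over all elements."""
    return float(np.mean(np.abs(F(G(x)) - x)))

# Linear stand-ins for the two generators.
W = np.array([[2.0, 0.0], [0.0, 4.0]])
G = lambda x: x @ W                  # "primary image" -> "secondary image"
F = lambda y: y @ np.linalg.inv(W)   # and back again

x = np.arange(6.0).reshape(3, 2)
print(cycle_loss(x, G, F))  # ~0 when F inverts G exactly
```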
Step 5: with the trained deep neural network model, real-time high-realism drawing proceeds by first drawing the illumination matrix image obtained by secondary reduction and then feeding it to the deep neural network model; the model's output image is the required complete rendering.
The method converts the multi-light-source rendering problem in complex scenes into a training problem for a deep neural network model; processing by the model produces good rendered images, improves rendering efficiency and real-time performance, and is applicable to scenes with real-time, high-quality rendering requirements.
Appropriate changes and modifications to the embodiments described above will become apparent to those skilled in the art from the disclosure and teachings of the foregoing description. Therefore, the present invention is not limited to the specific embodiments disclosed and described above, and some modifications and variations of the present invention should fall within the scope of the claims of the present invention. Furthermore, although specific terms are employed herein, they are used in a generic and descriptive sense only and not for purposes of limitation.
Claims (5)
1. A method for performing multi-light source rendering based on matrix row and column sampling and deep learning is characterized by comprising the following steps:
step 1, establishing an illumination matrix according to a three-dimensional scene, wherein each column represents all sampling points irradiated by a light source, and each row represents all light source irradiation on one sampling point;
step 2, randomly extracting a plurality of rows from the illumination matrix to obtain a primary random reduction matrix;
step 3, randomly extracting a plurality of rows from the primary random reduction matrix to obtain a secondary random reduction matrix;
step 4, respectively drawing a primary random reduction matrix image and a secondary random reduction matrix image aiming at different viewpoints;
step 5, training a deep neural network model by utilizing the primary random reduction matrix image and the secondary random reduction matrix image;
step 6, when drawing the high-realism image in real time, first drawing a secondary random reduction matrix image, then inputting it into the trained deep neural network model, and obtaining the complete high-realism image as output.
2. The method for multi-light-source rendering through matrix row-column sampling and deep learning according to claim 1, wherein in the step 4, the specific process of the step of drawing the once randomly reduced matrix image is as follows:
step 4-a-1, using sampling clustering to divide the primary random reduction matrix into a number of clusters, selecting one complete column in each cluster as the representative, and rendering that column per RGB color channel to obtain its complete illumination samples;
step 4-a-2, scaling up the illumination samples of the representative column to obtain the sum of each cluster's illumination intensity on the RGB channels;
step 4-a-3, combining the illumination intensities of the clusters to obtain the multi-light-source rendering result.
3. The method for multi-light source rendering by matrix row-column sampling and deep learning of claim 2, wherein the clustering of samples in step 4-a-1 comprises the steps of:
step 4-a-1-1, randomly selecting columns in the primary random reduction matrix as cluster centers, and dividing each point into the cluster of its nearest cluster center, wherein c is the total number of columns in the primary random reduction matrix;
step 4-a-1-2, for a given column of the primary random reduction matrix, preferentially and randomly selecting from among the columns far away from it, increasing a column's weight by a fixed proportion each time it is selected, until the required number of columns has been selected;
step 4-a-1-3, taking the columns with the larger weights as cluster centers, and dividing the primary random reduction matrix into clusters according to the distance to the cluster centers.
4. The method for multi-light-source rendering through matrix row-column sampling and deep learning according to claim 1, wherein in the step 4, the specific process of the step of drawing the secondary random reduction matrix image is as follows:
step 4-b-1, clustering the primary random reduction matrix according to a clustering factor, wherein the clustering factor is the number of rows of each cluster;
4-b-2, randomly selecting some clusters from the divided clusters, selecting a complete row as a representative in each randomly selected cluster, and rendering the row according to an RGB color channel to obtain complete illumination samples of the row;
step 4-b-3, scaling up the illumination samples of the representative row to obtain the sum of each cluster's illumination intensity on the RGB channels;
and 4-b-4, combining the illumination intensity of each cluster to obtain a multi-light-source rendering result.
5. The method for multi-light source rendering by matrix row-column sampling and deep learning of claim 1, wherein in step 5, the number of image pairs is not less than 10000.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810320587.2A CN108682041B (en) | 2018-04-11 | 2018-04-11 | Method for performing multi-light-source rendering based on matrix row and column sampling and deep learning |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108682041A CN108682041A (en) | 2018-10-19 |
CN108682041B true CN108682041B (en) | 2021-12-21 |
Family
ID=63799860
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810320587.2A Active CN108682041B (en) | 2018-04-11 | 2018-04-11 | Method for performing multi-light-source rendering based on matrix row and column sampling and deep learning |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108682041B (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104200513A (en) * | 2014-08-08 | 2014-12-10 | 浙江传媒学院 | Matrix row-column sampling based multi-light-source rendering method |
CN104200512A (en) * | 2014-07-30 | 2014-12-10 | 浙江传媒学院 | Multiple-light source rendering method based on virtual spherical light sources |
CN104732579A (en) * | 2015-02-15 | 2015-06-24 | 浙江传媒学院 | Multi-light-source scene rendering method based on light fragmentation |
CN105447906A (en) * | 2015-11-12 | 2016-03-30 | 浙江大学 | Method for calculating lighting parameters and carrying out relighting rendering based on image and model |
CN106558092A (en) * | 2016-11-16 | 2017-04-05 | 北京航空航天大学 | A kind of multiple light courcess scene accelerated drafting method based on the multi-direction voxelization of scene |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9349214B2 (en) * | 2008-08-20 | 2016-05-24 | Take-Two Interactive Software, Inc. | Systems and methods for reproduction of shadows from multiple incident light sources |
Non-Patent Citations (4)
Title |
---|
A matrix sampling-and-recovery approach for many-lights rendering; Yuchi Huo et al.; ACM Transactions on Graphics; 2015-11-30; vol. 34, no. 6; 1-12 *
Matrix row-column sampling for the many-light problem; Miloš Hašan et al.; ACM Transactions on Graphics; 2007-07-31; vol. 26, no. 3; 1-10 *
Research and implementation of illumination models in a three-dimensional simulation drill system (三维模拟演练系统中光照模型的研究与实现); Tang Yu; China Masters' Theses Full-text Database, Information Science and Technology; 2016-11-15; no. 11; I138-428 *
Efficient rendering of glossy scenes in a many-light framework (多光绘制框架下光泽场景的高效绘制); Jin Shihao; China Masters' Theses Full-text Database, Information Science and Technology; 2016-07-15; no. 7; I138-1043 *
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | ||
| SE01 | Entry into force of request for substantive examination | ||
| GR01 | Patent grant | ||