CN109712109A - An optical image phase unwrapping method based on a residual convolutional neural network - Google Patents

An optical image phase unwrapping method based on a residual convolutional neural network Download PDF

Info

Publication number
CN109712109A
CN109712109A, CN201811313055.2A, CN201811313055A
Authority
CN
China
Prior art keywords
optical image
phase
wrapping
wrapped
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201811313055.2A
Other languages
Chinese (zh)
Inventor
颜成钢
张腾
张永兵
陈智
陈子豪
张勇东
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Dianzi University
Original Assignee
Hangzhou Dianzi University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Dianzi University filed Critical Hangzhou Dianzi University
Priority to CN201811313055.2A priority Critical patent/CN109712109A/en
Publication of CN109712109A publication Critical patent/CN109712109A/en
Pending legal-status Critical Current

Links

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses an optical image phase unwrapping method based on a residual convolutional neural network. The method comprises the following steps: step 1, generating phase-unwrapped optical images using Zernike polynomials; step 2, applying a phase wrapping operation to the unwrapped images to obtain wrapped phase images; step 3, training a model with a convolutional neural network; step 4, predicting with the trained model. The invention employs a residual convolutional neural network and is highly targeted, being designed specifically for phase unwrapping of optical images. It has broad application prospects in optical imaging research. Compared with traditional phase unwrapping methods, it has the advantages of fast solution speed and high solution accuracy.

Description

An optical image phase unwrapping method based on a residual convolutional neural network
Technical field
The invention belongs to the field of image phase unwrapping, in particular for optical images, and specifically relates to an optical image phase unwrapping method based on a residual convolutional neural network.
Background art
In many cases the signal obtained in optical imaging is in complex form, containing both an amplitude value and a phase value. However, when the true phase is extracted from the complex signal, the phase value is restricted to the interval [-π, π], and true phase values outside this interval are wrapped into it. This phenomenon is known as phase wrapping, and the resulting phase is called the wrapped phase. Recovering the true phase from the wrapped phase is known as phase unwrapping.
Existing phase unwrapping methods fall mainly into the following three categories. The first is the branch-cut method based on a discrete particle swarm optimization algorithm. This method first divides the residues of the whole image into several groups; within each group, a discrete particle swarm optimization algorithm matches positive and negative residues; the matched positive-negative residue pairs in each group are connected by branch cuts; finally, phase unwrapping is performed along paths that bypass these branch cuts. The second is the weighted minimum Lp-norm method based on direct solution. It takes as the optimization objective the weighted Lp-norm of the difference between the unwrapped phase gradient and the wrapped phase gradient over the whole phase image; this objective function is converted into a system of equations whose coefficient matrix is stored and expressed in a sparse structure; finally, the system is solved by a direct solver. Since the coefficient matrix and the solution of the system depend on the unwrapped phase, an iterative scheme is adopted to obtain the final unwrapping result. The third is the mask-based region growing method. Using a new mask extraction scheme, residues are reasonably connected and taken as the zeros of the mask; the mask and the phase-derivative variance are combined into a final quality map, so that points crossed by residue connections are treated as zero-quality (i.e. worst-quality) points whose unwrapping is deferred to the end; the whole image is then divided into multiple regions according to the quality map and phase unwrapping is carried out separately in each region, with the worst-quality region unwrapped by weighted phase averaging from multiple directions; finally the regions are fused together. However, the above methods suffer from slow solution speed, poor solution accuracy, and insufficient robustness. The invention therefore proposes a new optical image phase unwrapping method based on a convolutional neural network.
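The branch-cut approach above hinges on locating residues, i.e. points where the wrapped phase differences summed around a closed pixel loop fail to cancel. As an illustrative numpy sketch (function and variable names are ours, not from the patent), residues can be detected by summing wrapped differences around every 2×2 loop:

```python
import numpy as np

def wrap(d):
    # Wrap a phase difference into [-pi, pi).
    return (d + np.pi) % (2 * np.pi) - np.pi

def residues(psi):
    """Residue map of a wrapped phase image psi.

    Sums the wrapped differences around each 2x2 pixel loop; a nonzero
    sum (a multiple of 2*pi) marks a positive or negative residue.
    """
    d1 = wrap(psi[:-1, 1:] - psi[:-1, :-1])    # top edge, left -> right
    d2 = wrap(psi[1:, 1:] - psi[:-1, 1:])      # right edge, top -> bottom
    d3 = wrap(psi[1:, :-1] - psi[1:, 1:])      # bottom edge, right -> left
    d4 = wrap(psi[:-1, :-1] - psi[1:, :-1])    # left edge, bottom -> top
    return np.round((d1 + d2 + d3 + d4) / (2 * np.pi)).astype(int)

# A smooth (residue-free) wrapped phase ramp: no residues anywhere.
smooth = np.angle(np.exp(1j * np.fromfunction(
    lambda i, j: 0.3 * i + 0.2 * j, (16, 16))))
print(int(np.abs(residues(smooth)).sum()))
```

Branch-cut methods then pair and connect the nonzero entries of this map before integrating the phase around the cuts.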
Summary of the invention
The present invention primarily considers that, with the development of optical imaging technology, phase information in imaging signals is used more and more. How best to solve the unwrapping of the optical phase is a question worth exploring. The invention studies phase unwrapping of optical images generated by Zernike polynomials, and provides an optical image phase unwrapping method based on a residual convolutional neural network.
The technical solution adopted by the invention to solve the technical problem comprises the following steps:
Step 1: generate phase-unwrapped optical images using Zernike polynomials
Aberration refers to imaging defects in an optical system. In geometrical optics, aberrations are divided into monochromatic aberrations and chromatic aberrations; the former include spherical aberration, coma, astigmatism, field curvature, and distortion, while the latter include axial chromatic aberration and lateral chromatic aberration. In physical optics, aberration is called wavefront aberration, namely the distance between the wavefront formed after the spherical wave emitted by a point source passes through the optical system and the ideal spherical wavefront. Wavefront aberration can be expressed through Zernike polynomials or through geometrical aberrations such as spherical aberration and coma.
In 1934 Zernike introduced a set of complex functions {V_pq(x, y)} defined on the unit circle. {V_pq(x, y)} possesses completeness and orthogonality, allowing it to represent any square-integrable function defined on the unit circle. It is defined as: V_pq(x, y) = V_pq(ρ, θ) = R_pq(ρ)e^{jqθ}
where ρ denotes the length of the vector from the origin to the point (x, y); θ denotes the counterclockwise angle between the vector ρ and the x-axis; and R_pq(ρ) is the real-valued radial polynomial:
R_pq(ρ) = Σ_{s=0}^{(p-|q|)/2} [(-1)^s (p-s)! / (s! ((p+|q|)/2 - s)! ((p-|q|)/2 - s)!)] ρ^{p-2s}
This set is called the Zernike polynomials, and the Zernike polynomials satisfy orthogonality. Owing to their orthogonal completeness, any image on the unit circle can be represented uniquely. Because the Zernike polynomials are consistent in form with the aberration polynomials observed in optical testing, Zernike polynomials are commonly used to describe wavefront properties.
Therefore, in the present invention, phase-unwrapped optical images are generated using Zernike polynomials.
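The patent does not specify which Zernike terms or coefficient ranges are used; as a minimal numpy sketch under illustrative assumptions (three low-order terms, uniformly random coefficients), an unwrapped phase map on the unit disk can be generated like this:

```python
import numpy as np

def zernike_phase(n=64, coeffs=None, seed=0):
    """Unwrapped phase map built from a few low-order Zernike terms
    (defocus, astigmatism, coma) on the unit disk.

    Term selection and coefficient range are illustrative, not from
    the patent.
    """
    rng = np.random.default_rng(seed)
    y, x = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]
    rho, theta = np.hypot(x, y), np.arctan2(y, x)
    terms = [
        2 * rho**2 - 1,                          # defocus      Z(2, 0)
        rho**2 * np.cos(2 * theta),              # astigmatism  Z(2, 2)
        (3 * rho**3 - 2 * rho) * np.cos(theta),  # coma         Z(3, 1)
    ]
    if coeffs is None:
        coeffs = rng.uniform(-8.0, 8.0, len(terms))
    phi = sum(c * t for c, t in zip(coeffs, terms))
    return np.where(rho <= 1, phi, 0.0)          # restrict to unit disk

phi = zernike_phase()
print(phi.shape, bool(np.isfinite(phi).all()))
```

Varying the random coefficients yields a batch of distinct unwrapped images, matching the "batch of unwrapped optical images" used in step 2.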
Step 2: apply a phase wrapping operation to the unwrapped optical images to obtain wrapped phase images
After step 1, a batch of unwrapped optical images similar to experimental ones has been obtained. The corresponding wrapped phase images and the difference between the two are then obtained by the following formulas.
img_wrap = angle(e^{j·img_unwrap})
img_diff = img_unwrap - img_wrap
where img_wrap and img_unwrap denote the wrapped and unwrapped optical images respectively; angle(x) denotes the phase of x; and img_diff is the difference between the unwrapped and wrapped optical images.
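The wrapping operation and the difference used as the learning target can be sketched in a few lines of numpy (the ramp test image is illustrative); since the wrapped phase differs from the true phase only by whole turns, the difference is an integer multiple of 2π at every pixel:

```python
import numpy as np

def wrap_phase(phi):
    # img_wrap = angle(exp(j * img_unwrap)): wrap into (-pi, pi].
    return np.angle(np.exp(1j * phi))

# Illustrative test image: a phase ramp covering several turns.
phi = 6 * np.pi * np.linspace(0, 1, 32).reshape(1, -1) * np.ones((32, 1))
wrapped = wrap_phase(phi)
diff = phi - wrapped                 # img_diff, the training target
k = diff / (2 * np.pi)
print(bool(np.allclose(k, np.round(k))))
```

This integer-multiple structure is the "certain regularity" of img_diff that step 3 cites as a reason for learning the difference rather than the unwrapped image directly.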
Step 3: train the model with a convolutional neural network
In computer vision, the "level" of features rises as network depth increases, and studies have shown that network depth is an important factor in the results achieved. However, gradient vanishing and explosion become obstacles to training very deep networks and can prevent training from converging. Some methods can compensate, such as normalized initialization and per-layer input normalization, which allow networks to converge at ten times the previous depth. Yet although such deeper networks converge, they begin to degrade: increasing the number of layers leads to larger errors. In 2015, He Kaiming proposed the ResNet architecture, which allows networks to deepen without degrading. ResNet learns the residual function f(x) = H(x) - x.
In the present invention, a 25-layer convolutional neural network with residual connections is likewise used to learn features for optical image phase unwrapping. In our algorithm, H(x) represents the mapping the final optical image unwrapping algorithm is to learn: its input is the wrapped optical image and its output is the unwrapped optical image. x represents our input, i.e. the wrapped optical image. f(x) represents what we learn: the difference between the unwrapped image and the wrapped image. The residual network is chosen for two reasons. First, in very deep networks a residual network both converges quickly and ensures that the network does not degrade as depth increases. Second, the difference between the unwrapped and wrapped images is not random noise but has a certain regularity.
The model of the invention uses 3×3 convolution kernels and extracts 64 feature maps per layer. The activation function is ReLU, and Batch Normalization is applied after activation. Finally, the ADMM algorithm is used for solving, with the loss function:
L(Y, y(x)) = (Y - y(x))²
where Y denotes the ground-truth phase-unwrapped optical image, and y(x) denotes the phase-unwrapped optical image predicted by the invention.
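The architecture is only sketched in the text; for illustration (layer count and feature-map count reduced, weights random, all names ours, Batch Normalization and the training loop omitted), a residual forward pass with 3×3 convolutions, ReLU, and the squared-error loss might look like:

```python
import numpy as np

rng = np.random.default_rng(1)

def conv3x3(x, w):
    """'Same'-padded 3x3 convolution: x is (C_in, H, W), w is (C_out, C_in, 3, 3)."""
    c_in, h, wd = x.shape
    xp = np.pad(x, ((0, 0), (1, 1), (1, 1)))
    out = np.empty((w.shape[0], h, wd))
    for o in range(w.shape[0]):
        acc = np.zeros((h, wd))
        for i in range(c_in):
            for dy in range(3):
                for dx in range(3):
                    acc += w[o, i, dy, dx] * xp[i, dy:dy + h, dx:dx + wd]
        out[o] = acc
    return out

def residual_net(x, weights):
    """f(x): conv + ReLU layers predicting img_diff; then H(x) = x + f(x)."""
    h = x
    for w in weights[:-1]:
        h = np.maximum(conv3x3(h, w), 0.0)   # 3x3 conv followed by ReLU
    return conv3x3(h, weights[-1])           # linear output layer

def loss(Y, y):
    # Squared-error loss L(Y, y(x)) = (Y - y(x))^2, averaged over pixels.
    return float(np.mean((Y - y) ** 2))

feats = 8                                    # 64 in the patent; reduced here
weights = ([0.1 * rng.standard_normal((feats, 1, 3, 3))] +
           [0.1 * rng.standard_normal((feats, feats, 3, 3)) for _ in range(2)] +
           [0.1 * rng.standard_normal((1, feats, 3, 3))])

x = rng.standard_normal((1, 16, 16))         # a wrapped phase image (1 channel)
pred_diff = residual_net(x, weights)         # f(x), the learned difference
unwrapped = x + pred_diff                    # H(x) = x + f(x)
print(pred_diff.shape)
```

Note the skip connection is realized simply as x + f(x); a practical implementation would use a deep-learning framework, but the structure is the same.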
Finally, our network is trained for 50 epochs, and the simulation results substantially surpass traditional image phase unwrapping algorithms.
Step 4: predict with the trained model
The invention saves the network structure and parameters obtained from the training in step 3. The saved results are then used to unwrap wrapped optical images. The prediction accuracy and solution speed of the model are greatly improved relative to traditional image unwrapping algorithms.
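At prediction time, the unwrapped image is recovered by adding the network's predicted difference back to the wrapped input. Since the true difference is an integer multiple of 2π at every pixel, the prediction can additionally be snapped to the nearest multiple; the snapping step and all names below are our illustration, not stated in the patent:

```python
import numpy as np

def reconstruct(wrapped, pred_diff, snap=True):
    """Recover the unwrapped phase: H(x) = x + f(x).

    If snap is True, round the predicted difference to the nearest
    integer multiple of 2*pi before adding it back (our assumption,
    exploiting the structure of img_diff).
    """
    if snap:
        pred_diff = 2 * np.pi * np.round(pred_diff / (2 * np.pi))
    return wrapped + pred_diff

# Sanity check on a known ramp: a noisy but decent prediction of the
# difference still yields an exact reconstruction after snapping.
truth = np.linspace(0.0, 5 * np.pi, 64)
wrapped = np.angle(np.exp(1j * truth))
noisy_diff = (truth - wrapped) + 0.3 * np.sin(np.arange(64))  # imperfect f(x)
recovered = reconstruct(wrapped, noisy_diff)
print(bool(np.allclose(recovered, truth)))
```

This also illustrates why the method tolerates moderate prediction error: any error below π per pixel is removed by the rounding.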
The method of the present invention has the following advantages and beneficial results:
1. The invention proposes a new optical image phase unwrapping method. The method employs a residual convolutional neural network and is highly targeted, being designed specifically for phase unwrapping of optical images. It has broad application prospects in optical imaging research.
2. Compared with traditional phase unwrapping methods, the phase unwrapping method based on a residual convolutional neural network proposed by the invention has the advantages of fast solution speed and high solution accuracy.
Brief description of the drawings
Figs. 1(a), 1(b), and 1(c) are the optical images in the example of the invention.
Fig. 1(a) is the phase-unwrapped optical image generated by Zernike polynomials, Fig. 1(b) is the corresponding wrapped phase optical image, and Fig. 1(c) is the difference between the unwrapped and wrapped phase images;
Fig. 2 is the structure diagram of the residual convolutional neural network of the invention;
Figs. 3(a), 3(b), 3(c), and 3(d) are the prediction results for wrapped optical phase images in the example of the invention.
Fig. 3(a) is the wrapped phase optical image, Fig. 3(b) is the predicted difference between the unwrapped and wrapped images, Fig. 3(c) is the predicted phase-unwrapped optical image, and Fig. 3(d) is the corresponding ground-truth phase-unwrapped optical image.
Specific embodiments
The present invention will now be described in detail with reference to the embodiments.
As shown in Figs. 1-3, the present invention proposes an optical image phase unwrapping method based on a residual convolutional neural network, implemented according to the following steps.
Step 1: generate phase-unwrapped optical images using Zernike polynomials
Aberration refers to imaging defects in an optical system. In geometrical optics, aberrations are divided into monochromatic aberrations and chromatic aberrations; the former include spherical aberration, coma, astigmatism, field curvature, and distortion, while the latter include axial chromatic aberration and lateral chromatic aberration. In physical optics, aberration is called wavefront aberration, namely the distance between the wavefront formed after the spherical wave emitted by a point source passes through the optical system and the ideal spherical wavefront. Wavefront aberration can be expressed through Zernike polynomials or through geometrical aberrations such as spherical aberration and coma.
In 1934 Zernike introduced a set of complex functions {V_pq(x, y)} defined on the unit circle. {V_pq(x, y)} possesses completeness and orthogonality, allowing it to represent any square-integrable function defined on the unit circle. It is defined as: V_pq(x, y) = V_pq(ρ, θ) = R_pq(ρ)e^{jqθ}
where ρ denotes the length of the vector from the origin to the point (x, y); θ denotes the counterclockwise angle between the vector ρ and the x-axis; and R_pq(ρ) is the real-valued radial polynomial:
R_pq(ρ) = Σ_{s=0}^{(p-|q|)/2} [(-1)^s (p-s)! / (s! ((p+|q|)/2 - s)! ((p-|q|)/2 - s)!)] ρ^{p-2s}
This set is called the Zernike polynomials, and the Zernike polynomials satisfy orthogonality. Owing to their orthogonal completeness, any image on the unit circle can be represented uniquely. Because the Zernike polynomials are consistent in form with the aberration polynomials observed in optical testing, Zernike polynomials are commonly used to describe wavefront properties.
Therefore, in the present invention, phase-unwrapped optical images are generated using Zernike polynomials.
Step 2: apply a phase wrapping operation to the unwrapped optical images to obtain wrapped phase images
After step 1, a batch of unwrapped optical images similar to experimental ones has been obtained. The corresponding wrapped phase images and the difference between the two are then obtained by the following formulas.
img_wrap = angle(e^{j·img_unwrap})
img_diff = img_unwrap - img_wrap
where img_wrap and img_unwrap denote the wrapped and unwrapped optical images respectively; angle(x) denotes the phase of x; and img_diff is the difference between the unwrapped and wrapped optical images.
Step 3: train the model with a convolutional neural network
In computer vision, the "level" of features rises as network depth increases, and studies have shown that network depth is an important factor in the results achieved. However, gradient vanishing and explosion become obstacles to training very deep networks and can prevent training from converging. Some methods can compensate, such as normalized initialization and per-layer input normalization, which allow networks to converge at ten times the previous depth. Yet although such deeper networks converge, they begin to degrade: increasing the number of layers leads to larger errors. In 2015, He Kaiming proposed the ResNet architecture, which allows networks to deepen without degrading. ResNet learns the residual function f(x) = H(x) - x.
In the present invention, a 25-layer convolutional neural network with residual connections is likewise used to learn features for optical image phase unwrapping. In our algorithm, H(x) represents the mapping the final optical image unwrapping algorithm is to learn: its input is the wrapped optical image and its output is the unwrapped optical image. x represents our input, i.e. the wrapped optical image. f(x) represents what we learn: the difference between the unwrapped image and the wrapped image. The residual network is chosen for two reasons. First, in very deep networks a residual network both converges quickly and ensures that the network does not degrade as depth increases. Second, the difference between the unwrapped and wrapped images is not random noise but has a certain regularity.
The regression model of the invention uses 3×3 convolution kernels and extracts 64 feature maps per layer. The activation function is ReLU, and Batch Normalization is applied after activation. Finally, the ADMM algorithm is used for solving, with the loss function:
L(Y, y(x)) = (Y - y(x))²
where Y denotes the ground-truth phase-unwrapped optical image, and y(x) denotes the phase-unwrapped optical image predicted by the invention.
Finally, our network is trained for 50 epochs, and the simulation results substantially surpass traditional image phase unwrapping algorithms.
Step 4: predict with the trained model
The invention saves the network structure and parameters obtained from the training in step 3. The saved results are then used to unwrap wrapped optical images. The prediction accuracy and solution speed of the model are greatly improved relative to traditional image unwrapping algorithms.
Embodiment
In the embodiment of the present invention, the training set consists of optical images generated by Zernike polynomials, as shown in Figs. 1(a), 1(b), and 1(c). The corresponding wrapped phase images are then generated according to step 2. After the model is trained according to step 3, testing is carried out using the learned parameters. The final test results are shown in Figs. 3(a), 3(b), 3(c), and 3(d).

Claims (1)

1. An optical image phase unwrapping method based on a residual convolutional neural network, characterized by comprising the following steps:
Step 1: generate phase-unwrapped optical images using Zernike polynomials;
Step 2: apply a phase wrapping operation to the unwrapped optical images to obtain wrapped phase images;
Step 3: train the model with a convolutional neural network;
Step 4: predict with the trained model;
Step 1 is specifically:
{V_pq(x, y)} is a set of complex functions defined on the unit circle, possessing completeness and orthogonality;
{V_pq(x, y)} can represent any square-integrable function defined on the unit circle, and is defined as:
V_pq(x, y) = V_pq(ρ, θ) = R_pq(ρ)e^{jqθ}
where ρ denotes the length of the vector from the origin to the point (x, y); θ denotes the counterclockwise angle between the vector ρ and the x-axis; and R_pq(ρ) is the real-valued radial polynomial:
R_pq(ρ) = Σ_{s=0}^{(p-|q|)/2} [(-1)^s (p-s)! / (s! ((p+|q|)/2 - s)! ((p-|q|)/2 - s)!)] ρ^{p-2s}
Since the Zernike polynomials are consistent in form with the aberration polynomials observed in optical testing, Zernike polynomials are used to describe wavefront properties.
Step 2 is specifically:
A batch of unwrapped optical images similar to experimental ones is obtained after step 1, and the corresponding wrapped phase images and the difference between the two are obtained by the following formulas:
img_wrap = angle(e^{j·img_unwrap})
img_diff = img_unwrap - img_wrap
where img_wrap and img_unwrap denote the wrapped and unwrapped optical images respectively; angle(x) denotes the phase of x; and img_diff is the difference between the unwrapped and wrapped optical images.
Step 3 is specifically:
3-1. Use a 25-layer convolutional neural network with residual connections to learn features for optical image unwrapping, where the residual function is expressed as: f(x) = H(x) - x;
where H(x) represents the mapping the optical image unwrapping algorithm finally learns: its input is the wrapped optical image and its output is the unwrapped optical image; x represents the input, i.e. the wrapped optical image; f(x) represents the learned difference between the unwrapped image and the wrapped image.
3-2. The model uses 3×3 convolution kernels and extracts 64 feature maps per layer; the activation function is ReLU, and Batch Normalization is applied after activation.
3-3. Solve using the ADMM algorithm, with the loss function:
L(Y, y(x)) = (Y - y(x))²
where Y denotes the ground-truth phase-unwrapped optical image, and y(x) denotes the predicted phase-unwrapped optical image.
Step 4 is specifically:
Save the network structure and parameters obtained from the training in step 3, and use the saved results to unwrap wrapped optical images.
CN201811313055.2A 2018-11-06 2018-11-06 An optical image phase unwrapping method based on a residual convolutional neural network Pending CN109712109A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811313055.2A CN109712109A (en) 2018-11-06 2018-11-06 An optical image phase unwrapping method based on a residual convolutional neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811313055.2A CN109712109A (en) 2018-11-06 2018-11-06 An optical image phase unwrapping method based on a residual convolutional neural network

Publications (1)

Publication Number Publication Date
CN109712109A true CN109712109A (en) 2019-05-03

Family

ID=66254232

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811313055.2A Pending CN109712109A (en) An optical image phase unwrapping method based on a residual convolutional neural network

Country Status (1)

Country Link
CN (1) CN109712109A (en)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110309910A (en) * 2019-07-03 2019-10-08 清华大学 The adaptive micro imaging method of optimization and device based on machine learning
CN111325317A (en) * 2020-01-21 2020-06-23 北京空间机电研究所 Wavefront aberration determination method and device based on generation countermeasure network
CN111461224A (en) * 2020-04-01 2020-07-28 西安交通大学 Phase data unwrapping method based on residual self-coding neural network
CN111561877A (en) * 2020-04-24 2020-08-21 西安交通大学 Variable resolution phase unwrapping method based on point diffraction interferometer
CN111812647A (en) * 2020-07-11 2020-10-23 桂林电子科技大学 Phase unwrapping method for interferometric synthetic aperture radar
WO2021003003A1 (en) * 2019-07-02 2021-01-07 Microsoft Technology Licensing, Llc Phase depth imaging using machine-learned depth ambiguity dealiasing
CN112381172A (en) * 2020-11-28 2021-02-19 桂林电子科技大学 InSAR interference image phase unwrapping method based on U-net
CN113238227A (en) * 2021-05-10 2021-08-10 电子科技大学 Improved least square phase unwrapping method and system combined with deep learning

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101441066A (en) * 2008-12-23 2009-05-27 西安交通大学 Phase de-packaging method of color fringe coding
CN107202550A (en) * 2017-06-09 2017-09-26 北京工业大学 A kind of method based on least square method Phase- un- wrapping figure

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101441066A (en) * 2008-12-23 2009-05-27 西安交通大学 Phase de-packaging method of color fringe coding
CN107202550A (en) * 2017-06-09 2017-09-26 北京工业大学 A kind of method based on least square method Phase- un- wrapping figure

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
GILI et al.: "Phase Unwrapping Using Residual Neural Networks", 《COMPUTATIONAL OPTICAL SENSING AND IMAGING》 *
MANUEL et al.: "Temporal phase-unwrapping of static surfaces with 2-sensitivity fringe-patterns", 《OPTICS EXPRESS》 *
NARUTO_Q: "Aberrations and Zernike polynomials (像差与zernike多项式)", 《HTTPS://BLOG.CSDN.NET/PIAOXUEZHONG/ARTICLE/DETAILS/65444605》 *

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021003003A1 (en) * 2019-07-02 2021-01-07 Microsoft Technology Licensing, Llc Phase depth imaging using machine-learned depth ambiguity dealiasing
US10929956B2 (en) 2019-07-02 2021-02-23 Microsoft Technology Licensing, Llc Machine-learned depth dealiasing
CN110309910A (en) * 2019-07-03 2019-10-08 清华大学 The adaptive micro imaging method of optimization and device based on machine learning
CN111325317A (en) * 2020-01-21 2020-06-23 北京空间机电研究所 Wavefront aberration determination method and device based on generation countermeasure network
CN111325317B (en) * 2020-01-21 2023-12-12 北京空间机电研究所 Wavefront aberration determining method and device based on generation countermeasure network
CN111461224A (en) * 2020-04-01 2020-07-28 西安交通大学 Phase data unwrapping method based on residual self-coding neural network
CN111461224B (en) * 2020-04-01 2022-08-16 西安交通大学 Phase data unwrapping method based on residual self-coding neural network
CN111561877A (en) * 2020-04-24 2020-08-21 西安交通大学 Variable resolution phase unwrapping method based on point diffraction interferometer
CN111561877B (en) * 2020-04-24 2021-08-13 西安交通大学 Variable resolution phase unwrapping method based on point diffraction interferometer
CN111812647A (en) * 2020-07-11 2020-10-23 桂林电子科技大学 Phase unwrapping method for interferometric synthetic aperture radar
CN112381172A (en) * 2020-11-28 2021-02-19 桂林电子科技大学 InSAR interference image phase unwrapping method based on U-net
CN113238227A (en) * 2021-05-10 2021-08-10 电子科技大学 Improved least square phase unwrapping method and system combined with deep learning

Similar Documents

Publication Publication Date Title
CN109712109A (en) An optical image phase unwrapping method based on a residual convolutional neural network
CN109886880A (en) An optical image phase unwrapping method based on a U-Net segmentation network
CN106950195B (en) Programmable optical elements and light field regulator control system and method based on scattering medium
CN108615010A (en) Facial expression recognizing method based on the fusion of parallel convolutional neural networks characteristic pattern
CN112116601B (en) Compressed sensing sampling reconstruction method and system based on generation of countermeasure residual error network
CN107798697A (en) A kind of medical image registration method based on convolutional neural networks, system and electronic equipment
CN110044498A (en) A kind of Hartmann wave front sensor modal wavefront reconstruction method based on deep learning
CN111915545B (en) Self-supervision learning fusion method of multiband images
CN110490818B (en) Computed ghost imaging reconstruction recovery method based on CGAN
Tang et al. RestoreNet: a deep learning framework for image restoration in optical synthetic aperture imaging system
CN108010029A (en) Fabric defect detection method based on deep learning and support vector data description
Wu Identification of maize leaf diseases based on convolutional neural network
CN116309062A (en) Remote sensing image super-resolution reconstruction method
CN104036242A (en) Object recognition method based on convolutional restricted Boltzmann machine combining Centering Trick
Zhu et al. An improved generative adversarial networks for remote sensing image super-resolution reconstruction via multi-scale residual block
Lin et al. Deep learning-assisted wavefront correction with sparse data for holographic tomography
Sun et al. Iris recognition based on local circular Gabor filters and multi-scale convolution feature fusion network
CN109597291A (en) An optical scanning holography image recognition method based on convolutional neural networks
Cui et al. Neural invertible variable-degree optical aberrations correction
Liu et al. Ultrasound super resolution using vision transformer with convolution projection operation
CN115330759B (en) Method and device for calculating distance loss based on Hausdorff distance
Liu et al. Dual UNet low-light image enhancement network based on attention mechanism
Chen et al. Contrastive learning with feature fusion for unpaired thermal infrared image colorization
Wang et al. High-resolution three-dimensional microwave imaging using a generative adversarial network
Ashiquzzaman et al. Compact deeplearning convolutional neural network based hand gesture classifier application for smart mobile edge computing

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20190503
