CN112699917A - Image identification method of nonlinear optical convolution neural network - Google Patents
- Publication number
- CN112699917A (application CN202011456497.XA)
- Authority
- CN
- China
- Prior art keywords
- optical
- neural network
- convolution
- nonlinear
- image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06F18/214 — Pattern recognition; generating training patterns; bootstrap methods, e.g. bagging or boosting
- G06N3/045 — Neural networks; architecture; combinations of networks
- G06N3/048 — Neural networks; architecture; activation functions
- G06N3/067 — Physical realisation of neural networks using optical means
- G06N3/08 — Neural networks; learning methods
Abstract
The invention relates to an image recognition method based on a nonlinear optical convolutional neural network. To address the weak nonlinear feature-extraction capability and poor classification performance of the existing single-layer optical convolutional neural network, a nonlinear optical convolutional neural network based on the Swish activation function is designed for image recognition. First, a single-layer optical convolutional neural network model is established, covering the optical convolution kernel structure, kernel count, calculation method, and optical feature map. The generated optical convolution feature maps are then passed through a Swish activation unit to form a nonlinear mapping. Finally, a multilayer nonlinear optical convolutional neural network model is established, and its parameters are optimized with the Adam algorithm. Experiments on the MNIST handwritten-digit image data set show that, compared with the existing single-layer optical convolutional neural network, the proposed method offers stronger separability of nonlinear data, higher classification accuracy, and simpler computation.
Description
Technical Field
The invention belongs to the fields of optics and deep learning, and in particular relates to an image recognition method using a nonlinear optical convolutional neural network.
Background
Convolutional neural networks are one of the most active methods in deep learning and have succeeded in many applications, such as computer vision, language processing, and image semantic analysis. However, as the number of layers and connections of convolutional neural networks keeps growing, the computational cost grows with them, which limits their application to a certain extent. Although the training of a deep convolutional neural network can be completed on a server and the optimized model and parameters then deployed to a device for inference, the model still needs substantial power and storage for the optimized network parameters during inference. Therefore, in recent years, many companies, universities, and research institutions at home and abroad have implemented convolutional neural network structures in hardware, including IBM's TrueNorth chip, Movidius's Visual Processing Unit (VPU), and Google's Tensor Processing Unit (TPU). Although these approaches improve the inference efficiency of convolutional neural networks, they still consume considerable power and computational resources.
Optical computing has drawn the attention of many researchers at home and abroad for its high bandwidth, high interconnectivity, and high parallel processing capability. Researchers at Stanford University innovatively designed a single-layer optical convolutional neural network model for image recognition using optical computation, providing theoretical support for future photonic-chip development; the result was published as a paper in the journal Scientific Reports. However, the current single-layer optical convolutional neural network model does not realize nonlinearity, so nonlinear data is difficult to model. In addition, because the network model contains only one convolutional layer, it cannot extract enough data features, and the classification accuracy suffers. Therefore, in view of these defects in the prior art, the invention adopts the Swish activation function as the nonlinearity of the convolutional neural network and constructs a multilayer nonlinear optical convolutional neural network model for the image recognition task.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provides an image identification method of a nonlinear optical convolutional neural network.
In order to solve the above technical problems, the invention adopts the following technical scheme: an image recognition method of a nonlinear optical convolutional neural network, comprising the steps of: step a, designing the optical convolution kernel structure, the optical convolution feature map, and the calculation method; step b, constructing an optical convolutional neural network model and giving the convolution calculation formula relating the optical convolution kernel structure and the optical convolution feature map; step c, applying the Swish nonlinear activation function to the optical convolution feature map to perform the nonlinear mapping; step d, constructing a multilayer optical convolutional neural network model and optimizing it with the Adam algorithm; and step e, training and testing the model on the MNIST database to obtain the test result.
Preferably, in step a, the optical convolution kernels are tiled and 16 in number, and the optical convolution is computed through a point spread function:

Iout = Iin * PSF(x, y)

where * denotes two-dimensional convolution and PSF denotes the point spread function, in which Γ denotes an offset and Δx and Δy denote the distances moved in the vertical and horizontal directions so that the convolution kernels do not overlap on the image; Wi corresponds to the i-th convolution kernel, and Iout is the optical convolution feature map output by the corresponding kernels in tiled form.
Preferably, in step d, when the Adam algorithm is used for optimization, the loss function of the nonlinear optical convolutional neural network is the cross entropy

L = −Σj yj log(ŷj)

where yj is the target value and ŷj is the estimate, subject to the constraint W ≥ 0, i.e. the point spread function must not take negative values.
Preferably, in step e, the MNIST handwritten-digit image set is used as the test data: 70,000 images of 28 × 28 handwritten digits 0-9, of which 55,000 form the training set, 5,000 the validation set, and 10,000 the test set.
Preferably, in step e, the testing method comprises the following steps: step one, preprocessing the three image data sets in MNIST and amplifying the handwritten-digit images to 200 × 200; step two, training the nonlinear optical convolutional neural network model with the 55,000 training images and validating the trained model with the 5,000-image validation set, the training parameters being a training period of 1000 epochs with batches of 100 samples per period; and step three, testing the trained model with the 10,000 test images, the recognition accuracy being 97.87%.
Compared with the prior art, the invention has the beneficial effects that:
according to the invention, a Swish activation function is added into the optical convolution neural network, and the cross entropy is adopted as a target function, so that a multilayer nonlinear optical convolution neural network model is creatively constructed, and the image identification accuracy is substantially and remarkably improved.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention, as claimed.
Drawings
Further objects, features and advantages of the present invention will become apparent from the following description of embodiments of the invention, with reference to the accompanying drawings, in which:
FIG. 1 schematically shows a process flow diagram of the present invention;
FIG. 2 schematically illustrates an optical convolution calculation of the present invention;
FIG. 3 is a schematic diagram of a 3-layer optical convolution structure constructed in accordance with the present invention;
FIG. 4 schematically shows a test accuracy chart of the present invention on the MNIST handwritten digital image set.
Detailed Description
The objects and functions of the present invention, and the methods for accomplishing them, will be apparent from the exemplary embodiments. However, the present invention is not limited to the exemplary embodiments disclosed below; it can be implemented in different forms. The description is provided to assist those skilled in the relevant art in a comprehensive understanding of the specific details of the invention.
Hereinafter, embodiments of the present invention will be described with reference to the accompanying drawings. In the drawings, the same reference numerals denote the same or similar parts, or the same or similar steps.
The invention aims to remedy two defects of the existing single-layer optical convolutional neural network model in image recognition, namely its inability to realize nonlinearity and its insufficient feature extraction, by providing an image recognition method based on a multilayer nonlinear optical convolutional neural network.
In order to achieve the purpose, the technical scheme adopted by the invention is as follows:
an image identification method of a nonlinear optical convolution neural network comprises the following steps:
step 1, designing an optical convolution kernel structure, an optical convolution characteristic diagram and a calculation method;
step 2, constructing a layer of optical convolution neural network model according to the design result of the step 1, and giving a convolution calculation formula of an optical convolution kernel and an image;
step 3, according to the step 2, carrying out nonlinear mapping on the optical convolution characteristic diagram by using a Swish nonlinear activation function;
step 4, constructing a multilayer optical convolution neural network model according to the steps 2 and 3, wherein an Adam algorithm is adopted in the optimization method;
and step 5, testing the embodiment: first, the training and validation image sets are obtained from the MNIST handwritten-digit database and the nonlinear optical convolutional neural network is trained; then the test image set is obtained from the same database and input into the trained network, and the recognition result class of each test image is determined by the minimum difference between the network output and the target value.
In the above method, in step 1, in the optical system, the convolution kernel is represented by a point spread function, and then the point spread function is convolved with the input image to obtain an optical convolution feature map.
In the above method, in step 3, the coefficient β of the Swish activation function is set to 0.1, but the method is not limited to this value.
In the above method, in step 4, the established multilayer convolutional neural network structure comprises 3 optical convolutional layers and 1 fully connected layer, and the objective function adopts the cross entropy.
The invention will be further illustrated with reference to the following examples and drawings:
an image identification method of a nonlinear optical convolution neural network comprises the following steps:
step 1: the optical convolution kernel adopts a tiling mode, the number of the convolution kernels is 16, the optical convolution is calculated through a point spread function, and the formula is expressed as follows:
Iout=Iin*PSF(x,y)
where denotes a two-dimensional convolution, the PSF denotes a point spread function, expressed as:
where Γ denotes the offset, Δ x and Δ y denote the distance moved in the vertical and horizontal directions, respectively, in order that the convolution kernel does not overlap the image.
According to the above formula, the optical convolution formula is expressed as:
wherein, WiCorresponding to the ith convolution kernel.
IoutAnd (3) representing an optical convolution characteristic diagram corresponding to the convolution kernel output, which is a tiling mode, and specifically referring to the content of step 2.
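The tiled point-spread-function convolution described above can be sketched electronically. Below is a minimal NumPy model; it assumes the PSF is simply the sum of the kernels placed at non-overlapping offsets, since the exact optical layout is not given here, and uses an FFT circular convolution as a stand-in for the free-space optical system:

```python
import numpy as np

def tiled_psf(kernels, offsets, shape):
    """Place each kernel W_i at its own (dx, dy) offset on a zero plane,
    forming one point spread function whose convolution outputs tile
    without overlapping (the grid of offsets is our assumption)."""
    psf = np.zeros(shape)
    k = kernels.shape[1]  # kernel side length
    for w, (dx, dy) in zip(kernels, offsets):
        psf[dx:dx + k, dy:dy + k] = w
    return psf

def optical_convolve(image, psf):
    """I_out = I_in * PSF, computed with FFTs, as a Fourier-plane optical
    system would realize it (circular-convolution approximation)."""
    return np.real(np.fft.ifft2(np.fft.fft2(image) * np.fft.fft2(psf)))
```

With 16 kernels placed on a 4 × 4 grid of offsets spaced by Δx and Δy, a single pass through `optical_convolve` produces all 16 feature maps side by side in the output plane.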
Step 2: following step 1, the optical convolution process is shown in fig. 2. The optical convolution operates on a single-channel image: the input image (feature map) is convolved with the optical convolution kernels tiled over it, producing an optical feature map that is likewise laid out in tiled form.
Step 3: a nonlinear activation function is added at the output of the optical feature map to perform the nonlinear mapping. The Swish activation function is expressed as:

f(x) = x · σ(βx) = x / (1 + e^(−βx))

where β = 0.1.
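A minimal sketch of the Swish activation with the embodiment's β = 0.1:

```python
import numpy as np

def swish(x, beta=0.1):
    """Swish activation f(x) = x * sigmoid(beta * x); the embodiment
    fixes beta at 0.1 but notes that other values are possible."""
    return x / (1.0 + np.exp(-beta * x))
```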
Step 4: according to steps 2 and 3, a multilayer nonlinear optical convolutional neural network is established; its structure is shown schematically in fig. 3. There are 3 convolutional layers: the first contains 16 optical convolution kernels, and the second and third contain 25 optical convolution kernels each. The convolution output of each layer is fed into the Swish activation function to complete the nonlinear mapping. The hidden layer is a fully connected layer of 2048 neurons; the output layer contains 16 neurons corresponding to the different categories and uses the softmax function.
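The layer structure above can be illustrated with a toy electronic forward pass. The naive convolution, the summing of feature maps between layers, and the small layer sizes below are simplifying assumptions for illustration, not the patent's optical tiling:

```python
import numpy as np

def swish(x, beta=0.1):
    return x / (1.0 + np.exp(-beta * x))

def conv2d_valid(img, ker):
    """Naive single-channel 'valid' 2-D convolution (electronic stand-in
    for the optical convolution realized by the point spread function)."""
    k = ker.shape[0]
    h, w = img.shape[0] - k + 1, img.shape[1] - k + 1
    out = np.empty((h, w))
    for r in range(h):
        for c in range(w):
            out[r, c] = np.sum(img[r:r + k, c:c + k] * ker)
    return out

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def forward(img, kernel_banks, fc_w, out_w):
    """Three convolutional layers each followed by Swish, one fully
    connected hidden layer, and a softmax output, mirroring the
    16/25/25-kernel, 2048-neuron, 16-way structure of fig. 3
    (here with toy sizes)."""
    maps = [img]
    for bank in kernel_banks:
        merged = sum(maps)                      # collapse maps to one channel
        maps = [swish(conv2d_valid(merged, k)) for k in bank]
    flat = np.concatenate([m.ravel() for m in maps])
    hidden = swish(fc_w @ flat)                 # fully connected hidden layer
    return softmax(out_w @ hidden)              # class probabilities
```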
The loss function of the nonlinear optical convolutional neural network is the cross entropy

L = −Σj yj log(ŷj)

where yj is the target value and ŷj is the estimate, subject to the constraint W ≥ 0, i.e. the point spread function must not take negative values.
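The cross-entropy loss and the nonnegativity constraint on the PSF can be sketched as follows; applying the constraint as a clipping projection after each Adam update is our assumption, since the mechanism is not stated here:

```python
import numpy as np

def cross_entropy(target, estimate, eps=1e-12):
    """L = -sum_j y_j * log(y_hat_j), with y the one-hot target value
    and y_hat the softmax estimate (eps guards against log(0))."""
    return -np.sum(target * np.log(estimate + eps))

def project_nonnegative(w):
    """Clip the kernels to W >= 0 after each Adam update: a point
    spread function is an intensity pattern and cannot be negative.
    Projection is our assumed way of enforcing the constraint."""
    return np.maximum(w, 0.0)
```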
Step 5: the embodiment employs the MNIST handwritten-digit image set as test data. The data set comprises 70,000 images of 28 × 28 handwritten digits 0-9, of which 55,000 form the training set, 5,000 the validation set, and 10,000 the test set.
The method comprises the following specific steps:
(1) preprocessing three image data sets in MNIST, and amplifying the handwritten digital image to 200 x 200;
(2) setting relevant parameters of the whole calculation model, wherein the specific values are described above;
(3) the nonlinear optical convolutional neural network model is trained using the 55,000 training images, and the trained model is then validated using the 5,000-image validation set; the training parameters are a training period (epoch count) of 1000, with batches of 100 samples per period;
(4) the model trained in step (3) is tested with the 10,000 test images; the recognition accuracy (10-way classification) is 97.87%, showing that the nonlinear optical convolutional neural network constructed by the method outperforms the existing single-layer optical convolutional neural network model.
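The preprocessing and data split of steps (1)-(4) can be sketched as follows. Nearest-neighbour enlargement (7× blow-up plus zero padding) is an assumption, since the interpolation method used for the 200 × 200 amplification is not stated:

```python
import numpy as np

def upscale_nn(img, size=200):
    """Enlarge a 28 x 28 MNIST digit toward 200 x 200: integer-factor
    nearest-neighbour blow-up (np.kron) followed by zero padding to
    the exact target size (interpolation choice is an assumption)."""
    f = size // img.shape[0]                 # 7x blow-up -> 196 x 196
    big = np.kron(img, np.ones((f, f)))
    pad = size - big.shape[0]                # pad the remaining rows/cols
    return np.pad(big, ((0, pad), (0, pad)))

def split_mnist(images):
    """55,000 / 5,000 / 10,000 train / validation / test split of the
    70,000-image MNIST set, as specified in the embodiment."""
    return images[:55000], images[55000:60000], images[60000:]
```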
The beneficial effects of the invention are as follows: by adding the Swish activation function to the optical convolutional neural network and adopting the cross entropy as the objective function, a multilayer nonlinear optical convolutional neural network model is constructed, and the image recognition accuracy is significantly improved.
Other embodiments of the invention will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the invention being indicated by the following claims.
Claims (6)
1. An image recognition method of a nonlinear optical convolutional neural network, the method comprising the steps of:
step a, designing an optical convolution kernel structure, an optical convolution characteristic diagram and a calculation method;
b, constructing an optical convolution neural network model, and giving a convolution calculation formula of the optical convolution kernel structure and the optical convolution characteristic diagram;
step c, carrying out nonlinear mapping on the optical convolution characteristic diagram by using a Swish nonlinear activation function;
d, constructing a multilayer optical convolution neural network model and optimizing by using an Adam algorithm;
and e, training and testing the model on the MNIST database to obtain a test result.
2. The identification method according to claim 1, wherein in step a the optical convolution kernels are tiled and 16 in number, and the optical convolution is computed through a point spread function: Iout = Iin * PSF(x, y), where * denotes two-dimensional convolution, PSF denotes the point spread function, Γ denotes an offset, Δx and Δy denote the distances moved in the vertical and horizontal directions so that the convolution kernels do not overlap the image, Wi corresponds to the i-th convolution kernel, and Iout is the optical convolution feature map output by the corresponding kernels in tiled form.
4. The identification method according to claim 1, wherein in step d, when the Adam algorithm is used for optimization, the loss function of the nonlinear optical convolutional neural network is the cross entropy L = −Σj yj log(ŷj), where yj is the target value and ŷj is the estimate, subject to the constraint W ≥ 0, i.e. the point spread function must not take negative values.
5. The method according to claim 1, wherein in step e the MNIST handwritten-digit image set is used as the test data; the data set comprises 70,000 images of 28 × 28 handwritten digits 0-9, of which 55,000 form the training set, 5,000 the validation set, and 10,000 the test set.
6. The test method according to claim 5, characterized in that it comprises the steps of:
preprocessing three image data sets in MNIST, and amplifying a handwritten digital image to 200 x 200;
and step two, training a nonlinear optical convolution neural network model by using the 55000 training images, and then verifying the trained model by using the 5000 verification image sets. The training parameters were: the training period is 1000, and the batch adopted in each period is 100 samples;
and step three, testing the model obtained by training in the step two by using the 10000 test image, wherein the identification accuracy is 97.87%.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011456497.XA CN112699917A (en) | 2020-12-11 | 2020-12-11 | Image identification method of nonlinear optical convolution neural network |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011456497.XA CN112699917A (en) | 2020-12-11 | 2020-12-11 | Image identification method of nonlinear optical convolution neural network |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112699917A true CN112699917A (en) | 2021-04-23 |
Family
ID=75509037
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011456497.XA Pending CN112699917A (en) | 2020-12-11 | 2020-12-11 | Image identification method of nonlinear optical convolution neural network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112699917A (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113630517A (en) * | 2021-10-08 | 2021-11-09 | 清华大学 | Intelligent imaging method and device for light-electric inductance calculation integrated light field |
CN114255387A (en) * | 2021-12-28 | 2022-03-29 | 山东捷讯通信技术有限公司 | Image identification method of all-optical nonlinear convolutional neural network |
CN114626011A (en) * | 2022-05-12 | 2022-06-14 | 飞诺门阵(北京)科技有限公司 | Photon calculation neural network operation acceleration method, device, equipment and storage medium |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108875696A (en) * | 2018-07-05 | 2018-11-23 | 五邑大学 | The Off-line Handwritten Chinese Recognition method of convolutional neural networks is separated based on depth |
CN109671031A (en) * | 2018-12-14 | 2019-04-23 | 中北大学 | A kind of multispectral image inversion method based on residual error study convolutional neural networks |
CN110441271A (en) * | 2019-07-15 | 2019-11-12 | 清华大学 | Light field high-resolution deconvolution method and system based on convolutional neural networks |
- 2020-12-11: CN application CN202011456497.XA, published as CN112699917A, status pending
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108875696A (en) * | 2018-07-05 | 2018-11-23 | 五邑大学 | The Off-line Handwritten Chinese Recognition method of convolutional neural networks is separated based on depth |
CN109671031A (en) * | 2018-12-14 | 2019-04-23 | 中北大学 | A kind of multispectral image inversion method based on residual error study convolutional neural networks |
CN110441271A (en) * | 2019-07-15 | 2019-11-12 | 清华大学 | Light field high-resolution deconvolution method and system based on convolutional neural networks |
Non-Patent Citations (1)
Title |
---|
MARIO MISCUGLIO et al.: "All-Optical Nonlinear Activation Function for Photonic Neural Networks", Optical Materials Express *
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113630517A (en) * | 2021-10-08 | 2021-11-09 | 清华大学 | Intelligent imaging method and device for light-electric inductance calculation integrated light field |
CN113630517B (en) * | 2021-10-08 | 2022-01-25 | 清华大学 | Intelligent imaging method and device for light-electric inductance calculation integrated light field |
US11425292B1 (en) | 2021-10-08 | 2022-08-23 | Tsinghua University | Method and apparatus for camera-free light field imaging with optoelectronic intelligent computing |
CN114255387A (en) * | 2021-12-28 | 2022-03-29 | 山东捷讯通信技术有限公司 | Image identification method of all-optical nonlinear convolutional neural network |
CN114255387B (en) * | 2021-12-28 | 2024-05-14 | 山东捷讯通信技术有限公司 | Image recognition method of all-optical nonlinear convolutional neural network |
CN114626011A (en) * | 2022-05-12 | 2022-06-14 | 飞诺门阵(北京)科技有限公司 | Photon calculation neural network operation acceleration method, device, equipment and storage medium |
Legal Events
Date | Code | Title
---|---|---
| PB01 | Publication
| SE01 | Entry into force of request for substantive examination
| RJ01 | Rejection of invention patent application after publication

Application publication date: 20210423