CN113033796A - Image identification method of all-optical nonlinear diffraction neural network - Google Patents
- Publication number: CN113033796A
- Application number: CN202011456487.6A
- Authority: CN (China)
- Prior art keywords: neural network, diffraction, optical, layer, training
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/06—Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
- G06N3/067—Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using optical means
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
Abstract
The invention relates to an image recognition method of an all-optical nonlinear diffractive deep neural network. A nonlinear diffractive deep neural network method based on the leaky rectified linear unit (Leaky ReLU, LReLU) activation function is designed for image recognition. Firstly, the physical parameters of the diffractive deep neural network are set, including the light source wavelength, the pixel size, the number of pixels on each layer and the grating layer spacing; the light wave transmission coefficients are calculated with the optical Rayleigh-Sommerfeld diffraction formula, and a single-pixel output function is established on this basis. Then, each pixel output value is fed into an LReLU activation unit to form a nonlinear mapping. Finally, a complete all-optical nonlinear diffractive deep neural network forward propagation model is established, and the network parameters are optimized by a stochastic gradient descent algorithm. Compared with the existing all-optical diffractive deep neural network, the proposed method offers stronger nonlinear data separability, high classification accuracy and simple, convenient calculation.
Description
Technical Field
The invention belongs to the field of optics and deep learning, and particularly relates to an image recognition method of an all-optical nonlinear diffraction neural network.
Background
In recent years, deep neural networks, as an important machine learning method, have made breakthrough progress in the field of image recognition. However, deep neural networks are more computationally expensive than traditional machine learning algorithms. Therefore, many companies, research institutions and universities at home and abroad have adopted different physical mechanisms to realize deep learning algorithms, such as FPGAs, quantum computing and photonic computing. Among these studies, the all-optical diffractive deep neural network framework proposed by researchers at the University of California, Los Angeles achieved high image recognition accuracy in simulation, and the results were published as a paper in the journal Science. That method uses the optical Rayleigh-Sommerfeld diffraction formula to establish a single-neuron output function and construct a forward-propagation inference model. In the forward propagation path, the neural layers are connected by adjusting the phase and amplitude of light; the greatest advantages are low computational power consumption and high speed, unconstrained by the von Neumann bottleneck.
However, the current diffractive deep neural network model does not realize nonlinearity: the mapping it expresses is only linear, so it is difficult to represent the nonlinear data encountered in practical applications. Compared with a traditional (electronic) deep neural network, its nonlinear data separability is weak, leaving a large margin for improvement in image recognition accuracy. In view of these defects in the prior art, the invention adopts the leaky rectified linear unit (Leaky ReLU, LReLU) as the neuron activation function of a diffractive deep neural network to construct a nonlinear diffractive deep neural network model for the image recognition task.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provides an image recognition method of an all-optical nonlinear diffraction neural network; the proposed method has stronger nonlinear data separability, high classification accuracy and simple, convenient calculation, and improves the applicability of the device.
In order to solve the technical problems, the invention adopts the following technical scheme: an image recognition method of an all-optical nonlinear diffractive neural network, the method comprising the steps of: step a, setting the physical parameters of the all-optical nonlinear diffractive deep neural network, including the wavelength of the light source used, the pixel size, the number of pixels on each layer and the grating layer spacing; step b, calculating the light wave transmission coefficients of the optical paths between adjacent layers of the neural network with the optical Rayleigh-Sommerfeld diffraction formula; step c, constructing the pixel output function; step d, applying the LReLU activation function to the pixel output values to complete the nonlinear mapping; step e, establishing the all-optical nonlinear diffractive deep neural network model; and step f, training and testing the model on the MNIST handwritten digit image database to obtain the test results.
Preferably, in the parameters, the wavelength of the light source is 10.6 μm, the size of the pixels is 5 μm, the grating layer spacing is 300 μm, and the number of pixels in each layer is 784.
Preferably, the optical rayleigh-solifenacin diffraction formula is:wherein, λ represents the wavelength of light,representing the euclidean distance between layer i picture elements and layer l +1 picture elements p,
Preferably, after obtaining the light wave transmission coefficients, each pixel output function is expressed as: m_p^{l+1} = t_p^{l+1} · s_p^{l+1}, with s_p^{l+1} = Σ_i w_{i,p}^l · m_i^l, wherein s_p^{l+1} is the sum of the diffracted light waves arriving at pixel p of layer l+1 from all pixels of layer l, t_p^{l+1} = α·exp(jφ) is the transmission coefficient, α = 1 represents the amplitude, and φ represents the phase.
Preferably, the output value of the i-th pixel on the l-th layer is passed through the activation function f(·) to obtain a nonlinear mapping, wherein the activation function f(x) is expressed as: f(x) = x for x > 0 and f(x) = x/a for x ≤ 0, wherein a is a fixed parameter in the interval (1, +∞); the invention takes a = 100, i.e. a negative activation coefficient of 0.01.
Preferably, in step e, the loss function of the all-optical nonlinear diffractive deep neural network is expressed as the cross entropy L = −Σ_k y_k·log(ŷ_k), wherein y_k is the target value and ŷ_k is the estimate, with the constraints Σ_k ŷ_k = 1 and 0 < ŷ_k < 1.
Preferably, the MNIST handwritten digit image set is used as the test data; the set consists of 70000 images of handwritten digits 0-9, each of size 28 × 28, of which 55000 images form the training set, 5000 the verification set and 10000 the test set.
Preferably, the test method comprises the following steps: step one, processing the three image data sets in MNIST, converting the two-dimensional image data into one-dimensional data; step two, training the all-optical nonlinear diffractive deep neural network model with the 55000 training images, then verifying the trained model with the 5000-image verification set, the training parameters being: 500 training periods, with a batch of 100 samples in each period; and step three, testing the trained model with the 10000 test images.
Compared with the prior art, the invention has the beneficial effects that:
according to the invention, a leakage (Leaky ReLU, LReLU) modified linear activation function is added into the all-optical diffraction depth neural network, and cross entropy is adopted as a target function, so that an all-optical nonlinear diffraction depth neural network model is creatively constructed, and the efficiency and accuracy of the identification process are substantially and remarkably improved; the method provided by the invention has stronger non-linear data separability, high classification precision and simple and convenient calculation.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention, as claimed.
Drawings
Further objects, features and advantages of the present invention will become apparent from the following description of embodiments of the invention, with reference to the accompanying drawings, in which:
FIG. 1 schematically shows a process flow diagram of the present invention;
FIG. 2 is a schematic diagram illustrating optical transmission between adjacent layers of a network according to the present invention;
FIG. 3 is a schematic diagram showing the structure of the all-optical diffraction deep neural network of the present invention;
FIG. 4 schematically illustrates exemplary MNIST handwritten digit images 0-9 used by the present invention;
FIG. 5 is a schematic diagram showing the test accuracy of the present invention on a MNIST handwritten digital image collection;
fig. 6 schematically shows a test accuracy chart of each digit of the MNIST handwritten digit image set according to the present invention.
Detailed Description
The objects and functions of the present invention and methods for accomplishing the same will be apparent by reference to the exemplary embodiments. However, the present invention is not limited to the exemplary embodiments disclosed below; it can be implemented in different forms. The nature of the description is merely to assist those skilled in the relevant art in a comprehensive understanding of the specific details of the invention.
Hereinafter, embodiments of the present invention will be described with reference to the accompanying drawings. In the drawings, the same reference numerals denote the same or similar parts, or the same or similar steps.
The invention aims to solve the defect that the existing all-optical diffraction deep neural network model cannot realize nonlinearity in image recognition, and provides an image recognition method of an all-optical nonlinear diffraction deep neural network to solve the problem.
In order to achieve the purpose, the technical scheme adopted by the invention is as follows:
an image recognition method of an all-optical nonlinear diffraction deep neural network comprises the following steps:
Step 6, testing: firstly, the training and verification image sets are acquired from the MNIST handwritten digit database and the all-optical nonlinear diffractive deep neural network is trained; then the test image set is acquired from the same database and input into the trained network, and the recognition result (class) of each handwritten digit image under test is obtained by selecting the class that minimizes the difference between the output value and the target value.
In the method, in the step 1, a carbon dioxide laser with the wavelength of 10.6 μm is used as a light source, the pixel size is 5 μm, and the grating layer spacing is 300 μm.
In the above method, in step 3, the optical wave transmission coefficients of all pixels in the previous layer and the transmission coefficients of the receiving pixels in the next layer are complex numbers, and the multiplication processes are independently completed in the real part and the imaginary part respectively, so as to combine into new complex values.
In the above method, in step 3, the negative activation coefficient of the LReLU activation function is 0.01, but is not limited to this value.
In the above method, in step 4, the output value of each pixel in the hidden layers is passed to the LReLU activation function.
In the above method, in step 4, the output layer uses a Softmax function to obtain probability values of the images belonging to the respective categories.
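As a sketch of this step, the Softmax mapping from detected output-layer values to class probabilities can be written as follows; the detector values here are hypothetical stand-ins, and only the use of Softmax itself comes from the description:

```python
import numpy as np

def softmax(v):
    """Map a vector of detected values to class probabilities."""
    e = np.exp(v - v.max())  # shift by the max for numerical stability
    return e / e.sum()

# Hypothetical detected intensities for the ten digit classes:
intensity = np.array([2.0, 0.5, 1.2, 3.1, 0.0, 0.7, 1.9, 2.4, 0.3, 1.1])
probs = softmax(intensity)
print(round(probs.sum(), 6))  # 1.0
print(int(probs.argmax()))    # 3 (the class with the largest detected value)
```

The shift by the maximum value does not change the result but avoids overflow in the exponential, a standard numerical safeguard.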
In the above method, in step 5, the parameters to be optimized are the phase values φ of the pixels, and cross entropy is used as the objective function.
The invention will be further illustrated with reference to the following examples and drawings:
an image recognition method of an all-optical nonlinear diffraction deep neural network is shown in fig. 1, and the specific steps are described as follows:
step 1: a carbon dioxide laser with the wavelength of 10.6 mu m is selected as a system light source, the pixel size is 5 mu m, the grating layer interval is 300 mu m, and the number of pixels in each layer is 784(28 multiplied by 28).
Step 2: calculate the light wave transmission coefficients of the optical paths between adjacent layers of the network, using the optical Rayleigh-Sommerfeld diffraction formula:
w_{i,p}^l = (Δz / r_{i,p}²) · (1/(2π r_{i,p}) + 1/(jλ)) · exp(j2π r_{i,p}/λ)
wherein λ represents the wavelength of light, r_{i,p} represents the Euclidean distance between pixel i of layer l and pixel p of layer l+1, and Δz represents the grating layer spacing.
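As an illustration of step 2, a minimal numerical sketch of the transmission-coefficient calculation is given below, assuming the Rayleigh-Sommerfeld form above and the embodiment's parameters (5 μm pixels, 10.6 μm wavelength, 300 μm layer spacing); the function name and layout are illustrative, not part of the invention:

```python
import numpy as np

def rs_coefficients(n_side=28, pixel=5e-6, wavelength=10.6e-6, dz=300e-6):
    """Rayleigh-Sommerfeld transmission coefficients between two layers.

    Returns a (784, 784) complex matrix: w[i, p] couples pixel i of
    layer l to pixel p of layer l+1.
    """
    coords = (np.arange(n_side) - n_side / 2) * pixel
    xx, yy = np.meshgrid(coords, coords)
    x, y = xx.ravel(), yy.ravel()
    # Euclidean distance r between every pixel pair of the two layers
    r = np.sqrt((x[:, None] - x[None, :]) ** 2 +
                (y[:, None] - y[None, :]) ** 2 + dz ** 2)
    return (dz / r ** 2) * (1 / (2 * np.pi * r) + 1 / (1j * wavelength)) \
        * np.exp(1j * 2 * np.pi * r / wavelength)

w = rs_coefficients()
print(w.shape)  # (784, 784)
```

Because the coefficients depend only on geometry and wavelength, this matrix is computed once and reused for every forward pass.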
and step 3: after the light wave transmission coefficient is obtained, the output function of each pixel is expressed as:
wherein the content of the first and second substances,defined as the sum of diffracted light waves from all pixels of the l +1 layer to the ith pixel of the l layer,defined as the transmission coefficient, α represents the amplitude, α in the present invention is 1 (in an ideal state), and Φ represents the phase. Optical transmission between adjacent layers of the network, e.g.As shown in fig. 2.
The output value of the i-th pixel on the l-th layer is passed through the activation function f(·) to obtain a nonlinear mapping, wherein the activation function f(x) is expressed as:
f(x) = x for x > 0,  f(x) = x/a for x ≤ 0
with a = 100, corresponding to the negative activation coefficient 0.01.
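The description does not spell out how the LReLU acts on a complex-valued pixel output; one plausible reading, sketched below, applies the function separately to the real and imaginary parts, with the embodiment's negative slope of 0.01 (this separate-parts treatment is an assumption, consistent with the real/imaginary decomposition used elsewhere in the description):

```python
import numpy as np

def leaky_relu_complex(x, neg_slope=0.01):
    """LReLU with negative slope 1/a = 0.01, applied separately to the
    real and imaginary parts of the complex field (an assumed choice)."""
    def lrelu(v):
        return np.where(v > 0, v, neg_slope * v)
    return lrelu(x.real) + 1j * lrelu(x.imag)

z = np.array([1.0 + 1.0j, -2.0 - 0.5j])
out = leaky_relu_complex(z)
# Positive parts pass through; negative parts are scaled by 0.01,
# e.g. -2.0 - 0.5j becomes -0.02 - 0.005j.
```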
Because the values are complex, the invention adopts a complex calculation method; the process is expressed as:
W·X = (W_r·X_r − W_i·X_i) + j(W_i·X_r + W_r·X_i)
wherein W_r and W_i are the real and imaginary parts of the light wave transmission coefficient, and X_r and X_i are the real and imaginary parts of the propagated field term t·m.
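The real/imaginary decomposition of the complex product can be checked directly against native complex arithmetic; the helper below is illustrative:

```python
def complex_mul(wr, wi, xr, xi):
    """Complex multiplication on separate real/imaginary parts, as in
    the description: W*X = (Wr*Xr - Wi*Xi) + j(Wi*Xr + Wr*Xi)."""
    return wr * xr - wi * xi, wi * xr + wr * xi

re, im = complex_mul(1.0, 2.0, 3.0, 4.0)
print(re, im)  # -5.0 10.0, since (1+2j)*(3+4j) = -5+10j
```

Splitting the multiplication this way is what allows the complex forward pass to be expressed with purely real-valued arithmetic during training.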
And 4, step 4: on the basis of the formula, an all-optical nonlinear diffraction deep neural network forward propagation model is constructed, and the overall network structure schematic diagram is shown in fig. 3.
where o denotes the input layer, k denotes the input-layer pixels, and p denotes the pixels on a hidden layer; m_k^o is defined as the input mode, here the pixel values of the input image, and the coefficients w are those calculated in step 2.
The hidden layers are represented as:
n_p^{l+1} = f(t_p^{l+1} · Σ_i w_{i,p}^l · n_i^l),  l = 1, …, M
wherein i indexes the pixels of layer l, p indexes the pixels of layer l+1, and M represents the total number of hidden layers; in the invention M = 5.
And step 5: the loss function of the all-optical nonlinear diffractive deep neural network is expressed as the cross entropy
L = −Σ_k y_k·log(ŷ_k)
wherein y_k is the target value and ŷ_k is the estimate, subject to Σ_k ŷ_k = 1. The phase parameter values are then optimized by a stochastic gradient descent algorithm.
Step six: the embodiment uses the MNIST handwritten digit image set as the test data; the set comprises 70,000 images of handwritten digits 0-9, each of size 28 × 28, of which 55,000 images form the training set, 5,000 the validation set and 10,000 the test set; sample images are shown in FIG. 4.
The method comprises the following specific steps:
(1) processing three image data sets in the MNIST, and converting two-dimensional image data into one-dimensional data;
(2) setting relevant parameters of the whole calculation model, wherein the specific values are described above;
(3) the all-optical nonlinear diffractive deep neural network model was trained using the 55,000 training images, and the trained model was then validated using the 5,000-image validation set. The training parameters were: 500 training periods (epochs), with a batch of 100 samples in each period;
(4) the model trained in (3) was tested using the 10,000 test images; FIG. 5 and FIG. 6 show the overall test accuracy and the recognition accuracy for each class, respectively. As can be observed from FIGS. 5 and 6, the all-optical nonlinear diffractive deep neural network constructed by the invention outperforms the existing all-optical diffractive deep neural network.
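The preprocessing of step (1) — flattening each 28 × 28 image into a 784-element vector — can be sketched as follows; random data stands in for MNIST here, and the scaling to [0, 1] is an assumption, as the description does not state it:

```python
import numpy as np

def flatten_images(images):
    """Convert a batch of 28x28 images to one-dimensional 784-element
    vectors, scaled to [0, 1] (the scaling is an assumed choice)."""
    batch = np.asarray(images, dtype=np.float64) / 255.0
    return batch.reshape(len(batch), -1)

# Hypothetical stand-in for an MNIST batch of 100 images:
fake_batch = np.random.randint(0, 256, size=(100, 28, 28))
flat = flatten_images(fake_batch)
print(flat.shape)  # (100, 784)
```

The flattened vectors serve directly as the input mode m_k^o of the network's input layer, one value per input-layer pixel.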
The beneficial effects of the invention are as follows: a leaky rectified linear (Leaky ReLU, LReLU) activation function is added to the all-optical diffractive deep neural network, and cross entropy is adopted as the objective function, so that an all-optical nonlinear diffractive deep neural network model is constructed, substantially improving the efficiency and accuracy of the recognition process; the proposed method has stronger nonlinear data separability, high classification accuracy and simple, convenient calculation.
Other embodiments of the invention will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the invention being indicated by the following claims.
Claims (8)
1. An image recognition method of an all-optical nonlinear diffraction neural network is characterized by comprising the following steps of:
a, setting the physical parameters of the all-optical nonlinear diffractive deep neural network, wherein the parameters comprise the wavelength of the light source used, the pixel size, the number of pixels on each layer and the grating layer spacing;
b, calculating the light wave transmission coefficient of the light path between the adjacent layers of the neural network by using an optical Rayleigh-Sophia diffraction formula;
step c, constructing a pixel output function;
d, applying the LReLU activation function to the pixel output values to complete the nonlinear mapping;
step e, establishing an all-optical nonlinear diffraction deep neural network model;
and f, training and testing a model on an MNIST handwritten digital image database to obtain a test result.
2. The identification method according to claim 1, wherein in the parameters, the light source wavelength is 10.6 μm, the pixel size is 5 μm, the grating layer pitch is 300 μm, and the number of pixels per layer is 784.
4. The identification method of claim 1, wherein after obtaining the light wave transmission coefficients, each pixel output function is represented as: m_p^{l+1} = t_p^{l+1} · Σ_i w_{i,p}^l · m_i^l, wherein the sum is the accumulation of the diffracted light waves arriving at pixel p of layer l+1 from all pixels of layer l, t_p^{l+1} = α·exp(jφ) is the transmission coefficient, α = 1 represents the amplitude, and φ represents the phase.
5. The identification method according to claim 1, wherein the output value of the i-th pixel on the l-th layer is passed through the activation function f(·) to obtain a nonlinear mapping, wherein the activation function f(x) is expressed as: f(x) = x for x > 0 and f(x) = x/a for x ≤ 0, wherein a is a fixed parameter in the interval (1, +∞).
7. The recognition method according to claim 1, wherein the MNIST handwritten digit image set is used as the test data, the set comprising 70000 images of handwritten digits 0-9, each of size 28 × 28, wherein 55000 images form the training set, 5000 the verification set and 10000 the test set.
8. The testing method of claim 7, wherein the testing method comprises the steps of:
processing three image data sets in MNIST, and converting two-dimensional image data into one-dimensional data;
and step two, training the all-optical nonlinear diffractive deep neural network model with the 55000 training images, then verifying the trained model with the 5000-image verification set, the training parameters being: 500 training periods, with a batch of 100 samples in each period;
and step three, testing the training model by using 10000 testing images.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011456487.6A CN113033796A (en) | 2020-12-11 | 2020-12-11 | Image identification method of all-optical nonlinear diffraction neural network |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011456487.6A CN113033796A (en) | 2020-12-11 | 2020-12-11 | Image identification method of all-optical nonlinear diffraction neural network |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113033796A true CN113033796A (en) | 2021-06-25 |
Family
ID=76459216
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011456487.6A Pending CN113033796A (en) | 2020-12-11 | 2020-12-11 | Image identification method of all-optical nonlinear diffraction neural network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113033796A (en) |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113822424A (en) * | 2021-07-27 | 2021-12-21 | 湖南大学 | All-optical diffraction neural network system based on super-structured surface |
CN115113508A (en) * | 2022-05-07 | 2022-09-27 | 四川大学 | Holographic display speckle suppression method based on optical diffraction neural network |
CN115358381A (en) * | 2022-09-01 | 2022-11-18 | 清华大学 | Optical full adder and neural network design method, device and medium thereof |
CN116957031A (en) * | 2023-07-24 | 2023-10-27 | 浙江大学 | Photoelectric computer based on optical multi-neuron activation function module |
CN117521746A (en) * | 2024-01-04 | 2024-02-06 | 武汉大学 | Quantized optical diffraction neural network system and training method thereof |
CN116957031B (en) * | 2023-07-24 | 2024-05-24 | 浙江大学 | Photoelectric computer based on optical multi-neuron activation function module |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106682694A (en) * | 2016-12-27 | 2017-05-17 | 复旦大学 | Sensitive image identification method based on depth learning |
CN110287985A (en) * | 2019-05-15 | 2019-09-27 | 江苏大学 | A kind of deep neural network image-recognizing method based on the primary topology with Mutation Particle Swarm Optimizer |
- 2020-12-11: Application CN202011456487.6A filed (CN), published as CN113033796A; status Pending
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106682694A (en) * | 2016-12-27 | 2017-05-17 | 复旦大学 | Sensitive image identification method based on depth learning |
CN110287985A (en) * | 2019-05-15 | 2019-09-27 | 江苏大学 | A kind of deep neural network image-recognizing method based on the primary topology with Mutation Particle Swarm Optimizer |
Non-Patent Citations (3)
Title |
---|
MARIO MISCUGLIO 等: "ALL-OPTICAL NONLINEAR ACTIVATION FUNCTION FOR PHOTONIC NEURAL NETWORKS", 《OPTICAL MATERIALS EXPRESS》 * |
XING LIN等: "All-optical machine learning using diffractive deep neural networks", 《SCIENCE》 * |
陈绵书等: "基于卷积神经网络的多标签图像分类", 《吉林大学学报》 * |
Cited By (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113822424A (en) * | 2021-07-27 | 2021-12-21 | 湖南大学 | All-optical diffraction neural network system based on super-structured surface |
CN113822424B (en) * | 2021-07-27 | 2023-10-20 | 湖南大学 | All-optical diffraction neural network system based on super-structured surface |
CN115113508A (en) * | 2022-05-07 | 2022-09-27 | 四川大学 | Holographic display speckle suppression method based on optical diffraction neural network |
CN115113508B (en) * | 2022-05-07 | 2023-11-28 | 四川大学 | Holographic display speckle suppression method based on optical diffraction neural network |
CN115358381A (en) * | 2022-09-01 | 2022-11-18 | 清华大学 | Optical full adder and neural network design method, device and medium thereof |
CN115358381B (en) * | 2022-09-01 | 2024-05-31 | 清华大学 | Optical full adder and neural network design method, equipment and medium thereof |
CN116957031A (en) * | 2023-07-24 | 2023-10-27 | 浙江大学 | Photoelectric computer based on optical multi-neuron activation function module |
CN116957031B (en) * | 2023-07-24 | 2024-05-24 | 浙江大学 | Photoelectric computer based on optical multi-neuron activation function module |
CN117521746A (en) * | 2024-01-04 | 2024-02-06 | 武汉大学 | Quantized optical diffraction neural network system and training method thereof |
CN117521746B (en) * | 2024-01-04 | 2024-03-26 | 武汉大学 | Quantized optical diffraction neural network system and training method thereof |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20210625 |