CN113033796A - Image identification method of all-optical nonlinear diffraction neural network - Google Patents

Image identification method of all-optical nonlinear diffraction neural network

Info

Publication number
CN113033796A
CN113033796A
Authority
CN
China
Prior art keywords
neural network
diffraction
optical
layer
training
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011456487.6A
Other languages
Chinese (zh)
Inventor
于明鑫
祝连庆
董明利
张东亮
庄炜
张旭
夏嘉斌
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Information Science and Technology University
Original Assignee
Beijing Information Science and Technology University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Information Science and Technology University filed Critical Beijing Information Science and Technology University
Priority to CN202011456487.6A priority Critical patent/CN113033796A/en
Publication of CN113033796A publication Critical patent/CN113033796A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/06Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
    • G06N3/067Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using optical means
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Neurology (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to an image recognition method for an all-optical nonlinear diffraction deep neural network. A nonlinear diffraction deep neural network based on the leaky rectified linear unit (Leaky ReLU, LReLU) activation function is designed for image recognition. First, the physical parameters of the diffraction deep neural network are set, including the light source wavelength, the pixel size, the number of pixels per layer, and the grating layer spacing; the light-wave transmission coefficients are computed with the optical Rayleigh-Sommerfeld diffraction formula, from which a single-pixel output function is established. Each pixel output value is then fed into an LReLU activation unit to form a nonlinear mapping. Finally, a complete forward-propagation model of the all-optical nonlinear diffraction deep neural network is established, and the network parameters are optimized with a stochastic gradient descent algorithm. Compared with the existing all-optical diffraction deep neural network, the proposed method offers stronger nonlinear data separability, high classification accuracy, and simple, convenient computation.

Description

Image identification method of all-optical nonlinear diffraction neural network
Technical Field
The invention belongs to the field of optics and deep learning, and particularly relates to an image recognition method of an all-optical nonlinear diffraction neural network.
Background
In recent years, deep neural networks, an important branch of machine learning, have made breakthrough progress in the field of image recognition. However, deep neural networks are more computationally expensive than traditional machine learning algorithms. Many companies, research institutes, and universities at home and abroad have therefore turned to different physical mechanisms to realize deep learning algorithms, such as FPGAs, quantum computing, and photonic computing. Among these studies, the all-optical diffraction deep neural network framework proposed by the research group at the University of California, Los Angeles achieved high image recognition accuracy in simulation, with the results published as a paper in the journal Science. That method uses the optical Rayleigh-Sommerfeld diffraction formula to establish a single-neuron output function and construct a forward-propagation inference model. In the forward-propagation path, the neural layers are connected by adjusting the phase and amplitude of light; the greatest advantages are low computational power consumption and high speed, without the limitation of the von Neumann bottleneck.
However, the current diffraction deep neural network model does not realize nonlinearity: the mapping it expresses is only linear, so it struggles to represent the nonlinear data found in practical applications. Compared with a traditional (electronic) deep neural network, its nonlinear data separability is weak, leaving considerable room for improvement in image recognition accuracy. In view of these defects in the prior art, the invention adopts a leaky rectified linear unit (Leaky ReLU, LReLU) as the neuron activation function of a diffraction deep neural network and constructs a nonlinear diffraction deep neural network model for the image recognition task.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provides an image recognition method for an all-optical nonlinear diffraction neural network. The proposed method offers stronger nonlinear data separability, high classification accuracy, and simple, convenient computation, and improves the applicability of the device.
In order to solve the technical problems, the invention adopts the following technical scheme: an image recognition method of an all-optical nonlinear diffraction neural network, the method comprising the following steps: step a, setting the physical parameters of the all-optical nonlinear diffraction deep neural network, the parameters comprising the wavelength of the light source used, the pixel size, the number of pixels per layer, and the grating layer spacing; step b, calculating the light-wave transmission coefficient of the optical path between adjacent layers of the neural network using the optical Rayleigh-Sommerfeld diffraction formula; step c, constructing the pixel output function; step d, applying the LReLU activation function to the pixel output values to complete the nonlinear mapping; step e, establishing the all-optical nonlinear diffraction deep neural network model; and step f, training and testing the model on the MNIST handwritten digit image database to obtain the test result.
Preferably, in the parameters, the wavelength of the light source is 10.6 μm, the size of the pixels is 5 μm, the grating layer spacing is 300 μm, and the number of pixels in each layer is 784.
Preferably, the optical Rayleigh-Sommerfeld diffraction formula is:

$$w_{i,p}^{l} = \frac{z}{r_{i,p}^{2}}\left(\frac{1}{2\pi r_{i,p}} + \frac{1}{j\lambda}\right)\exp\!\left(\frac{j2\pi r_{i,p}}{\lambda}\right)$$

where $\lambda$ represents the wavelength of light, $z$ the grating layer spacing, and $r_{i,p}$ the Euclidean distance between pixel $i$ on layer $l$ and pixel $p$ on layer $l+1$:

$$r_{i,p} = \sqrt{(x_i - x_p)^2 + (y_i - y_p)^2 + z^2}$$
preferably, after obtaining the optical wave transmission coefficient, each pixel output function is expressed as:
Figure BDA0002829489110000024
wherein the content of the first and second substances,
Figure BDA0002829489110000025
expressed as the sum of diffracted light waves of all pixels of the l +1 layer to the ith pixel of the l layer,
Figure BDA0002829489110000026
expressed as the transmission coefficient, α ═ 1 represents the amplitude, and Φ represents the phase.
Preferably, the output value $s_{p}^{l+1}$ of pixel $p$ on layer $l+1$ is passed through the activation function $f(\cdot)$ to obtain the nonlinear mapping $n_{p}^{l+1} = f(s_{p}^{l+1})$, where the activation function is expressed as:

$$f(x) = \begin{cases} x, & x \ge 0 \\ x/a, & x < 0 \end{cases}$$

where $a$ is a fixed parameter in the interval $(1, +\infty)$; in the invention $a = 100$ (a negative-half slope of 0.01).
Preferably, in step e, the loss function of the all-optical nonlinear diffraction deep neural network is expressed as:

$$L = -\sum_{k=1}^{K} g_{k} \log \hat{y}_{k}$$

where $g_{k}$ is the target value and $\hat{y}_{k}$ is the estimate, under the constraint:

$$\sum_{k=1}^{K} \hat{y}_{k} = 1$$
preferably, an MNIST handwritten digit image set is used as a test image, and 70000 images with the size of 28 x 28 handwritten digits of 0-9 are taken as the data image set, wherein 55000 images are used as a training set, 5000 images are used as a verification set, and 10000 images are used as a test set.
Preferably, the test method comprises the following steps: step 1, processing the three image data sets in MNIST and converting the two-dimensional image data into one-dimensional data; step 2, training the all-optical nonlinear diffraction deep neural network model using the 55,000 training images and then validating the trained model using the 5,000-image validation set, the training parameters being 500 training epochs with batches of 100 samples per epoch; and step 3, testing the trained model using the 10,000 test images.
Compared with the prior art, the invention has the beneficial effects that:
according to the invention, a leakage (Leaky ReLU, LReLU) modified linear activation function is added into the all-optical diffraction depth neural network, and cross entropy is adopted as a target function, so that an all-optical nonlinear diffraction depth neural network model is creatively constructed, and the efficiency and accuracy of the identification process are substantially and remarkably improved; the method provided by the invention has stronger non-linear data separability, high classification precision and simple and convenient calculation.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention, as claimed.
Drawings
Further objects, features and advantages of the present invention will become apparent from the following description of embodiments of the invention, with reference to the accompanying drawings, in which:
FIG. 1 schematically shows a process flow diagram of the present invention;
FIG. 2 is a schematic diagram illustrating optical transmission between adjacent layers of a network according to the present invention;
FIG. 3 is a schematic diagram showing the structure of the all-optical diffraction deep neural network of the present invention;
FIG. 4 schematically illustrates exemplary MNIST handwritten digit images 0-9 used by the present invention;
FIG. 5 is a schematic diagram showing the test accuracy of the present invention on a MNIST handwritten digital image collection;
fig. 6 schematically shows a test accuracy chart of each digit of the MNIST handwritten digit image set according to the present invention.
Detailed Description
The objects and functions of the present invention and methods for accomplishing the same will be apparent by reference to the exemplary embodiments. However, the present invention is not limited to the exemplary embodiments disclosed below; it can be implemented in different forms. The nature of the description is merely to assist those skilled in the relevant art in a comprehensive understanding of the specific details of the invention.
Hereinafter, embodiments of the present invention will be described with reference to the accompanying drawings. In the drawings, the same reference numerals denote the same or similar parts, or the same or similar steps.
The invention aims to remedy the defect that the existing all-optical diffraction deep neural network model cannot realize nonlinearity in image recognition, and for this purpose provides an image recognition method based on an all-optical nonlinear diffraction deep neural network.
In order to achieve the purpose, the technical scheme adopted by the invention is as follows:
an image recognition method of an all-optical nonlinear diffraction deep neural network comprises the following steps:
step 1, setting physical parameters of an all-optical nonlinear diffraction depth neural network, specifically including the wavelength of a used light source, the size of pixels, the number of pixels on each layer and the distance between grating layers;
step 2, calculating the light wave transmission coefficient of the light path between adjacent layers of the network by using an optical Rayleigh-Sommerfeld diffraction formula;
step 3, taking as the single-pixel output value the product of the light-wave contributions from all pixels on the upper layer and the transmission coefficient of the receiving pixel on the lower layer, and then applying the leaky rectified linear unit (Leaky ReLU, LReLU) activation function to obtain a nonlinear mapping;
step 4, constructing an all-optical nonlinear diffraction forward propagation model, namely giving a calculation formula of a network input layer, a hidden layer and an output layer;
step 5, determining parameters to be optimized of the all-optical nonlinear diffraction neural network, and optimizing by using a random gradient descent algorithm;
and step 6, testing the embodiment: first, a training and validation image set is acquired from the MNIST handwritten digit database and the all-optical nonlinear diffraction deep neural network is trained; then, a test image set is acquired from the MNIST database and input into the trained all-optical nonlinear diffraction deep neural network, and the recognition result class of each handwritten digit image under test is obtained from the minimum deviation between the network output value and the target value.
In the method, in the step 1, a carbon dioxide laser with the wavelength of 10.6 μm is used as a light source, the pixel size is 5 μm, and the grating layer spacing is 300 μm.
In the above method, in step 3, the light-wave transmission coefficients of all pixels in the previous layer and the transmission coefficients of the receiving pixels in the next layer are complex numbers; the multiplication is carried out separately on the real and imaginary parts, which are then combined into a new complex value.
In the above method, in step 3, the negative activation coefficient of the LReLU activation function is 0.01, but is not limited to this coefficient value.
In the above method, in step 4, the output value of each pixel in the hidden layers is passed to the LReLU activation function.
In the above method, in step 4, the output layer uses a Softmax function to obtain probability values of the images belonging to the respective categories.
In the above method, in step 5, the parameter to be optimized is the phase value (phi) of the pixel, and the cross entropy is used as the objective function.
The invention will be further illustrated with reference to the following examples and drawings:
an image recognition method of an all-optical nonlinear diffraction deep neural network is shown in fig. 1, and the specific steps are described as follows:
step 1: a carbon dioxide laser with the wavelength of 10.6 mu m is selected as a system light source, the pixel size is 5 mu m, the grating layer interval is 300 mu m, and the number of pixels in each layer is 784(28 multiplied by 28).
Step 2: calculate the light-wave transmission coefficient of the optical path between adjacent layers of the network, using the optical Rayleigh-Sommerfeld diffraction formula:

$$w_{i,p}^{l} = \frac{z}{r_{i,p}^{2}}\left(\frac{1}{2\pi r_{i,p}} + \frac{1}{j\lambda}\right)\exp\!\left(\frac{j2\pi r_{i,p}}{\lambda}\right)$$

where $\lambda$ represents the wavelength of light, $z$ the grating layer spacing, and $r_{i,p}$ the Euclidean distance between pixel $i$ on layer $l$ and pixel $p$ on layer $l+1$:

$$r_{i,p} = \sqrt{(x_i - x_p)^2 + (y_i - y_p)^2 + z^2}$$
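As an illustrative sketch (not part of the original disclosure), the transmission-coefficient computation of step 2 can be written in NumPy; the square pixel grid and the vectorized layout are assumptions made for the example.

```python
import numpy as np

# Illustrative sketch (not the patent's code): Rayleigh-Sommerfeld
# light-wave transmission coefficients w between two adjacent 28x28
# layers, using the parameters of step 1. The square pixel grid and
# vectorized layout are assumptions.
wavelength = 10.6e-6   # CO2 laser wavelength (m)
pixel = 5e-6           # pixel size (m)
z = 300e-6             # grating layer spacing (m)
n_side = 28            # 28 x 28 = 784 pixels per layer

# (x, y) centre coordinates of every pixel on one layer
ii, jj = np.meshgrid(np.arange(n_side), np.arange(n_side), indexing="ij")
coords = pixel * np.stack([ii, jj], axis=-1).reshape(-1, 2)   # (784, 2)

# Euclidean distance r between pixel i on layer l and pixel p on layer l+1
dx = coords[:, None, 0] - coords[None, :, 0]
dy = coords[:, None, 1] - coords[None, :, 1]
r = np.sqrt(dx ** 2 + dy ** 2 + z ** 2)                       # (784, 784)

# w = z/r^2 * (1/(2*pi*r) + 1/(j*lambda)) * exp(j*2*pi*r/lambda)
w = (z / r ** 2) * (1.0 / (2 * np.pi * r) + 1.0 / (1j * wavelength)) \
    * np.exp(1j * 2 * np.pi * r / wavelength)
```

Since the inter-layer spacing `z` is strictly positive, `r` never vanishes and the coefficients are finite for every pixel pair.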
Step 3: after the light-wave transmission coefficients are obtained, the output function of each pixel is expressed as:

$$s_{p}^{l+1} = t_{p}^{l+1} \cdot m_{p}^{l+1}, \qquad m_{p}^{l+1} = \sum_{i} w_{i,p}^{l}\, n_{i}^{l}$$

where $m_{p}^{l+1}$ is defined as the sum of the diffracted light waves from all pixels of layer $l$ arriving at pixel $p$ of layer $l+1$, and $t_{p}^{l+1} = \alpha\exp(j\phi)$ is defined as the transmission coefficient, where $\alpha$ represents the amplitude ($\alpha = 1$ in the present invention, the ideal case) and $\phi$ represents the phase. The optical transmission between adjacent layers of the network is shown in Fig. 2.
The output value of pixel $p$ on layer $l+1$ is passed through the activation function $f(\cdot)$ to obtain the nonlinear mapping $n_{p}^{l+1} = f(s_{p}^{l+1})$, where the activation function is expressed as:

$$f(x) = \begin{cases} x, & x \ge 0 \\ x/a, & x < 0 \end{cases}$$

where $a$ is a fixed parameter in the interval $(1, +\infty)$; in the invention $a = 100$.
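A minimal sketch of this leaky ReLU, with $a = 100$ (negative slope 0.01); applying it to the real and imaginary parts separately is our assumption, since the text does not spell out the complex-valued case.

```python
import numpy as np

# Illustrative leaky ReLU: identity for x >= 0 and slope 1/a for x < 0,
# with a = 100 (negative slope 0.01). Handling complex inputs by
# activating real and imaginary parts separately is an assumption.
def lrelu(x, a=100.0):
    x = np.asarray(x)
    if np.iscomplexobj(x):
        return lrelu(x.real, a) + 1j * lrelu(x.imag, a)
    return np.where(x >= 0, x, x / a)
```

For example, `lrelu(np.array([2.0, -2.0]))` gives `[2.0, -0.02]`.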
Because the values are in complex form, the invention adopts a complex calculation method, expressed as:

$$W \cdot X = (W_{r} \cdot X_{r} - W_{i} \cdot X_{i}) + j\,(W_{i} \cdot X_{r} + W_{r} \cdot X_{i})$$

where $W_{r}$ and $W_{i}$ are the real and imaginary parts of the light-wave transmission coefficient, and $X_{r}$ and $X_{i}$ are the real and imaginary parts of $t \cdot m$.
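The split real/imaginary product above can be checked against Python's native complex multiplication; a small sketch with made-up operands:

```python
import numpy as np

# Illustrative check of the part-by-part complex product against
# Python's native complex multiplication.
def complex_mul(wr, wi, xr, xi):
    """(Wr + j*Wi) * (Xr + j*Xi), computed part by part."""
    return wr * xr - wi * xi, wi * xr + wr * xi

W, X = 1.0 + 2.0j, 3.0 - 1.0j
re, im = complex_mul(W.real, W.imag, X.real, X.imag)
# (1+2j)*(3-1j) = 5+5j, so re = 5.0 and im = 5.0
```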
Step 4: on the basis of the above formulas, the all-optical nonlinear diffraction deep neural network forward-propagation model is constructed; the overall network structure is shown schematically in Fig. 3.
The input layer is expressed as:

$$n_{p}^{1} = \sum_{k} w_{k,p}^{o}\, x_{k}^{o}$$

where $o$ denotes the input layer, $k$ denotes an input-layer pixel, $p$ denotes a pixel on the first hidden layer, $x_{k}^{o}$ is defined as the input pattern (here the pixel values of the image), and $w_{k,p}^{o}$ is calculated as in step 2.
The hidden layers are expressed as:

$$n_{p}^{l+1} = f\!\left(t_{p}^{l+1} \sum_{i} w_{i,p}^{l}\, n_{i}^{l}\right)$$

where $i$ indexes the pixels of layer $l$ and $p$ the pixels of layer $l+1$.
The output layer is expressed as:

$$\hat{y}_{k} = \operatorname{softmax}\!\left(\sum_{i} w_{i,k}^{M}\, n_{i}^{M}\right)$$

where $M$ denotes the total number of hidden layers; $M = 5$ in the invention.
Step 5: the loss function of the all-optical nonlinear diffraction deep neural network is expressed as:

$$L = -\sum_{k=1}^{K} g_{k} \log \hat{y}_{k}$$

where $g_{k}$ is the target value and $\hat{y}_{k}$ is the estimate, under the constraint:

$$\sum_{k=1}^{K} \hat{y}_{k} = 1$$
The phase parameter values are then optimized by a stochastic gradient descent algorithm.
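A minimal sketch of the cross-entropy objective of step 5 (the loss alone; gradient descent on the phases is not shown, and the small `eps` guard is our addition for numerical safety):

```python
import numpy as np

# Illustrative cross-entropy loss: L = -sum_k g_k * log(y_hat_k), with
# y_hat a softmax output so that sum_k y_hat_k = 1. The eps term is a
# numerical guard added for the sketch.
def cross_entropy(g, y_hat, eps=1e-12):
    return -np.sum(g * np.log(y_hat + eps))

g = np.zeros(10)
g[3] = 1.0                      # one-hot target: digit 3
y_hat = np.full(10, 0.1)        # uniform prediction, sums to 1
loss = cross_entropy(g, y_hat)  # -log(0.1) ≈ 2.3026
```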
Step 6: the embodiment uses the MNIST handwritten digit image set as the test data. The data set comprises 70,000 images of the handwritten digits 0-9, each of size 28 × 28, of which 55,000 images form the training set, 5,000 the validation set, and 10,000 the test set; sample images are shown in FIG. 4.
The method comprises the following specific steps:
(1) processing three image data sets in the MNIST, and converting two-dimensional image data into one-dimensional data;
(2) setting relevant parameters of the whole calculation model, wherein the specific values are described above;
(3) train the all-optical nonlinear diffraction deep neural network model using the 55,000 training images, then validate the trained model using the 5,000-image validation set. The training parameters are: 500 training epochs (epoch), with batches of 100 samples per epoch;
(4) test the model trained in (3) using the 10,000 test images; FIG. 5 and FIG. 6 show the overall image accuracy and the recognition accuracy for each class, respectively. As can be observed from FIG. 5 and FIG. 6, the all-optical nonlinear diffraction deep neural network constructed by the invention outperforms the existing all-optical diffraction deep neural network.
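Step (1) above, converting the two-dimensional images to one-dimensional vectors and splitting the data, can be sketched as follows; synthetic uint8 arrays stand in for the actual MNIST database:

```python
import numpy as np

# Illustrative sketch of step (1): flattening 28x28 images into
# 784-element vectors and splitting 70,000 samples into the
# 55,000 / 5,000 / 10,000 sets. Synthetic uint8 data stands in for
# MNIST; a real run would load the actual database (and typically
# scale pixel values to [0, 1]).
rng = np.random.default_rng(0)
images = rng.integers(0, 256, size=(70_000, 28, 28), dtype=np.uint8)
flat = images.reshape(len(images), -1)     # two-dimensional -> one-dimensional

train_x = flat[:55_000]        # training set
val_x = flat[55_000:60_000]    # validation set
test_x = flat[60_000:]         # test set
```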
The invention has the following beneficial effects: a leaky rectified linear unit (Leaky ReLU, LReLU) activation function is added to the all-optical diffraction deep neural network, and cross entropy is adopted as the objective function, so that an all-optical nonlinear diffraction deep neural network model is constructed; the efficiency and accuracy of the recognition process are thereby substantially improved. The proposed method offers stronger nonlinear data separability, high classification accuracy, and simple, convenient computation.
Other embodiments of the invention will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the invention being indicated by the following claims.

Claims (8)

1. An image recognition method of an all-optical nonlinear diffraction neural network is characterized by comprising the following steps of:
a, setting physical parameters of an all-optical nonlinear diffraction depth neural network, wherein the parameters comprise the wavelength of a used light source, the size of pixels, the number of pixels on each layer and the distance between grating layers;
b, calculating the light-wave transmission coefficient of the optical path between adjacent layers of the neural network using an optical Rayleigh-Sommerfeld diffraction formula;
step c, constructing a pixel output function;
d, applying an LReLU activation function to the pixel output values to complete the nonlinear mapping;
step e, establishing an all-optical nonlinear diffraction deep neural network model;
and f, training and testing a model on an MNIST handwritten digital image database to obtain a test result.
2. The identification method according to claim 1, wherein in the parameters, the light source wavelength is 10.6 μm, the pixel size is 5 μm, the grating layer pitch is 300 μm, and the number of pixels per layer is 784.
3. The identification method according to claim 1, wherein the optical Rayleigh-Sommerfeld diffraction formula is:

$$w_{i,p}^{l} = \frac{z}{r_{i,p}^{2}}\left(\frac{1}{2\pi r_{i,p}} + \frac{1}{j\lambda}\right)\exp\!\left(\frac{j2\pi r_{i,p}}{\lambda}\right)$$

where $\lambda$ represents the wavelength of light, $z$ the grating layer spacing, and $r_{i,p}$ the Euclidean distance between pixel $i$ on layer $l$ and pixel $p$ on layer $l+1$:

$$r_{i,p} = \sqrt{(x_i - x_p)^2 + (y_i - y_p)^2 + z^2}$$
4. The identification method according to claim 1, wherein after the light-wave transmission coefficients are obtained, each pixel output function is expressed as:

$$s_{p}^{l+1} = t_{p}^{l+1} \cdot m_{p}^{l+1}, \qquad m_{p}^{l+1} = \sum_{i} w_{i,p}^{l}\, n_{i}^{l}$$

where $m_{p}^{l+1}$ denotes the sum of the diffracted light waves from all pixels of layer $l$ arriving at pixel $p$ of layer $l+1$, and $t_{p}^{l+1} = \alpha\exp(j\phi)$ denotes the transmission coefficient, with amplitude $\alpha = 1$ and phase $\phi$.
5. The identification method according to claim 1, wherein the output value $s_{p}^{l+1}$ of pixel $p$ on layer $l+1$ is passed through the activation function $f(\cdot)$ to obtain the nonlinear mapping $n_{p}^{l+1} = f(s_{p}^{l+1})$, where the activation function is expressed as:

$$f(x) = \begin{cases} x, & x \ge 0 \\ x/a, & x < 0 \end{cases}$$

where $a$ is a fixed parameter in the interval $(1, +\infty)$; in the invention $a = 100$.
6. The identification method according to claim 1, wherein in step e the loss function of the all-optical nonlinear diffraction deep neural network is expressed as:

$$L = -\sum_{k=1}^{K} g_{k} \log \hat{y}_{k}$$

where $g_{k}$ is the target value and $\hat{y}_{k}$ is the estimate, under the constraint:

$$\sum_{k=1}^{K} \hat{y}_{k} = 1$$
7. The recognition method according to claim 1, wherein an MNIST handwritten digit image set is used as the test data; the data set comprises 70,000 images of the handwritten digits 0-9, each of size 28 × 28, of which 55,000 images form the training set, 5,000 the validation set, and 10,000 the test set.
8. The testing method of claim 7, wherein the testing method comprises the steps of:
processing three image data sets in MNIST, and converting two-dimensional image data into one-dimensional data;
and step two, training the all-optical nonlinear diffraction deep neural network model using the 55,000 training images and then validating the trained model using the 5,000-image validation set, the training parameters being 500 training epochs with batches of 100 samples per epoch;
and step three, testing the training model by using 10000 testing images.
CN202011456487.6A 2020-12-11 2020-12-11 Image identification method of all-optical nonlinear diffraction neural network Pending CN113033796A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011456487.6A CN113033796A (en) 2020-12-11 2020-12-11 Image identification method of all-optical nonlinear diffraction neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011456487.6A CN113033796A (en) 2020-12-11 2020-12-11 Image identification method of all-optical nonlinear diffraction neural network

Publications (1)

Publication Number Publication Date
CN113033796A true CN113033796A (en) 2021-06-25

Family

ID=76459216

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011456487.6A Pending CN113033796A (en) 2020-12-11 2020-12-11 Image identification method of all-optical nonlinear diffraction neural network

Country Status (1)

Country Link
CN (1) CN113033796A (en)

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106682694A (en) * 2016-12-27 2017-05-17 复旦大学 Sensitive image identification method based on depth learning
CN110287985A (en) * 2019-05-15 2019-09-27 江苏大学 A kind of deep neural network image-recognizing method based on the primary topology with Mutation Particle Swarm Optimizer

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
MARIO MISCUGLIO 等: "ALL-OPTICAL NONLINEAR ACTIVATION FUNCTION FOR PHOTONIC NEURAL NETWORKS", 《OPTICAL MATERIALS EXPRESS》 *
XING LIN等: "All-optical machine learning using diffractive deep neural networks", 《SCIENCE》 *
陈绵书 et al.: "Multi-label image classification based on convolutional neural networks", Journal of Jilin University (《吉林大学学报》) *

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113822424A (en) * 2021-07-27 2021-12-21 湖南大学 All-optical diffraction neural network system based on super-structured surface
CN113822424B (en) * 2021-07-27 2023-10-20 湖南大学 All-optical diffraction neural network system based on super-structured surface
CN115113508A (en) * 2022-05-07 2022-09-27 四川大学 Holographic display speckle suppression method based on optical diffraction neural network
CN115113508B (en) * 2022-05-07 2023-11-28 四川大学 Holographic display speckle suppression method based on optical diffraction neural network
CN115358381A (en) * 2022-09-01 2022-11-18 清华大学 Optical full adder and neural network design method, device and medium thereof
CN115358381B (en) * 2022-09-01 2024-05-31 清华大学 Optical full adder and neural network design method, equipment and medium thereof
CN116957031A (en) * 2023-07-24 2023-10-27 浙江大学 Photoelectric computer based on optical multi-neuron activation function module
CN116957031B (en) * 2023-07-24 2024-05-24 浙江大学 Photoelectric computer based on optical multi-neuron activation function module
CN117521746A (en) * 2024-01-04 2024-02-06 武汉大学 Quantized optical diffraction neural network system and training method thereof
CN117521746B (en) * 2024-01-04 2024-03-26 武汉大学 Quantized optical diffraction neural network system and training method thereof

Similar Documents

Publication Publication Date Title
CN113033796A (en) Image identification method of all-optical nonlinear diffraction neural network
CN111160171B (en) Radiation source signal identification method combining two-domain multi-features
CN111523546B (en) Image semantic segmentation method, system and computer storage medium
CN110334804B (en) All-optical depth diffraction neural network system and method based on spatial partially coherent light
CN113673590B (en) Rain removing method, system and medium based on multi-scale hourglass dense connection network
CN112818969B (en) Knowledge distillation-based face pose estimation method and system
CN111582435A (en) Diffraction depth neural network system based on residual error network
CN113867307B (en) Spacecraft intelligent fault diagnosis method based on deep neural network
CN112699917A (en) Image identification method of nonlinear optical convolution neural network
CN111259917B (en) Image feature extraction method based on local neighbor component analysis
CN115545173A (en) Optical modulation neuron for signal processing and all-optical diffraction neural network method
CN111582468B (en) Photoelectric hybrid intelligent data generation and calculation system and method
Costarelli Sigmoidal functions approximation and applications
Zhou et al. MSAR‐DefogNet: Lightweight cloud removal network for high resolution remote sensing images based on multi scale convolution
Zhou et al. Online filter weakening and pruning for efficient convnets
Li et al. RGB-induced feature modulation network for hyperspectral image super-resolution
Finnveden et al. Understanding when spatial transformer networks do not support invariance, and what to do about it
CN109558880B (en) Contour detection method based on visual integral and local feature fusion
Jin et al. Poisson image denoising by piecewise principal component analysis and its application in single‐particle X‐ray diffraction imaging
CN117392065A (en) Cloud edge cooperative solar panel ash covering condition autonomous assessment method
CN116993639A (en) Visible light and infrared image fusion method based on structural re-parameterization
CN116630700A (en) Remote sensing image classification method based on introduction channel-space attention mechanism
CN112489012A (en) Neural network architecture method for CT image recognition
CN113628261A (en) Infrared and visible light image registration method in power inspection scene
CN102298775A (en) Super-resolution method and system for human face based on sample

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210625