CN110569965A - Neural network model optimization method and system based on ThLU function - Google Patents

Neural network model optimization method and system based on ThLU function

Info

Publication number
CN110569965A
Authority
CN
China
Prior art keywords
function
neural network
thlu
cifar
training
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910798586.3A
Other languages
Chinese (zh)
Inventor
刘坤华
陈龙
袁湛楠
谢玉婷
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sun Yat Sen University
National Sun Yat Sen University
Original Assignee
National Sun Yat Sen University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by National Sun Yat Sen University filed Critical National Sun Yat Sen University
Priority to CN201910798586.3A priority Critical patent/CN110569965A/en
Publication of CN110569965A publication Critical patent/CN110569965A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/045: Combinations of networks
    • G06N 3/048: Activation functions
    • G06N 3/08: Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a neural network model optimization method and system based on a ThLU function. Building on the facts that the positive half-axis of the ReLU function does not suffer from vanishing gradients and that the negative half-axis of the tanh function can reduce neuron death, a new activation function is proposed: a rectified linear unit based on the tanh function, which combines the negative half-axis of the tanh function with the positive half-axis of the ReLU function. The tanh function, the ELU function, the LReLU function, the ReLU function and the ThLU function are each verified through a VggNet-16 neural network architecture on the CIFAR-10 and CIFAR-100 data sets. Verification shows that the neural network model trained with the ThLU function achieves higher accuracy and lower loss, effectively resolving the neuron death phenomenon of the ReLU function; the ThLU function is thus a more efficient activation function.

Description

Neural network model optimization method and system based on ThLU function
Technical Field
The invention relates to the technical field of deep learning, and in particular to a neural network model optimization method and system based on a ThLU function.
Background
In recent years, deep learning has achieved rich research results in image recognition, image detection, speech recognition, lip reading and other areas, raising the level of artificial intelligence. Deep learning theory has attracted attention in part because of advances in network architectures, computer hardware, activation functions and optimization algorithms. Among these, the improvement of the activation function is an important driver of progress in deep learning theory. The activation function originates from logistic regression: in order to transform a linear net input into a nonlinear equation with good properties, the net input z is passed through the nonlinear logistic sigmoid function to obtain the conditional probability P(y = 1 | x), and such a nonlinear function is called an activation function. A mathematical definition of the activation function was given by Professor Bengio of the University of Montreal, Canada, in 2016: an activation function is a mapping h: R → R that is differentiable almost everywhere.
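For concreteness, the logistic-regression formulation described above can be written out as follows (a standard textbook statement, supplied here for illustration rather than quoted from the original text):

```latex
% Logistic sigmoid applied to the net input z = w^T x + b
\sigma(z) = \frac{1}{1 + e^{-z}}, \qquad
P(y = 1 \mid x) = \sigma\!\left(w^{\top} x + b\right)
```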
At present, the sigmoid function and the tanh function yield low accuracy in neural networks and are not well suited to deep neural network architectures. The ReLU function simulates the behavior of a biological neuron and can improve neural network performance: its positive half-axis has no vanishing gradient, but its negative half-axis suffers from neuron death, which degrades the neural network. These shortcomings limit the deep learning capability of neural networks.
Disclosure of Invention
In order to solve the problem in the prior art that neuron death in the activation function makes it difficult for a neural network to continue deep learning, the invention provides a neural network model optimization method and system based on the ThLU function, which improves the accuracy of the neural network model, reduces its loss, and strengthens the deep learning capability of the neural network.
In order to solve the above technical problems, the invention adopts the following technical scheme: a neural network model optimization method based on the ThLU function, comprising the following steps:
Step one: setting the activation function of a neural network model as the ThLU function, wherein the ThLU function is a rectified linear unit based on the tanh function, whose negative half-axis is derived from the negative half-axis of the tanh function and whose positive half-axis is derived from the positive half-axis of the ReLU function;
Step two: training neural network models for performance verification through the VggNet-16 neural network architecture, based on the CIFAR-10 data set and the CIFAR-100 data set respectively.
Preferably, the formula of the ThLU function is as follows:

$$\mathrm{ThLU}(x)=\begin{cases}x, & x \ge 0\\ \tanh(x), & x < 0\end{cases}$$
Preferably, obtaining neural network models for performance verification through VggNet-16 neural network architecture training, based on the CIFAR-10 data set and the CIFAR-100 data set, specifically comprises:
first, performing a test based on the VggNet-16 neural network architecture and the CIFAR-10 data set to obtain a training accuracy curve, a training loss curve, a verification accuracy curve and a verification loss curve;
and then performing a test based on the VggNet-16 neural network architecture and the CIFAR-100 data set to obtain a training accuracy curve, a training loss curve, a verification accuracy curve and a verification loss curve.
The invention also provides a neural network model optimization system based on the ThLU function for the above method, which comprises: a model optimization module, used for setting the activation function of the neural network model as the ThLU function, wherein the ThLU function is a rectified linear unit based on the tanh function, whose negative half-axis is derived from the negative half-axis of the tanh function and whose positive half-axis is derived from the positive half-axis of the ReLU function;
and a performance verification module, used for obtaining neural network models through VggNet-16 neural network architecture training and performing performance verification, based on the CIFAR-10 data set and the CIFAR-100 data set respectively.
Preferably, the formula of the ThLU function is as follows:

$$\mathrm{ThLU}(x)=\begin{cases}x, & x \ge 0\\ \tanh(x), & x < 0\end{cases}$$
Preferably, the performance verification module comprises:
a CIFAR-10 data set verification unit, used for carrying out tests based on the VggNet-16 neural network architecture and the CIFAR-10 data set to obtain a training accuracy curve, a training loss curve, a verification accuracy curve and a verification loss curve;
and a CIFAR-100 data set verification unit, used for carrying out tests based on the VggNet-16 neural network architecture and the CIFAR-100 data set to obtain a training accuracy curve, a training loss curve, a verification accuracy curve and a verification loss curve.
Compared with the prior art, the invention has the following beneficial effects. Building on the facts that the positive half-axis of the ReLU function does not suffer from vanishing gradients and that the negative half-axis of the tanh function can reduce neuron death, a new activation function is proposed: a rectified linear unit based on the tanh function, which integrates the negative half-axis of the tanh function with the positive half-axis of the ReLU function. The tanh function, the ELU function, the LReLU function, the ReLU function and the ThLU function are each verified through a VggNet-16 neural network architecture on the CIFAR-10 and CIFAR-100 data sets. Verification shows that the neural network model trained with the ThLU function achieves higher accuracy and lower loss, effectively resolving the neuron death phenomenon of the ReLU function; the ThLU function is thus a more efficient activation function.
Drawings
FIG. 1 is a graph of ReLU, LReLU, ELU, and ThLU functions of the present invention;
FIG. 2 is a graph of training accuracy obtained by training each activation function based on a CIFAR-10 dataset according to the present invention;
FIG. 3 is a graph of training loss obtained by training each activation function based on a CIFAR-10 dataset according to the present invention;
FIG. 4 is a graph of validation accuracy obtained by training each activation function based on a CIFAR-10 dataset according to the present invention;
FIG. 5 is a graph of validation loss for each activation function of the present invention trained on a CIFAR-10 dataset;
FIG. 6 is a graph of training accuracy obtained by training each activation function based on a CIFAR-100 dataset according to the present invention;
FIG. 7 is a graph of training loss for each activation function of the present invention trained based on a CIFAR-100 dataset;
FIG. 8 is a graph of validation accuracy obtained by training each activation function based on a CIFAR-100 dataset according to the present invention;
FIG. 9 is a graph of validation loss for each activation function of the present invention trained on a CIFAR-100 dataset.
Detailed Description
The drawings are for illustrative purposes only and are not to be construed as limiting the patent. For the purpose of better illustrating the embodiments, certain features of the drawings may be omitted, enlarged or reduced, and do not represent the size of an actual product. It will be understood by those skilled in the art that certain well-known structures in the drawings, and descriptions thereof, may be omitted. The positional relationships depicted in the drawings are for illustrative purposes only and are not to be construed as limiting the present patent.
The same or similar reference numerals in the drawings of the embodiments of the present invention correspond to the same or similar components. In the description of the present invention, it should be understood that terms such as "upper", "lower", "left", "right", "long" and "short", indicating orientations or positional relationships based on the drawings, are used only for convenience and simplicity of description; they do not indicate or imply that the device or element referred to must have a specific orientation or be constructed and operated in a specific orientation. Such terms are therefore illustrative only and are not to be construed as limitations of the present patent; their specific meanings can be understood by those skilled in the art according to the specific situation.
The technical scheme of the invention is further described in detail through the following specific embodiments in combination with the accompanying drawings:
Example 1
Figs. 1-9 show an embodiment of a neural network model optimization method based on the ThLU function, which includes the following steps:
Step one: setting the activation function of the neural network model as the ThLU function, wherein the ThLU function is a rectified linear unit based on the tanh function, whose negative half-axis is derived from the negative half-axis of the tanh function and whose positive half-axis is derived from the positive half-axis of the ReLU function;
Step two: training neural network models for performance verification through the VggNet-16 neural network architecture, based on the CIFAR-10 data set and the CIFAR-100 data set respectively.
The embodiment of the invention builds on the facts that the positive half-axis of the ReLU function does not exhibit the vanishing-gradient phenomenon and that the negative half-axis of the tanh function can reduce the neuron death phenomenon, and proposes a new activation function: a rectified linear unit based on the tanh function (the ThLU function). The negative half-axis of the ThLU function is derived from the negative half-axis of the tanh function, and its positive half-axis is derived from the positive half-axis of the ReLU function, giving the formula:

$$\mathrm{ThLU}(x)=\begin{cases}x, & x \ge 0\\ \tanh(x), & x < 0\end{cases}$$
The function curves of the ThLU function and of the ELU, LReLU and ReLU functions are shown in Fig. 1.
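For illustration, a minimal NumPy sketch of the ThLU function and its derivative follows. It is a direct transcription of the piecewise definition above; the function names thlu and thlu_grad are ours, not the patent's:

```python
import numpy as np

def thlu(x):
    # Positive half-axis from ReLU (identity), negative half-axis from tanh
    return np.where(x >= 0, x, np.tanh(x))

def thlu_grad(x):
    # Derivative: 1 on the positive half-axis (no vanishing gradient);
    # 1 - tanh(x)^2 > 0 on the negative half-axis (no dead neurons)
    return np.where(x >= 0, 1.0, 1.0 - np.tanh(x) ** 2)

# Example: evaluate on a few points
print(thlu(np.array([-2.0, -0.5, 0.0, 1.5])))  # approx. [-0.964 -0.462 0. 1.5]
```

Note that the two pieces join continuously at x = 0 (both equal zero there), and the gradient is strictly positive everywhere, which is the property the patent relies on to avoid neuron death.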
In order to verify the performance of the ThLU function, the embodiment of the invention tests it, based on the VggNet-16 neural network architecture, on the CIFAR-10 data set and the CIFAR-100 data set respectively. The CIFAR-10 data set was collected by Alex Krizhevsky, Vinod Nair and Geoffrey Hinton in 2009, and is an important data set used by academia to train and validate neural network architectures. It contains 60,000 32×32 color images in 10 classes. The CIFAR-100 data set evolved from the CIFAR-10 data set and has 20 major categories, which are divided into 100 minor categories. Each minor category contains 600 images: 500 training images and 100 verification images.
Since the image sizes of the CIFAR-10 and CIFAR-100 data sets are both 32×32, the same neural network parameters are set in the two experiments. The neural network uses stochastic gradient descent as the optimization algorithm. The experimental platform is an OS X El Capitan system with an Intel Core i5 processor, 8 GB of memory, and TensorFlow 1.2.1 (CPU version).
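The sketch below illustrates how such an experiment might be set up in modern TensorFlow, with ThLU plugged in as a custom activation. The patent specifies VggNet-16, stochastic gradient descent and batch size 64; the layer configuration, learning rate and epoch count below are illustrative assumptions, not values taken from the patent:

```python
import tensorflow as tf

def thlu(x):
    # ThLU activation: identity for x >= 0, tanh(x) for x < 0
    return tf.where(x >= 0.0, x, tf.tanh(x))

(x_train, y_train), (x_val, y_val) = tf.keras.datasets.cifar10.load_data()
x_train, x_val = x_train / 255.0, x_val / 255.0

# Small VGG-style stack for illustration only; the patent's VggNet-16
# architecture is deeper and its hyperparameters are not reproduced here.
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(64, 3, padding="same", activation=thlu,
                           input_shape=(32, 32, 3)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(128, 3, padding="same", activation=thlu),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(256, activation=thlu),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.01),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, batch_size=64, epochs=10,
          validation_data=(x_val, y_val))
```

Swapping thlu for tf.nn.relu, tf.nn.tanh or tf.nn.elu in the same script reproduces the kind of side-by-side comparison the patent describes.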
Experiments were first performed based on the VggNet-16 neural network architecture and the CIFAR-10 dataset.
When the neural network is trained, the batch size is 64 and the maximum number of steps is 7000. The training accuracy curves and training loss curves obtained by training each activation function on the CIFAR-10 data set are shown in Figs. 2 and 3. During training, the training accuracy obtained with the ThLU function is higher than that obtained with the tanh function, the ELU function, the LReLU function and the ReLU function, and the accuracy obtained with the ThLU function reaches 100% around step 6200; the training loss obtained with the ThLU function is lower than that obtained with the tanh function, the ELU function, the LReLU function and the ReLU function.
The verification accuracy curves and verification loss curves obtained by training each activation function on the CIFAR-10 data set are shown in Figs. 4 and 5. As can be seen from Figs. 4 and 5, the verification accuracy obtained with the ThLU function is higher than that obtained with the tanh function, the ELU function, the LReLU function and the ReLU function; the verification loss obtained with the ThLU function is lower than that obtained with the tanh function, the ELU function, the LReLU function and the ReLU function.
Therefore, based on the VggNet-16 neural network architecture and the CIFAR-10 data set, the neural network model trained with the ThLU function obtains higher accuracy and lower loss than the models trained with the ELU function, the LReLU function, the ReLU function and the tanh function.
Experiments were then performed based on the VggNet-16 neural network architecture and the CIFAR-100 dataset.
When the neural network is trained, the batch size is 64 and the maximum number of steps is 10000. The training accuracy curves and training loss curves obtained by training each activation function on the CIFAR-100 data set are shown in Figs. 6 and 7. During training, the training accuracy obtained with the ThLU function is higher than that obtained with the tanh function, the ELU function, the LReLU function and the ReLU function; the training loss obtained with the ThLU function is lower than that obtained with the tanh function, the ELU function, the LReLU function and the ReLU function.
The verification accuracy curves and verification loss curves obtained by training each activation function on the CIFAR-100 data set are shown in Figs. 8 and 9. As can be seen from Figs. 8 and 9, the verification accuracy obtained with the ThLU function is higher than that obtained with the tanh function, the ELU function, the LReLU function and the ReLU function; the verification loss obtained with the ThLU function is lower than that obtained with the tanh function, the ELU function, the LReLU function and the ReLU function.
Therefore, based on the VggNet-16 neural network architecture and the CIFAR-100 data set, the neural network model trained with the ThLU function obtains higher accuracy and lower loss than the models trained with the ELU function, the LReLU function, the ReLU function and the tanh function.
The above two tests show that, for neural network models trained with the VggNet-16 neural network architecture on the CIFAR-10 and CIFAR-100 data sets, the ThLU function outperforms the ELU function, the LReLU function, the ReLU function and the tanh function in both accuracy and loss.
The embodiment of the invention provides a new activation function, a rectified linear unit based on the tanh function, which integrates the negative half-axis of the tanh function with the positive half-axis of the ReLU function. The tanh function, the ELU function, the LReLU function, the ReLU function and the ThLU function are each verified through a VggNet-16 neural network architecture on the CIFAR-10 and CIFAR-100 data sets. Verification shows that the neural network model trained with the ThLU function achieves higher accuracy and lower loss, effectively resolving the neuron death phenomenon of the ReLU function; the ThLU function is thus a more efficient activation function.
Example 2
A neural network model optimization system based on a ThLU function, the system comprising:
a model optimization module, used for setting the activation function of the neural network model as the ThLU function, wherein the ThLU function is a rectified linear unit based on the tanh function, whose negative half-axis is derived from the negative half-axis of the tanh function and whose positive half-axis is derived from the positive half-axis of the ReLU function;
and a performance verification module, used for obtaining neural network models through VggNet-16 neural network architecture training and performing performance verification, based on the CIFAR-10 data set and the CIFAR-100 data set respectively.
The negative half-axis of the ThLU function is derived from the negative half-axis of the tanh function, and its positive half-axis is derived from the positive half-axis of the ReLU function, with the formula:

$$\mathrm{ThLU}(x)=\begin{cases}x, & x \ge 0\\ \tanh(x), & x < 0\end{cases}$$
The performance verification module includes: a CIFAR-10 data set validation unit and a CIFAR-100 data set validation unit.
The CIFAR-10 data set verification unit performs tests based on the VggNet-16 neural network architecture and the CIFAR-10 data set to obtain a training accuracy curve, a training loss curve, a verification accuracy curve and a verification loss curve.
When the neural network is trained, the batch size is set to 64 and the maximum number of steps to 7000, and the training accuracy curves and training loss curves obtained by training each activation function on the CIFAR-10 data set are acquired. During training, the training accuracy obtained with the ThLU function is higher than that obtained with the tanh function, the ELU function, the LReLU function and the ReLU function, and the accuracy obtained with the ThLU function reaches 100% around step 6200; the training loss obtained with the ThLU function is lower than that obtained with the tanh function, the ELU function, the LReLU function and the ReLU function.
According to the verification accuracy curves and verification loss curves obtained by training each activation function on the CIFAR-10 data set, the verification accuracy obtained with the ThLU function is higher than that obtained with the tanh function, the ELU function, the LReLU function and the ReLU function; the verification loss obtained with the ThLU function is lower than that obtained with the tanh function, the ELU function, the LReLU function and the ReLU function.
Therefore, based on the VggNet-16 neural network architecture and the CIFAR-10 data set, the neural network model trained with the ThLU function obtains higher accuracy and lower loss than the models trained with the ELU function, the LReLU function, the ReLU function and the tanh function.
The CIFAR-100 data set verification unit performs tests based on the VggNet-16 neural network architecture and the CIFAR-100 data set to obtain a training accuracy curve, a training loss curve, a verification accuracy curve and a verification loss curve.
When the neural network is trained, the batch size is 64 and the maximum number of steps is 10000, and the training accuracy curves and training loss curves obtained by training each activation function on the CIFAR-100 data set are acquired. During training, the training accuracy obtained with the ThLU function is higher than that obtained with the tanh function, the ELU function, the LReLU function and the ReLU function; the training loss obtained with the ThLU function is lower than that obtained with the tanh function, the ELU function, the LReLU function and the ReLU function.
According to the verification accuracy curves and verification loss curves obtained by training each activation function on the CIFAR-100 data set, the verification accuracy obtained with the ThLU function is higher than that obtained with the tanh function, the ELU function, the LReLU function and the ReLU function; the verification loss obtained with the ThLU function is lower than that obtained with the tanh function, the ELU function, the LReLU function and the ReLU function.
Therefore, based on the VggNet-16 neural network architecture and the CIFAR-100 data set, the neural network model trained with the ThLU function obtains higher accuracy and lower loss than the models trained with the ELU function, the LReLU function, the ReLU function and the tanh function.
It should be understood that the above-described embodiments of the present invention are merely examples provided to clearly illustrate the present invention, and are not intended to limit its embodiments. Other variations and modifications will be apparent to persons skilled in the art in light of the above description; it is neither necessary nor possible to enumerate all embodiments exhaustively. Any modification, equivalent replacement or improvement made within the spirit and principles of the present invention shall fall within the protection scope of the claims of the present invention.

Claims (6)

1. A neural network model optimization method based on a ThLU function, characterized by comprising the following operations:
Step one: setting the activation function of the neural network model as the ThLU function, wherein the ThLU function is a rectified linear unit based on the tanh function, whose negative half-axis is derived from the negative half-axis of the tanh function and whose positive half-axis is derived from the positive half-axis of the ReLU function;
Step two: training neural network models for performance verification through the VggNet-16 neural network architecture, based on the CIFAR-10 data set and the CIFAR-100 data set respectively.
2. The method of claim 1, wherein the formula of the ThLU function is as follows:

$$\mathrm{ThLU}(x)=\begin{cases}x, & x \ge 0\\ \tanh(x), & x < 0\end{cases}$$
3. The neural network model optimization method based on the ThLU function as claimed in claim 1, wherein obtaining neural network models for performance verification by training the VggNet-16 neural network architecture on the CIFAR-10 data set and the CIFAR-100 data set specifically comprises the following steps:
S1: performing a test based on the VggNet-16 neural network architecture and the CIFAR-10 data set to obtain a training accuracy curve, a training loss curve, a verification accuracy curve and a verification loss curve;
S2: performing a test based on the VggNet-16 neural network architecture and the CIFAR-100 data set to obtain a training accuracy curve, a training loss curve, a verification accuracy curve and a verification loss curve.
4. A neural network model optimization system based on a ThLU function, the system comprising:
a model optimization module, used for setting the activation function of the neural network model as the ThLU function, wherein the ThLU function is a rectified linear unit based on the tanh function, whose negative half-axis is derived from the negative half-axis of the tanh function and whose positive half-axis is derived from the positive half-axis of the ReLU function;
and a performance verification module, used for obtaining neural network models through VggNet-16 neural network architecture training and performing performance verification, based on the CIFAR-10 data set and the CIFAR-100 data set respectively.
5. The neural network model optimization system based on the ThLU function of claim 4, wherein the formula of the ThLU function is as follows:

$$\mathrm{ThLU}(x)=\begin{cases}x, & x \ge 0\\ \tanh(x), & x < 0\end{cases}$$
6. The ThLU function-based neural network model optimization system of claim 4, wherein the performance verification module comprises:
a CIFAR-10 data set verification unit, used for carrying out tests based on the VggNet-16 neural network architecture and the CIFAR-10 data set to obtain a training accuracy curve, a training loss curve, a verification accuracy curve and a verification loss curve;
and a CIFAR-100 data set verification unit, used for carrying out tests based on the VggNet-16 neural network architecture and the CIFAR-100 data set to obtain a training accuracy curve, a training loss curve, a verification accuracy curve and a verification loss curve.
CN201910798586.3A 2019-08-27 2019-08-27 Neural network model optimization method and system based on ThLU function Pending CN110569965A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910798586.3A CN110569965A (en) 2019-08-27 2019-08-27 Neural network model optimization method and system based on ThLU function

Publications (1)

Publication Number Publication Date
CN110569965A 2019-12-13

Family

ID=68776370

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910798586.3A Pending CN110569965A (en) 2019-08-27 2019-08-27 Neural network model optimization method and system based on ThLU function

Country Status (1)

Country Link
CN (1) CN110569965A (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11475309B2 (en) 2020-04-14 2022-10-18 Google Llc Asymmetric functionality activation for improved stability in neural networks
WO2022077894A1 (en) * 2020-10-16 2022-04-21 苏州浪潮智能科技有限公司 Image classification and apparatus, and related components
CN112949893A (en) * 2020-11-18 2021-06-11 安徽师范大学 Soybean reserve early warning method based on improved RNN
CN116824512A (en) * 2023-08-28 2023-09-29 西华大学 27.5kV visual grounding disconnecting link state identification method and device
CN116824512B (en) * 2023-08-28 2023-11-07 西华大学 27.5kV visual grounding disconnecting link state identification method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20191213)