CN112613410A - Parasite egg identification method based on transfer learning - Google Patents

Parasite egg identification method based on transfer learning Download PDF

Info

Publication number
CN112613410A
CN112613410A
Authority
CN
China
Prior art keywords
distribution
probability
modules
edge
layer
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011557317.7A
Other languages
Chinese (zh)
Other versions
CN112613410B (en)
Inventor
Li Feng
Li Bo
Pan Yuqing
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jiangsu University
Original Assignee
Jiangsu University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jiangsu University filed Critical Jiangsu University
Priority to CN202011557317.7A priority Critical patent/CN112613410B/en
Publication of CN112613410A publication Critical patent/CN112613410A/en
Application granted granted Critical
Publication of CN112613410B publication Critical patent/CN112613410B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/60 Type of objects
    • G06V 20/69 Microscopic objects, e.g. biological cells or cellular parts
    • G06V 20/695 Preprocessing, e.g. image segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/60 Type of objects
    • G06V 20/69 Microscopic objects, e.g. biological cells or cellular parts
    • G06V 20/698 Matching; Classification

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Evolutionary Computation (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Biophysics (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention belongs to the fields of image processing, image recognition, and transfer learning, and particularly relates to a parasite egg identification method based on transfer learning. The invention introduces a related pre-trained model and shared feature parameters based on transfer learning, which strengthens the basic features of the whole parasite egg model and reduces the enormous resources and time consumed by training a deep model; introduces the Sobel operator to process pictures rapidly through edge information, which reduces picture dimensionality and speeds up training; introduces joint adaptation of the marginal distribution and the conditional distribution, which minimizes the distance between the source domain and the target domain and improves identification accuracy; and introduces a softmax classifier, which effectively handles the multi-class classification problem.

Description

Parasite egg identification method based on transfer learning
Technical Field
The invention belongs to the field of image processing, the field of image recognition and the field of transfer learning, and particularly relates to a parasite egg recognition method based on transfer learning.
Background
The development of computer image processing technology has advanced medical image processing, and transfer learning is now widely applied to images, video, audio, behavior analysis, and similar tasks. In the big-data era, massive amounts of image, text, and speech data are generated every day; deep learning models depend on such massive data and on continuous training and updating, and transfer learning addresses the problems of scarce data, poor accuracy, and limited computing capability.
Because the boundaries that distinguish parasite eggs are blurred, eggs with similar shapes are difficult to tell apart, and distinguishing them requires a high level of professional expertise. By using a transfer learning method to analyze image information from food, water sources, and feces contaminated with parasites and establishing a parasite egg classification model, the heavy workload of medical staff can be reduced and their working efficiency greatly improved.
So far, domestic research on parasite eggs has employed methods such as image matching, pattern recognition, sample feature selection, sample feature fusion, and deep learning. Chinese patent CN110503669 proposes an image registration algorithm for parasite eggs that uses the Harris corner algorithm together with rotation, translation, and scale changes of the images to remove redundant noise while also performing data enhancement; data enhancement can improve identification accuracy, but it also raises the computational and learning costs and consumes more resources. Chinese patent CN11503107 proposes a parasite egg image feature selection method that computes the variance of each item in the feature set, selects the 6 smallest features, reduces the dimensionality of these 6 sampled features by feature compression, and forms 3 descriptive features as the picture features; this can effectively improve accuracy and real-time performance, but the feature selection result directly affects the identification result. Chinese patent CN108805101A proposes a parasite egg identification method based on deep learning, which uses an artificial intelligence deep learning model to automatically learn from a sample library and extract feature information, solving the problem of low identification efficiency; however, training a highly accurate deep learning model requires a large amount of sample data and consumes a great deal of learning time and cost. Abroad, parasite eggs have been identified with methods such as support vector machines and ensemble averaging, meeting the requirement of automatic identification with very high accuracy under ideal conditions, but that accuracy is difficult to achieve in the presence of interference factors such as noise, impurities, and stains.
Disclosure of Invention
The invention aims to overcome the problem of low accuracy caused by small data sets, to avoid the increased computation and learning costs caused by data enhancement, and to account for the influence of feature selection on the identification result and for identification errors caused by various interference factors. A hybrid method combining a transfer learning model with a deep learning adaptation algorithm is provided, which improves accuracy and identification efficiency while meeting the requirements of practical applications.
In order to realize the above aims, the invention adopts the following technical scheme: a parasite egg identification method based on transfer learning, comprising the following steps:
S1, image preprocessing: inputting an original image, converting it into a grayscale image, obtaining its edge information with the Sobel edge operator, preprocessing the image, finding the contour of the parasite egg region in the original image, cropping the parasite egg region, and enhancing the cropped image by data augmentation and random noise;
S2, module selection for the pre-training model: selecting the VGG16 network architecture pre-trained on ImageNet as the pre-training model, taking the first n modules (n ≤ 5) as fixed layers since these modules capture general features, and retaining all of their parameters; the remaining 5-n modules capture task-specific features, and their parameters are fine-tuned after He initialization;
S3, deep network construction based on the pre-training model: inputting the cropped and enhanced pictures from step S1 into the transfer learning model, performing parameter sharing in the n fixed-parameter modules, and performing convolution, pooling, and activation in the remaining 5-n modules to improve picture classification accuracy; then processing the features with three fully connected layers and finally solving the class probabilities with a softmax classifier to determine the picture category.
Further, in step S3, the conditional distribution and the marginal distribution are both considered in the fully connected layers and a balance factor μ is added, and joint distribution adaptation is performed between the fully connected layers of the pre-trained model and the learned fully connected layers, so that the two distributions become consistent or similar and domains of the same or similar categories share the same probability distribution.
Further, the specific process of the joint distribution adaptation is as follows:
S3.1 Assume that the marginal distributions of the source domain and the target domain are not the same, i.e., P(X_S) ≠ P(X_T), and also assume that the conditional distributions of the source domain and the target domain are not the same, i.e., P(Y_S|X_S) ≠ P(Y_T|X_T);
S3.2 Measure the distribution difference between the source domain and the target domain using both the marginal distribution and the conditional distribution, introduce a variable μ, and balance the two distributions to solve for the optimal solution; the combined formula is:
D(D_S, D_T) ≈ (1 - μ) D(P(X_S), P(X_T)) + μ D(P(Y_S|X_S), P(Y_T|X_T)),  μ ∈ [0, 1]
where D(D_S, D_T) represents the distance between the source domain and the target domain, D_S represents the probability distribution of the source domain, D_T represents the probability distribution of the target domain, μ is the parameter used to balance the marginal distribution and the conditional distribution, D(P(X_S), P(X_T)) represents the distance between the marginal distributions of the source and target domains, P(X_S) and P(X_T) represent the marginal distributions of the source domain and the target domain respectively, D(P(Y_S|X_S), P(Y_T|X_T)) represents the distance between the conditional distributions of the source and target domains, and P(Y_S|X_S) and P(Y_T|X_T) represent the conditional distributions of the source domain and the target domain respectively;
S3.3 The optimization objective consists of two parts, a loss function and a distribution adaptation term: the loss function measures the difference between the predicted value and the true value, and the distribution adaptation term is the joint distribution adaptation distance of S3.2. The optimization objective is then:
min  Σ_{x_i ∈ D_a} J(f(x_i), y_i) + λ D²(D_S, D_T)
where D_a represents the set of all annotated data in the source domain and the target domain, J(·, ·) is a commonly used loss function, w represents the weights, f(x_i) is the result of applying the weights to the input image, y_i indicates the correct result, fc6 to fc8 denote the fully connected layers on which the adaptation distance is computed, λ weights the adaptation term against the loss term, and D²(D_S, D_T) is the distribution difference defined in S3.2.
Further, the deep network based on the pre-trained model in step S3 includes five modules, three fully connected layers, and a softmax classifier; the first module includes two convolutional layers, the second module includes a max pooling layer and two convolutional layers, and the third to fifth modules each include a max pooling layer and three convolutional layers. The first fully connected layer integrates the features output by the fifth module and converts them into a linear combination, the second fully connected layer weights the features, and the third fully connected layer classifies the result; softmax probability solving is performed on the features extracted by the third fully connected layer, and the probability of each possible picture category is obtained through the softmax classifier.
Further, the softmax classifier training step in the above step S3 is as follows:
S4.1, divide a given parasite egg test sample x among k classes, estimate a probability value p(y = j | x) for each class j with the hypothesis function, and thus estimate the probability of each classification result for x; the hypothesis function outputs a k-dimensional vector representing the k estimated probability values;
S4.2, the probability that softmax regression classifies x into class j is:
p(y^(i) = j | x^(i); θ) = exp(θ_j^T x^(i)) / Σ_{l=1}^{k} exp(θ_l^T x^(i))
where p(y^(i) = j | x^(i); θ) represents the probability of classifying x into class j, θ represents the model parameters of the fully connected layer, and the denominator Σ_{l=1}^{k} exp(θ_l^T x^(i)) normalizes the probabilities over the k classes;
S4.3, compute the probability of each category and return the category with the highest probability, i.e., the most likely category.
According to the parasite egg identification method based on transfer learning, a related pre-trained model and shared feature parameters based on transfer learning are introduced, which strengthens the basic features of the whole parasite egg model and reduces the enormous resources and time consumed by training a deep model; the Sobel operator is introduced to process pictures rapidly through edge information, which reduces picture dimensionality and speeds up training; joint adaptation of the marginal distribution and the conditional distribution is introduced, which minimizes the distance between the source domain and the target domain and improves identification accuracy; and a softmax classifier is introduced, which effectively handles the multi-class classification problem.
Drawings
FIG. 1 is a flow chart of the parasite egg identification method based on transfer learning according to the present invention.
FIG. 2 is a picture of parasite eggs after processing with the Sobel operator.
FIG. 3 shows the construction of the deep network model.
FIG. 4 is an error analysis of module selection for the transfer model.
Detailed Description
The invention is further explained below with reference to specific examples.
As shown in FIG. 1, the parasite egg identification method based on transfer learning of the present invention comprises the following steps:
S1, image preprocessing: inputting an original image, converting it into a grayscale image, obtaining its edge information with the Sobel edge operator, performing image preprocessing such as filtering and denoising, binarization, and image erosion and dilation, finding the contour of the parasite egg region in the original image, cropping the parasite egg region, and enhancing the cropped image by data augmentation and random noise;
as a preferred embodiment of the present invention, the image preprocessing step in step S1 specifically includes:
S1.1, selecting several images with typical characteristics from the pictures containing the parasite eggs to be identified and converting them to grayscale;
S1.2, smoothing the image with a low-pass filter (9 × 9 kernel) to reduce high-frequency noise; the goal of the low-pass filter is to reduce the rate of change in the image, for example by replacing each pixel with the mean of its surrounding pixels;
S1.3, applying grayscale and normalization processing to the picture, performing contour analysis according to the edge changes, and applying an adaptive threshold to the gradient image, setting any pixel smaller than the threshold to 0 (black) and every other pixel to 255 (white);
S1.4, cropping the parasite egg image according to the edge changes; the processing result is shown in FIG. 2;
S1.5, performing normalization and data augmentation on the cropped picture.
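A minimal sketch of steps S1.1 to S1.5 is given below using OpenCV. The kernel size, adaptive threshold parameters, and output crop size are illustrative assumptions rather than values fixed by the patent.

```python
import cv2
import numpy as np

def preprocess_egg_image(path, out_size=224):
    """Sketch of S1: grayscale, smooth, Sobel edges, threshold, contour crop."""
    img = cv2.imread(path)                        # S1.1: load original image
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)  # S1.1: convert to grayscale
    blur = cv2.blur(gray, (9, 9))                 # S1.2: 9x9 low-pass (mean) filter

    # Sobel edge information (gradient magnitude)
    gx = cv2.Sobel(blur, cv2.CV_16S, 1, 0)
    gy = cv2.Sobel(blur, cv2.CV_16S, 0, 1)
    grad = cv2.addWeighted(cv2.convertScaleAbs(gx), 0.5,
                           cv2.convertScaleAbs(gy), 0.5, 0)

    # S1.3: adaptive threshold on the gradient image (pixels below -> 0, else 255)
    binary = cv2.adaptiveThreshold(grad, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                   cv2.THRESH_BINARY, 35, -5)
    # morphological erosion/dilation to clean up the binary mask
    kernel = np.ones((5, 5), np.uint8)
    binary = cv2.dilate(cv2.erode(binary, kernel), kernel)

    # S1.4: take the largest contour and crop the egg region
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
    crop = img[y:y + h, x:x + w]

    # S1.5: resize and normalize to [0, 1]; augmentation (flips, random noise)
    # would be applied on top of this during training
    crop = cv2.resize(crop, (out_size, out_size)).astype(np.float32) / 255.0
    return crop
```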
S2, module selection for the pre-training model: selecting the VGG16 network architecture pre-trained on ImageNet as the pre-training model, taking the first n modules (n ≤ 5) as fixed layers since these modules capture general features, and retaining all of their parameters; the remaining 5-n modules capture task-specific features, and their parameters are fine-tuned after He initialization;
In step S2, as a preferred embodiment of the present invention, the number of feature layers of the pre-training model is selected as shown in FIG. 4. The data show that model performance decreases as the number of migrated modules increases; however, the general features shared by the first three modules can be migrated without modification, and migrating these layers accelerates network learning and optimization, so the effect is optimal when n = 3. He initialization is applied to the modules to be modified; it initializes the weights while taking the size of the previous layer into account, which makes it possible to reach the global minimum of the cost function more quickly and effectively, keeps the initialization process under control, and makes gradient descent more effective.
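A sketch of this module selection with torchvision's VGG16 is shown below. The slicing of `model.features` into five blocks follows the standard torchvision layer ordering, and the block boundaries, the `weights` argument (torchvision 0.13+), and the 12-class output head are assumptions for illustration.

```python
import torch.nn as nn
from torchvision import models

def build_pretrained_vgg16(n_fixed=3, num_classes=12):
    """Freeze the first n_fixed VGG16 blocks; He-initialize the remaining blocks."""
    model = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)

    # index ranges of the five conv blocks in VGG16's 'features' module
    blocks = [(0, 5), (5, 10), (10, 17), (17, 24), (24, 31)]

    for i, (start, end) in enumerate(blocks):
        for layer in model.features[start:end]:
            if i < n_fixed:
                # general features: keep the ImageNet parameters fixed
                for p in layer.parameters():
                    p.requires_grad = False
            elif isinstance(layer, nn.Conv2d):
                # task-specific features: re-initialize with He (Kaiming) init
                nn.init.kaiming_normal_(layer.weight, mode='fan_in',
                                        nonlinearity='relu')
                nn.init.zeros_(layer.bias)

    # replace the last fully connected layer (fc8) with a num_classes output
    model.classifier[6] = nn.Linear(4096, num_classes)
    return model
```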
S3, deep network construction based on the pre-training model: inputting the cropped and enhanced pictures from step S1 into the transfer learning model, performing parameter sharing in the n fixed-parameter modules, and performing convolution, pooling, and activation in the remaining 5-n modules to improve picture classification accuracy; then processing the features with three fully connected layers and finally solving the class probabilities with a softmax classifier to determine the picture category.
As a preferred embodiment of the present invention, in the above step S3, the conditional distribution and the marginal distribution are both considered in the fully connected layers, a balance factor μ is added, and joint distribution adaptation is performed between the fully connected layers of the pre-trained model and the learned fully connected layers, so that the two distributions become consistent or similar and domains of the same or similar categories share the same probability distribution.
As a preferred embodiment of the present invention, the specific process of the above joint distribution adaptation is as follows:
S3.1 Assume that the marginal distributions of the source domain and the target domain are not the same, i.e., P(X_S) ≠ P(X_T), and also assume that the conditional distributions of the source domain and the target domain are not the same, i.e., P(Y_S|X_S) ≠ P(Y_T|X_T);
S3.2 Measure the distribution difference between the source domain and the target domain using both the marginal distribution and the conditional distribution, introduce a variable μ, and balance the two distributions to solve for the optimal solution; the combined formula is:
D(D_S, D_T) ≈ (1 - μ) D(P(X_S), P(X_T)) + μ D(P(Y_S|X_S), P(Y_T|X_T)),  μ ∈ [0, 1]
where D(D_S, D_T) represents the distance between the source domain and the target domain, D_S represents the probability distribution of the source domain, D_T represents the probability distribution of the target domain, μ is the parameter used to balance the marginal distribution and the conditional distribution, D(P(X_S), P(X_T)) represents the distance between the marginal distributions of the source and target domains, P(X_S) and P(X_T) represent the marginal distributions of the source domain and the target domain respectively, D(P(Y_S|X_S), P(Y_T|X_T)) represents the distance between the conditional distributions of the source and target domains, and P(Y_S|X_S) and P(Y_T|X_T) represent the conditional distributions of the source domain and the target domain respectively;
As a preferred embodiment of the invention, the best results are achieved when the variable μ is in the range of 0.6 to 0.8.
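The patent does not fix the distance measure D; the sketch below assumes a linear maximum mean discrepancy (MMD) estimate and shows how the balance factor μ combines the marginal term with the class-conditional terms, following the formula in S3.2. The conditional terms are computed per class using pseudo-labels for the target domain, and all function and variable names are illustrative.

```python
import torch

def mmd2(a, b):
    """Squared MMD with a linear kernel between two feature batches (assumed measure)."""
    return (a.mean(dim=0) - b.mean(dim=0)).pow(2).sum()

def balanced_joint_distance(xs, ys, xt, yt_pseudo, mu=0.7, num_classes=12):
    """D(Ds, Dt) ~ (1 - mu) * marginal MMD + mu * mean of per-class conditional MMDs.

    xs, xt: source / target features from a fully connected layer.
    ys: source labels; yt_pseudo: pseudo-labels predicted for the target domain.
    """
    marginal = mmd2(xs, xt)

    conditional, used = 0.0, 0
    for c in range(num_classes):
        xs_c, xt_c = xs[ys == c], xt[yt_pseudo == c]
        if len(xs_c) > 0 and len(xt_c) > 0:
            conditional = conditional + mmd2(xs_c, xt_c)
            used += 1
    if used > 0:
        conditional = conditional / used

    return (1 - mu) * marginal + mu * conditional
```

During training this distance would be evaluated on the fc6 to fc8 activations and added to the classification loss, as described in S3.3.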
S3.3 The optimization objective consists of two parts, a loss function and a distribution adaptation term: the loss function measures the difference between the predicted value and the true value, and the distribution adaptation term is the joint distribution adaptation distance of S3.2. The optimization objective is then:
min  Σ_{x_i ∈ D_a} J(f(x_i), y_i) + λ D²(D_S, D_T)
where D_a represents the set of all annotated data in the source domain and the target domain, J(·, ·) is a commonly used loss function, w represents the weights, f(x_i) is the result of applying the weights to the input image, y_i indicates the correct result, fc6 to fc8 denote the fully connected layers on which the adaptation distance is computed, λ weights the adaptation term against the loss term, and D²(D_S, D_T) is the distribution difference defined in S3.2.
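A hedged sketch of one optimization step for this objective follows: cross-entropy classification loss plus the adaptation distance, combined with an assumed trade-off weight lambda. The `return_fc=True` interface for exposing the fully connected activations is an assumption, and `balanced_joint_distance` refers to the illustrative function sketched above.

```python
import torch
import torch.nn as nn

def training_step(model, xs, ys, xt, yt_pseudo, optimizer, lam=1.0, mu=0.7):
    """One optimization step: cross-entropy loss + lambda * D^2(Ds, Dt)."""
    optimizer.zero_grad()

    # forward pass; the model is assumed to also return its fc6-fc8 activations
    logits_s, fc_feats_s = model(xs, return_fc=True)
    _, fc_feats_t = model(xt, return_fc=True)

    # loss term J(f(x_i), y_i) over the annotated (source) data
    cls_loss = nn.functional.cross_entropy(logits_s, ys)

    # adaptation term: joint distribution distance on each fully connected layer
    adapt = sum(balanced_joint_distance(fs, ys, ft, yt_pseudo, mu=mu)
                for fs, ft in zip(fc_feats_s, fc_feats_t))

    loss = cls_loss + lam * adapt
    loss.backward()
    optimizer.step()
    return loss.item()
```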
As a preferred embodiment of the present invention, the transfer learning model of step S3 includes five modules, three fully connected layers, and a softmax classifier; the first module includes two convolutional layers, the second module includes a max pooling layer and two convolutional layers, and the third to fifth modules each include a max pooling layer and three convolutional layers. The first fully connected layer integrates the features output by the fifth module and converts them into a linear combination, the second fully connected layer weights the features, and the third fully connected layer classifies the result; softmax probability solving is performed on the features extracted by the third fully connected layer, and the probability of each possible picture category is obtained through the softmax classifier.
As a preferred embodiment of the present invention, the softmax classifier training step in the above step S3 is as follows:
S4.1, divide a given parasite egg test sample x among k classes, estimate a probability value p(y = j | x) for each class j with the hypothesis function, and thus estimate the probability of each classification result for x; the hypothesis function outputs a k-dimensional vector representing the k estimated probability values;
S4.2, the probability that softmax regression classifies x into class j is:
p(y^(i) = j | x^(i); θ) = exp(θ_j^T x^(i)) / Σ_{l=1}^{k} exp(θ_l^T x^(i))
where p(y^(i) = j | x^(i); θ) represents the probability of classifying x into class j, θ represents the model parameters of the fully connected layer, and the denominator Σ_{l=1}^{k} exp(θ_l^T x^(i)) normalizes the probabilities over the k classes;
S4.3, compute the probability of each category and return the category with the highest probability, i.e., the most likely category.
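A small sketch of the softmax probability computation and class selection in S4.1 to S4.3, assuming the fc8 layer outputs one score per class; the 12-class shape is illustrative.

```python
import numpy as np

def softmax_predict(scores):
    """scores: (k,) vector of fc8 outputs for one sample.
    Returns per-class probabilities and the most likely class (S4.2-S4.3)."""
    shifted = scores - np.max(scores)   # subtract the max for numerical stability
    exp = np.exp(shifted)
    probs = exp / exp.sum()             # normalize over the k classes
    return probs, int(np.argmax(probs))

# example: 12-class fc8 output for one test egg image
probs, label = softmax_predict(np.random.randn(12))
```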
As a preferred embodiment of the present invention, a specific embodiment of building the transfer learning network model is described below with reference to FIG. 3:
(3.1) The original image is processed by step S1 into a 224 × 224 × 3 input. The model is divided into five modules, three fully connected layers, and one classifier; the module parameters are selected according to step S2, and each module comprises several convolutional layers and one pooling layer.
(3.2) Set block1, containing two 3 × 3 × 64 convolutional layers; its input is the preprocessed image from step S1 and its output is 224 × 224 × 64.
(3.3) Set block2, whose input is the output of block1, containing one max pooling layer and two 3 × 3 × 128 convolutional layers; the output dimension is 112 × 112 × 128.
(3.4) Set block3, whose input is the output of block2, containing one max pooling layer and three 3 × 3 × 256 convolutional layers; the output dimension is 56 × 56 × 256.
(3.5) Set block4, whose input is the output of block3, containing one max pooling layer and three 3 × 3 × 512 convolutional layers; the output dimension is 28 × 28 × 512.
(3.6) Set block5, whose input is the output of block4, containing one max pooling layer and three 3 × 3 × 512 convolutional layers; the output dimension is 14 × 14 × 512.
(3.7) Transfer the parameters of the first three modules from the VGG16 weights pre-trained on ImageNet into our model; these first three modules capture general features and significantly improve the accuracy of picture classification. The latter two modules are initialized randomly with He initialization.
(3.8) block5 is followed by a maxpool to reduce the dimensions of the output.
(3.9) The fc6 layer takes the output of module 5 as input, converts the features into a linear combination, and outputs a 1 × 4096 vector.
(3.10) The fc7 layer is also a fully connected layer; it takes the output of fc6 as input, reintegrates the features of the previous layer, and outputs a 1 × 4096 vector.
(3.11) The fc8 layer is also a fully connected layer; it takes the output of fc7 as input and outputs a 1 × 12 vector.
(3.12) The features extracted by the fc8 layer are used for softmax classification, and the class with the maximum probability is obtained through softmax.
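A compact sketch of the network in (3.1) to (3.12), written as an assumed PyTorch module that mirrors the block layout and the fc6/fc7/fc8 heads with 12 output classes; the layer names and forward interface are illustrative.

```python
import torch.nn as nn

class EggNet(nn.Module):
    """Five VGG16-style blocks + fc6/fc7/fc8 + softmax, as in (3.2)-(3.12)."""
    def __init__(self, num_classes=12):
        super().__init__()
        def conv(cin, cout):
            return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(True))
        self.block1 = nn.Sequential(conv(3, 64), conv(64, 64))          # 224x224x64
        self.block2 = nn.Sequential(nn.MaxPool2d(2),
                                    conv(64, 128), conv(128, 128))      # 112x112x128
        self.block3 = nn.Sequential(nn.MaxPool2d(2), conv(128, 256),
                                    conv(256, 256), conv(256, 256))     # 56x56x256
        self.block4 = nn.Sequential(nn.MaxPool2d(2), conv(256, 512),
                                    conv(512, 512), conv(512, 512))     # 28x28x512
        self.block5 = nn.Sequential(nn.MaxPool2d(2), conv(512, 512),
                                    conv(512, 512), conv(512, 512))     # 14x14x512
        self.pool = nn.MaxPool2d(2)                                     # (3.8): 7x7x512
        self.fc6 = nn.Sequential(nn.Flatten(),
                                 nn.Linear(7 * 7 * 512, 4096), nn.ReLU(True))
        self.fc7 = nn.Sequential(nn.Linear(4096, 4096), nn.ReLU(True))
        self.fc8 = nn.Linear(4096, num_classes)

    def forward(self, x):
        x = self.block5(self.block4(self.block3(self.block2(self.block1(x)))))
        x = self.pool(x)
        f6 = self.fc6(x)
        f7 = self.fc7(f6)
        logits = self.fc8(f7)
        return logits  # softmax is applied in the loss or at inference time
```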

Claims (5)

1. A parasite egg identification method based on transfer learning is characterized by comprising the following steps:
S1, image preprocessing: inputting an original image, converting it into a grayscale image, obtaining its edge information with the Sobel edge operator, preprocessing the image, finding the contour of the parasite egg region in the original image, cropping the parasite egg region, and enhancing the cropped image by data augmentation and random noise;
S2, module selection for the pre-training model: selecting the VGG16 network architecture pre-trained on ImageNet as the pre-training model, taking the first n modules (n ≤ 5) as fixed layers since these modules capture general features, and retaining all of their parameters; the remaining 5-n modules capture task-specific features, and their parameters are fine-tuned after He initialization;
S3, deep network construction based on the pre-training model: inputting the cropped and enhanced pictures from step S1 into the transfer learning model, performing parameter sharing in the n fixed-parameter modules, and performing convolution, pooling, and activation in the remaining 5-n modules to improve picture classification accuracy; then processing the features with three fully connected layers and finally solving the class probabilities with a softmax classifier to determine the picture category.
2. The method for identifying parasite eggs based on transfer learning according to claim 1, wherein in step S3 the conditional distribution and the marginal distribution are both considered in the fully connected layers, a balance factor μ is added, and joint distribution adaptation is performed between the fully connected layers of the pre-trained model and the learned fully connected layers, so that the two distributions become consistent or similar and domains of the same or similar categories share the same probability distribution.
3. The method for identifying parasitic eggs based on transfer learning according to claim 2, wherein the specific process of the joint distribution adaptation is as follows:
S3.1: assuming that the marginal distributions of the source domain and the target domain are not the same, i.e., P(X_S) ≠ P(X_T), and also assuming that the conditional distributions of the source domain and the target domain are not the same, i.e., P(Y_S|X_S) ≠ P(Y_T|X_T);
S3.2: measuring the distribution difference between the source domain and the target domain using both the marginal distribution and the conditional distribution, introducing a variable μ, and balancing the two distributions to solve for the optimal solution; the combined formula is:
D(D_S, D_T) ≈ (1 - μ) D(P(X_S), P(X_T)) + μ D(P(Y_S|X_S), P(Y_T|X_T)),  μ ∈ [0, 1]
where D(D_S, D_T) represents the distance between the source domain and the target domain, D_S represents the probability distribution of the source domain, D_T represents the probability distribution of the target domain, μ is the parameter used to balance the marginal distribution and the conditional distribution, D(P(X_S), P(X_T)) represents the distance between the marginal distributions of the source and target domains, P(X_S) and P(X_T) represent the marginal distributions of the source domain and the target domain respectively, D(P(Y_S|X_S), P(Y_T|X_T)) represents the distance between the conditional distributions of the source and target domains, and P(Y_S|X_S) and P(Y_T|X_T) represent the conditional distributions of the source domain and the target domain respectively;
S3.3: the optimization objective consists of two parts, a loss function and a distribution adaptation term: the loss function measures the difference between the predicted value and the true value, and the distribution adaptation term is the joint distribution adaptation distance of S3.2; the optimization objective is then:
min  Σ_{x_i ∈ D_a} J(f(x_i), y_i) + λ D²(D_S, D_T)
where D_a represents the set of all annotated data in the source domain and the target domain, J(·, ·) is a commonly used loss function, w represents the weights, f(x_i) is the result of applying the weights to the input image, y_i indicates the correct result, fc6 to fc8 denote the fully connected layers on which the adaptation distance is computed, λ weights the adaptation term against the loss term, and D²(D_S, D_T) is the distribution difference defined in S3.2.
4. The method for identifying parasite eggs based on transfer learning according to claim 1, wherein the deep network based on the pre-trained model in step S3 comprises five modules, three fully connected layers, and a softmax classifier; the first module comprises two convolutional layers, the second module comprises a max pooling layer and two convolutional layers, and the third to fifth modules each comprise a max pooling layer and three convolutional layers; the first fully connected layer integrates the features output by the fifth module and converts them into a linear combination, the second fully connected layer weights the features, and the third fully connected layer classifies the result; softmax probability solving is performed on the features extracted by the third fully connected layer, and the probability of each possible picture category is obtained through the softmax classifier.
5. The method for identifying eggs of parasites based on transfer learning according to claim 1, wherein the step of training the softmax classifier in the step S3 is as follows:
S4.1: classifying a given parasite egg test sample x among k classes, estimating a probability value p(y = j | x) for each class j with the hypothesis function, and thereby estimating the probability of each classification result for x; the hypothesis function outputs a k-dimensional vector representing the k estimated probability values;
S4.2: the probability that softmax regression classifies x into class j is:
p(y^(i) = j | x^(i); θ) = exp(θ_j^T x^(i)) / Σ_{l=1}^{k} exp(θ_l^T x^(i))
where p(y^(i) = j | x^(i); θ) represents the probability of classifying x into class j, θ represents the model parameters of the fully connected layer, and the denominator Σ_{l=1}^{k} exp(θ_l^T x^(i)) normalizes the probabilities over the k classes;
S4.3: calculating the probability of each category and returning the category with the highest probability, i.e., the most likely category.
CN202011557317.7A 2020-12-24 2020-12-24 Parasite egg identification method based on transfer learning Active CN112613410B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011557317.7A CN112613410B (en) 2020-12-24 2020-12-24 Parasite egg identification method based on transfer learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011557317.7A CN112613410B (en) 2020-12-24 2020-12-24 Parasite egg identification method based on transfer learning

Publications (2)

Publication Number Publication Date
CN112613410A true CN112613410A (en) 2021-04-06
CN112613410B CN112613410B (en) 2024-05-14

Family

ID=75245239

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011557317.7A Active CN112613410B (en) 2020-12-24 2020-12-24 Parasite egg identification method based on transfer learning

Country Status (1)

Country Link
CN (1) CN112613410B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114972299A (en) * 2022-06-16 2022-08-30 沈阳工业大学 Railway track defect detection method based on deep migration learning
KR102562740B1 (en) * 2022-09-29 2023-08-02 노을 주식회사 Method and apparatus for identifying eggs of parasites using image normalization
CN116778208A (en) * 2023-08-24 2023-09-19 吉林大学 Image clustering method based on depth network model

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107451661A (en) * 2017-06-29 2017-12-08 西安电子科技大学 A kind of neutral net transfer learning method based on virtual image data collection
CN107958286A (en) * 2017-11-23 2018-04-24 清华大学 A kind of depth migration learning method of field Adaptive Networking
CN110321926A (en) * 2019-05-24 2019-10-11 北京理工大学 A kind of moving method and system based on depth residual GM network
CN110849627A (en) * 2019-11-27 2020-02-28 哈尔滨理工大学 Width migration learning network and rolling bearing fault diagnosis method based on same
CN111652264A (en) * 2020-04-13 2020-09-11 西安理工大学 Negative migration sample screening method based on maximum mean difference
CN112052904A (en) * 2020-09-09 2020-12-08 陕西理工大学 Method for identifying plant diseases and insect pests based on transfer learning and convolutional neural network

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107451661A (en) * 2017-06-29 2017-12-08 西安电子科技大学 A kind of neutral net transfer learning method based on virtual image data collection
CN107958286A (en) * 2017-11-23 2018-04-24 清华大学 A kind of depth migration learning method of field Adaptive Networking
CN110321926A (en) * 2019-05-24 2019-10-11 北京理工大学 A kind of moving method and system based on depth residual GM network
CN110849627A (en) * 2019-11-27 2020-02-28 哈尔滨理工大学 Width migration learning network and rolling bearing fault diagnosis method based on same
CN111652264A (en) * 2020-04-13 2020-09-11 西安理工大学 Negative migration sample screening method based on maximum mean difference
CN112052904A (en) * 2020-09-09 2020-12-08 陕西理工大学 Method for identifying plant diseases and insect pests based on transfer learning and convolutional neural network

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Hu Penghui; Wang Na; Wang Yi; Wang Huifang; Wang Tianfu; Ni Dong: "Intelligent identification of the levator hiatus based on a fully convolutional neural network", Journal of Shenzhen University (Science and Engineering), No. 03, 25 May 2018 (2018-05-25) *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114972299A (en) * 2022-06-16 2022-08-30 沈阳工业大学 Railway track defect detection method based on deep migration learning
CN114972299B (en) * 2022-06-16 2024-03-26 沈阳工业大学 Railway track defect detection method based on deep migration learning
KR102562740B1 (en) * 2022-09-29 2023-08-02 노을 주식회사 Method and apparatus for identifying eggs of parasites using image normalization
CN116778208A (en) * 2023-08-24 2023-09-19 吉林大学 Image clustering method based on depth network model
CN116778208B (en) * 2023-08-24 2023-11-10 吉林大学 Image clustering method based on depth network model

Also Published As

Publication number Publication date
CN112613410B (en) 2024-05-14

Similar Documents

Publication Publication Date Title
CN110334706B (en) Image target identification method and device
CN112613410B (en) Parasite egg identification method based on transfer learning
CN107274386B (en) artificial intelligent auxiliary cervical cell fluid-based smear reading system
CN110705425B (en) Tongue picture multi-label classification method based on graph convolution network
CN111862119A (en) Semantic information extraction method based on Mask-RCNN
CN107563444A (en) A kind of zero sample image sorting technique and system
CN111652317B (en) Super-parameter image segmentation method based on Bayes deep learning
CN110853070A (en) Underwater sea cucumber image segmentation method based on significance and Grabcut
CN112749673A (en) Method and device for intelligently extracting stock of oil storage tank based on remote sensing image
CN111127360A (en) Gray level image transfer learning method based on automatic encoder
CN113435486A (en) Coal gangue identification method based on PCA-IFOA-SVM combined with gray level-texture fusion features
CN117011260A (en) Automatic chip appearance defect detection method, electronic equipment and storage medium
CN116977633A (en) Feature element segmentation model training method, feature element segmentation method and device
CN115527102A (en) Fish species identification method and system based on contour key points and attention mechanism
CN108154513A (en) Cell based on two photon imaging data detects automatically and dividing method
CN114863189A (en) Intelligent image identification method based on big data
CN114358279A (en) Image recognition network model pruning method, device, equipment and storage medium
Cui et al. Real-time detection of wood defects based on SPP-improved YOLO algorithm
CN114882355A (en) Intelligent building crack identification and detection method and device
Yusof et al. The disease detection for maize-plant using K-means clustering
CN112633327A (en) Staged metal surface defect detection method, system, medium, equipment and application
CN117496276B (en) Lung cancer cell morphology analysis and identification method and computer readable storage medium
CN111008949A (en) Soft and hard tissue detection method for tooth image
CN117435916B (en) Self-adaptive migration learning method in aerial photo AI interpretation
CN113139936B (en) Image segmentation processing method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant