CN112613410B - Parasite egg identification method based on transfer learning - Google Patents


Info

Publication number
CN112613410B
CN112613410B (application CN202011557317.7A)
Authority
CN
China
Prior art keywords
distribution, probability, edge, target domain, source domain
Legal status
Active
Application number
CN202011557317.7A
Other languages
Chinese (zh)
Other versions
CN112613410A (en)
Inventor
李峰
李搏
潘雨青
Current Assignee
Jiangsu University
Original Assignee
Jiangsu University
Priority date
Filing date
Publication date
Application filed by Jiangsu University filed Critical Jiangsu University
Priority to CN202011557317.7A
Publication of CN112613410A
Application granted
Publication of CN112613410B
Legal status: Active


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/60 Type of objects
    • G06V20/69 Microscopic objects, e.g. biological cells or cellular parts
    • G06V20/695 Preprocessing, e.g. image segmentation
    • G06V20/698 Matching; Classification
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G06N3/08 Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Evolutionary Computation (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Biophysics (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention belongs to the fields of image processing, image recognition and transfer learning, and particularly relates to a parasite egg recognition method based on transfer learning. The method introduces a pre-trained model and shared feature parameters based on transfer learning, strengthening the basic features of the whole parasite-egg model and reducing the huge resources and time consumed by deep-model training; it introduces the Sobel operator to process pictures rapidly through edge information, reducing picture dimensionality and accelerating training; it introduces the joint adaptation of the marginal distribution and the conditional distribution so that the distance between the source domain and the target domain is minimized, improving recognition accuracy; and it introduces a softmax classifier, effectively solving the multi-class classification problem.

Description

Parasite egg identification method based on transfer learning
Technical Field
The invention belongs to the fields of image processing, image recognition and transfer learning, and particularly relates to a parasite egg recognition method based on transfer learning.
Background
The development of computer image-processing technology has advanced medical image processing, and transfer-learning techniques are now widely applied to pictures, video, audio, behavior and more. In the big-data era, massive amounts of image, text and speech data are generated every day, and deep-learning models rely on such data for continual training and updating; transfer learning addresses the problems of scarce data, poor accuracy and weak computing capacity.
Parasite eggs of similar shapes are difficult to distinguish because the boundaries between them are blurred, and distinguishing eggs requires a very high level of specialized expertise. Analyzing picture information from parasite-infected food, water sources and feces with transfer learning to establish a parasite-egg classification model can relieve the heavy labor of medical staff and greatly improve their working efficiency.
To date, parasite eggs have been studied in China with methods such as image matching, pattern recognition, sample-feature selection, sample-feature fusion and deep learning. Chinese patent CN110503669 proposes an image-registration algorithm for parasite eggs: using the Harris corner algorithm, redundant noise is removed by rotating, translating and rescaling the image, with data enhancement performed at the same time; this can improve recognition accuracy, but it also raises computation and learning costs and consumes more resources. Chinese patent CN11503107 proposes a parasite-egg image-feature selection method: the variance of each feature-set item is computed, the six smallest features are selected, and these six feature samples are reduced by feature compression to form three descriptive features of the image; this effectively improves accuracy and real-time performance, but the feature-selection result has a direct influence on the recognition result. Chinese patent CN108805101A proposes a parasite-egg recognition method based on deep learning, which uses a deep-learning model to automatically learn and extract feature information from a sample library, solving the problem of low recognition efficiency; however, training a highly accurate deep-learning model requires not only a large amount of sample data but also a great deal of learning time and cost. Abroad, parasite eggs are recognized with support-vector machines, ensemble averaging and other methods, which meet the requirements of automatic recognition with very high accuracy; but that accuracy is achieved under ideal conditions and is difficult to reach under interference from noise, impurities and stains.
Disclosure of Invention
The invention aims to solve the low accuracy caused by small datasets and the increased computation and learning costs caused by data enhancement, while accounting for the influence of feature selection on the recognition result and for recognition errors caused by various interference factors. Combining a transfer-learning model with a deep-learning adaptive algorithm, a hybrid method based on a migration model and deep-learning self-adaptation is provided, which meets the requirements of practical application while improving accuracy and recognition efficiency.
In order to realize the invention, the following technical scheme is adopted: a parasite egg identification method based on transfer learning, comprising the following steps:
S1, image preprocessing: input an original image, convert it into a grey-level image, acquire the edge information of the original image with the Sobel operator, preprocess the image, find the outline of the parasite-egg region in the original image and crop it, then enhance the cropped image with data enhancement and random noise;
S2, module selection for the pre-training model: select the VGG16 network architecture pre-trained on ImageNet as the pre-training model; take the first 1 to n (n <= 5) modules as fixed layers whose parameters are all retained, since the first n modules carry general features; the last 5 - n modules carry task-specific features, and their parameters are adjusted with He initialization;
S3, constructing a depth network based on the pre-training model: input the cropped, enhanced pictures from step S1 into the transfer-learning model; the n fixed-parameter modules perform parameter sharing in turn, followed by convolution, pooling and activation operations that improve picture-classification accuracy; the result is then processed by three fully connected layers, and finally a softmax classifier solves for the probabilities that determine the classification of the picture.
Further, in step S3 the fully connected layers consider both the conditional distribution and the marginal distribution; a balance factor μ is added, and joint distribution adaptation is performed between the fully connected layers of the pre-training model and the learned fully connected layers, so that the two distributions are consistent and domains of the same category have the same probability distribution.
Further, the specific process of the joint distribution adaptation is as follows:
S3.1, suppose the marginal distributions of the source domain and the target domain differ, i.e. P(X_S) ≠ P(X_T), and at the same time their conditional distributions are inconsistent, i.e. P(Y_S|X_S) ≠ P(Y_T|X_T);
S3.2, measure the distribution difference between the source domain and the target domain with the marginal and conditional distributions, introduce the balance factor μ to balance the two when solving for the optimum, and combine them in the following formula:
D(D_S, D_T) ≈ (1 - μ)·D(P(X_S), P(X_T)) + μ·D(P(Y_S|X_S), P(Y_T|X_T)),  μ ∈ [0, 1]
where D(D_S, D_T) denotes the distance between the source domain and the target domain, D_S and D_T denote the probability distributions of the source and target domains, μ is the parameter balancing the marginal and conditional distributions, D(P(X_S), P(X_T)) denotes the distance between the marginal distributions, P(X_S) and P(X_T) denote the marginal distributions of the source and target domains, D(P(Y_S|X_S), P(Y_T|X_T)) denotes the distance between the conditional distributions, and P(Y_S|X_S) and P(Y_T|X_T) denote the conditional distributions of the source and target domains respectively;
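The balanced distance above can be sketched in code. The patent does not fix the concrete distance D, so this illustrative Python sketch assumes squared maximum mean discrepancy (MMD) with a linear kernel, a common choice in joint distribution adaptation; the function names and the use of target-domain pseudo-labels for the conditional term are assumptions.

```python
import numpy as np

def mmd_linear(a: np.ndarray, b: np.ndarray) -> float:
    """Squared MMD with a linear kernel: ||mean(a) - mean(b)||^2."""
    delta = a.mean(axis=0) - b.mean(axis=0)
    return float(delta @ delta)

def balanced_distance(xs, ys, xt, yt_pseudo, num_classes, mu=0.7):
    """(1 - mu) * marginal distance + mu * conditional distance (step S3.2).

    xs, xt: source / target feature matrices; ys: source labels;
    yt_pseudo: pseudo-labels for the unlabelled target domain.
    """
    marginal = mmd_linear(xs, xt)
    conditional = 0.0
    for c in range(num_classes):
        s_c, t_c = xs[ys == c], xt[yt_pseudo == c]
        if len(s_c) and len(t_c):      # skip classes missing from a domain
            conditional += mmd_linear(s_c, t_c)
    return (1.0 - mu) * marginal + mu * conditional
```

With mu = 0 only the marginal term remains; with mu = 1 only the class-conditional term is measured, matching the balance described in the formula.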
The optimization target consists of two parts, a loss function and distribution adaptation: the loss function measures the difference between the predicted value and the true value, and the distribution adaptation is the joint-distribution adaptation distance of step S3.2. The optimization target is:

min_w (1/n) Σ_{(x_i, y_i) ∈ D_a} J(w(x_i), y_i) + λ Σ_{l = fc6}^{fc8} D²(D_S, D_T)

where D_a denotes the set of all annotated data in the source domain and the target domain, n is the number of annotated samples, J is the loss function, w denotes the weights, w(x_i) is the result of passing input image x_i through the weights, y_i denotes the correct result, fc6 to fc8 denote the fully connected layers over which the adaptation is applied, λ is the trade-off coefficient weighting the adaptation term, and D²(D_S, D_T) is the distribution difference defined in S3.2.
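A minimal sketch of the two-part optimization target, assuming cross-entropy as the loss function J and a hypothetical list of per-layer adaptation distances (e.g. for fc6 to fc8); λ and the helper names are illustrative, not prescribed by the patent.

```python
import numpy as np

def cross_entropy(probs: np.ndarray, labels: np.ndarray) -> float:
    """Mean negative log-likelihood of the true classes (the loss term J)."""
    n = len(labels)
    return float(-np.log(probs[np.arange(n), labels] + 1e-12).mean())

def adaptation_objective(probs, labels, layer_distances, lam=0.5):
    """Loss plus lambda times the summed per-layer distribution distances."""
    return cross_entropy(probs, labels) + lam * sum(layer_distances)
```

During training the layer distances would be recomputed from the current fc6 to fc8 activations, so minimizing this objective fits the labels while pulling the source and target distributions together.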
Further, the depth network based on the pre-training model in step S3 comprises five modules, three fully connected layers and a softmax classifier. The first module contains two convolution layers; the second module contains a max-pooling layer and two convolution layers; the third to fifth modules each contain a max-pooling layer and three convolution layers. The first fully connected layer converts the features output by the fifth module into a linear form, the second fully connected layer weights the features, and the third fully connected layer classifies the result; the features extracted by the third fully connected layer are passed to the softmax classifier, which yields the probability value of each possible picture class.
Further, the softmax classifier training steps in step S3 are as follows:
S4.1, divide a given parasite-egg test sample x among k classes: estimate a probability value p(y = j|x) for each class j, i.e. the probability of each classification result of x; the function outputs a k-dimensional vector representing the k estimated probability values;
S4.2, the probability that softmax regression classifies x into category j is:

p(y = j|x; θ) = exp(θ_j^T x) / Σ_{l=1}^{k} exp(θ_l^T x)

where p(y = j|x; θ) denotes the probability of classifying x into category j, θ denotes the model parameters of the fully connected layer, and the denominator Σ_{l=1}^{k} exp(θ_l^T x) normalizes the probabilities over the k classes.
S4.3, calculate the probability of each category and return the category with the highest probability.
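Steps S4.1 to S4.3 can be sketched as follows; this is a minimal illustration of softmax regression, with hypothetical function names.

```python
import numpy as np

def softmax_probs(theta: np.ndarray, x: np.ndarray) -> np.ndarray:
    """p(y=j|x; theta) = exp(theta_j . x) / sum_l exp(theta_l . x)  (S4.2)."""
    logits = theta @ x
    logits = logits - logits.max()   # subtract max for numerical stability
    e = np.exp(logits)
    return e / e.sum()

def classify(theta: np.ndarray, x: np.ndarray) -> int:
    """Return the class with the highest probability (step S4.3)."""
    return int(np.argmax(softmax_probs(theta, x)))
```

Each row of theta holds the parameters θ_j of one class, so the output vector is the k-dimensional probability estimate of step S4.1.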
The parasite egg identification method based on transfer learning introduces a pre-trained model and shared feature parameters, strengthening the basic features of the whole parasite-egg model and reducing the huge resources and time consumed by deep-model training; it introduces the Sobel operator to process pictures rapidly through edge information, reducing picture dimensionality and accelerating training; it introduces the joint adaptation of the marginal distribution and the conditional distribution so that the distance between the source domain and the target domain is minimized, improving recognition accuracy; and it introduces a softmax classifier, effectively solving the multi-class classification problem.
Drawings
FIG. 1 is a flow chart of a parasite egg recognition method based on transfer learning in accordance with the present invention.
FIG. 2 is a photograph of parasite eggs after processing with the Sobel operator.
FIG. 3 is the construction of the depth network model.
FIG. 4 is an error analysis of a migration model module selection.
Detailed Description
The invention will be further explained with reference to specific examples.
As shown in FIG. 1, the parasite egg identification method based on transfer learning of the present invention comprises the following steps:
S1, image preprocessing: input an original image, convert it into a grey-level image, acquire the edge information of the original image with the Sobel operator, perform image preprocessing such as filtering and denoising, binarization, image erosion and dilation, find the outline of the parasite-egg region in the original image and crop it, then enhance the cropped image with data enhancement and random noise;
As a preferred embodiment of the present invention, the image preprocessing step in step S1 specifically includes:
S1.1, select several images with typical characteristics from the pictures to be identified that contain parasite eggs, and convert them to grey scale;
S1.2, smooth the image with a low-pass filter (9 × 9 kernel) to reduce high-frequency noise. The goal of the low-pass filter is to reduce the rate of change in the image, for example by replacing each pixel with the average of the pixels surrounding it;
S1.3, normalize the grey-scale picture and carry out contour analysis according to edge variation: apply an adaptive threshold to the gradient image, setting any pixel below the threshold to 0 (black) and any other pixel to 255 (white);
S1.4, crop the parasite-egg image along the edge changes; the processing result is shown in FIG. 2;
S1.5, normalize the cropped pictures and apply data enhancement.
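Steps S1.2 and S1.3 can be sketched with a small self-contained example. A practical pipeline would normally use an image library such as OpenCV, so the naive convolution below is purely illustrative; the 9 × 9 mean kernel and the 0/255 binarization follow the steps above, while the function names are assumptions.

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T

def conv2d(img: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """'Same'-size 2-D cross-correlation via zero padding (naive loop)."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)))
    out = np.zeros(img.shape, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out

def sobel_edges(gray: np.ndarray, threshold: float) -> np.ndarray:
    """Smooth, take the Sobel gradient magnitude, binarize to 0 / 255."""
    smooth = conv2d(gray, np.ones((9, 9)) / 81.0)  # 9x9 mean (low-pass) filter
    gx = conv2d(smooth, SOBEL_X)                   # horizontal gradient
    gy = conv2d(smooth, SOBEL_Y)                   # vertical gradient
    magnitude = np.hypot(gx, gy)
    return np.where(magnitude < threshold, 0, 255).astype(np.uint8)
```

The binary edge map would then feed the contour analysis and cropping of steps S1.3 and S1.4.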
S2, module selection for the pre-training model: select the VGG16 network architecture pre-trained on ImageNet as the pre-training model; take the first 1 to n (n <= 5) modules as fixed layers whose parameters are all retained, since the first n modules carry general features; the last 5 - n modules carry task-specific features, and their parameters are adjusted with He initialization;
As a preferred embodiment of the present invention, the number of feature layers of the pre-training model is selected in step S2 as shown in FIG. 4: according to the given data, model performance drops as the number of migrated model layers increases. However, the shared general features of the first three modules can be migrated without modification, so migrating these layers accelerates the learning and optimization of the network; that is, the effect is optimal when n = 3. The modules to be modified are given He initialization, which initializes the weights according to the size of the neurons in the previous layer; controlling the initialization in this way makes gradient descent more effective and reaches the global minimum of the cost function faster.
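The He initialization of the re-trained modules can be sketched as follows: weights are drawn from a normal distribution with standard deviation sqrt(2 / fan_in), scaled by the size of the previous layer as described above. The module-splitting helper and its names are assumptions; in a real framework the first n modules would simply be frozen (e.g. marked non-trainable).

```python
import numpy as np

def he_init(fan_in: int, fan_out: int, rng=None) -> np.ndarray:
    """He (Kaiming) initialization: N(0, sqrt(2 / fan_in)), suited to ReLU."""
    if rng is None:
        rng = np.random.default_rng()
    return rng.normal(0.0, np.sqrt(2.0 / fan_in), size=(fan_in, fan_out))

def split_modules(modules, n_fixed=3):
    """Keep the first n modules' pre-trained weights; re-init the rest."""
    fixed = modules[:n_fixed]  # general features, parameters retained
    refit = [he_init(w.shape[0], w.shape[1]) for w in modules[n_fixed:]]
    return fixed, refit
```

With n_fixed = 3 this mirrors the preferred embodiment: three frozen general-feature modules and two He-initialized modules to be fine-tuned.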
S3, constructing a depth network based on the pre-training model: input the cropped, enhanced pictures from step S1 into the transfer-learning model; the n fixed-parameter modules perform parameter sharing in turn, followed by convolution, pooling and activation operations that improve picture-classification accuracy; the result is then processed by three fully connected layers, and finally a softmax classifier solves for the probabilities that determine the classification of the picture.
As a preferred embodiment of the present invention, in step S3 the fully connected layers consider both the conditional distribution and the marginal distribution; a balance factor μ is added, and joint distribution adaptation is performed between the fully connected layers of the pre-training model and the learned fully connected layers, so that the two distributions are consistent and domains of the same category have the same probability distribution.
As a preferred embodiment of the present invention, the specific procedure of the joint distribution adaptation is as follows:
S3.1, assume that the marginal distributions of the source domain and the target domain differ, i.e. P(X_S) ≠ P(X_T), and at the same time that their conditional distributions are inconsistent, i.e. P(Y_S|X_S) ≠ P(Y_T|X_T);
S3.2, measure the distribution difference between the source domain and the target domain with the marginal and conditional distributions, introduce the balance factor μ to balance the two when solving for the optimum, and combine them in the following formula:
D(D_S, D_T) ≈ (1 - μ)·D(P(X_S), P(X_T)) + μ·D(P(Y_S|X_S), P(Y_T|X_T)),  μ ∈ [0, 1]
where D(D_S, D_T) denotes the distance between the source domain and the target domain, D_S and D_T denote the probability distributions of the source and target domains, μ is the parameter balancing the marginal and conditional distributions, D(P(X_S), P(X_T)) denotes the distance between the marginal distributions, P(X_S) and P(X_T) denote the marginal distributions of the source and target domains, D(P(Y_S|X_S), P(Y_T|X_T)) denotes the distance between the conditional distributions, and P(Y_S|X_S) and P(Y_T|X_T) denote the conditional distributions of the source and target domains respectively;
as a preferred embodiment of the present invention, the balance factor μ is in the range of 0.6 and 0.8, and the best effect can be achieved.
The optimization target consists of two parts, a loss function and distribution adaptation: the loss function measures the difference between the predicted value and the true value, and the distribution adaptation is the joint-distribution adaptation distance of step S3.2. The optimization target is:

min_w (1/n) Σ_{(x_i, y_i) ∈ D_a} J(w(x_i), y_i) + λ Σ_{l = fc6}^{fc8} D²(D_S, D_T)

where D_a denotes the set of all annotated data in the source domain and the target domain, n is the number of annotated samples, J is a common loss function, w denotes the weights, w(x_i) is the result of passing input image x_i through the weights, y_i denotes the correct result, fc6 to fc8 denote the fully connected layers over which the adaptation is applied, λ is the trade-off coefficient weighting the adaptation term, and D²(D_S, D_T) is the distribution difference defined in S3.2.
As a preferred embodiment of the present invention, the transfer-learning model of step S3 comprises five modules, three fully connected layers and a softmax classifier. The first module contains two convolution layers; the second module contains one max-pooling layer and two convolution layers; the third to fifth modules each contain one max-pooling layer and three convolution layers. The first fully connected layer converts the features output by the fifth module into a linear form, the second fully connected layer weights the features, and the third fully connected layer classifies the result; the features extracted by the third fully connected layer are passed to the softmax classifier, which yields the probability value of each possible picture class.
As a preferred embodiment of the present invention, the softmax classifier training steps in step S3 are as follows:
S4.1, divide a given parasite-egg test sample x among k classes: estimate a probability value p(y = j|x) for each class j, i.e. the probability of each classification result of x; the function outputs a k-dimensional vector representing the k estimated probability values;
S4.2, the probability that softmax regression classifies sample x^(i) into category j is:

p(y^(i) = j|x^(i); θ) = exp(θ_j^T x^(i)) / Σ_{l=1}^{k} exp(θ_l^T x^(i))

where p(y^(i) = j|x^(i); θ) denotes the probability of classifying x^(i) into category j, θ denotes the model parameters of the fully connected layer, and the denominator normalizes the probabilities over the k classes;
S4.3, calculate the probability of each category and return the category with the highest probability.
As a preferred embodiment of the present invention, the process of constructing the transfer-learning network model is described below with reference to FIG. 3:
(3.1) The original image is processed into 224 × 224 × 3 by step S1, and the model is divided into five modules, three fully connected layers and a classifier; the module parameters are selected according to step S2, and each module contains several convolution layers and a pooling layer.
(3.2) Set block1, whose input is the image preprocessed in step S1; it contains two conv3-64 layers (3 × 3 convolutions with 64 channels), and the output image is 224 × 224 × 64.
(3.3) Set block2, whose input is the output of block1; it contains one max-pool layer and two conv3-128 layers, and the output dimension is 112 × 112 × 128.
(3.4) Set block3, whose input is the output of block2; it contains one max-pool layer and three conv3-256 layers, and the output dimension is 56 × 56 × 256.
(3.5) Set block4, whose input is the output of block3; it contains one max-pool layer and three conv3-512 layers, and the output dimension is 28 × 28 × 512.
(3.6) Set block5, whose input is the output of block4; it contains one max-pool layer and three conv3-512 layers, and the output dimension is 14 × 14 × 512.
(3.7) Migrate the parameters of the first three modules of the VGG16 weights trained on ImageNet into the designed model; because the first three modules of VGG16 carry general features, this markedly improves picture-classification accuracy. Initialize the last two modules randomly with He initialization, migrating the randomly initialized parameters into the corresponding last two modules.
(3.8) block5 is followed by a max-pool layer that reduces the output dimension.
(3.9) The fc6 layer takes the output of module 5 as input and converts the features into a linear form; the output dimension is 1 × 1 × 4096.
(3.10) fc7 is also a fully connected layer; it takes fc6 as input and recombines the output features of the previous layer; the output dimension is 1 × 1 × 4096.
(3.11) fc8 is also a fully connected layer, with fc7 as input; the output dimension is 1 × 1 × 12.
(3.12) The features extracted by the fc8 layer are fed to the softmax model for classification, and softmax yields the class with the maximum probability.
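The shape bookkeeping of steps (3.2) to (3.7) can be checked with a small tracer, assuming a VGG16-style layout in which each 2 × 2 max-pool halves the spatial size and 'same'-padded 3 × 3 convolutions change only the channel count; the function name is illustrative.

```python
def trace_vgg16_shapes(size: int = 224):
    """Trace (H, W, C) through the five VGG16-style blocks of (3.2)-(3.7).

    Each block: optional leading 2x2 max-pool (halves H and W), then
    'same'-padded 3x3 convolutions that only change the channel count.
    """
    blocks = [  # (has_leading_pool, out_channels)
        (False, 64), (True, 128), (True, 256), (True, 512), (True, 512),
    ]
    shapes = []
    h = size
    for has_pool, channels in blocks:
        if has_pool:
            h //= 2
        shapes.append((h, h, channels))
    return shapes
```

Running the tracer reproduces the dimensions listed above: 224 × 224 × 64, 112 × 112 × 128, 56 × 56 × 256, 28 × 28 × 512 and 14 × 14 × 512.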

Claims (2)

1. A parasite egg identification method based on transfer learning, characterized by comprising the following steps:
S1, image preprocessing: input an original image, convert it into a grey-level image, acquire the edge information of the original image with the Sobel operator, preprocess the image, find the outline of the parasite-egg region in the original image and crop it, then enhance the cropped image with data enhancement and random noise;
S2, module selection for the pre-training model: select the VGG16 network architecture pre-trained on ImageNet as the pre-training model; take the first 1 to n modules, where n <= 5, as the fixed layers, retaining all parameters of the first n modules; the parameters of the remaining 5 - n modules are adjusted with He initialization;
S3, constructing a depth network based on the pre-training model: input the cropped, enhanced pictures from step S1 into the transfer-learning model; the n fixed-parameter modules perform parameter sharing in turn, followed by convolution, pooling and activation operations that improve picture-classification accuracy; the result is then processed by three fully connected layers, and finally a softmax classifier solves for the probabilities that determine the classification of the pictures;
in step S3 the fully connected layers consider both the conditional distribution and the marginal distribution; a balance factor μ is added, and joint distribution adaptation is performed between the fully connected layers of the pre-training model and the learned fully connected layers, so that the two distributions are consistent and domains of the same category have the same probability distribution;
the specific process of the joint distribution adaptation is as follows:
S3.1: the marginal distributions of the source domain and the target domain differ, i.e. P(X_S) ≠ P(X_T), while their conditional distributions are inconsistent, i.e. P(Y_S|X_S) ≠ P(Y_T|X_T);
S3.2: measure the distribution difference between the source domain and the target domain with the marginal and conditional distributions, introduce the balance factor μ to balance the two when solving for the optimum, and combine them in the following formula:
D(D_S, D_T) ≈ (1 - μ)·D(P(X_S), P(X_T)) + μ·D(P(Y_S|X_S), P(Y_T|X_T)),  μ ∈ [0, 1]
where D(D_S, D_T) denotes the distance between the source domain and the target domain, D_S and D_T denote the probability distributions of the source and target domains, μ is the parameter balancing the marginal and conditional distributions, D(P(X_S), P(X_T)) denotes the distance between the marginal distributions, P(X_S) and P(X_T) denote the marginal distributions of the source and target domains, D(P(Y_S|X_S), P(Y_T|X_T)) denotes the distance between the conditional distributions, and P(Y_S|X_S) and P(Y_T|X_T) denote the conditional distributions of the source and target domains respectively;
S3.3: the optimization target consists of two parts, a loss function and distribution adaptation, where the loss function measures the difference between the predicted value and the true value and the distribution adaptation is the joint-distribution adaptation distance of S3.2; the optimization target is:

min_w (1/n) Σ_{(x_i, y_i) ∈ D_a} J(w(x_i), y_i) + λ Σ_{l = fc6}^{fc8} D²(D_S, D_T)

where D_a denotes the set of all annotated data in the source domain and the target domain, n is the number of annotated samples, J is a loss function, w denotes the weights, w(x_i) is the result of passing input image x_i through the weights, y_i denotes the correct result, fc6 to fc8 denote the fully connected layers over which the adaptation is applied, λ is the trade-off coefficient weighting the adaptation term, and D²(D_S, D_T) is the distribution difference defined in S3.2;
the depth network based on the pre-training model in step S3 comprises five modules, three fully connected layers and a softmax classifier, wherein the first module contains two convolution layers, the second module contains a max-pooling layer and two convolution layers, and the third to fifth modules each contain a max-pooling layer and three convolution layers; the first fully connected layer converts the features output by the fifth module into a linear form, the second fully connected layer weights the features, and the third fully connected layer classifies the result; the features extracted by the third fully connected layer are passed to the softmax classifier, which yields the probability value of each possible picture class.
2. The parasite egg identification method based on transfer learning according to claim 1, wherein the softmax classifier training steps in step S3 are as follows:
S4.1, divide a given parasite-egg test sample x among k classes: estimate a probability value p(y = j|x) for each class j, i.e. the probability of each classification result of x; the function outputs a k-dimensional vector representing the k estimated probability values;
S4.2: the probability of classifying x into category j in softmax regression is:

p(y = j | x; θ) = e^{θ_j^T x} / Σ_{l=1}^{k} e^{θ_l^T x}
where p(y = j | x; θ) represents the probability of classifying x into category j, θ represents the model parameters of the fully connected layer, and the denominator Σ_{l=1}^{k} e^{θ_l^T x} normalizes the distribution so that the k class probabilities sum to 1;
S4.3: compute the probability of each category and return the category with the highest probability as the classification result.
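Steps S4.1 to S4.3 can be sketched in a few lines of plain Python; `softmax` and `predict` are illustrative names, and the `logits` stand for the scores produced by the third fully connected layer:

```python
import math

def softmax(logits):
    # S4.1/S4.2: exponentiate each class score and normalize, yielding a
    # k-dimensional vector of probabilities that sums to 1
    exps = [math.exp(z) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def predict(logits):
    # S4.3: return the index of the class with the highest probability
    probs = softmax(logits)
    return max(range(len(probs)), key=lambda j: probs[j])
```

For example, scores of [1.0, 2.0, 3.0] yield monotonically increasing probabilities, so `predict` returns class index 2.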
CN202011557317.7A 2020-12-24 2020-12-24 Parasite egg identification method based on transfer learning Active CN112613410B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011557317.7A CN112613410B (en) 2020-12-24 2020-12-24 Parasite egg identification method based on transfer learning

Publications (2)

Publication Number Publication Date
CN112613410A CN112613410A (en) 2021-04-06
CN112613410B true CN112613410B (en) 2024-05-14

Family

ID=75245239

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011557317.7A Active CN112613410B (en) 2020-12-24 2020-12-24 Parasite egg identification method based on transfer learning

Country Status (1)

Country Link
CN (1) CN112613410B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114972299B (en) * 2022-06-16 2024-03-26 沈阳工业大学 Railway track defect detection method based on deep migration learning
KR102562740B1 (en) * 2022-09-29 2023-08-02 노을 주식회사 Method and apparatus for identifying eggs of parasites using image normalization
CN116778208B (en) * 2023-08-24 2023-11-10 吉林大学 Image clustering method based on depth network model

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107451661A (en) * 2017-06-29 2017-12-08 西安电子科技大学 A kind of neutral net transfer learning method based on virtual image data collection
CN107958286A (en) * 2017-11-23 2018-04-24 清华大学 A kind of depth migration learning method of field Adaptive Networking
CN110321926A (en) * 2019-05-24 2019-10-11 北京理工大学 A kind of moving method and system based on depth residual GM network
CN110849627A (en) * 2019-11-27 2020-02-28 哈尔滨理工大学 Width migration learning network and rolling bearing fault diagnosis method based on same
CN111652264A (en) * 2020-04-13 2020-09-11 西安理工大学 Negative migration sample screening method based on maximum mean difference
CN112052904A (en) * 2020-09-09 2020-12-08 陕西理工大学 Method for identifying plant diseases and insect pests based on transfer learning and convolutional neural network

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Intelligent recognition of the levator ani hiatus based on a fully convolutional neural network; Hu Penghui; Wang Na; Wang Yi; Wang Huifang; Wang Tianfu; Ni Dong; Journal of Shenzhen University (Science and Engineering Edition); 2018-05-25 (03); full text *


Similar Documents

Publication Publication Date Title
CN112613410B (en) Parasite egg identification method based on transfer learning
CN111860670B (en) Domain adaptive model training method, image detection method, device, equipment and medium
CN110334706B (en) Image target identification method and device
Pan et al. Accurate segmentation of nuclei in pathological images via sparse reconstruction and deep convolutional networks
CN107274386B (en) artificial intelligent auxiliary cervical cell fluid-based smear reading system
CN107230203B (en) Casting defect identification method based on human eye visual attention mechanism
CN106462746A (en) Analyzing digital holographic microscopy data for hematology applications
CN110705425A (en) Tongue picture multi-label classification learning method based on graph convolution network
Feng et al. A color image segmentation method based on region salient color and fuzzy c-means algorithm
Mohammed Abdelkader et al. Entropy-based automated method for detection and assessment of spalling severities in reinforced concrete bridges
Ju et al. Classification of jujube defects in small data sets based on transfer learning
CN111091129A (en) Image salient region extraction method based on multi-color characteristic manifold sorting
Sigdel et al. Feature analysis for classification of trace fluorescent labeled protein crystallization images
CN115409804A (en) Method for identifying and marking focus region of mammary gland magnetic resonance image and predicting curative effect
Wang et al. Feature extraction and segmentation of pavement distress using an improved hybrid task cascade network
US11790673B2 (en) Method for detection of cells in a cytological sample having at least one anomaly
Chapaneri et al. Plant disease detection: A comprehensive survey
CN108154513A (en) Cell based on two photon imaging data detects automatically and dividing method
Kalbhor et al. Cervical cancer diagnosis based on cytology pap smear image classification using fractional coefficient and machine learning classifiers
CN114358279A (en) Image recognition network model pruning method, device, equipment and storage medium
Cui et al. Real-time detection of wood defects based on SPP-improved YOLO algorithm
CN109191452B (en) Peritoneal transfer automatic marking method for abdominal cavity CT image based on active learning
Algazinov et al. Hardware–software complex for the analysis of a nonuniform flow of objects in real-time optical sorting systems
CN117496276B (en) Lung cancer cell morphology analysis and identification method and computer readable storage medium
CN117274294B (en) Homologous chromosome segmentation method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant