CN116258697B - Automatic classification device and method for child skin disease images based on rough labeling - Google Patents



Publication number
CN116258697B
CN116258697B (application CN202310150365.1A)
Authority
CN
China
Prior art keywords
image
classification
color
skin disease
feature extraction
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202310150365.1A
Other languages
Chinese (zh)
Other versions
CN116258697A (en)
Inventor
俞刚
李竞
郑惠文
沈忱
齐国强
Current Assignee
Zhejiang University ZJU
Original Assignee
Zhejiang University ZJU
Priority date
Filing date
Publication date
Application filed by Zhejiang University (ZJU)
Priority to CN202310150365.1A
Publication of CN116258697A
Application granted
Publication of CN116258697B


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T7/0012 Biomedical image inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/80 Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V10/806 Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of extracted features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30004 Biomedical image processing
    • G06T2207/30088 Skin; Dermal


Abstract

The invention discloses an automatic classification device and method for child skin disease images based on rough labeling. After the lesion area of an acquired child skin disease image is roughly labeled, the roughly labeled lesion area is preprocessed to build a mask annotation image. A classification model is constructed comprising a U-Net, a texture feature extraction module, a color feature extraction module, a shape feature extraction module, a first correlation analysis module, a second correlation analysis module, and a feature fusion and classification module, and supervised learning is performed on the classification model with the mask annotation images to optimize the parameters of the classification model. The parameter-optimized classification model then classifies child skin disease images automatically. The device and method build a model that accurately and automatically classifies child skin diseases from rough labels, improving the classification accuracy of child skin disease images.

Description

Automatic classification device and method for child skin disease images based on rough labeling
Technical Field
The invention belongs to the technical field of image classification, and particularly relates to an automatic classification device and method for child skin disease images based on rough labeling.
Background
Skin disease is one of the most common diseases in children. The pediatric skin disorders with the highest incidence currently include atopic dermatitis (AD), urticaria, hemangioma, and diaper dermatitis. These four diseases seriously affect children's quality of life, endanger their physiological and psychological health, can even be life-threatening, and impose a heavy economic and social burden.
At present, most children with skin disease are first seen by primary-care physicians who are not dermatologists. Limited by the experience and capability of primary-care doctors and by incomplete hospital facilities, the misdiagnosis rate of child skin disease at the primary level is high, and this problem is difficult to solve in the short term with conventional clinical methods. Exploring a new method that diagnoses child skin diseases accurately and rapidly, and thereby quickly improves the first-diagnosis accuracy of primary-care doctors, has therefore become an important and urgent problem.
Children's skin differs significantly from adult skin, mainly in the following respects. 1. Skin structure and barrier: the stratum corneum of children (about 7 μm) is roughly 30% thinner than that of adults (about 10 μm) and the epidermis about 20% thinner, with smaller corneocytes; the skin barrier is immature, water is lost faster, resistance is weaker, and the response to external stimuli is stronger. 2. Skin composition: the concentration of natural moisturizing factors in children's corneocytes is markedly lower than in adults, and sebum levels, total lipid content, and sebum secretion are all lower, so children's skin dries out more easily; the melanin content of children's skin is also lower than that of adults, making it more susceptible to sun damage. 3. Microbial barrier: the pH of children's skin is close to neutral (6.6-7.7), and the microbiota is unstable and easily disturbed. These characteristics motivate dedicated research on child skin images.
Deep-learning-based skin disease image recognition models are currently widely studied and tested, but their recognition capability is limited, and most research addresses adult skin disease rather than the skin of children. In addition, deep-learning methods require a great deal of manual effort for detailed labeling, which places high demands on the accuracy of training data; yet skin lesions are often damaged or discontinuous, lesion edges are frequently unclear, and manual annotation is imperfect, so detailed labeling is both necessary and very costly in manpower and material resources. An automatic classification method for child skin disease images in which fine labeling is achieved from rough labeling is therefore particularly important.
Disclosure of Invention
In view of the above, the invention aims to provide an automatic classification device and method for child skin disease images based on rough labeling, which build a model that accurately and automatically classifies child skin diseases from rough labels and improve the classification accuracy of child skin disease images.
To achieve the above object, the method for automatic classification of child skin disease images based on rough labeling provided by an embodiment of the invention comprises the following steps:
after rough labeling of the lesion area of an acquired child skin disease image, preprocessing the roughly labeled lesion area to build a mask annotation image;
constructing a classification model comprising a U-Net, a texture feature extraction module, a color feature extraction module, a shape feature extraction module, a first correlation analysis module, a second correlation analysis module, and a feature fusion and classification module, wherein the U-Net, the texture feature extraction module, the color feature extraction module, and the shape feature extraction module extract depth features, texture features, color features, and shape features, respectively, from the mask annotation image; the first correlation analysis module performs primary feature fusion on the texture and color features; the second correlation analysis module performs secondary feature fusion on the shape features and the primary fusion result; and the feature fusion and classification module fuses the concatenation of the secondary fusion result and the depth features and then classifies the child skin disease;
performing supervised learning on the classification model with the mask annotation images to optimize the parameters of the classification model;
automatically classifying child skin disease images with the parameter-optimized classification model.
In an alternative embodiment, preprocessing the roughly labeled lesion area to build a mask annotation image comprises:
processing the child skin disease image with the CLAHE algorithm, based on the channel differences between child skin and adult skin in the RGB image, to obtain a preprocessed image;
performing superpixel segmentation on the roughly labeled lesion area of the preprocessed image with the SLIC algorithm, then summing the three channel values of each pixel within a superpixel block to obtain a per-pixel chroma value;
dividing the per-pixel chroma values into two classes with a clustering algorithm, i.e., assigning each pixel of the lesion area the value 0 or 1, to obtain a binary image;
finding all boundaries in the binary image, computing the area inside each boundary, and filling the largest connected region, to obtain a fine-grained mask annotation image.
In an alternative embodiment, the clustering algorithm is the K-means algorithm, i.e., K-means is used to divide the per-pixel chroma values into two classes.
In an alternative embodiment, in the color feature extraction module, extracting color features by a color histogram method based on a Bayesian classifier comprises:
obtaining the vectors X_HS, X_HI, and X_SI from the HSI color model value of each pixel in the mask annotation image, classifying the three vectors into 9 color category histograms with the Bayesian classifier, and counting the number of pixels in each color category histogram to obtain the color features.
In an alternative embodiment, in the texture feature extraction module, the texture features of the mask annotation image are extracted with the ULBP algorithm.
In an alternative embodiment, in the shape feature extraction module, shape features based on region edges (shape parameter, bending energy, rectangularity, and circularity) are extracted from the mask annotation image.
In an alternative embodiment, the first correlation analysis module and the second correlation analysis module use a CCA algorithm for feature fusion.
In an alternative embodiment, the feature fusion and classification module includes at least one convolution layer, through which feature fusion is achieved, and a fully connected layer, through which skin disease classification is achieved.
In an alternative embodiment, when supervised learning is performed on the classification model with the mask annotation images, the model parameters are updated with the cross entropy between the child skin disease classification result output by the classification model and the ground-truth class label as the loss function.
To achieve the above object, an embodiment of the invention provides an automatic classification device for child skin disease images based on rough labeling, comprising a preprocessing unit, a model construction unit, a parameter optimization unit, and an application unit;
the preprocessing unit is used for preprocessing the roughly labeled lesion area, after rough labeling of the lesion area of an acquired child skin disease image, to build a mask annotation image;
the model construction unit is used for constructing a classification model comprising a U-Net, a texture feature extraction module, a color feature extraction module, a shape feature extraction module, a first correlation analysis module, a second correlation analysis module, and a feature fusion and classification module, wherein the U-Net, the texture feature extraction module, the color feature extraction module, and the shape feature extraction module extract depth features, texture features, color features, and shape features, respectively, from the mask annotation image; the first correlation analysis module performs primary feature fusion on the texture and color features; the second correlation analysis module performs secondary feature fusion on the shape features and the primary fusion result; and the feature fusion and classification module fuses the concatenation of the secondary fusion result and the depth features and then classifies the child skin disease;
the parameter optimization unit is used for performing supervised learning on the classification model with the mask annotation images to optimize the parameters of the classification model;
the application unit is used for automatically classifying child skin disease images with the parameter-optimized classification model.
Compared with the prior art, the invention has at least the following beneficial effects:
Preprocessing the roughly labeled lesion area to build a fine-grained mask annotation image means that the classification model obtains sufficient semantic information from only a coarse, large-granularity labeling of the lesion area. Accurate automatic classification of child skin disease images is thus achieved without a large amount of fine labeling, solving the problem that unclear lesion boundaries make fine labeling of skin disease difficult. Furthermore, the classification model combines texture, color, and shape features to classify child skin diseases, which improves the classification accuracy of child skin disease images and provides doctors with auxiliary diagnosis capability.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions in the prior art, the drawings required by the embodiments or the description of the prior art are briefly introduced below. Obviously, the drawings described below are only some embodiments of the invention; other drawings can be obtained from them by a person skilled in the art without inventive effort.
FIG. 1 is a flow chart of a child dermatological image automatic classification method based on rough labeling provided in an embodiment;
FIG. 2 is a schematic diagram of a classification model provided by an embodiment;
FIG. 3 is an exemplary diagram of a child skin disease color channel histogram provided by an embodiment;
FIG. 4 is a comparison of various preprocessing methods for child skin disease images provided by an embodiment;
FIG. 5 is an exemplary diagram of a child skin disease image coarse labeling area superpixel segmentation provided by an embodiment;
FIG. 6 is a schematic structural diagram of an automatic classification device for child skin disease images based on rough labeling provided by an embodiment.
Detailed Description
The present invention is described in further detail below with reference to the drawings and embodiments, to make the objects, technical solutions, and advantages of the invention clearer. It should be understood that the detailed description is presented by way of example only and is not intended to limit the scope of the invention.
The invention aims to overcome the defect of the prior art that a large amount of accurate labeling of child skin disease is needed at great cost, while still classifying child skin disease images accurately. The embodiment provides an automatic classification method and device for child skin disease images based on rough labeling. Image characteristics that distinguish child skin from adult skin are used for preliminary preprocessing; graphics methods then match the rough labeling to the actual image pixels to obtain a high-precision semantic segmentation image, from which a mask annotation image is built. Exploiting the patchy and punctate character of many skin disease images, a deep learning training method based on image feature fusion is adopted: U-Net serves as the backbone network, traditional image processing methods extract the color, texture, and shape features of the mask annotation image, canonical correlation analysis (CCA) fuses them into image fusion features, and these are added to the deep learning network to build a complete classification network. Training the classification model on mask annotation images constructed from rough labels achieves high training accuracy, greatly saves the time and effort doctors would spend on fine labeling, and preserves the effective recognition capability of the classification model.
As shown in FIG. 1, the automatic classification method for child skin disease images based on rough labeling provided by the embodiment includes the following steps:
s110, after the obtained child skin disease image is subjected to rough labeling of the focus area, the rough labeled focus area is preprocessed to establish a mask labeling image.
The rough labeling refers to that a doctor performs rectangular frame dragging labeling on a larger focus area, and the focus area selected by the rough labeling frame is rough and possibly contains a non-focus area, and the focus area is not selected comprehensively. After the focus area is coarsely marked, preprocessing the coarsely marked focus area to establish a mask mark image, which specifically comprises the following steps:
(a) Based on the channel differences between child skin and adult skin in the RGB image, the child skin disease image is processed with the contrast limited adaptive histogram equalization (CLAHE) algorithm on the relevant channel to obtain a preprocessed image.
Experimental study shows that normal child skin contains little melanin, and in the RGB spectrum of a normal child skin picture the G channel value is significantly higher than the other two channels, as shown in FIG. 3. This makes the difference between the summed RGB values of lesion pixels large during subsequent superpixel segmentation, so lesion edge information is lost. In addition, since skin disease images are usually taken with ordinary cameras, scenes and lighting vary widely; the CLAHE algorithm increases the contrast available for subsequent lesion-area processing while limiting contrast amplification within the image, thereby reducing noise amplification. A comparison of this operation is shown in FIG. 4.
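To make the clip-and-redistribute idea behind CLAHE concrete, the following is a minimal numpy sketch of a global (non-tiled) clip-limited histogram equalization applied to one channel. A real pipeline would use a tiled implementation such as OpenCV's `createCLAHE`; the `clip_limit` value here is an illustrative assumption.

```python
import numpy as np

def clipped_equalize(channel: np.ndarray, clip_limit: float = 0.02) -> np.ndarray:
    """Global clip-limited histogram equalization (simplified CLAHE step).

    Each histogram bin is clipped at clip_limit * total pixels and the
    clipped excess is redistributed uniformly before the CDF is built.
    """
    hist = np.bincount(channel.ravel(), minlength=256).astype(np.float64)
    limit = clip_limit * channel.size
    excess = np.maximum(hist - limit, 0.0).sum()
    hist = np.minimum(hist, limit) + excess / 256.0   # redistribute clipped mass
    cdf = np.cumsum(hist)
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())  # normalize to [0, 1]
    lut = np.round(cdf * 255).astype(np.uint8)
    return lut[channel]                                # apply lookup table
```

Applying this to the G channel of an RGB image limits how steep the equalization mapping can become, which is the noise-limiting behavior described above.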
(b) Superpixel segmentation is performed with the SLIC (simple linear iterative clustering) algorithm on the roughly labeled lesion area of the preprocessed image (for example, segmenting the lesion area into 200 superpixel blocks), and, following the rough labeling data, the three channel values of each pixel within a superpixel block are summed to obtain a per-pixel chroma value.
The invention refines the roughly labeled image with superpixel segmentation. A superpixel is a small region composed of adjacent pixels with similar color, brightness, and texture. Such regions mostly retain the information needed for further image segmentation and generally do not destroy the boundary information of objects in the image; replacing a large number of pixels with a small number of superpixels to express image features reduces the complexity of image processing. The superpixel blocks generated by the SLIC algorithm are relatively compact and their neighborhood characteristics are easy to express; the algorithm needs few parameters to set and tune, is simple and fast, handles image compactness and contours well, and works for both grayscale and color images. Superpixel segmentation is usually a means of coarsening a fine-grained image, but the invention applies the principle in reverse: the image inside the rough labeling box is treated as a whole, and superpixel segmentation divides it into many fine units, achieving fine segmentation of the rough boundary labeling and laying the fine-granularity foundation for converting rough labels into fine labels. The result of superpixel segmentation of the roughly labeled region may be as shown in FIG. 5.
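The per-superpixel summing of channel values described above can be sketched as follows. SLIC itself is out of scope here, so a trivial grid partition stands in for the SLIC label map; the function names and the cell size are illustrative assumptions.

```python
import numpy as np

def grid_superpixels(h: int, w: int, cell: int = 4) -> np.ndarray:
    """Trivial grid partition standing in for a SLIC label map (illustration only)."""
    rows = np.arange(h) // cell
    cols = np.arange(w) // cell
    n_cols = (w + cell - 1) // cell
    return rows[:, None] * n_cols + cols[None, :]

def superpixel_chroma(img: np.ndarray, labels: np.ndarray) -> np.ndarray:
    """Sum the three channel values of every pixel, then average those sums
    within each superpixel so each block carries a single chroma value."""
    chroma = img.astype(np.float64).sum(axis=2)              # R + G + B per pixel
    sums = np.bincount(labels.ravel(), weights=chroma.ravel())
    counts = np.bincount(labels.ravel())
    return (sums / counts)[labels]                           # broadcast back per pixel
```

With a real SLIC label map substituted for `grid_superpixels`, the output is the per-pixel chroma value that step (c) clusters into two classes.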
(c) The per-pixel chroma values are divided into two classes with a clustering algorithm such as K-means, i.e., each pixel of the lesion area is assigned the value 0 or 1, to obtain a binary image.
(d) All boundaries are found in the binary image, the area inside each boundary is computed, and the largest connected region is filled, to obtain a fine-grained mask annotation image.
To better handle the finely labeled boundary information, the invention further processes the roughly labeled image with a clustering algorithm and a connected-region search. The K-means algorithm clusters the per-pixel chroma values (the sums of the three channel values) of the superpixel blocks into 2 classes, i.e., divides the data inside the rough labeling box into 0/1 pixels. Because of image peaks, critical conditions, and similar problems, the binarized rough labeling data may be discontinuously distributed. The invention therefore searches for the largest connected region: the findContours function finds all boundaries in the binary image, the contourArea function computes the area inside each boundary, and the fillConvexPoly function fills the largest connected region. Through these steps, automatic refinement of the lesion boundary based on the roughly labeled image is achieved.
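A minimal sketch of steps (c) and (d): 2-means clustering of the chroma values into a 0/1 mask, then keeping the largest connected region. The pipeline above uses OpenCV's findContours/contourArea/fillConvexPoly; the pure-numpy breadth-first search below illustrates the same idea without convex filling.

```python
import numpy as np
from collections import deque

def kmeans_binarize(values: np.ndarray, iters: int = 20) -> np.ndarray:
    """2-means on per-pixel chroma values; returns a 0/1 mask (1 = brighter cluster)."""
    c0, c1 = float(values.min()), float(values.max())
    for _ in range(iters):
        assign = (np.abs(values - c1) < np.abs(values - c0)).astype(np.uint8)
        if assign.any() and (assign == 0).any():
            c0, c1 = values[assign == 0].mean(), values[assign == 1].mean()
    return assign

def keep_largest_component(mask: np.ndarray) -> np.ndarray:
    """Keep only the largest 4-connected foreground component (a stand-in
    for the findContours/contourArea/fillConvexPoly pipeline)."""
    h, w = mask.shape
    labels = -np.ones((h, w), dtype=int)
    sizes, cur = [], 0
    for sy in range(h):
        for sx in range(w):
            if mask[sy, sx] and labels[sy, sx] < 0:
                q, n = deque([(sy, sx)]), 0
                labels[sy, sx] = cur
                while q:                      # breadth-first flood fill
                    y, x = q.popleft()
                    n += 1
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and labels[ny, nx] < 0:
                            labels[ny, nx] = cur
                            q.append((ny, nx))
                sizes.append(n)
                cur += 1
    if not sizes:
        return np.zeros_like(mask)
    return (labels == int(np.argmax(sizes))).astype(np.uint8)
```

The surviving component is the fine-grained mask that step (d) produces.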
S120, constructing a classification model.
In the embodiment, the classification model is designed on the basis of a U-Net network. To correct the errors introduced by refining the rough labels, an image feature fusion path is added to the U-Net network, i.e., image features are used as one of the training inputs, establishing the overall classification model shown in FIG. 2.
As shown in FIG. 2, the classification model comprises a U-Net, a texture feature extraction module, a color feature extraction module, a shape feature extraction module, a first correlation analysis module, a second correlation analysis module, and a feature fusion and classification module. The U-Net, the texture feature extraction module, the color feature extraction module, and the shape feature extraction module extract depth features, texture features, color features, and shape features, respectively, from the mask annotation image; the first correlation analysis module performs primary feature fusion on the texture and color features; the second correlation analysis module performs secondary feature fusion on the shape features and the primary fusion result; and the feature fusion and classification module fuses the concatenation of the secondary fusion result and the depth features and then classifies the child skin disease.
In the color feature extraction module, extracting color features by a color histogram method based on a Bayesian classifier comprises:
scanning each pixel of the mask annotation image and forming, from its HSI color model value (h, s, i), the three vectors X_HS = [h, s], X_HI = [h, i], and X_SI = [s, i]; the Bayesian classifier assigns the three vectors to the corresponding skin disease colors, i.e., sorts them into 9 color category histograms, and the number of pixels in each color category histogram is counted to obtain the color features. The 9 color categories include white, red, light brown, dark brown, light blue gray, dark blue gray, black, and undefined colors.
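A sketch of this color-feature path, assuming the standard RGB-to-HSI conversion; the nearest-prototype rule and the prototype values below are hypothetical stand-ins for the trained Bayesian classifier, shown only to make the histogram-counting step concrete.

```python
import numpy as np

def rgb_to_hsi(rgb: np.ndarray) -> np.ndarray:
    """Convert an (N, 3) float RGB array in [0, 1] to HSI (hue in radians)."""
    r, g, b = rgb[:, 0], rgb[:, 1], rgb[:, 2]
    i = (r + g + b) / 3.0
    s = np.where(i > 0, 1.0 - rgb.min(axis=1) / np.maximum(i, 1e-12), 0.0)
    num = 0.5 * ((r - g) + (r - b))
    den = np.sqrt((r - g) ** 2 + (r - b) * (g - b)) + 1e-12
    theta = np.arccos(np.clip(num / den, -1.0, 1.0))
    h = np.where(b <= g, theta, 2.0 * np.pi - theta)
    return np.stack([h, s, i], axis=1)

def color_histogram(hsi: np.ndarray, prototypes: np.ndarray) -> np.ndarray:
    """Count pixels per color category using X_HS = [h, s] vectors and a
    nearest-prototype rule (stand-in for the trained Bayesian classifier)."""
    x_hs = hsi[:, [0, 1]]
    d = np.linalg.norm(x_hs[:, None, :] - prototypes[None, :, :], axis=2)
    bins = np.argmin(d, axis=1)
    return np.bincount(bins, minlength=len(prototypes))
```

In the full method, all three vectors X_HS, X_HI, and X_SI would be classified and counted, and the prototype set would correspond to the 9 color categories.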
In the texture feature extraction module, the ULBP (Uniform Local Binary Pattern) algorithm is adopted to extract the texture features of the mask annotation image. The ULBP algorithm is defined as:

LBP^{riu2}_{P,R} = \begin{cases} \sum_{i=0}^{P-1} s(g_i - g_c), & U(LBP_{P,R}) \le 2 \\ P + 1, & \text{otherwise} \end{cases}, \qquad s(x) = \begin{cases} 1, & x \ge 0 \\ 0, & x < 0 \end{cases}

where g_c is the gray value of the central pixel, R is the neighborhood radius, g_i is the gray value of the i-th pixel in the neighborhood of g_c, P is the number of pixels in the neighborhood of the central pixel, and the uniformity operator U counts the 0/1 transitions in the local binary pattern. The superscript riu2 indicates the rotation-invariant "uniform" patterns, whose U value is at most 2.
The ULBP algorithm measures the relationship between a pixel and its neighborhood pixels and extracts texture features by analyzing that relationship. The distribution of local textures forms the overall texture of the image, and by counting each neighborhood structure the texture feature vector of the image is obtained.
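The riu2 mapping can be sketched in numpy for the 8-neighborhood (P = 8, R = 1): uniform patterns (U at most 2) map to their count of 1 bits, non-uniform patterns to P + 1.

```python
import numpy as np

def ulbp_riu2(img: np.ndarray, P: int = 8) -> np.ndarray:
    """Rotation-invariant uniform LBP (riu2) codes for the 8-neighborhood (R = 1).

    Returns one code in {0..P+1} per interior pixel: uniform patterns give
    the number of neighbors >= center, non-uniform patterns give P + 1.
    """
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    h, w = img.shape
    core = img[1:h-1, 1:w-1].astype(np.int32)
    # bit i is 1 where neighbor i >= center, for each interior pixel
    bits = np.stack([(img[1+dy:h-1+dy, 1+dx:w-1+dx].astype(np.int32) >= core)
                     for dy, dx in offs], axis=0).astype(np.int32)
    # U = number of 0/1 transitions around the circular bit pattern
    u = np.abs(bits - np.roll(bits, 1, axis=0)).sum(axis=0)
    ones = bits.sum(axis=0)
    return np.where(u <= 2, ones, P + 1)
```

The histogram of these codes over the mask region is the texture feature vector described above.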
In the shape feature extraction module, shape features based on region edges, including the shape parameter, bending energy, rectangularity, and circularity, are extracted from the mask annotation image.
The shape parameter F is expressed as:

F = C^2 / (4\pi A)

where C is the perimeter of the region boundary and A is the region area.
The bending energy B is expressed as:

B = \frac{1}{P} \int_0^P k(p)^2 \, dp

where p is the arc length parameter, P is the total curve length, and k(p) is the curvature function.
The rectangularity R is the ratio of the region area to the area of the minimum enclosing rectangle and represents how fully the region fills that rectangle:

R = A / A_{box}

where A is the region area and A_{box} is the area of the minimum enclosing rectangle.
The circularity C measures how circular the region is: C = 4πA / P², where P is the perimeter of the region boundary and A is the region area.
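The four descriptors above can be sketched as a single function taking a region's perimeter, area, minimum-bounding-rectangle area and uniformly sampled curvature values; the sanity check on an ideal circle (F = 1, circularity = 1, rectangularity = π/4, bending energy = 1/r²) is an illustration, not part of the patent:

```python
import math

def shape_descriptors(perimeter, area, box_area, curvatures):
    F = perimeter ** 2 / (4 * math.pi * area)             # shape parameter
    B = sum(k * k for k in curvatures) / len(curvatures)  # mean of k(p)^2
    R = area / box_area                                   # rectangularity
    C = 4 * math.pi * area / perimeter ** 2               # circularity
    return F, B, R, C

# Ideal circle of radius r = 2: perimeter 2*pi*r, area pi*r^2,
# bounding box (2r)^2, constant curvature 1/r along the boundary.
r = 2.0
per, ar = 2 * math.pi * r, math.pi * r * r
F, B, R, C = shape_descriptors(per, ar, (2 * r) ** 2, [1 / r] * 100)
```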
By the above methods, the color, texture and shape features of the image are extracted. To unify the feature dimensions, feature fusion is performed in the first correlation analysis module and the second correlation analysis module using Canonical Correlation Analysis (CCA). CCA is a statistical method for analyzing the correlation between two random vectors; it removes redundant information between features and fuses them effectively.
The feature fusion and classification module is connected to the outputs of the second correlation analysis module and the U-Net. It comprises at least one convolution layer and one fully connected layer, for example 3 convolution layers and 1 fully connected layer; feature fusion is realized by the convolution layers, and skin disease classification by the fully connected layer.
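A minimal numpy sketch of this head: the handcrafted fusion result is spliced with the U-Net depth feature, mixed by a linear (1×1-convolution-like) layer with ReLU, then classified by a fully connected layer with softmax. All layer sizes and random weights are illustrative assumptions:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))  # numerically stabilized
    return e / e.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
n, d_hand, d_deep, n_classes = 4, 16, 64, 5
handcrafted = rng.normal(size=(n, d_hand))        # secondary fusion result
depth = rng.normal(size=(n, d_deep))              # U-Net depth feature

x = np.concatenate([handcrafted, depth], axis=1)  # splicing of the two branches
W1 = rng.normal(size=(d_hand + d_deep, 32)) * 0.1 # fusion layer
W2 = rng.normal(size=(32, n_classes)) * 0.1       # fully connected classifier
probs = softmax(np.maximum(x @ W1, 0) @ W2)       # per-sample class probabilities
```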
S130, performing supervised learning on the classification model by using the mask annotation image so as to optimize parameters of the classification model.
In this embodiment, when the mask annotation image is used for supervised learning of the classification model, the cross entropy between the pediatric skin disease classification result output by the classification model and the ground-truth classification label is used as the loss function to update the model parameters, which include the network parameters of the U-Net and of the feature fusion and classification module. After parameter optimization, a reliable classification model is obtained that can accurately classify pediatric skin disease images.
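The training loss described above is standard cross entropy: the mean negative log-likelihood of the ground-truth class under the model's predicted probabilities. A small sketch with illustrative toy predictions and labels:

```python
import numpy as np

def cross_entropy(probs, labels):
    """Mean negative log-likelihood of the true class for each sample."""
    eps = 1e-12                                    # numerical safety
    picked = probs[np.arange(len(labels)), labels]
    return float(-np.mean(np.log(picked + eps)))

probs = np.array([[0.7, 0.2, 0.1],
                  [0.1, 0.8, 0.1]])
labels = np.array([0, 1])
loss = cross_entropy(probs, labels)  # -(log 0.7 + log 0.8) / 2
```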
And S140, automatically classifying the child skin disease images by using the classification model with optimized parameters.
In this embodiment, when the parameter-optimized classification model is used to automatically classify pediatric skin disease images, the lesion area of the acquired image to be classified is first coarsely labeled, the coarsely labeled lesion area is preprocessed to build a mask annotation image, the mask annotation image is input into the parameter-optimized classification model, and the classification result is obtained by forward inference.
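The mask-building preprocessing (CLAHE, SLIC superpixels, per-superpixel color value, 2-class K-means binarization, then keep and fill the largest connected region) can be sketched as below. All parameter values are illustrative, and the "brighter cluster = lesion" rule is an assumption for the synthetic example, not the patent's criterion:

```python
import numpy as np
from skimage import exposure, segmentation, measure
from sklearn.cluster import KMeans
from scipy import ndimage

def preprocess(image_rgb):
    """Channel-wise CLAHE (the patent adapts CLAHE to children's skin)."""
    return np.stack([exposure.equalize_adapthist(image_rgb[..., c])
                     for c in range(3)], axis=-1)

def build_mask(img, n_segments=50):
    segments = segmentation.slic(img, n_segments=n_segments, start_label=1)
    # one color value per superpixel: sum of the three mean channel values
    values = np.array([img[segments == s].mean(axis=0).sum()
                       for s in range(1, segments.max() + 1)])
    km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(values.reshape(-1, 1))
    lesion = int(np.argmax(km.cluster_centers_.ravel()))  # brighter = lesion (toy)
    binary = np.isin(segments, np.where(km.labels_ == lesion)[0] + 1)
    lab = measure.label(binary)                           # connected components
    if lab.max() == 0:
        return binary
    sizes = [(lab == i).sum() for i in range(1, lab.max() + 1)]
    largest = 1 + int(np.argmax(sizes))
    return ndimage.binary_fill_holes(lab == largest)      # fill the largest region

# Toy image: bright square "lesion" on a dark background. CLAHE is shown for
# completeness but skipped before build_mask here, since it would flatten the
# contrast of this synthetic two-valued image.
img = np.zeros((40, 40, 3))
img[10:30, 10:30] = 0.9
pre = preprocess(img)
mask = build_mask(img)
```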
Based on the same inventive concept, this embodiment also provides an automatic classification device for pediatric skin disease images based on coarse labeling, as shown in FIG. 6, comprising a preprocessing unit, a model construction unit, a parameter optimization unit and an application unit. The preprocessing unit is used to coarsely label the lesion area of the acquired pediatric skin disease image and then preprocess the coarsely labeled lesion area to build a mask annotation image; the model construction unit is used to construct the classification model; the parameter optimization unit is used to perform supervised learning on the classification model with the mask annotation image so as to optimize its parameters; the application unit is used to automatically classify pediatric skin disease images with the parameter-optimized classification model.
It should be noted that, when the automatic classification device provided in the foregoing embodiment classifies pediatric skin disease images, the division into the functional units described above is merely illustrative; in practice, the above functions may be assigned to different functional units as needed, i.e., the internal structure of the terminal or server may be divided into different functional units to complete all or part of the functions described above. In addition, the automatic classification device for pediatric skin disease images and the automatic classification method for pediatric skin disease images provided in the above embodiments belong to the same concept; the detailed implementation process is described in the method embodiment and is not repeated here.
The foregoing is a detailed description of preferred embodiments and advantages of the invention. It should be appreciated that the description is merely illustrative of presently preferred embodiments; changes, additions, substitutions and equivalents made without departing from the scope of the invention are intended to be covered by it.

Claims (5)

1. An automatic classification method for child skin disease images based on coarse labeling, characterized by comprising the following steps:
after performing coarse labeling of the lesion area on the acquired pediatric skin disease image, preprocessing the coarsely labeled lesion area to establish a mask annotation image, comprising: processing the pediatric skin disease image with the CLAHE algorithm, based on the channel differences between children's skin and adult skin in the RGB image, to obtain a preprocessed image; after performing superpixel segmentation on the coarsely labeled lesion area in the preprocessed image with the SLIC algorithm, adding the three-channel values of each pixel in each superpixel block to obtain a single-pixel color value; dividing the single-pixel color values into two classes with a clustering algorithm, i.e., dividing the pixels in the lesion area into pixels of 0 or 1, to obtain a binary image; searching all boundaries based on the binary image, calculating the area of the region within each boundary, and refilling the maximum connected domain to obtain a fine-grained mask annotation image;
the method comprises the steps of constructing a classification model comprising a U-Net, a texture feature extraction module, a color feature extraction module, a shape feature extraction module, a first correlation analysis module, a second correlation analysis module and a feature fusion and classification module, wherein the U-Net, the texture feature extraction module, the color feature extraction module and the shape color feature extraction module are respectively used for extracting depth features, texture features, color features and shape features from mask annotation images, the first correlation analysis module is used for carrying out primary feature fusion on the texture features and the color features, the second correlation analysis module is used for carrying out secondary feature fusion on the shape features and primary feature fusion results, and the feature fusion and classification module is used for carrying out feature fusion on the splicing results of the secondary feature fusion results and the depth features and then carrying out child skin disease classification; wherein, inIn the texture feature extraction module, a color histogram feature extraction mode based on a Bayesian classifier is adopted to extract color features, and the method comprises the following steps: obtaining X based on HIS color model value of each pixel point in mask masking annotation image HS 、X HI And X SI Classifying the three vectors into 9 color category histograms through a Bayesian classifier, and counting the number of pixel points in each color category histogram to obtain color characteristics; in the texture feature extraction module, extracting texture features of a mask annotation image by adopting an ULBP algorithm; in the shape feature extraction module, extracting shape features based on region edges from mask annotation images, wherein the shape features comprise shape parameters, bending energy, rectangularity and circularity; the first correlation 
analysis module and the second correlation analysis module adopt a CCA algorithm to perform feature fusion;
performing supervised learning on the classification model by using the mask annotation image so as to optimize parameters of the classification model;
and (5) automatically classifying the child skin disease images by using a classification model with optimized parameters.
2. The automatic classification method of child skin disease images based on coarse labels according to claim 1, wherein the clustering algorithm comprises the K-means clustering algorithm, i.e., the K-means clustering algorithm is adopted to classify the single-pixel color values into two classes.
3. The automatic classification method of child skin disease images based on coarse labels according to claim 1, wherein the feature fusion and classification module comprises at least one convolution layer and a full connection layer, feature fusion is achieved through the convolution layer, and skin disease classification is achieved through the full connection layer.
4. The automatic classification method of child skin disease images based on coarse labels according to claim 1, wherein, when the classification model is trained by supervised learning with the mask annotation image, the cross entropy between the pediatric skin disease classification result output by the classification model and the ground-truth classification label is used as the loss function to update the model parameters.
5. An automatic classification device for child skin disease images based on coarse labeling, characterized by comprising a preprocessing unit, a model construction unit, a parameter optimization unit and an application unit;
the preprocessing unit is used for performing coarse labeling of the lesion area on the acquired pediatric skin disease image and then preprocessing the coarsely labeled lesion area to establish a mask annotation image, comprising: processing the pediatric skin disease image with the CLAHE algorithm, based on the channel differences between children's skin and adult skin in the RGB image, to obtain a preprocessed image; after performing superpixel segmentation on the coarsely labeled lesion area in the preprocessed image with the SLIC algorithm, adding the three-channel values of each pixel in each superpixel block to obtain a single-pixel color value; dividing the single-pixel color values into two classes with a clustering algorithm, i.e., dividing the pixels in the lesion area into pixels of 0 or 1, to obtain a binary image; searching all boundaries based on the binary image, calculating the area of the region within each boundary, and refilling the maximum connected domain to obtain a fine-grained mask annotation image;
the model construction unit is used for constructing a classification model comprising a U-Net, a texture feature extraction module, a color feature extraction module, a shape feature extraction module, a first correlation analysis module, a second correlation analysis module and a feature fusion and classification module, wherein the U-Net, the texture feature extraction module, the color feature extraction module and the shape feature extraction module are respectively used for extracting depth features, texture features, color features and shape features from the mask annotation image; the first correlation analysis module is used for performing primary feature fusion on the texture features and the color features; the second correlation analysis module is used for performing secondary feature fusion on the shape features and the primary feature fusion result; and the feature fusion and classification module is used for performing feature fusion on the splicing result of the secondary feature fusion result and the depth features and then performing pediatric skin disease classification; wherein, in the color feature extraction module, color features are extracted by color histogram feature extraction based on a Bayesian classifier, comprising: obtaining three vectors X_HS, X_HI and X_SI based on the HSI color model values of each pixel in the mask annotation image, classifying the three vectors into 9 color-category histograms through the Bayesian classifier, and counting the number of pixels in each color-category histogram to obtain the color features; in the texture feature extraction module, the ULBP algorithm is adopted to extract the texture features of the mask annotation image; in the shape feature extraction module, shape features based on region edges are extracted from the mask annotation image, including the shape parameter, bending energy, rectangularity and circularity; and the first correlation analysis module and the second correlation analysis module adopt the CCA algorithm to perform feature fusion;
the parameter optimization unit is used for performing supervised learning on the classification model by using the mask annotation image so as to optimize parameters of the classification model;
the application unit is used for automatically classifying the child skin disease images by using the classification model with optimized parameters.
CN202310150365.1A 2023-02-22 2023-02-22 Automatic classification device and method for child skin disease images based on rough labeling Active CN116258697B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310150365.1A CN116258697B (en) 2023-02-22 2023-02-22 Automatic classification device and method for child skin disease images based on rough labeling


Publications (2)

Publication Number Publication Date
CN116258697A CN116258697A (en) 2023-06-13
CN116258697B true CN116258697B (en) 2023-11-24

Family

ID=86683950

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310150365.1A Active CN116258697B (en) 2023-02-22 2023-02-22 Automatic classification device and method for child skin disease images based on rough labeling

Country Status (1)

Country Link
CN (1) CN116258697B (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105389823A (en) * 2015-12-04 2016-03-09 浙江工业大学 Interactive intelligent image segmentation method based on tumor attack
CN105427314A (en) * 2015-11-23 2016-03-23 西安电子科技大学 Bayesian saliency based SAR image target detection method
CN105844292A (en) * 2016-03-18 2016-08-10 南京邮电大学 Image scene labeling method based on conditional random field and secondary dictionary study
CN107016677A (en) * 2017-03-24 2017-08-04 北京工业大学 A kind of cloud atlas dividing method based on FCN and CNN
CN108921205A (en) * 2018-06-14 2018-11-30 浙江大学 A kind of skin disease clinical image classification method based on multi-feature fusion
CN109598709A (en) * 2018-11-29 2019-04-09 东北大学 Mammary gland assistant diagnosis system and method based on fusion depth characteristic
CN110570352A (en) * 2019-08-26 2019-12-13 腾讯科技(深圳)有限公司 image labeling method, device and system and cell labeling method
CN111210449A (en) * 2019-12-23 2020-05-29 深圳市华嘉生物智能科技有限公司 Automatic segmentation method for gland cavity in prostate cancer pathological image

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
AU2012258421A1 (en) * 2012-11-30 2014-06-19 Canon Kabushiki Kaisha Superpixel-based refinement of low-resolution foreground segmentation
WO2015066297A1 (en) * 2013-10-30 2015-05-07 Worcester Polytechnic Institute System and method for assessing wound
WO2016075096A1 (en) * 2014-11-10 2016-05-19 Ventana Medical Systems, Inc. Classifying nuclei in histology images
US11568657B2 (en) * 2017-12-06 2023-01-31 Ventana Medical Systems, Inc. Method of storing and retrieving digital pathology analysis results


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Automated layer segmentation of macular OCT images via graph-based SLIC superpixels and manifold ranking approach; Zhijun Gao et al.; Computerized Medical Imaging and Graphics; full text *
Research on multi-scale spatial-spectral fusion algorithms for hyperspectral image classification; Shan Deming; China Master's Theses Full-text Database, Engineering Science and Technology II; full text *


Similar Documents

Publication Publication Date Title
Li et al. A composite model of wound segmentation based on traditional methods and deep neural networks
CN108898175B (en) Computer-aided model construction method based on deep learning gastric cancer pathological section
US20210118144A1 (en) Image processing method, electronic device, and storage medium
CN112017191A (en) Method for establishing and segmenting liver pathology image segmentation model based on attention mechanism
CN112070772A (en) Blood leukocyte image segmentation method based on UNet + + and ResNet
TW202014984A (en) Image processing method, electronic device, and storage medium
CN110838100A (en) Colonoscope pathological section screening and segmenting system based on sliding window
CN108830149B (en) Target bacterium detection method and terminal equipment
Rani et al. K-means clustering and SVM for plant leaf disease detection and classification
CN112102332A (en) Cancer WSI segmentation method based on local classification neural network
CN113160185A (en) Method for guiding cervical cell segmentation by using generated boundary position
Yang et al. Towards unbiased COVID-19 lesion localisation and segmentation via weakly supervised learning
CN110390678B (en) Tissue type segmentation method of colorectal cancer IHC staining image
Dong et al. A novel feature fusion based deep learning framework for white blood cell classification
CN108765431B (en) Image segmentation method and application thereof in medical field
CN116258697B (en) Automatic classification device and method for child skin disease images based on rough labeling
CN112308827A (en) Hair follicle detection method based on deep convolutional neural network
Huang et al. Skin lesion segmentation based on deep learning
CN115909401A (en) Cattle face identification method and device integrating deep learning, electronic equipment and medium
Dandan et al. A multi-model organ segmentation method based on abdominal ultrasound image
CN114882282A (en) Neural network prediction method for colorectal cancer treatment effect based on MRI and CT images
CN114842030A (en) Bladder tumor image segmentation method based on multi-scale semantic matching
CN114140830A (en) Repeated identification inhibition method based on circulating tumor cell image
Kumar et al. Segmentation of retinal lesions in fundus images: a patch based approach using encoder-decoder neural network
BEKTAŞ et al. Evaluating the effect of lesion segmentation on the detection of skin cancer by pre-trained CNN models

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant