CN117333487B - Acne classification method, device, equipment and storage medium - Google Patents

Acne classification method, device, equipment and storage medium

Info

Publication number
CN117333487B
Authority
CN
China
Prior art keywords
image
target
acne
model
mask
Prior art date
Legal status
Active
Application number
CN202311630752.1A
Other languages
Chinese (zh)
Other versions
CN117333487A (en)
Inventor
王念欧
郦轲
刘文华
万进
Current Assignee
Shenzhen Accompany Technology Co Ltd
Original Assignee
Shenzhen Accompany Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Accompany Technology Co Ltd filed Critical Shenzhen Accompany Technology Co Ltd
Priority to CN202311630752.1A
Publication of CN117333487A
Application granted
Publication of CN117333487B


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/0002 - Inspection of images, e.g. flaw detection
    • G06T7/0012 - Biomedical image inspection
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/0464 - Convolutional networks [CNN, ConvNet]
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/08 - Learning methods
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 - Geometric image transformations in the plane of the image
    • G06T3/40 - Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4038 - Image mosaicing, e.g. composing plane images from plane sub-images
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/10 - Segmentation; Edge detection
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/70 - Determining position or orientation of objects or cameras
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77 - Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/80 - Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V10/806 - Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of extracted features
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20081 - Training; Learning
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20084 - Artificial neural networks [ANN]

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Evolutionary Computation (AREA)
  • Molecular Biology (AREA)
  • Mathematical Physics (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Medical Informatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Databases & Information Systems (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an acne classification method, device, equipment and storage medium. The method comprises the following steps: obtaining an image to be detected, and performing image segmentation on it through a target image segmentation model to obtain a target mask image, the target image segmentation model being obtained by training a UNet++ network model on training sample images; performing channel stitching on the image to be detected and the target mask image to obtain a target stitched image; and grading the target stitched image through a target acne grading model to obtain the severity level of the image to be detected, the target acne grading model being obtained by training an initial acne grading model on training sample images. A mask image is introduced as prior knowledge in the acne grading process, and the global and local features of the acne are effectively identified by means of this prior knowledge, so that the grading accuracy of the severity is improved and reliable reference information is provided for the acne grading result.

Description

Acne classification method, device, equipment and storage medium
Technical Field
The invention relates to the technical field of image processing, and in particular to an acne classification method, device, equipment and storage medium.
Background
Acne, also known as acne vulgaris, is a chronic inflammatory disease of the pilosebaceous unit that commonly occurs on the face, chest and back of young men and women. Skin lesions are generally classified into 3 degrees and 4 grades according to the nature and severity of the acne lesions.
Traditional methods of grading the severity of acne have relied primarily on the long-term experience accumulated by practitioners. Currently, deep learning models are increasingly used to assist in grading the severity of acne.
However, when training existing models, the severity level is generally annotated only for the acne displayed on the whole image, and the features of individual lesions that influence the severity level, such as their size and number, are not annotated. Meanwhile, to reduce the computation of model training and the risk of overfitting, the size of the input acne image is generally limited, so the image may contain blurred pixels, and images with similar appearance tend to have similar severity. As a result, the deep learning models currently in use cannot effectively identify the acne features in acne images, and the grading of acne severity is inaccurate.
Disclosure of Invention
The invention provides an acne classification method, device, equipment and storage medium, which are used to solve the problem that the acne severity grading results output by existing deep learning models are inaccurate, to improve the grading accuracy of the severity, and to provide reliable reference information for the acne grading result.
According to an aspect of the present invention, there is provided an acne classification method, comprising:
acquiring an image to be detected;
performing image segmentation on the image to be detected through a target image segmentation model to obtain a target mask image; the target image segmentation model is obtained by training a UNet++ network model on training sample images;
performing channel stitching on the image to be detected and the target mask image to obtain a target stitched image;
and grading the target stitched image through a target acne grading model to obtain the severity level of the image to be detected; the target acne grading model is obtained by training an initial acne grading model on training sample images.
Further, performing channel stitching on the image to be detected and the target mask image to obtain a target stitched image includes:
performing pixel stitching on the image to be detected and the target mask image at each same pixel position, channel by channel, to obtain an initial stitched image;
and performing dimension reduction on the number of channels of the initial stitched image through a channel adaptive layer to obtain the target stitched image.
Further, the channel adaptive layer includes:
three convolution layers with 1×1 kernels and a ReLU activation layer.
Further, performing pixel stitching on the image to be detected and the target mask image at each same pixel position, channel by channel, to obtain an initial stitched image includes:
acquiring the three-channel pixel values of each pixel point in the image to be detected;
acquiring the mask value corresponding to each pixel point in the target mask image;
for target pixel points of the image to be detected and the target mask image at the same pixel position, splicing the three-channel pixel values and the mask value to obtain a four-channel pixel value of the target pixel point;
and determining the set of four-channel pixel values of all target pixel points as the initial stitched image.
Further, training the UNet++ network model on training sample images includes:
establishing a UNet++ network model;
inputting a training sample image into the UNet++ network model to obtain a prediction mask image; the training sample image is marked with an acne mask label;
training the parameters of the UNet++ network model according to a first loss function value formed from the prediction mask image and the acne mask label;
and returning to the operation of inputting a training sample image into the UNet++ network model to obtain a prediction mask image until the target image segmentation model is obtained.
Further, training the initial acne grading model on training sample images includes:
establishing an initial acne grading model;
inputting a training sample image into the initial acne grading model to obtain an acne grading prediction result; the training sample image is marked with an acne grade label;
training the parameters of the initial acne grading model according to a second loss function value formed from the acne grading prediction result and the acne grade label;
and returning to the operation of inputting a training sample image into the initial acne grading model to obtain an acne grading prediction result until the target acne grading model is obtained.
Further, the initial acne grading model is an InceptionNet neural network model.
According to another aspect of the present invention, there is provided an acne classification device, comprising:
the image acquisition module is used for acquiring an image to be detected;
the image segmentation module is used for performing image segmentation on the image to be detected through a target image segmentation model to obtain a target mask image; the target image segmentation model is obtained by training a UNet++ network model on training sample images;
the image stitching module is used for performing channel stitching on the image to be detected and the target mask image to obtain a target stitched image;
and the grading module is used for grading the target stitched image through a target acne grading model to obtain the severity level of the image to be detected; the target acne grading model is obtained by training an initial acne grading model on training sample images.
According to another aspect of the present invention, there is provided an electronic apparatus including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores a computer program executable by the at least one processor to enable the at least one processor to perform the acne classification method according to any of the embodiments of the present invention.
According to another aspect of the present invention, there is provided a computer readable storage medium storing computer instructions for causing a processor to execute the acne classification method according to any embodiment of the present invention.
According to the technical scheme of the present invention, an image to be detected is obtained and segmented by a target image segmentation model to obtain a target mask image; the target image segmentation model is obtained by training a UNet++ network model on training sample images; channel stitching is performed on the image to be detected and the target mask image to obtain a target stitched image; the target stitched image is graded by a target acne grading model to obtain the severity level of the image to be detected; the target acne grading model is obtained by training an initial acne grading model on training sample images. The target mask image obtained from the target image segmentation model is used as prior knowledge, channel-stitched with the image to be detected, and input into the target acne grading model to obtain the acne grading result. Introducing the target mask image as prior knowledge in the acne grading process allows the global and local features of the acne to be effectively identified, solving the problem that current deep learning models cannot effectively identify these features and therefore output inaccurate acne severity grading results; the grading accuracy of the severity is improved, and reliable reference information is provided for the acne grading result.
It should be understood that the description in this section is not intended to identify key or critical features of the embodiments of the invention or to delineate the scope of the invention. Other features of the present invention will become apparent from the description that follows.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required for the description of the embodiments will be briefly described below, and it is apparent that the drawings in the following description are only some embodiments of the present invention, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a flowchart of an acne classification method according to Example I of the present invention;
Fig. 2 is a schematic diagram of a target mask image;
Fig. 3 is a flowchart of an acne classification method according to Example II of the present invention;
Fig. 4 is a schematic diagram of an acne classification method according to Example II of the present invention;
Fig. 5 is a schematic structural diagram of an acne classification device according to Example III of the present invention;
Fig. 6 is a schematic structural diagram of an electronic device for implementing the acne classification method according to an embodiment of the present invention.
Detailed Description
In order that those skilled in the art will better understand the present invention, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings. It is apparent that the described embodiments are only some, not all, embodiments of the present invention. All other embodiments obtained by those skilled in the art based on the embodiments of the present invention without inventive effort shall fall within the scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and the claims of the present invention and the above figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the invention described herein may be implemented in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
Example I
Fig. 1 is a flowchart of an acne classification method according to Example I of the present invention. The method may be performed by an acne classification device, which may be implemented in hardware and/or software and configured in an electronic apparatus. As shown in Fig. 1, the method includes:
s110, acquiring an image to be detected.
The image to be detected is an image on which severity detection and grading need to be performed; it may be a facial image, or an image of another body part such as the chest or back.
In this embodiment, the image to be detected may be acquired by capturing an image of the area to be detected and performing enhancement processing on the captured image.
S120, performing image segmentation on the image to be detected through a target image segmentation model to obtain a target mask image; the target image segmentation model is obtained by training a UNet++ network model on training sample images.
The target mask image refers to the mask image corresponding to the image to be detected, as predicted by the target image segmentation model. Fig. 2 is a schematic diagram of a target mask image. As shown in Fig. 2, mask values may reflect information such as the size and location of acne lesions, and the set of mask values may reflect their density, distribution, and the like.
The training sample image may be a sample image in the training sample set used to train the UNet++ network model; for example, training sample images may have a size of 224×224. It should be noted that the training sample images may include sample images both with and without acne.
In this embodiment, the training sample image is marked with a mask label, which is data with binary characteristics: for example, white pixel blocks indicate areas where acne is present, and black pixel blocks indicate areas where it is not. The mask label can reflect local features of acne severity, e.g., the size of a mask label can reflect the size of a lesion, and its location can reflect the lesion's position. Meanwhile, mask labels can reflect global features of acne severity: the number of mask labels marked in a training sample image reflects the number of lesions, and combining the positions and number of the mask labels further reflects the distribution and density of the acne.
The target image segmentation model is a fully trained image segmentation model whose purpose is to examine an input image, distinguish whether each pixel is a lesion pixel (i.e., acne), and segment an image of the lesion pixels (i.e., the target mask image). In the embodiment of the invention, the target image segmentation model is obtained by training a UNet++ network model on training sample images. The UNet++ network model has the following characteristics: (1) Multi-level feature fusion. UNet++ introduces multi-level feature fusion to make full use of feature information from different levels; by introducing connections at each level of the decoder, it can better capture features from coarse to fine. (2) Repeated up-sampling and down-sampling. UNet++ is similar to the UNet network model, but adds repeated up-sampling and down-sampling modules to improve segmentation performance; these repeated modules enable the network to better capture features at different scales.
It will be appreciated that the acne referred to in the embodiments of the present invention is understood in a broad sense and may include, for example, acne lesions, pimples, pustules, nodules, and the like.
In this embodiment, a complete target image segmentation model is trained from training sample images marked with acne mask labels and the UNet++ network model, and image segmentation is performed on the image to be detected to obtain the target mask image output by the target image segmentation model. A minimal inference sketch is given below.
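For illustration only, the segmentation stage can be sketched as follows, assuming PyTorch and the third-party segmentation-models-pytorch library; the encoder choice, input size, and the 0.5 binarization threshold are illustrative assumptions, not values specified by this disclosure.

```python
import torch
import segmentation_models_pytorch as smp

# UNet++ with a single binary (lesion / non-lesion) output channel
seg_model = smp.UnetPlusPlus(
    encoder_name="resnet34",   # assumed backbone, not specified in the text
    in_channels=3,             # RGB image to be detected
    classes=1,                 # single-channel mask logits
)
seg_model.eval()

image = torch.rand(1, 3, 224, 224)                # placeholder image to be detected
with torch.no_grad():
    logits = seg_model(image)                     # shape (1, 1, 224, 224)
    target_mask = (torch.sigmoid(logits) > 0.5)   # binary target mask image
```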
S130, performing channel stitching on the image to be detected and the target mask image to obtain a target stitched image.
The target stitched image is an image obtained by fusing prior knowledge into the image to be detected. In this embodiment, the target mask image output by the target image segmentation model serves as the prior knowledge.
In this embodiment, the image to be detected generally has three RGB channels, while the target mask image, being a binary image, generally has one channel. Pixel-channel stitching is performed on each pixel point of the image to be detected and the target mask image to obtain the target stitched image. The number of channels of the target stitched image depends on the number of input channels required by the target acne grading model. A short sketch of this step follows.
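For illustration only, a minimal sketch of the channel stitching step, assuming PyTorch; concatenating the three RGB channels with the single-channel mask along the channel dimension yields a four-channel initial stitched image.

```python
import torch

rgb = torch.rand(1, 3, 224, 224)                      # image to be detected (R, G, B)
mask = torch.randint(0, 2, (1, 1, 224, 224)).float()  # binary target mask image

initial_stitched = torch.cat([rgb, mask], dim=1)      # shape (1, 4, 224, 224)
```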
S140, grading the target stitched image through a target acne grading model to obtain the severity level of the image to be detected; the target acne grading model is obtained by training an initial acne grading model on training sample images.
The target acne grading model is a fully trained acne grading model used to examine input images and determine the severity level of the acne. The target acne grading model is obtained by training an initial acne grading model on training sample images. The initial acne grading model refers to a grading model that is untrained or not fully trained, and may be any model with a classification function.
In this embodiment, the target stitched image is input into the target acne grading model, which grades it to obtain the severity level that the model outputs.
In the embodiment of the invention, the input to the target acne grading model is the target stitched image obtained by fusing the target mask image with the image to be detected. By introducing the target mask image as prior knowledge in the acne grading process, the mask image helps the target acne grading model effectively identify the global and local features of the acne, overcomes the limitations imposed on the input image, solves the problem of inaccurate feature identification, and provides an accurate acne severity grading result even for images with similar appearance and similar acne severity.
According to the technical scheme of this embodiment, the image to be detected is obtained and segmented by the target image segmentation model to obtain a target mask image, the target image segmentation model being obtained by training a UNet++ network model on training sample images; channel stitching is performed on the image to be detected and the target mask image to obtain a target stitched image; and the target stitched image is graded by the target acne grading model, obtained by training an initial acne grading model on training sample images, to obtain the severity level of the image to be detected. The target mask image obtained from the target image segmentation model is used as prior knowledge, channel-stitched with the image to be detected, and input into the target acne grading model to obtain the acne grading result. By introducing the target mask image as prior knowledge in the acne grading process, the global and local features of the acne are effectively identified, the grading accuracy of the severity is improved, and reliable reference information is provided for the acne grading result.
In an alternative embodiment, training the UNet++ network model on training sample images includes:
establishing a UNet++ network model;
inputting a training sample image into the UNet++ network model to obtain a prediction mask image; the training sample image is marked with an acne mask label;
training the parameters of the UNet++ network model according to a first loss function value formed from the prediction mask image and the acne mask label;
and returning to the operation of inputting a training sample image into the UNet++ network model to obtain a prediction mask image until the target image segmentation model is obtained.
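For illustration only, a condensed training-loop sketch of the steps above, assuming PyTorch; the loss choice (binary cross-entropy over mask pixels), optimizer, learning rate, and epoch count are illustrative assumptions. The grading model is trained analogously with grade labels and a classification loss.

```python
import torch
import torch.nn as nn

def train_segmentation(model, loader, epochs=50, lr=1e-4):
    criterion = nn.BCEWithLogitsLoss()       # first loss function (mask vs. mask label)
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):                  # "return to the operation" until trained
        for images, mask_labels in loader:   # samples annotated with acne mask labels
            pred_mask = model(images)        # prediction mask image (logits)
            loss = criterion(pred_mask, mask_labels.float())
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return model                             # the target image segmentation model
```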
In another alternative embodiment, training the initial acne grading model on training sample images includes:
establishing an initial acne grading model;
inputting a training sample image into the initial acne grading model to obtain an acne grading prediction result; the training sample image is marked with an acne grade label;
training the parameters of the initial acne grading model according to a second loss function value formed from the acne grading prediction result and the acne grade label;
and returning to the operation of inputting a training sample image into the initial acne grading model to obtain an acne grading prediction result until the target acne grading model is obtained.
Preferably, the initial acne grading model adopts an InceptionNet neural network model. InceptionNet employs "Inception" modules, which use multiple convolution kernels of different sizes simultaneously to capture features at different scales in an image. This structure allows the network to maintain a large depth without adding too many parameters, improving its feature extraction capability. Global average pooling (or global adaptive pooling or global max pooling) requires no additional learnable parameters, which reduces model complexity and the risk of overfitting, so the model generalizes well across a variety of tasks. Using 1×1 convolution kernels to reduce the number of channels and global average pooling to reduce the number of parameters in the fully connected layer enables efficient inference and training. The "Inception" module is also highly modular, so the network structure can easily be extended or modified to meet different requirements.
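For illustration only, a minimal Inception-style block assuming PyTorch, showing the parallel multi-scale convolutions described above; the branch widths are illustrative assumptions rather than the configuration used by this disclosure.

```python
import torch
import torch.nn as nn

class InceptionBlock(nn.Module):
    def __init__(self, in_ch):
        super().__init__()
        self.b1 = nn.Conv2d(in_ch, 16, kernel_size=1)            # 1x1 branch
        self.b3 = nn.Sequential(
            nn.Conv2d(in_ch, 16, kernel_size=1),                 # 1x1 reduces channels
            nn.Conv2d(16, 24, kernel_size=3, padding=1))         # 3x3 branch
        self.b5 = nn.Sequential(
            nn.Conv2d(in_ch, 4, kernel_size=1),
            nn.Conv2d(4, 8, kernel_size=5, padding=2))           # 5x5 branch
        self.bp = nn.Sequential(
            nn.MaxPool2d(kernel_size=3, stride=1, padding=1),
            nn.Conv2d(in_ch, 8, kernel_size=1))                  # pooling branch

    def forward(self, x):
        # concatenate the multi-scale branch outputs along the channel axis
        return torch.cat([self.b1(x), self.b3(x), self.b5(x), self.bp(x)], dim=1)
```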
Wherein the training sample images may be pre-labeled with label data, which may include an acne mask label and an acne grade label. The acne mask label is used to form the first loss function value together with the prediction mask image that the UNet++ network model outputs for the training sample image; it may be produced by manually marking the lesion areas, e.g., pixel locations occupied by acne are marked 0 and the remaining pixel locations are marked 255. The acne grade label is used to form the second loss function value together with the acne grading prediction result that the initial acne grading model outputs for the training sample image.
The first loss function value and the second loss function value may employ the cross-entropy loss (CE Loss). Cross-entropy loss is typically used to evaluate the difference between the model's output and the true label. It encourages the model to output a probability distribution approximating the true label rather than just a discrete class, which helps the model learn uncertainty information instead of only a hard classification. It is also relatively efficient to compute, so models can typically be trained quickly with optimization methods such as stochastic gradient descent.
The cross-entropy loss is calculated as:

$$L_{CE} = -\frac{1}{N}\sum_{i=1}^{N} y_i \log \hat{y}_i$$

where $L_{CE}$ denotes the cross-entropy loss value, $y_i$ denotes the true label of input image $i$, $\hat{y}_i$ denotes the predicted output value for input image $i$, and $N$ is the number of samples.
The first loss function value or the second loss function value may also employ the Dice loss or other loss functions, and combinations of multiple loss functions may likewise be employed; the embodiments of the present invention impose no limitation here.
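For illustration only, the cross-entropy loss above can be evaluated numerically; a minimal sketch assuming PyTorch, where the four severity grades and the logit values are made-up illustration data.

```python
import torch
import torch.nn.functional as F

logits = torch.tensor([[2.0, 0.5, 0.1, -1.0]])   # model output for one image, 4 grades
label = torch.tensor([0])                         # true severity grade
loss = F.cross_entropy(logits, label)             # -log(softmax(logits)[0, 0]) ≈ 0.35
```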
Example II
Fig. 3 is a flowchart of an acne classification method according to Example II of the present invention. This embodiment further refines step S130 of the above embodiment, namely performing channel stitching on the image to be detected and the target mask image to obtain a target stitched image, as: performing pixel stitching on the image to be detected and the target mask image at each same pixel position, channel by channel, to obtain an initial stitched image; and performing dimension reduction on the number of channels of the initial stitched image through a channel adaptive layer to obtain the target stitched image.
As shown in fig. 3, the method includes:
s210, acquiring an image to be detected.
S220, performing image segmentation on the image to be detected through a target image segmentation model to obtain a target mask image; the target image segmentation model is obtained by training a UNet++ network model on training sample images.
S230, performing pixel stitching on the image to be detected and the target mask image at each same pixel position, channel by channel, to obtain an initial stitched image.
The initial stitched image is obtained by stitching the image to be detected and the target mask image channel by channel. Since the image to be detected typically has three RGB channels and the target mask image, being a binary image, typically has one channel, the initial stitched image has four channels. However, models such as the InceptionNet neural network require input images with three channels.
In an alternative embodiment, performing pixel stitching on the image to be detected and the target mask image at each same pixel position, channel by channel, to obtain an initial stitched image includes:
acquiring the three-channel pixel values of each pixel point in the image to be detected;
acquiring the mask value corresponding to each pixel point in the target mask image;
for target pixel points of the image to be detected and the target mask image at the same pixel position, splicing the three-channel pixel values and the mask value to obtain a four-channel pixel value of the target pixel point;
and determining the set of four-channel pixel values of all target pixel points as the initial stitched image.
In the present embodiment, the three-channel pixel values $(R, G, B)$ of a pixel point in the image to be detected and the corresponding mask value $M$ are spliced channel by channel, so that the four-channel pixel value of the target pixel point is $(R, G, B, M)$; the set of four-channel pixel values of all target pixel points is determined as the initial stitched image.
S240, performing dimension reduction on the number of channels of the initial stitched image through the channel adaptive layer to obtain a target stitched image.
In this embodiment, since the initial stitched image is a four-channel image while models such as the InceptionNet neural network require three-channel input images, dimension reduction must also be performed on the number of channels of the initial stitched image to obtain a three-channel target stitched image.
Optionally, the channel adaptive layer includes: three convolution layers with 1×1 kernels and a ReLU activation layer.
In this embodiment, the channel adaptive layer, formed by three convolution layers with 1×1 kernels and a ReLU activation layer, reduces the four-channel initial stitched image to a three-channel target stitched image so that it can be input into the target acne grading model trained on the InceptionNet neural network model.
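For illustration only, a minimal sketch of the channel adaptive layer assuming PyTorch; the intermediate channel widths and the placement of a single ReLU at the end are assumptions, since the disclosure specifies only three 1×1 convolution layers and a ReLU activation layer.

```python
import torch
import torch.nn as nn

channel_adapter = nn.Sequential(
    nn.Conv2d(4, 4, kernel_size=1),   # 1x1 convolutions mix channel information
    nn.Conv2d(4, 4, kernel_size=1),
    nn.Conv2d(4, 3, kernel_size=1),   # final 1x1 conv maps 4 channels down to 3
    nn.ReLU(),
)

initial_stitched = torch.rand(1, 4, 224, 224)
target_stitched = channel_adapter(initial_stitched)   # shape (1, 3, 224, 224)
```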
S250, grading the target stitched image through a target acne grading model to obtain the severity level of the image to be detected; the target acne grading model is obtained by training an initial acne grading model on training sample images.
In a specific embodiment, Fig. 4 is a schematic diagram of an acne classification method according to an embodiment of the present invention. As shown in Fig. 4, the image to be detected is input into the target image segmentation model for image segmentation to obtain a target mask image; the image to be detected and the target mask image are pixel-stitched channel by channel to obtain an initial stitched image; dimension reduction is performed on the number of channels of the initial stitched image through the channel adaptive layer to obtain a target stitched image; and the target stitched image is graded by the target acne grading model to obtain the severity level of the image to be detected. These steps are composed end to end in the sketch below.
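For illustration only, an end-to-end sketch of the flow shown in Fig. 4, assuming PyTorch and reusing the seg_model and channel_adapter sketched earlier together with a hypothetical grading_model; all names are illustrative.

```python
import torch

def grade_acne(image, seg_model, channel_adapter, grading_model):
    with torch.no_grad():
        mask = (torch.sigmoid(seg_model(image)) > 0.5).float()   # target mask image
        stitched = torch.cat([image, mask], dim=1)               # 4-channel initial stitch
        reduced = channel_adapter(stitched)                      # back to 3 channels
        logits = grading_model(reduced)                          # severity grade logits
    return logits.argmax(dim=1)                                  # predicted severity level
```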
According to the technical scheme of this embodiment, the image to be detected is obtained and segmented by the target image segmentation model to obtain a target mask image, the target image segmentation model being obtained by training a UNet++ network model on training sample images; pixel stitching is performed on the image to be detected and the target mask image at each same pixel position, channel by channel, to obtain an initial stitched image; dimension reduction is performed on the number of channels of the initial stitched image through the channel adaptive layer to obtain a target stitched image; and the target stitched image is graded by the target acne grading model, obtained by training an initial acne grading model on training sample images, to obtain the severity level of the image to be detected. The target mask image obtained from the target image segmentation model serves as prior knowledge and, after channel stitching with the image to be detected, is input into the target acne grading model. Introducing the target mask image as prior knowledge in the acne grading process allows the global and local features of the acne to be effectively identified, improving the grading accuracy of the severity and providing reliable reference information for the acne grading result.
Example III
Fig. 5 is a schematic structural diagram of an acne classification device according to Example III of the present invention. As shown in Fig. 5, the apparatus includes:
an image acquisition module 310, configured to acquire an image to be detected;
the image segmentation module 320 is configured to perform image segmentation on the image to be detected through a target image segmentation model to obtain a target mask image; the target image segmentation model is obtained by training a UNet++ network model through training sample images;
the image stitching module 330 is configured to perform channel stitching on the image to be detected and the target mask image to obtain a target stitched image;
the grading module 340 is configured to grade the target stitched image through a target acne grading model to obtain the severity level of the image to be detected; the target acne grading model is obtained by training an initial acne grading model on training sample images.
According to the technical scheme of this embodiment, the image to be detected is obtained and segmented by the target image segmentation model to obtain a target mask image, the target image segmentation model being obtained by training a UNet++ network model on training sample images; channel stitching is performed on the image to be detected and the target mask image to obtain a target stitched image; and the target stitched image is graded by the target acne grading model, obtained by training an initial acne grading model on training sample images, to obtain the severity level of the image to be detected. By introducing the target mask image as prior knowledge in the acne grading process, the global and local features of the acne are effectively identified, the grading accuracy of the severity is improved, and reliable reference information is provided for the acne grading result.
Optionally, the image stitching module 330 includes:
the pixel stitching unit is used for performing pixel stitching on the image to be detected and the target mask image at each same pixel position, channel by channel, to obtain an initial stitched image;
and the dimension reduction unit is used for performing dimension reduction on the number of channels of the initial stitched image through the channel adaptive layer to obtain a target stitched image.
Optionally, the channel adaptation layer includes:
three convolution layers with 1×1 kernels and a ReLU activation layer.
Optionally, the pixel stitching unit is specifically configured to:
acquire the three-channel pixel values of each pixel point in the image to be detected;
acquire the mask value corresponding to each pixel point in the target mask image;
for target pixel points of the image to be detected and the target mask image at the same pixel position, splice the three-channel pixel values and the mask value to obtain a four-channel pixel value of the target pixel point;
and determine the set of four-channel pixel values of all target pixel points as the initial stitched image.
Optionally, training the UNet++ network model on training sample images includes:
establishing a UNet++ network model;
inputting a training sample image into the UNet++ network model to obtain a prediction mask image; the training sample image is marked with an acne mask label;
training the parameters of the UNet++ network model according to a first loss function value formed from the prediction mask image and the acne mask label;
and returning to the operation of inputting a training sample image into the UNet++ network model to obtain a prediction mask image until the target image segmentation model is obtained.
Optionally, training the initial acne grading model on training sample images includes:
establishing an initial acne grading model;
inputting a training sample image into the initial acne grading model to obtain an acne grading prediction result; the training sample image is marked with an acne grade label;
training the parameters of the initial acne grading model according to a second loss function value formed from the acne grading prediction result and the acne grade label;
and returning to the operation of inputting a training sample image into the initial acne grading model to obtain an acne grading prediction result until the target acne grading model is obtained.
Optionally, the initial acne grading model is an InceptionNet neural network model.
The acne classification device provided by the embodiment of the present invention can execute the acne classification method provided by any embodiment of the present invention, and has the functional modules and beneficial effects corresponding to the executed method.
Example IV
Fig. 6 shows a schematic diagram of the structure of an electronic device 10 that may be used to implement an embodiment of the invention. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. Electronic equipment may also represent various forms of mobile devices, such as personal digital processing, cellular telephones, smartphones, wearable devices (e.g., helmets, glasses, watches, etc.), and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the inventions described and/or claimed herein.
As shown in fig. 6, the electronic device 10 includes at least one processor 11, and a memory, such as a Read Only Memory (ROM) 12, a Random Access Memory (RAM) 13, etc., communicatively connected to the at least one processor 11, in which the memory stores a computer program executable by the at least one processor, and the processor 11 may perform various appropriate actions and processes according to the computer program stored in the Read Only Memory (ROM) 12 or the computer program loaded from the storage unit 18 into the Random Access Memory (RAM) 13. In the RAM 13, various programs and data required for the operation of the electronic device 10 may also be stored. The processor 11, the ROM 12 and the RAM 13 are connected to each other via a bus 14. An input/output (I/O) interface 15 is also connected to bus 14.
Various components in the electronic device 10 are connected to the I/O interface 15, including: an input unit 16 such as a keyboard, a mouse, etc.; an output unit 17 such as various types of displays, speakers, and the like; a storage unit 18 such as a magnetic disk, an optical disk, or the like; and a communication unit 19 such as a network card, modem, wireless communication transceiver, etc. The communication unit 19 allows the electronic device 10 to exchange information/data with other devices via a computer network, such as the internet, and/or various telecommunication networks.
The processor 11 may be any of various general and/or special purpose processing components having processing and computing capabilities. Some examples of the processor 11 include, but are not limited to, a central processing unit (CPU), a graphics processing unit (GPU), various specialized artificial intelligence (AI) computing chips, various processors running machine learning model algorithms, digital signal processors (DSPs), and any suitable processor, controller, microcontroller, etc. The processor 11 performs the various methods and processes described above, such as the acne classification method.
In some embodiments, the acne classification method may be implemented as a computer program tangibly embodied on a computer-readable storage medium, such as the storage unit 18. In some embodiments, part or all of the computer program may be loaded and/or installed onto the electronic device 10 via the ROM 12 and/or the communication unit 19. When the computer program is loaded into the RAM 13 and executed by the processor 11, one or more steps of the acne classification method described above may be performed. Alternatively, in other embodiments, the processor 11 may be configured to perform the acne classification method in any other suitable manner (e.g., by means of firmware).
Various implementations of the systems and techniques described here above may be implemented in digital electronic circuitry, integrated circuit systems, field-programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), application-specific standard products (ASSPs), systems on chip (SOCs), complex programmable logic devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that may be executed and/or interpreted on a programmable system including at least one programmable processor, which may be a special purpose or general-purpose programmable processor that can receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
A computer program for carrying out methods of the present invention may be written in any combination of one or more programming languages. These computer programs may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the computer programs, when executed by the processor, cause the functions/acts specified in the flowchart and/or block diagram block or blocks to be implemented. The computer program may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of the present invention, a computer-readable storage medium may be a tangible medium that can contain, or store a computer program for use by or in connection with an instruction execution system, apparatus, or device. The computer readable storage medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. Alternatively, the computer readable storage medium may be a machine readable signal medium. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on an electronic device having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) through which a user can provide input to the electronic device. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a background component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such background, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), wide Area Networks (WANs), blockchain networks, and the internet.
The computing system may include clients and servers. The client and server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server can be a cloud server, also called a cloud computing server or a cloud host, and is a host product in a cloud computing service system, so that the defects of high management difficulty and weak service expansibility in the traditional physical hosts and VPS service are overcome.
It should be appreciated that various forms of the flows shown above may be used to reorder, add, or delete steps. For example, the steps described in the present invention may be performed in parallel, sequentially, or in a different order, so long as the desired results of the technical solution of the present invention are achieved, and the present invention is not limited herein.
The above embodiments do not limit the scope of the present invention. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present invention should be included in the scope of the present invention.

Claims (7)

1. An acne classification method, comprising:
acquiring an image to be detected;
performing image segmentation on the image to be detected through a target image segmentation model to obtain a target mask image; the target image segmentation model is obtained by training a UNet++ network model on training sample images; the training sample images are marked with binary mask labels; the target mask image is composed of mask values, and the mask values are used to reflect the size, position and density of the acne;
performing channel stitching on the image to be detected and the target mask image to obtain a target stitched image;
grading the target stitched image through a target acne grading model to obtain the severity level of the image to be detected; the target acne grading model is obtained by training an initial acne grading model on training sample images;
wherein training the UNet++ network model on training sample images comprises:
establishing a UNet++ network model;
inputting a training sample image into the UNet++ network model to obtain a prediction mask image; the training sample image is marked with an acne mask label;
training the parameters of the UNet++ network model according to a first loss function value formed from the prediction mask image and the acne mask label;
and returning to the operation of inputting a training sample image into the UNet++ network model to obtain a prediction mask image until the target image segmentation model is obtained;
wherein performing channel stitching on the image to be detected and the target mask image to obtain a target stitched image comprises:
performing pixel stitching on the image to be detected and the target mask image at each same pixel position, channel by channel, to obtain an initial stitched image;
performing dimension reduction on the number of channels of the initial stitched image through a channel adaptive layer to obtain the target stitched image;
and wherein training the initial acne grading model on training sample images comprises:
establishing an initial acne grading model;
inputting a training sample image into the initial acne grading model to obtain an acne grading prediction result; the training sample image is marked with an acne grade label;
training the parameters of the initial acne grading model according to a second loss function value formed from the acne grading prediction result and the acne grade label;
and returning to the operation of inputting a training sample image into the initial acne grading model to obtain an acne grading prediction result until the target acne grading model is obtained.
2. The method of claim 1, wherein the channel adaptation layer comprises:
three convolution layers with a convolution kernel of 1×1 and a ReLU activation layer.
3. The method according to claim 1, wherein performing pixel stitching on the image to be detected and the target mask image at each same pixel position, channel by channel, to obtain an initial stitched image comprises:
acquiring the three-channel pixel values of each pixel point in the image to be detected;
acquiring the mask value corresponding to each pixel point in the target mask image;
for target pixel points of the image to be detected and the target mask image at the same pixel position, splicing the three-channel pixel values and the mask value to obtain a four-channel pixel value of the target pixel point;
and determining the set of four-channel pixel values of all target pixel points as the initial stitched image.
4. The method of claim 1, wherein the initial acne grading model is an InceptionNet neural network model.
5. An acne classification device, comprising:
the image acquisition module is used for acquiring an image to be detected;
the image segmentation module is used for performing image segmentation on the image to be detected through a target image segmentation model to obtain a target mask image; the target image segmentation model is obtained by training a UNet++ network model on training sample images; the training sample images are marked with binary mask labels; the target mask image is composed of mask values, and the mask values are used to reflect the size, position and density of the acne;
the image stitching module is used for performing channel stitching on the image to be detected and the target mask image to obtain a target stitched image;
the grading module is used for grading the target stitched image through a target acne grading model to obtain the severity level of the image to be detected; the target acne grading model is obtained by training an initial acne grading model on training sample images;
an image stitching module comprising:
the pixel splicing unit is used for carrying out pixel splicing on the target pixel points of the image to be detected and the target mask image at each same pixel position according to the channel to obtain an initial spliced image;
the dimension reduction unit is used for carrying out dimension reduction processing on the number of channels of the initial spliced image through the channel self-adaptive layer to obtain a target spliced image;
wherein training the unet++ network model by training the sample image comprises:
establishing a UNet++ network model;
inputting a training sample image into the unet++ network model to obtain a prediction mask image; the training sample image is marked with a vaccinia mask label;
training parameters of the UNet++ network model according to first loss function values formed by the prediction mask image and the vaccinia mask label;
returning to the operation of inputting the training sample image into the unet++ network model to obtain a prediction mask image until a target image segmentation model is obtained;
Training an initial vaccinia classification model by training sample images, comprising:
establishing an initial vaccinia grading model;
inputting a training sample image into the initial vaccinia grading model to obtain a vaccinia grading prediction result; the training sample image is marked with a vaccinia grading label;
training parameters of the initial vaccinia classification model according to the vaccinia classification prediction result and a second loss function value formed by the vaccinia classification label;
and returning to the operation of inputting the training sample image into the initial vaccinia grading model to obtain the vaccinia grading prediction result until the target vaccinia grading model is obtained.
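Putting the device modules together, a hedged end-to-end inference sketch; the helper name grade_acne and the (1, 3, H, W) input shape are assumptions for illustration.

```python
import torch

@torch.no_grad()
def grade_acne(image, seg_model, channel_adapter, grading_model):
    """End-to-end inference mirroring the device modules above:
    segmentation -> channel stitching -> channel-adaptive reduction ->
    severity grading. `image` is assumed to be a (1, 3, H, W) float tensor."""
    seg_model.eval(); channel_adapter.eval(); grading_model.eval()
    mask = torch.sigmoid(seg_model(image))        # target mask image, (1, 1, H, W)
    stitched = torch.cat([image, mask], dim=1)    # initial stitched image, 4 channels
    adapted = channel_adapter(stitched)           # target stitched image, 3 channels
    logits = grading_model(adapted)               # grading module
    return logits.argmax(dim=1)                   # severity level
```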
6. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores a computer program executable by the at least one processor to enable the at least one processor to perform the acne classification method of any one of claims 1-4.
7. A computer-readable storage medium storing computer instructions for causing a processor to perform the acne classification method according to any one of claims 1-4.
CN202311630752.1A 2023-12-01 2023-12-01 Acne classification method, device, equipment and storage medium Active CN117333487B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311630752.1A CN117333487B (en) 2023-12-01 2023-12-01 Acne classification method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311630752.1A CN117333487B (en) 2023-12-01 2023-12-01 Acne classification method, device, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN117333487A CN117333487A (en) 2024-01-02
CN117333487B (en) 2024-03-29

Family

ID=89279757

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311630752.1A Active CN117333487B (en) 2023-12-01 2023-12-01 Acne classification method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN117333487B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110033481A (en) * 2018-01-10 2019-07-19 北京三星通信技术研究有限公司 Method and apparatus for carrying out image procossing
WO2019223148A1 (en) * 2018-05-23 2019-11-28 平安科技(深圳)有限公司 Acne determination method based on facial recognition, terminal, and storage medium
CN111179253A (en) * 2019-12-30 2020-05-19 歌尔股份有限公司 Product defect detection method, device and system
CN112001841A (en) * 2020-07-14 2020-11-27 歌尔股份有限公司 Image to-be-detected region extraction method and device and product defect detection system
CN113255396A (en) * 2020-02-07 2021-08-13 北京达佳互联信息技术有限公司 Training method and device of image processing model, and image processing method and device
CN113723310A (en) * 2021-08-31 2021-11-30 平安科技(深圳)有限公司 Image identification method based on neural network and related device
CN115410240A (en) * 2021-05-11 2022-11-29 深圳市聚悦科技文化有限公司 Intelligent face pockmark and color spot analysis method and device and storage medium

Also Published As

Publication number Publication date
CN117333487A (en) 2024-01-02

Similar Documents

Publication Publication Date Title
CN113642431B (en) Training method and device of target detection model, electronic equipment and storage medium
CN113971751A (en) Training feature extraction model, and method and device for detecting similar images
CN112857268B (en) Object area measuring method, device, electronic equipment and storage medium
CN115861462B (en) Training method and device for image generation model, electronic equipment and storage medium
CN111444807A (en) Target detection method, device, electronic equipment and computer readable medium
CN116740355A (en) Automatic driving image segmentation method, device, equipment and storage medium
CN114511743B (en) Detection model training, target detection method, device, equipment, medium and product
CN117274266B (en) Method, device, equipment and storage medium for grading acne severity
CN112132867B (en) Remote sensing image change detection method and device
CN117521768A (en) Training method, device, equipment and storage medium of image search model
CN116758280A (en) Target detection method, device, equipment and storage medium
CN117333487B (en) Acne classification method, device, equipment and storage medium
CN113610856B (en) Method and device for training image segmentation model and image segmentation
CN114612651B (en) ROI detection model training method, detection method, device, equipment and medium
CN115631370A (en) Identification method and device of MRI (magnetic resonance imaging) sequence category based on convolutional neural network
CN115761698A (en) Target detection method, device, equipment and storage medium
CN115359322A (en) Target detection model training method, device, equipment and storage medium
CN114972361A (en) Blood flow segmentation method, device, equipment and storage medium
CN114037865B (en) Image processing method, apparatus, device, storage medium, and program product
CN117372261B (en) Resolution reconstruction method, device, equipment and medium based on convolutional neural network
CN114092739B (en) Image processing method, apparatus, device, storage medium, and program product
CN115147902B (en) Training method, training device and training computer program product for human face living body detection model
CN117746069B (en) Graph searching model training method and graph searching method
CN116580050A (en) Medical image segmentation model determination method, device, equipment and medium
CN117808829A (en) Tooth segmentation method and device, storage medium and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant