CN112686336A - Neural network-based burn wound depth classification system - Google Patents

Neural network-based burn wound depth classification system

Info

Publication number: CN112686336A
Application number: CN202110119362.2A
Authority: CN (China)
Prior art keywords: image, burn, model, wound, neural network
Legal status: Pending
Other languages: Chinese (zh)
Inventors: 刘昊, 岳克强, 李文钧, 程思一, 潘成铭, 孙洁
Current Assignee: Hangzhou Dianzi University
Original Assignee: Hangzhou Dianzi University
Application filed by Hangzhou Dianzi University
Priority date: 2021-01-28
Filing date: 2021-01-28
Publication date: 2021-04-20

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses a neural network-based burn wound depth classification system comprising a burn image acquisition module, an image preprocessing module, an image data enhancement module, a front-end multi-model feature extraction module and a back-end model classification prediction module, connected in sequence. The burn image acquisition module acquires wound images of burn patients; the image preprocessing module preprocesses the wound images; the image data enhancement module performs data enhancement on the wound images to expand the wound image data set; the front-end multi-model feature extraction module performs multi-model feature extraction on the burn images and stacks the output feature matrices of the front-end models; the back-end model classification prediction module further extracts features from the front-end models' outputs and performs classification prediction of burn depth. The system realizes end-to-end diagnosis of burn wound depth; by configuring different neural network models at the front end and back end, it achieves high-accuracy, high-efficiency diagnosis of burn wound depth.

Description

Neural network-based burn wound depth classification system
Technical Field
The invention relates to the field of burn medical image processing, in particular to a burn wound depth classification system based on a neural network.
Background
Burns are typically damage to the skin or other tissue caused by heat, cold, electricity, chemicals, or radiation (e.g., sunburn). At present, burn wounds are generally classified by depth into first degree, second degree and third degree, with second degree burns further divided by wound depth and skin structure into superficial second degree and deep second degree. First degree burns involve the least damage, generally limited to the stratum corneum, stratum lucidum and stratum granulosum of the epidermis, and heal with short-term treatment. Superficial second degree burns involve damage to the entire epidermis, down to the stratum germinativum or the papillary dermis; the wound generally heals within one to two weeks. Deep second degree burns damage the dermis below the papillary layer, but a portion of the dermis remains. If no infection occurs, the wound generally takes three to four weeks to heal; if infection occurs, healing is prolonged, and in severe cases the wound can only be healed by skin grafting. Third degree burns involve the full thickness of the skin and below, and may burn subcutaneous fat, muscle, bone, internal organs and so on in addition to the epidermis and dermis. Because the skin is completely destroyed, skin grafting is required for healing.
Effective diagnosis and assessment of burn wound depth has a crucial impact on early treatment and recovery of a patient's wound. Accurate diagnosis of burn depth helps determine whether the patient needs surgical skin grafting, guides subsequent clinical treatment, reduces the risk of complications, shortens the patient's hospital stay, and lowers medical costs. The depth of a burn is usually diagnosed by a physician through clinical assessment, typically by observing the morphology of the burned area and the characteristics of the epidermis; however, clinical diagnosis is subject to many subjective factors, and different doctors may reach different conclusions. Biopsy techniques examine burn depth by extracting a sample of the burn wound, and laser Doppler imaging can be used for non-invasive burn assessment; however, these methods require complex detection equipment, are costly, and have poor timeliness.
Disclosure of Invention
To overcome the defects of the prior art and reduce the subjectivity and duration of diagnosis, the invention adopts the following technical scheme:
a burn wound depth classification system based on a neural network comprises a burn image acquisition module, an image preprocessing module, an image data enhancement module, a front-end multi-model feature extraction module and a rear-end model classification prediction module which are sequentially connected;
the burn image acquisition module acquires a wound surface image of a burn patient;
the image preprocessing module is used for preprocessing the wound surface image;
the image data enhancement module is used for enhancing data of the wound surface image and expanding a wound surface image data set;
the front-end multi-model feature extraction module is used for performing multi-model feature extraction on the burn image and stacking an output feature matrix of the front-end multi-model;
and the rear-end model classification prediction module is used for further extracting the output characteristics of the front-end multi-model and performing classification prediction on the burn depth.
Further, the image preprocessing module comprises a cropping unit, a labeling unit, a normalization unit and a standardization unit;
the cropping unit crops the effective area of the image to obtain the cropped image;
the labeling unit labels the burn depth of each image; the label types comprise background, normal skin surface, first-degree wound, superficial second-degree wound, deep second-degree wound and third-degree wound, with corresponding label values 0, 1, 2, 3, 4 and 5;
the normalization unit and the standardization unit normalize and standardize the wound image; the wound image consists of three RGB channels, the value of each channel is an integer in [0, 255], and the normalization unit maps it to a floating-point number in [0, 1]. The normalization formula is:

$$X_i = \frac{x_i}{255}$$

where $x_i$ represents the pixel value of each channel before normalization, $X_i$ represents the pixel value of each channel after normalization, and $N$ represents the number of pixels in the image;
the standardization formula is:

$$X_s = \frac{X - \mu}{\sigma}$$

where $X$ is each pixel value of a normalized single channel and $\mu$ is the mean of all pixel values of a single channel of the normalized image:

$$\mu = \frac{1}{N} \sum_{i=1}^{N} X_i$$

$\sigma$ is the standard deviation of all pixel values of a single channel of the normalized image:

$$\sigma = \sqrt{\frac{1}{N} \sum_{i=1}^{N} (X_i - \mu)^2}$$

$X_s$ is the standardized single-channel image value.
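For illustration, a minimal NumPy sketch of the normalization and standardization units described above (the patent publishes no reference code; the function name is hypothetical):

```python
import numpy as np

def normalize_and_standardize(image: np.ndarray) -> np.ndarray:
    """Map an RGB image from integer [0, 255] to floats in [0, 1],
    then standardize each channel to zero mean and unit deviation."""
    # Normalization: X_i = x_i / 255
    x = image.astype(np.float32) / 255.0
    # Standardization per channel: X_s = (X - mu) / sigma
    mu = x.mean(axis=(0, 1), keepdims=True)    # per-channel mean over N pixels
    sigma = x.std(axis=(0, 1), keepdims=True)  # per-channel standard deviation
    return (x - mu) / (sigma + 1e-8)           # epsilon guards against division by zero
```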
Furthermore, the image data enhancement module comprises a rotation unit, a flipping unit, a displacement unit, a scaling unit, a filtering unit, a blurring unit, a sharpening unit and an adjusting unit;
the rotation unit randomly rotates the burn wound image clockwise within an angle range of [0, 360], with the angle generated by a random number generator;
the flipping unit randomly flips the burn image horizontally or vertically, with the mode generated by a random number generator;
the displacement unit randomly shifts the burn image up, down, left or right, with the shifted-out area filled with 0 and the direction generated by a random number generator;
the scaling unit randomly scales the burn image about the image center, with the extra area filled with 0 during scaling, and the random number generator determines whether the image is enlarged or reduced;
the filtering unit applies bilateral filtering to the burn image:

$$I^{filtered}(x) = \frac{1}{W_p} \sum_{x_i \in \Omega} I(x_i) \, f_r(\lVert I(x_i) - I(x) \rVert) \, g_s(\lVert x_i - x \rVert)$$

$$W_p = \sum_{x_i \in \Omega} f_r(\lVert I(x_i) - I(x) \rVert) \, g_s(\lVert x_i - x \rVert)$$

where $I^{filtered}$ is the filtered image, $I$ is the original input image, $x$ is the coordinate of the current pixel to be filtered, $\Omega$ is the window of pixels centered on $x$, $f_r$ is the range kernel that smooths intensity differences, $g_s$ is the spatial kernel that smooths coordinate differences, and $W_p$ is the normalization term;
the blurring unit applies Gaussian blur to some of the burn images, using a filter kernel given by:

$$G(x, y) = \frac{1}{2\pi\sigma^2} e^{-\frac{x^2 + y^2}{2\sigma^2}}$$

where $x$ is the horizontal distance from the origin, $y$ is the vertical distance from the origin, and $\sigma$ is the standard deviation of the Gaussian distribution;
the sharpening unit sharpens the burn image using the filter kernel:

$$\begin{bmatrix} 0 & -1 & 0 \\ -1 & 5 & -1 \\ 0 & -1 & 0 \end{bmatrix}$$

the adjusting unit adjusts the brightness and saturation of the burn image: the RGB channels of the burn image are converted to HSV channels, which represent hue (H), saturation (S) and value/lightness (V); all pixel values in the saturation and value channels are multiplied by a factor in [0.8, 1.2] to adjust the saturation and lightness of the burn image, and the HSV channels are then converted back to RGB.
Further, the front-end multi-model feature extraction module comprises a plurality of front-end neural network models. The data-enhanced image data set is input into the front-end neural network models for training; during training, each wound image is input into each of the front-end neural network models, which comprise a VGG16 model, an InceptionV3 model, a MobileNetV2 model and an EfficientNetB5 model. The original output layer of each front-end neural network model is deleted, the size of each output feature matrix is adjusted, and the output matrices of the models are stacked to form a new output matrix.
Further, the back-end model classification prediction module comprises a back-end neural network model, which takes the stacked feature matrix as input and is trained on it. The back-end neural network model is a ResNet50 model: the input of the original ResNet50 model is modified to match the input feature matrix, and the last fully connected layer of the original ResNet50 model is deleted and replaced with a fully connected layer whose outputs correspond to: background, normal skin surface, first-degree wound, superficial second-degree wound, deep second-degree wound and third-degree wound.
Further, the activation function of each output is the ReLU function, whose expression is:

$$f(x) = \max(0, x)$$

where $x$ is the input from the preceding layer of the model;
further, the classification results are output through a softmax function, each result lying in [0, 1]; the expression of the softmax function is:

$$S_i = \frac{e^{x_i}}{\sum_{j=1}^{N} e^{x_j}}$$

where $S_i$ is the output for the i-th class and $x_i$ is the output of the activation function;
further, in the training process of the model, an Adam (adaptive moment estimation) optimizer is used to update the model parameters; the Adam update can be written as:

$$g_t = \nabla_\theta J(\theta_{t-1})$$

$$\theta_t = \theta_{t-1} - \alpha \cdot \frac{\hat{m}_t}{\sqrt{\hat{v}_t} + \epsilon}$$

where $\theta_t$ denotes the model parameters at iteration t, $\theta_{t-1}$ the parameters at iteration t-1, $J$ the loss function, and $g_t$ the gradient of the loss function with respect to $\theta_{t-1}$ at iteration t; $\hat{m}_t$ and $\hat{v}_t$ are the bias-corrected first and second moment estimates of $g_t$;
further, the loss function computed during training is an improved weighted cross-entropy loss function, with the weight for each class computed as:

$$w_i = \frac{N_{all}}{N_i}$$

where $N_{all}$ is the total number of burn pictures, $N_i$ is the number of burn pictures of class i, and $w_i$ is the weight value of class i. The weighted cross-entropy loss function is:

$$L = -\sum_{i=1}^{N} w_i \, y_i \log(\hat{y}_i)$$

where $y_i$ denotes the true label value, $\hat{y}_i$ denotes the predicted value output by the model, $w_i$ is the weight coefficient, and $N$ denotes the number of burn image classes output by the model. By assigning different weight values to different burn classes, the weighted cross-entropy loss function alleviates the class-imbalance problem of the burn training set.
Further, the back-end model classification prediction module outputs a plurality of values in [0, 1], corresponding to the probabilities that the wound image is background, normal skin surface, a first-degree wound, a superficial second-degree wound, a deep second-degree wound or a third-degree wound; the class corresponding to the largest output value is taken as the model's prediction.
The invention has the following advantages and beneficial effects:
Compared with traditional methods, the neural network-based burn wound depth classification system classifies burn depth with higher accuracy and shorter prediction time, and the data enhancement method makes the system more robust when classifying burn wound depth under different shooting environments.
Drawings
FIG. 1 is a block diagram of the system of the present invention.
FIG. 2 is a schematic flow chart of the system of the present invention.
Detailed Description
The following detailed description of embodiments of the invention refers to the accompanying drawings. It should be understood that the detailed description and specific examples, while indicating the present invention, are given by way of illustration and explanation only, not limitation.
In recent years, artificial intelligence technology has been used increasingly widely across industries. Combining artificial intelligence with traditional clinical evaluation of burn wound depth allows the depth of a burn wound to be detected quickly, immediately giving the severity of the patient's burn, reducing the influence of subjective factors, providing a reference for subsequent treatment, and lowering detection time and cost.
As shown in fig. 1, the neural network-based burn wound depth classification system performs image preprocessing and data enhancement on the acquired burn images, inputs them into the front-end multi-neural-network models, stacks the outputs of the front-end models as the input of the back-end improved ResNet50 model, and outputs a burn depth prediction from the back-end model. The system comprises: a burn image acquisition module 1, an image preprocessing module 2, an image data enhancement module 3, a front-end multi-model feature extraction module 4 and a back-end model classification prediction module 5.
The burn image acquisition module 1 is used for acquiring a burn wound image of a burn patient through camera equipment.
The image preprocessing module 2 preprocesses the burn image by cropping, labeling, normalization and standardization.
The image data enhancement module 3 performs data enhancement on the burn image, such as rotation, flipping, displacement, scaling, bilateral filtering, Gaussian blur, sharpening, and brightness and saturation adjustment.
The front-end multi-model feature extraction module 4 performs multi-model feature extraction on the burn image and stacks the output feature matrices of the models.
The back-end model classification prediction module 5 further extracts features from the front-end models' outputs and performs classification prediction of burn depth.
As shown in fig. 2, the workflow of the system includes the following steps:
s1: acquiring a burn wound image of a patient through photographic equipment; s2: processing an original burn wound image by an image preprocessing method, and marking the burn depth of the preprocessed image; s3: expanding a burn wound image data set by a data enhancement method; s4: inputting the images into a plurality of front-end neural networks for model training; s5: stacking output matrixes of the multiple models together to serve as input of the back-end neural network model, and training the back-end neural network model; s6: and carrying out depth classification diagnosis on the burn image through an output result of the back-end model.
Step S1 includes: photographing the burned skin area of the patient with equipment such as a camera or a smartphone to obtain a preliminary burn wound image.
Step S2 includes: processing the original burn wound image with the following preprocessing steps: 1) cropping the effective image area to obtain the cropped image; 2) labeling the burn depth of each image with one of 6 types: background, normal skin surface, first-degree wound, superficial second-degree wound, deep second-degree wound and third-degree wound, with corresponding label values 0, 1, 2, 3, 4 and 5; 3) normalizing and standardizing the burn images. Each image consists of three RGB channels, each channel holding an integer value in [0, 255], which normalization maps to a floating-point number in [0, 1].
Step S3 includes: expanding the burn wound image data set through data enhancement, specifically: 1) randomly rotating the burn wound image clockwise within an angle range of [0, 360], with the angle generated by a random number generator; 2) randomly flipping the burn image horizontally or vertically, with the mode generated by a random number generator; 3) randomly shifting the burn image up, down, left or right by up to 10% of the side length in that direction, filling the shifted-out area with 0, with the direction generated by a random number generator; 4) randomly scaling the burn image about the image center to between 90% and 110% of the original size, filling the extra area with 0, with the random number generator determining whether the image is enlarged or reduced; 5) applying bilateral filtering to the burn image; 6) applying Gaussian blur to some of the burn images; 7) sharpening the burn image; 8) adjusting the brightness and saturation of the burn image by converting its RGB channels to HSV channels, multiplying all pixel values in the saturation and value channels by a factor in [0.8, 1.2], and converting the HSV channels back to RGB.
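As a hedged sketch of the geometric augmentations in steps 1)-4), using OpenCV and the stated ranges ([0, 360] degrees, 10% shift, 90%-110% scale, zero fill); applying exactly one transform per call is an implementation assumption:

```python
import random
import cv2
import numpy as np

def random_geometric(img: np.ndarray) -> np.ndarray:
    h, w = img.shape[:2]
    op = random.choice(["rotate", "flip", "shift", "scale"])
    if op == "rotate":
        # Clockwise rotation by a random angle in [0, 360) degrees.
        angle = random.uniform(0, 360)
        m = cv2.getRotationMatrix2D((w / 2, h / 2), -angle, 1.0)
    elif op == "flip":
        # Horizontal (1) or vertical (0) flip, chosen at random.
        return cv2.flip(img, random.choice([0, 1]))
    elif op == "shift":
        # Shift by up to 10% of the side length; vacated area becomes 0.
        dx = random.uniform(-0.1, 0.1) * w
        dy = random.uniform(-0.1, 0.1) * h
        m = np.float32([[1, 0, dx], [0, 1, dy]])
    else:
        # Scale about the centre by 90%-110%; extra area is filled with 0.
        m = cv2.getRotationMatrix2D((w / 2, h / 2), 0, random.uniform(0.9, 1.1))
    return cv2.warpAffine(img, m, (w, h), borderValue=0)
```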
Step S4 includes: inputting the images into a plurality of front-end neural networks for model training, using the data set enhanced in step S3. Specifically, during training each burn image is input into each of the front-end neural network models: a VGG16 model, an InceptionV3 model, a MobileNetV2 model and an EfficientNetB5 model. For each of the 4 front-end models, the original output layer is deleted, the feature matrix of the output layer is adjusted to size 1 × 256 × 256, and the output matrices of the 4 models are stacked into a 4 × 256 × 256 matrix that serves as the input of the back-end model.
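A possible PyTorch/torchvision realization of this front-end stage is sketched below. The patent does not say how each backbone's output is resized to 1 × 256 × 256, so the 1 × 1 convolution plus bilinear interpolation used here is an assumption (and the InceptionV3 layer slicing is approximate):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import models

class FrontEnd(nn.Module):
    """Four backbones with their original output layers removed; each
    produces a 1x256x256 plane, stacked into a 4x256x256 matrix."""
    def __init__(self):
        super().__init__()
        inception = models.inception_v3(weights=None, aux_logits=False)
        self.backbones = nn.ModuleList([
            models.vgg16(weights=None).features,
            nn.Sequential(*list(inception.children())[:-3]),  # drop avgpool/dropout/fc
            models.mobilenet_v2(weights=None).features,
            models.efficientnet_b5(weights=None).features,
        ])
        # 1x1 convs collapse each backbone's channels to a single plane.
        self.reduce = nn.ModuleList(
            nn.LazyConv2d(out_channels=1, kernel_size=1) for _ in range(4)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        planes = [
            F.interpolate(r(b(x)), size=(256, 256), mode="bilinear", align_corners=False)
            for b, r in zip(self.backbones, self.reduce)
        ]
        return torch.cat(planes, dim=1)  # (batch, 4, 256, 256)
```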
Step S5 includes: stacking the output matrices of the multiple models together as the input of the back-end neural network model and training it. The back-end neural network model is a ResNet50 model whose input is the 4 × 256 × 256 stacked matrix of the 4 models from step S4; the input shape of the improved ResNet50 model is therefore 4 × 256 × 256. The last fully connected layer of the original ResNet50 model is deleted and replaced with a fully connected layer with 6 outputs, each representing the probability of one of the 6 types: background, normal skin surface, first-degree wound, superficial second-degree wound, deep second-degree wound and third-degree wound. The activation function of each output of the last layer is the ReLU function, with the expression:
$$f(x) = \max(0, x)$$
where x is the input from the preceding layer. Finally, a softmax function outputs the 6 classification results, each in the range [0, 1]; the expression of the softmax function is:
$$S_i = \frac{e^{x_i}}{\sum_{j=1}^{N} e^{x_j}}$$

where $S_i$ is the output for the i-th class and $x_i$ is the output of the activation function.
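A sketch of the modified back-end in PyTorch; replacing ResNet50's first convolution is the assumed way of adapting it to the 4 × 256 × 256 input, while the 6-way ReLU-then-softmax head follows the description above:

```python
import torch
import torch.nn as nn
from torchvision import models

class BackEnd(nn.Module):
    """ResNet50 adapted to a 4-channel input, ending in 6 class outputs."""
    def __init__(self, num_classes: int = 6):
        super().__init__()
        self.net = models.resnet50(weights=None)
        # Stem takes 4 input channels (the stacked feature planes), not RGB.
        self.net.conv1 = nn.Conv2d(4, 64, kernel_size=7, stride=2,
                                   padding=3, bias=False)
        # Replace the original 1000-way fc with a 6-output layer + ReLU.
        self.net.fc = nn.Sequential(
            nn.Linear(self.net.fc.in_features, num_classes),
            nn.ReLU(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # softmax maps the 6 activations to probabilities in [0, 1].
        return torch.softmax(self.net(x), dim=1)
```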
An Adam (adaptive moment estimation) optimizer is selected to update the model parameters during training; the Adam update can be written as:
$$g_t = \nabla_\theta J(\theta_{t-1})$$

$$\theta_t = \theta_{t-1} - \alpha \cdot \frac{\hat{m}_t}{\sqrt{\hat{v}_t} + \epsilon}$$

where $\theta_t$ denotes the model parameters at iteration t, $\theta_{t-1}$ the parameters at iteration t-1, $J$ the loss function, and $g_t$ the gradient of the loss function with respect to $\theta_{t-1}$ at iteration t; $\hat{m}_t$ and $\hat{v}_t$ are the bias-corrected first and second moment estimates of $g_t$.
The loss function computed during training is an improved weighted cross-entropy loss function, with the weight for each class computed as:

$$w_i = \frac{N_{all}}{N_i}$$

where $N_{all}$ is the total number of burn pictures, $N_i$ is the number of burn pictures of class i, and $w_i$ is the weight value of class i. The weighted cross-entropy loss function is:

$$L = -\sum_{i=1}^{N} w_i \, y_i \log(\hat{y}_i)$$

where $y_i$ denotes the true label value, $\hat{y}_i$ denotes the predicted value output by the model, $w_i$ is the weight coefficient, and $N$ denotes the number of burn image classes output by the model.
During training, the learning rate is 0.0005, the total number of training epochs is 200, each epoch runs 100 iteration steps, and the training batch size, i.e. the number of burn images input into the model at a time, is 32.
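Tying the pieces together, a hedged training-loop sketch with the stated hyperparameters (learning rate 0.0005, 200 epochs, 100 steps per epoch, batch size 32). The weight rule w_i = N_all / N_i is the assumed reading of the weight formula, and the front-end and back-end are optimized jointly here for brevity:

```python
import torch

def class_weights(counts: list) -> torch.Tensor:
    # Assumed weight rule: w_i = N_all / N_i.
    n_all = sum(counts)
    return torch.tensor([n_all / n for n in counts], dtype=torch.float32)

def weighted_ce(probs: torch.Tensor, labels: torch.Tensor,
                w: torch.Tensor) -> torch.Tensor:
    # L = -sum_i w_i * y_i * log(y_hat_i), with one-hot labels y.
    log_p = torch.log(probs.clamp_min(1e-8))
    picked = log_p[torch.arange(labels.size(0)), labels]
    return -(w[labels] * picked).mean()

def train(front_end, back_end, loader, counts):
    params = list(front_end.parameters()) + list(back_end.parameters())
    optimizer = torch.optim.Adam(params, lr=0.0005)  # Adam, lr = 0.0005
    w = class_weights(counts)
    for epoch in range(200):          # 200 training epochs
        it = iter(loader)
        for _ in range(100):          # 100 steps per epoch
            try:
                images, labels = next(it)   # batches of 32 images
            except StopIteration:
                it = iter(loader)
                images, labels = next(it)
            loss = weighted_ce(back_end(front_end(images)), labels, w)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
```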
Step S6 includes: performing depth classification diagnosis of the burn image from the output of the back-end model. The output of the back-end model in step S5 consists of 6 values in [0, 1], corresponding respectively to the probabilities that the burn wound image is background, normal skin surface, a first-degree wound, a superficial second-degree wound, a deep second-degree wound or a third-degree wound. The class corresponding to the largest output value is taken as the model's prediction.
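Finally, a short inference sketch of this argmax rule, reusing the `front_end` and `back_end` sketches above (class names abbreviated):

```python
import torch

CLASSES = ["background", "normal skin", "first degree",
           "superficial second degree", "deep second degree", "third degree"]

@torch.no_grad()
def predict(front_end, back_end, image: torch.Tensor) -> str:
    # image: preprocessed tensor of shape (1, 3, H, W)
    probs = back_end(front_end(image))        # six probabilities in [0, 1]
    return CLASSES[int(probs.argmax(dim=1))]  # class with the largest value
```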
Compared with traditional methods, the method provided by the invention differs as follows:
1) the invention provides a multi-model front-end neural network comprising a VGG16 model, an InceptionV3 model, a MobileNetV2 model and an EfficientNetB5 model; the output matrices of the 4 models are stacked and used as the input of the back-end model;
2) the invention provides a back-end neural network model: the stacked output feature matrices of the front-end models are input into an improved ResNet50 model, which finally predicts the burn depth class of the wound image;
3) the weighted cross-entropy loss function provided by the invention assigns different weight values to different burn classes, alleviating the class-imbalance problem of the burn training set.
The above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present invention.

Claims (10)

1. A neural network-based burn wound depth classification system, comprising a burn image acquisition module (1), an image preprocessing module (2), an image data enhancement module (3), a front-end multi-model feature extraction module (4) and a back-end model classification prediction module (5), connected in sequence, characterized in that:
the burn image acquisition module (1) acquires wound images of burn patients;
the image preprocessing module (2) preprocesses the wound images;
the image data enhancement module (3) performs data enhancement on the wound images and expands the wound image data set;
the front-end multi-model feature extraction module (4) performs multi-model feature extraction on the burn images and stacks the output feature matrices of the front-end models;
the back-end model classification prediction module (5) further extracts features from the front-end models' outputs and performs classification prediction of burn depth.
2. The neural network-based burn wound depth classification system according to claim 1, characterized in that the image preprocessing module (2) comprises a cropping unit, a labeling unit, a normalization unit and a standardization unit;
the cropping unit crops the effective area of the image to obtain the cropped image;
the labeling unit labels the burn depth of each image; the label types comprise background, normal skin surface, first-degree wound, superficial second-degree wound, deep second-degree wound and third-degree wound, with corresponding label values 0, 1, 2, 3, 4 and 5;
the normalization unit and the standardization unit normalize and standardize the wound image; the wound image consists of three RGB channels, the value of each channel is an integer in [0, 255], and the normalization unit maps it to a floating-point number in [0, 1]. The normalization formula is:

$$X_i = \frac{x_i}{255}$$

where $x_i$ represents the pixel value of each channel before normalization, $X_i$ represents the pixel value of each channel after normalization, and $N$ represents the number of pixels in the image;
the standardization formula is:

$$X_s = \frac{X - \mu}{\sigma}$$

where $X$ is each pixel value of a normalized single channel and $\mu$ is the mean of all pixel values of a single channel of the normalized image:

$$\mu = \frac{1}{N} \sum_{i=1}^{N} X_i$$

$\sigma$ is the standard deviation of all pixel values of a single channel of the normalized image:

$$\sigma = \sqrt{\frac{1}{N} \sum_{i=1}^{N} (X_i - \mu)^2}$$

$X_s$ is the standardized single-channel image value.
3. The neural network-based burn wound depth classification system according to claim 1, characterized in that the image data enhancement module (3) comprises a rotation unit, a flipping unit, a displacement unit, a scaling unit, a filtering unit, a blurring unit, a sharpening unit and an adjusting unit;
the rotation unit randomly rotates the burn wound image clockwise within an angle range of [0, 360], with the angle generated by a random number generator;
the flipping unit randomly flips the burn image horizontally or vertically, with the mode generated by a random number generator;
the displacement unit randomly shifts the burn image up, down, left or right, with the shifted-out area filled with 0 and the direction generated by a random number generator;
the scaling unit randomly scales the burn image about the image center, with the extra area filled with 0 during scaling, and the random number generator determines whether the image is enlarged or reduced;
the filtering unit applies bilateral filtering to the burn image:

$$I^{filtered}(x) = \frac{1}{W_p} \sum_{x_i \in \Omega} I(x_i) \, f_r(\lVert I(x_i) - I(x) \rVert) \, g_s(\lVert x_i - x \rVert)$$

$$W_p = \sum_{x_i \in \Omega} f_r(\lVert I(x_i) - I(x) \rVert) \, g_s(\lVert x_i - x \rVert)$$

where $I^{filtered}$ is the filtered image, $I$ is the original input image, $x$ is the coordinate of the current pixel to be filtered, $\Omega$ is the window of pixels centered on $x$, $f_r$ is the range kernel that smooths intensity differences, $g_s$ is the spatial kernel that smooths coordinate differences, and $W_p$ is the normalization term;
the blurring unit applies Gaussian blur to some of the burn images, using a filter kernel given by:

$$G(x, y) = \frac{1}{2\pi\sigma^2} e^{-\frac{x^2 + y^2}{2\sigma^2}}$$

where $x$ is the horizontal distance from the origin, $y$ is the vertical distance from the origin, and $\sigma$ is the standard deviation of the Gaussian distribution;
the sharpening unit sharpens the burn image using the filter kernel:

$$\begin{bmatrix} 0 & -1 & 0 \\ -1 & 5 & -1 \\ 0 & -1 & 0 \end{bmatrix}$$

the adjusting unit adjusts the brightness and saturation of the burn image: the RGB channels are converted to HSV channels representing hue, saturation and value/lightness; all pixel values in the saturation and value channels are multiplied by a factor in [0.8, 1.2], and the HSV channels are then converted back to RGB.
4. The neural network-based burn wound depth classification system according to claim 1, characterized in that the front-end multi-model feature extraction module (4) comprises a plurality of front-end neural network models; the data-enhanced image data set is input into the front-end neural network models for training, and during training each wound image is input into each of the front-end neural network models, which comprise a VGG16 model, an InceptionV3 model, a MobileNetV2 model and an EfficientNetB5 model; the original output layer of each front-end neural network model is deleted, the size of each output feature matrix is adjusted, and the output matrices of the models are stacked to form a new output matrix.
5. The neural network-based burn wound depth classification system according to claim 1, characterized in that the back-end model classification prediction module (5) comprises a back-end neural network model, which takes the stacked feature matrix as input and is trained on it; the back-end neural network model is a ResNet50 model, whose input is modified to match the input feature matrix; the last fully connected layer of the original ResNet50 model is deleted and replaced with a fully connected layer whose outputs correspond to: background, normal skin surface, first-degree wound, superficial second-degree wound, deep second-degree wound and third-degree wound.
6. The neural network-based burn wound depth classification system according to claim 5, characterized in that the activation function of each output is the ReLU function, expressed as:

$$f(x) = \max(0, x)$$

where $x$ is the input from the preceding layer of the model.
7. The neural network-based burn wound depth classification system according to claim 5, characterized in that the classification results are output through a softmax function, each result lying in [0, 1]; the expression of the softmax function is:

$$S_i = \frac{e^{x_i}}{\sum_{j=1}^{N} e^{x_j}}$$

where $S_i$ is the output for the i-th class and $x_i$ is the output of the activation function.
8. The neural network-based burn wound depth classification system according to claim 5, characterized in that an Adam optimizer is adopted to update the model parameters during training; the Adam update can be written as:

$$g_t = \nabla_\theta J(\theta_{t-1})$$

$$\theta_t = \theta_{t-1} - \alpha \cdot \frac{\hat{m}_t}{\sqrt{\hat{v}_t} + \epsilon}$$

where $\theta_t$ denotes the model parameters at iteration t, $\theta_{t-1}$ the parameters at iteration t-1, $J$ the loss function, and $g_t$ the gradient of the loss function with respect to $\theta_{t-1}$ at iteration t; $\hat{m}_t$ and $\hat{v}_t$ are the bias-corrected first and second moment estimates of $g_t$.
9. The neural network-based burn wound depth classification system according to claim 5, characterized in that the loss function computed during training is an improved weighted cross-entropy loss function, with the weight for each class computed as:

$$w_i = \frac{N_{all}}{N_i}$$

where $N_{all}$ is the total number of burn pictures, $N_i$ is the number of burn pictures of class i, and $w_i$ is the weight value of class i; the weighted cross-entropy loss function is:

$$L = -\sum_{i=1}^{N} w_i \, y_i \log(\hat{y}_i)$$

where $y_i$ denotes the true label value, $\hat{y}_i$ denotes the predicted value output by the model, $w_i$ is the weight coefficient, and $N$ denotes the number of burn image classes output by the model.
10. The neural network-based burn wound depth classification system according to claim 1, characterized in that the back-end model classification prediction module (5) outputs a plurality of values in [0, 1], corresponding to the probabilities that the wound image is background, normal skin surface, a first-degree wound, a superficial second-degree wound, a deep second-degree wound or a third-degree wound; the class corresponding to the largest output value is taken as the result of model prediction.
CN202110119362.2A 2021-01-28 2021-01-28 Neural network-based burn wound depth classification system Pending CN112686336A (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN202110119362.2A | 2021-01-28 | 2021-01-28 | Neural network-based burn wound depth classification system


Publications (1)

Publication Number | Publication Date
CN112686336A | 2021-04-20

Family

ID=75459459

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN202110119362.2A | Neural network-based burn wound depth classification system (Pending, CN112686336A) | 2021-01-28 | 2021-01-28

Country Status (1)

Country Link
CN (1) CN112686336A (en)


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113576508A (en) * 2021-07-21 2021-11-02 华中科技大学 Cerebral hemorrhage auxiliary diagnosis system based on neural network
CN114581410A (en) * 2022-03-04 2022-06-03 深圳市澈影医生集团有限公司 Training system and method of neural network
CN114581410B (en) * 2022-03-04 2023-03-21 深圳市澈影医生集团有限公司 Training system and method of neural network
CN114565800A (en) * 2022-04-24 2022-05-31 深圳尚米网络技术有限公司 Method for detecting illegal picture and picture detection engine

Similar Documents

Publication Publication Date Title
CN112686336A (en) Neural network-based burn wound depth classification system
CN108021916B (en) Deep learning diabetic retinopathy sorting technique based on attention mechanism
CN111815574B (en) Fundus retina blood vessel image segmentation method based on rough set neural network
CN110084318B (en) Image identification method combining convolutional neural network and gradient lifting tree
Lau et al. Automatically early detection of skin cancer: Study based on nueral netwok classification
WO2018082084A1 (en) Brain tumor automatic segmentation method by means of fusion of full convolutional neural network and conditional random field
CN111275696B (en) Medical image processing method, image processing method and device
CN108095683A (en) The method and apparatus of processing eye fundus image based on deep learning
CN112017185B (en) Focus segmentation method, device and storage medium
CN112116009B (en) New coronal pneumonia X-ray image identification method and system based on convolutional neural network
CN113837974B (en) NSST domain power equipment infrared image enhancement method based on improved BEEPS filtering algorithm
CN113781455B (en) Cervical cell image anomaly detection method, device, equipment and medium
CN112446880B (en) Image processing method, electronic device and readable storage medium
CN111223110B (en) Microscopic image enhancement method and device and computer equipment
CN112528947B (en) Method, equipment and storage medium for detecting false hyphae by increasing direction dimension
CN112330613B (en) Evaluation method and system for cytopathology digital image quality
CN109190571B (en) Method and device for detecting and identifying typical plant species eaten by grazing sheep
CN115393239A (en) Multi-mode fundus image registration and fusion method and system
CN117670835A (en) Puncture damage detection method based on neural network
CN117409002A (en) Visual identification detection system for wounds and detection method thereof
CN111862071B (en) Method for measuring CT value of lumbar 1 vertebral body based on CT image
CN113538363A (en) Lung medical image segmentation method and device based on improved U-Net
CN112950611A (en) Liver blood vessel segmentation method based on CT image
CN110619633B (en) Liver image segmentation method based on multipath filtering strategy
CN112634308A (en) Nasopharyngeal carcinoma target area and endangered organ delineation method based on different receptive fields

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination