CN112990199A - Burn wound depth classification system based on support vector machine - Google Patents

Burn wound depth classification system based on support vector machine

Info

Publication number
CN112990199A
Authority
CN
China
Prior art keywords
image
burn
burn wound
module
svm
Prior art date
Legal status
Granted
Application number
CN202110344456.XA
Other languages
Chinese (zh)
Other versions
CN112990199B (en)
Inventor
刘昊
王超
李文钧
岳克强
程思一
潘成铭
孙洁
Current Assignee
Hangzhou Dianzi University
Original Assignee
Hangzhou Dianzi University
Priority date
Filing date
Publication date
Application filed by Hangzhou Dianzi University
Priority to CN202110344456.XA
Publication of CN112990199A
Application granted
Publication of CN112990199B
Legal status: Active
Anticipated expiration

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00: Arrangements for image or video recognition or understanding
    • G06V 10/20: Image preprocessing
    • G06V 10/25: Determination of region of interest [ROI] or a volume of interest [VOI]
    • G06V 10/40: Extraction of image or video features
    • G06V 10/44: Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/24: Classification techniques
    • G06F 18/241: Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F 18/2411: Classification techniques based on the proximity to a decision surface, e.g. support vector machines
    • G06F 18/25: Fusion techniques
    • G06F 18/253: Fusion techniques of extracted features
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 20/00: Machine learning

Abstract

The invention discloses a burn wound depth classification system based on a support vector machine, comprising a burn wound image acquisition module, a burn wound image preprocessing module, a burn wound image feature extraction module, an SVM training module and a burn image prediction module. The burn wound image acquisition module acquires a burn wound image of a burn patient; the burn wound image preprocessing module preprocesses the acquired burn wound image; the burn wound image feature extraction module extracts features from the preprocessed burn wound image; the SVM training module divides the feature-extracted burn images into a training set, a verification set and a test set for training an SVM classification model; and the burn image prediction module uses the SVM model that performs best on the verification set and the test set to predict the burn depth of unlabeled burn wound images, yielding the depth classification result.

Description

Burn wound depth classification system based on support vector machine
Technical Field
The invention relates to the field of image processing, in particular to a burn wound depth classification system based on a support vector machine.
Background
Burns are a common traumatic injury with high morbidity and mortality. Treating a burn first requires an accurate and reliable diagnosis of the depth of the burn wound, and early effective treatment can reduce the burden on the patient and, in some cases, save the patient's life. Burn wound depth is commonly diagnosed with the three-degree, four-category method, which classifies burn wounds by severity and clinical presentation into first degree burns, superficial second degree burns, deep second degree burns and third degree burns, in increasing order of severity. A first degree burn, also known as an erythematous burn, damages only part of the epidermis and heals in about 3-5 days without scarring. A superficial second degree burn injures the whole epidermis and part of the papillary dermis; without secondary infection, the wound generally heals in about 1-2 weeks. A deep second degree burn extends below the dermal papillae, but part of the dermis and the skin appendages remain, and the wound generally needs 3-4 weeks to heal. A third degree burn damages the full thickness of the skin; the epidermis, dermis and skin appendages are all destroyed, and repair of the wound depends on surgical skin grafting or flap repair.
Large-area burns cause varying degrees of morphological, functional and metabolic change across the body's systems, and patients may suffer serious visceral damage and organ failure as complications. The healing and treatment of a burn wound is a complex and lengthy process, but early treatment can reduce both the physical damage and the medical burden on the patient. Accurate assessment of burn wound depth is therefore of great importance to clinical treatment. In the prior art, however, burn wound classification mostly depends on the experience of doctors, so classification efficiency is low, and the accuracy of machine or machine-assisted classification is also low.
Disclosure of Invention
In order to solve the defects of the prior art, realize the high-efficiency classification of burn wounds and improve the classification accuracy, the invention adopts the following technical scheme:
a depth classification system of a burn wound based on a support vector machine comprises: the system comprises a burn wound image acquisition module, a burn wound image preprocessing module, a burn wound image feature extraction module, an SVM (Support Vector Machine) training module and a burn image prediction module;
the burn wound image acquisition module is used for acquiring a burn wound image of a burn patient;
the burn wound image preprocessing module is used for preprocessing the acquired burn wound image;
the burn wound image feature extraction module is used for extracting features of the preprocessed burn wound image;
the SVM training module is used for dividing the burn images with the extracted features into a training set, a verification set and a test set for training an SVM classification model;
and the burn image prediction module uses the SVM model that performs best on the verification set and the test set to predict the burn depth of unlabeled burn wound images, yielding the depth classification result.
Further, the SVM training module divides the features extracted from the burn pictures into a training set, a verification set and a test set; the training set is used to train the SVM classification model, while the verification set and test set are used to evaluate its performance. The SVM classification model is expressed as follows:
f(x) = β0 + Σ_{i∈S} αi·K(x, xi)

where S is the set of all support vector observations, αi is a model parameter learned by the SVM, (xi, xi′) is a pair of support vector observations, i.e. two different vector samples in the burn training set, and K is a kernel function measuring the similarity of xi and xi′.
Further, the SVM training module creates different hyperplanes using different kernel functions for depth classification of the burn wound images, the kernel functions including a linear kernel function, a polynomial kernel function, and a radial basis kernel function.
Further, the linear kernel function is formulated as:
K(xi, xi′) = Σ_{j=1..p} xij·xi′j

where p is the number of features; the linear kernel yields an essentially linear hyperplane.
Further, the polynomial kernel function has the formula:
K(xi, xi′) = (1 + Σ_{j=1..p} xij·xi′j)^d

where d is the degree of the polynomial kernel; for d > 1 it yields a non-linear decision boundary.
Further, the radial basis kernel function is formulated as:
K(xi, xi′) = exp(-γ·Σ_{j=1..p} (xij - xi′j)²)

where γ is a hyperparameter greater than 0; the radial basis kernel yields a non-linear decision boundary.
Further, the burn wound image preprocessing module includes: a cutting unit, a conversion unit, a scaling unit, a histogram equalization unit and a marking unit;
the cutting unit crops the effective burn wound area from the burn wound image to obtain a cropped image;
the conversion unit converts the color space of the cropped image from [R, G, B] to Lab, where R represents red, G represents green, B represents blue, L represents lightness, and a and b represent the color-opponent dimensions; the conversion formula is as follows:
X = 0.412453·R + 0.357580·G + 0.180423·B
Y = 0.212671·R + 0.715160·G + 0.072169·B
Z = 0.019334·R + 0.119193·G + 0.950227·B

(coefficients of the standard sRGB/D65 conversion, with R, G, B normalized from [0, 255] to [0, 1] and X, Y, Z scaled by 100)
L=116f(Y/Yn)-16
a=500[f(X/Xn)-f(Y/Yn)]
b=200[f(Y/Yn)-f(Z/Zn)]
f(t) = t^(1/3), if t > (6/29)³
f(t) = t/(3·(6/29)²) + 4/29, otherwise
the picture is first converted from the RGB color space to the XYZ color space, and then from XYZ to the Lab color space; R, G and B denote the values of the R, G, B channels of each pixel, each in the range [0, 255];
the scaling unit unifies the sizes of the pictures after the color space is converted;
the histogram equalization unit performs histogram equalization on the zoomed image; histogram equalization increases the overall contrast of the burn wound image and spreads its brightness more evenly across the histogram, so it can also enhance the local contrast of the burn wound image without affecting the overall contrast; the formula of histogram equalization is as follows:
h(i) = round[(L-1)/n · Σ_{j=0..i} nj]

where nj represents the number of occurrences of gray level j in a single color channel, n represents the number of pixels in the image, L represents the maximum number of gray levels in a single color channel, and i represents the gray level being mapped;
the marking unit marks each burn wound image and classifies the burn wound images.
Furthermore, the burn wound image feature extraction module comprises a transformation unit, an extraction unit and a fusion unit;
the transformation unit performs feature extraction on the preprocessed burn wound image using the Discrete Cosine Transform (DCT) to obtain DCT feature values; the DCT converts a spatial-domain signal into the frequency domain and has good decorrelation properties; for the two-dimensional data of an image, the two-dimensional DCT is used, with the formula:
F(u,v) = c(u)·c(v)·Σ_{i=0..N-1} Σ_{j=0..N-1} f(i,j)·cos[(2i+1)uπ/2N]·cos[(2j+1)vπ/2N]

c(u) = sqrt(1/N) if u = 0; c(u) = sqrt(2/N) otherwise

where f(i, j) represents the original pixel value at position (i, j), F(u, v) represents the transformed coefficient, N is the number of pixels along each side of the image, and c(·) is the normalization (compensation) coefficient;
the extraction unit calculates the mean value and standard deviation of each burn image as mean and standard-deviation features; the mean of a burn wound image reflects its average brightness, while the standard deviation reflects how far its pixel values spread around the mean, a larger standard deviation indicating a higher-quality burn image; the mean and standard deviation of a burn image are calculated as:
μ = (1/n)·Σ_{i=1..n} xi

σ = sqrt[(1/n)·Σ_{i=1..n} (xi - μ)²]

where n denotes the total number of pixels per image, xi represents the value of each pixel of a single channel of the image, μ represents the image mean, and σ represents the image standard deviation;
the fusion unit fuses the DCT image features with the mean and standard-deviation features: the two-dimensional DCT features are first flattened into a one-dimensional vector; since each burn image is scaled to 224 × 224 pixels, the DCT feature map is also 224 × 224 and flattens into a [1, 50176] vector; the mean and standard-deviation features of each image are then appended to the flattened DCT features, forming a new [1, 50178] feature vector.
Further, the histogram equalization unit applies the formula to each pixel of each channel of the image to obtain a histogram equalized image.
Further, the marking unit divides the burn images into 5 categories: normal skin surface, first degree burn wound, superficial second degree burn wound, deep second degree burn wound and third degree burn wound, with corresponding label values 0, 1, 2, 3 and 4.
Further, in the conversion unit, Xn=95.0489, Yn=100, Zn=108.8840, and

f(t) = t^(1/3), if t > (6/29)³
f(t) = t/(3·(6/29)²) + 4/29, otherwise
The invention has the advantages and beneficial effects that:
compared with the traditional clinical observation method, the burn wound depth classification system based on the support vector machine has the advantages that the SVM classification model has higher accuracy on the prediction result of the burn wound image of the patient, the prediction time is shorter, and timely and accurate burn depth information is provided for the clinical treatment of the patient.
Drawings
FIG. 1 is a system block diagram of the present invention.
FIG. 2 is a flow chart of the operation of the system of the present invention.
Detailed Description
The following detailed description of embodiments of the invention refers to the accompanying drawings. It should be understood that the detailed description and specific examples, while indicating the present invention, are given by way of illustration and explanation only, not limitation.
The burn wound depth classification system based on the support vector machine provided by the invention preprocesses the burn wound image of a patient and extracts features, then trains a support vector machine to classify and predict the burn depth of the wound. Different burn depths can thus be distinguished rapidly, and the system's predictions have high accuracy, providing timely and accurate burn depth information for the clinical treatment of patients. The system preprocesses burn images by color space conversion and histogram equalization, extracts picture features with the DCT, fuses them with the picture mean and standard deviation, trains SVMs with different kernel functions, and finally uses the SVM that performs best on the verification set and test set for depth classification of burn wound images.
As shown in fig. 1 and 2, a burn wound depth classification system based on a support vector machine includes a burn wound image acquisition module, a burn wound image preprocessing module, a burn wound image feature extraction module, an SVM training module and a burn image prediction module;
the burn wound image acquisition module acquires a burn wound image of a burn patient through the camera equipment;
the burn wound image preprocessing module is used for carrying out primary preprocessing operation on the collected burn wound image;
the burn wound image feature extraction module is used for extracting features of the preprocessed burn wound image;
the SVM training module is used for dividing the burn image after the characteristics are extracted into a training set, a verification set and a test set for training an SVM classification model;
and the burn image prediction module uses the SVM model that performs best on the verification set and the test set to predict the burn depth of unlabeled burn wound images, obtaining the depth classification result.
Specifically, the burn wound image preprocessing module comprises the following preprocessing processes:
1) The effective burn wound area of the burn wound image is cropped to obtain a cropped image.
2) The color space of the cropped image is converted from [R, G, B] (Red, Green, Blue) to Lab, where L represents lightness and a and b represent the color-opponent dimensions. The conversion formula is:
X = 0.412453·R + 0.357580·G + 0.180423·B
Y = 0.212671·R + 0.715160·G + 0.072169·B
Z = 0.019334·R + 0.119193·G + 0.950227·B

(coefficients of the standard sRGB/D65 conversion, with R, G, B normalized from [0, 255] to [0, 1] and X, Y, Z scaled by 100)
L=116f(Y/Yn)-16
a=500[f(X/Xn)-f(Y/Yn)]
b=200[f(Y/Yn)-f(Z/Zn)]
f(t) = t^(1/3), if t > (6/29)³
f(t) = t/(3·(6/29)²) + 4/29, otherwise

The picture is first converted from the RGB color space to the XYZ color space, and then from XYZ to the Lab color space. R, G and B denote the values of the R, G, B channels of each pixel, each in the range [0, 255]. The reference white point is Xn=95.0489, Yn=100, Zn=108.8840.
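As a concrete illustration, the conversion above can be sketched in Python with NumPy. The 3×3 RGB→XYZ matrix is the standard sRGB/D65 matrix, an assumption here since the patent renders its matrix as an image; it is, however, consistent with the stated white point Xn=95.0489, Yn=100, Zn=108.8840.

```python
import numpy as np

# Standard sRGB (D65) RGB -> XYZ matrix; assumed, since the patent
# shows the matrix only as an image. Consistent with Xn, Yn, Zn below.
M = np.array([[0.412453, 0.357580, 0.180423],
              [0.212671, 0.715160, 0.072169],
              [0.019334, 0.119193, 0.950227]])
Xn, Yn, Zn = 95.0489, 100.0, 108.8840
DELTA = 6.0 / 29.0

def f(t):
    """Piecewise cube-root function used by the Lab conversion."""
    t = np.asarray(t, dtype=float)
    return np.where(t > DELTA ** 3, np.cbrt(t), t / (3 * DELTA ** 2) + 4.0 / 29.0)

def rgb_to_lab(rgb):
    """Convert an (..., 3) array of [0, 255] RGB values to Lab."""
    xyz = (np.asarray(rgb, dtype=float) / 255.0) @ M.T * 100.0  # RGB -> XYZ
    fx = f(xyz[..., 0] / Xn)
    fy = f(xyz[..., 1] / Yn)
    fz = f(xyz[..., 2] / Zn)
    L = 116.0 * fy - 16.0   # L = 116 f(Y/Yn) - 16
    a = 500.0 * (fx - fy)   # a = 500 [f(X/Xn) - f(Y/Yn)]
    b = 200.0 * (fy - fz)   # b = 200 [f(Y/Yn) - f(Z/Zn)]
    return np.stack([L, a, b], axis=-1)
```

As a sanity check, a pure white pixel (255, 255, 255) maps to approximately L=100, a=0, b=0, and black maps to L=0.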
3) The picture size after converting the color space is uniformly scaled to a pixel size of 224 × 224.
4) Histogram equalization is performed on the zoomed image. Histogram equalization increases the global contrast of the burn wound image and spreads its brightness more evenly across the histogram, so it can also enhance the local contrast of the burn wound image without affecting the overall contrast. The formula for histogram equalization is:
h(i) = round[(L-1)/n · Σ_{j=0..i} nj]

where nj represents the number of times gray level j appears in a single color channel, n represents the number of all pixels in the image, L represents the maximum number of gray levels of a single color channel, and i represents the gray level being mapped. Applying this mapping to each pixel of each channel yields the histogram-equalized image.
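A minimal NumPy sketch of this per-channel equalization (the function names are illustrative, not from the patent):

```python
import numpy as np

def equalize_channel(ch, levels=256):
    """Histogram-equalize a single uint8 color channel.

    Implements h(i) = round((L-1)/n * sum_{j<=i} n_j) as a lookup table.
    """
    hist = np.bincount(ch.ravel(), minlength=levels)  # n_j for each gray level j
    cdf = hist.cumsum()                               # running sum of n_j
    lut = np.round((levels - 1) * cdf / ch.size).astype(np.uint8)
    return lut[ch]

def equalize_image(img):
    """Apply the equalization independently to each color channel."""
    return np.dstack([equalize_channel(img[..., c]) for c in range(img.shape[-1])])
```

Because each channel's lookup table is built from its own cumulative histogram, bright regions are spread toward the top of the gray range, which is the contrast-stretching effect described above.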
5) Each burn wound image is marked, dividing the burn images into 5 categories: normal skin surface, first degree burn wound, superficial second degree burn wound, deep second degree burn wound and third degree burn wound, with corresponding label values 0, 1, 2, 3 and 4.
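The category-to-label mapping above can be written down directly (the key names are shortened English descriptions used only for illustration):

```python
# Label values for the 5 burn depth categories defined in the marking step.
BURN_LABELS = {
    "normal skin": 0,
    "first degree": 1,
    "superficial second degree": 2,
    "deep second degree": 3,
    "third degree": 4,
}
```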
Specifically, the feature extraction process of the burn wound image feature extraction module is as follows:
1) Feature extraction is performed on the preprocessed burn wound image using the discrete cosine transform to obtain the DCT feature values. The DCT converts a spatial-domain signal into the frequency domain and has good decorrelation properties. For the two-dimensional data of an image, the two-dimensional DCT is used. The formula of the two-dimensional DCT is:
F(u,v) = c(u)·c(v)·Σ_{i=0..N-1} Σ_{j=0..N-1} f(i,j)·cos[(2i+1)uπ/2N]·cos[(2j+1)vπ/2N]

c(u) = sqrt(1/N) if u = 0; c(u) = sqrt(2/N) otherwise

where f(i, j) represents the original pixel value at position (i, j), F(u, v) represents the transformed coefficient, N is the number of pixels along each side of the image, and c(·) is the normalization (compensation) coefficient.
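Assuming SciPy is available, this two-dimensional DCT (type-II with orthonormal scaling, which matches the c(u) coefficients above) can be computed by applying a 1-D DCT along each axis in turn:

```python
import numpy as np
from scipy.fft import dct

def dct2(block):
    """Orthonormal 2-D DCT-II: 1-D DCT along rows, then along columns."""
    return dct(dct(block, type=2, norm="ortho", axis=0), type=2, norm="ortho", axis=1)
```

For a constant N×N image with value v, only the DC coefficient F(0,0) = N·v is non-zero, which illustrates the energy-compaction (decorrelation) property the text mentions.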
2) The mean value and standard deviation of each burn image are calculated as mean and standard-deviation features. The mean of a burn wound image reflects its average brightness, while the standard deviation reflects how far its pixel values spread around the mean; a larger standard deviation indicates a higher-quality burn image. The mean and standard deviation of a burn image are calculated as:
μ = (1/n)·Σ_{i=1..n} xi

σ = sqrt[(1/n)·Σ_{i=1..n} (xi - μ)²]

where n denotes the total number of pixels per image, xi represents the value of each pixel of a single channel of the image, μ represents the mean of the image, and σ represents the standard deviation of the image.
3) The DCT image features are fused with the mean and standard-deviation features of the image. The two-dimensional DCT features are first flattened into a one-dimensional vector. Since each burn image is scaled to 224 × 224 pixels, the DCT feature map is also 224 × 224 and flattens into a [1, 50176] vector. The mean and standard-deviation features of each image are then appended to the flattened DCT features, forming a [1, 50178] feature vector.
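The fusion step then reduces to a flatten-and-append. A sketch (names are illustrative): `dct_feat` is the 224×224 DCT coefficient array and `channel` a single image channel used for the mean and standard deviation.

```python
import numpy as np

def fuse_features(dct_feat, channel):
    """Flatten 224x224 DCT features and append the image mean and std."""
    flat = np.asarray(dct_feat, dtype=float).reshape(1, -1)  # [1, 50176]
    mu = float(np.mean(channel))    # average brightness of the image
    sigma = float(np.std(channel))  # spread of pixel values around the mean
    return np.hstack([flat, [[mu, sigma]]])                  # [1, 50178]
```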
Specifically, the training process of the SVM training module is as follows:
firstly, classifying the features extracted from the burn picture into a training set, a verification set and a test set, then training the SVM classification model by using the training set, and evaluating the performance of the SVM classification model by using the verification set and the test set. The SVM classification model can be expressed as
f(x) = β0 + Σ_{i∈S} αi·K(x, xi)

where S is the set of all support vector observations, αi is a model parameter learned by the SVM, (xi, xi′) is a pair of support vector observations, i.e. two different vector samples in the burn training set, and K is a kernel function measuring the similarity of xi and xi′.
Different hyperplanes are created by using different kernel functions for depth classification of burn wound images. In the invention, 3 kernel functions are adopted to train the SVM classification model:
the first uses a linear kernel function, the formula is:
K(xi, xi′) = Σ_{j=1..p} xij·xi′j

where p is the number of features; the linear kernel yields an essentially linear hyperplane.
The second uses a polynomial kernel, the formula being:
K(xi, xi′) = (1 + Σ_{j=1..p} xij·xi′j)^d

where d is the degree of the polynomial kernel; for d > 1 it yields a non-linear decision boundary.
The third uses the radial basis kernel function, and the formula is:
K(xi, xi′) = exp(-γ·Σ_{j=1..p} (xij - xi′j)²)

where γ is a hyperparameter greater than 0; the radial basis kernel yields a non-linear decision boundary.
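The three kernels can be written directly from the formulas above (a plain NumPy sketch; the function names are illustrative):

```python
import numpy as np

def linear_kernel(xi, xj):
    # K(xi, xj) = sum over the p features of xi_k * xj_k
    return float(np.dot(xi, xj))

def poly_kernel(xi, xj, d=3):
    # K(xi, xj) = (1 + sum_k xi_k * xj_k)^d
    return float((1.0 + np.dot(xi, xj)) ** d)

def rbf_kernel(xi, xj, gamma=0.1):
    # K(xi, xj) = exp(-gamma * sum_k (xi_k - xj_k)^2), with gamma > 0
    return float(np.exp(-gamma * np.sum((np.asarray(xi) - np.asarray(xj)) ** 2)))
```

Note how the radial basis kernel equals 1 only when xi and xj coincide and decays toward 0 as they move apart, which is what makes it act as a similarity measure.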
SVMs with the above 3 kernel functions are trained for the burn wound depth classification model, each for at most 1000 iterations, and the SVM that performs best on the verification set and the test set is selected as the optimal classification model.
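As an illustrative sketch only: the patent does not name a library, but this train-then-select loop maps naturally onto scikit-learn's `SVC`, which exposes all three kernels and a cap on training iterations. The random data here stands in for the real fused [n, 50178] burn feature vectors and their labels.

```python
import numpy as np
from sklearn.svm import SVC

# Toy stand-in data; in the real system X would hold the fused
# [n, 50178] burn feature vectors and y the labels 0..4.
rng = np.random.default_rng(0)
X = rng.normal(size=(150, 20))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
X_train, y_train = X[:100], y[:100]
X_val, y_val = X[100:125], y[100:125]
X_test, y_test = X[125:], y[125:]

# One SVM per kernel, each capped at 1000 training iterations.
candidates = {
    "linear": SVC(kernel="linear", max_iter=1000),
    "poly": SVC(kernel="poly", degree=3, max_iter=1000),
    "rbf": SVC(kernel="rbf", gamma="scale", max_iter=1000),
}
scores = {}
for name, clf in candidates.items():
    clf.fit(X_train, y_train)
    # Combined accuracy on the verification and test sets, as described above.
    scores[name] = clf.score(X_val, y_val) + clf.score(X_test, y_test)

best_name = max(scores, key=scores.get)
best_model = candidates[best_name]
```

`best_model` is then the classifier handed to the prediction module for unlabeled wound images.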
Specifically, the burn image prediction module performs the following prediction process:
and predicting the burn depth of the burn wound images without marks by using the SVM classification model which is optimally represented on the verification set and the test set, so as to obtain the depth classification result of the burn images.
The above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present invention.

Claims (10)

1. A depth classification system of a burn wound based on a support vector machine comprises: the system comprises a burn wound image acquisition module, a burn wound image preprocessing module, a burn wound image feature extraction module, an SVM training module and a burn image prediction module, and is characterized in that:
the burn wound image acquisition module is used for acquiring a burn wound image of a burn patient;
the burn wound image preprocessing module is used for preprocessing the acquired burn wound image;
the burn wound image feature extraction module is used for extracting features of the preprocessed burn wound image;
the SVM training module is used for dividing the burn images with the extracted features into a training set, a verification set and a test set for training an SVM classification model;
and the burn image prediction module uses the SVM model that performs best on the verification set and the test set to predict the burn depth of unlabeled burn wound images, yielding the depth classification result.
2. The system of claim 1, wherein the SVM training module classifies the features extracted from the burn image into a training set, a verification set and a test set, performs training of the SVM classification model using the training set, and evaluates the performance of the SVM classification model using the verification set and the test set, and the SVM classification model is expressed as:
f(x) = β0 + Σ_{i∈S} αi·K(x, xi)

where S is the set of all support vector observations, αi is a model parameter learned by the SVM, (xi, xi′) is a pair of support vector observations, i.e. two different vector samples in the burn training set, and K is a kernel function measuring the similarity of xi and xi′.
3. The system of claim 2, wherein the SVM training module creates different hyperplanes using different kernel functions for depth classification of the burn wound image, the kernel functions comprising linear kernel functions, polynomial kernel functions, and radial basis kernel functions.
4. A system for depth classification of burn wounds based on a support vector machine according to claim 3, wherein the linear kernel function has the formula:
K(xi, xi′) = Σ_{j=1..p} xij·xi′j

where p is the number of features; the linear kernel yields an essentially linear hyperplane.
5. A system for depth classification of burn wounds based on a support vector machine according to claim 3, wherein the polynomial kernel function has the formula:
K(xi, xi′) = (1 + Σ_{j=1..p} xij·xi′j)^d

where d is the degree of the polynomial kernel; for d > 1 it yields a non-linear decision boundary.
6. A system for depth classification of a burn wound based on a support vector machine as claimed in claim 3 wherein the radial basis kernel function is formulated as:
K(xi, xi′) = exp(-γ·Σ_{j=1..p} (xij - xi′j)²)

where γ is a hyperparameter greater than 0; the radial basis kernel yields a non-linear decision boundary.
7. The system of claim 1, wherein the burn wound image preprocessing module comprises: a cutting unit, a conversion unit, a scaling unit, a histogram equalization unit and a marking unit;
the cutting unit crops the effective burn wound area from the burn wound image to obtain a cropped image;
the conversion unit converts the color space of the cropped image from [R, G, B] to Lab, where R represents red, G represents green, B represents blue, L represents lightness, and a and b represent the color-opponent dimensions; the conversion formula is as follows:
X = 0.412453·R + 0.357580·G + 0.180423·B
Y = 0.212671·R + 0.715160·G + 0.072169·B
Z = 0.019334·R + 0.119193·G + 0.950227·B

(coefficients of the standard sRGB/D65 conversion, with R, G, B normalized from [0, 255] to [0, 1] and X, Y, Z scaled by 100)
L=116f(Y/Yn)-16
a=500[f(X/Xn)-f(Y/Yn)]
b=200[f(Y/Yn)-f(Z/Zn)]
f(t) = t^(1/3), if t > (6/29)³
f(t) = t/(3·(6/29)²) + 4/29, otherwise
the picture is first converted from the RGB color space to the XYZ color space, and then from XYZ to the Lab color space; R, G and B denote the values of the R, G, B channels of each pixel, each in the range [0, 255];
the scaling unit unifies the sizes of the pictures after the color space is converted;
the histogram equalization unit performs histogram equalization on the zoomed image, and the formula of the histogram equalization is as follows:
h(i) = round[(L-1)/n · Σ_{j=0..i} nj]

where nj represents the number of times gray level j appears in a single color channel, n represents the number of all pixels in the image, L represents the maximum number of gray levels of a single color channel, and i represents the gray level being mapped;
the marking unit marks each burn wound image and classifies the burn wound images.
8. The system of claim 1, wherein the burn wound image feature extraction module comprises a transformation unit, an extraction unit, and a fusion unit;
the transformation unit performs feature extraction on the preprocessed burn wound image using the Discrete Cosine Transform (DCT) to obtain DCT feature values; for the two-dimensional data of an image, the two-dimensional DCT is used, with the formula:
F(u,v) = c(u)·c(v)·Σ_{i=0..N-1} Σ_{j=0..N-1} f(i,j)·cos[(2i+1)uπ/2N]·cos[(2j+1)vπ/2N]

c(u) = sqrt(1/N) if u = 0; c(u) = sqrt(2/N) otherwise

where f(i, j) represents the original pixel value at position (i, j), F(u, v) represents the transformed coefficient, N is the number of pixels along each side of the image, and c(·) is the normalization (compensation) coefficient;
the extraction unit calculates the mean value and the standard deviation of each burn image, extracts the mean value and the standard characteristics of the image, and the calculation formula of the mean value and the standard deviation of the burn image is as follows:
μ = (1/n)·Σ_{i=1..n} xi

σ = sqrt[(1/n)·Σ_{i=1..n} (xi - μ)²]

where n denotes the total number of pixels per image, xi represents the value of each pixel of a single channel of the image, μ represents the mean of the image, and σ represents the standard deviation of the image;
the fusion unit fuses the DCT image features with the mean and standard-deviation features of the image: the two-dimensional DCT features are first flattened into a one-dimensional vector, and the mean and standard-deviation features of each image are then appended to the flattened DCT features to form a new feature vector.
9. The system of claim 7, wherein the histogram equalization unit applies the formula to each pixel of each channel of the image to obtain a histogram equalized image.
10. The system of claim 7, wherein the marking unit classifies the burn images into 5 categories: normal skin surface, first degree burn wound, superficial second degree burn wound, deep second degree burn wound and third degree burn wound, with corresponding label values 0, 1, 2, 3 and 4.
CN202110344456.XA 2021-03-29 2021-03-29 Burn wound surface depth classification system based on support vector machine Active CN112990199B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110344456.XA CN112990199B (en) 2021-03-29 2021-03-29 Burn wound surface depth classification system based on support vector machine

Publications (2)

Publication Number Publication Date
CN112990199A true CN112990199A (en) 2021-06-18
CN112990199B CN112990199B (en) 2024-04-26

Family

ID=76338561

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110344456.XA Active CN112990199B (en) 2021-03-29 2021-03-29 Burn wound surface depth classification system based on support vector machine

Country Status (1)

Country Link
CN (1) CN112990199B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107067017A (en) * 2016-11-29 2017-08-18 吴军 Burn depth prediction system based on near-infrared spectroscopy using CAGA and SVM
CN108198167A (en) * 2017-12-23 2018-06-22 西安交通大学 Intelligent burn detection and identification device and method based on machine vision
CN110246134A (en) * 2019-06-24 2019-09-17 株洲时代电子技术有限公司 Rail defect classification device
CN110415207A (en) * 2019-04-30 2019-11-05 杭州电子科技大学 Image quality assessment method based on image distortion type

Also Published As

Publication number Publication date
CN112990199B (en) 2024-04-26

Similar Documents

Publication Publication Date Title
US8131054B2 (en) Computerized image analysis for acetic acid induced cervical intraepithelial neoplasia
CN116503392B (en) Follicular region segmentation method for ovarian tissue analysis
CN108830149B (en) Target bacterium detection method and terminal equipment
CN110772286A (en) System for identifying focal liver lesions based on contrast-enhanced ultrasound
CN111798440A (en) Medical image artifact automatic identification method, system and storage medium
Kuan et al. A comparative study of the classification of skin burn depth in human
Jaworek-Korjakowska A deep learning approach to vascular structure segmentation in dermoscopy colour images
CN113269191A (en) Crop leaf disease identification method and device and storage medium
CN116524224A (en) Machine vision-based method and system for detecting type of cured tobacco leaves
Sarrafzade et al. Skin lesion detection in dermoscopy images using wavelet transform and morphology operations
CN112990199B (en) Burn wound surface depth classification system based on support vector machine
CN112085742B (en) NAFLD ultrasonic video diagnosis method based on context attention
Isa et al. Contrast enhancement image processing technique on segmented pap smear cytology images
CN115359066B (en) Focus detection method and device for endoscope, electronic device and storage medium
CN109886325B (en) Template selection and accelerated matching method for nonlinear color space classification
CN116152168A (en) Medical lung image lesion classification method and classification device
CN111640126B (en) Artificial intelligent diagnosis auxiliary method based on medical image
Zhou et al. Wireless capsule endoscopy video automatic segmentation
CN113052813A (en) Dyeing method based on StrainNet
Savakar et al. Hidden Markov model for identification of different marks on human body in forensic perspective
CN113205484A (en) Mammary tissue classification and identification method based on transfer learning
Song et al. Automatic vaginal bacteria segmentation and classification based on superpixel and deep learning
Salah et al. Hidden Markov Model-based face recognition using selective attention
CN111914632B (en) Face recognition method, device and storage medium
CN116452566B (en) Doppler image identification method, system, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant