CN112990199A - Burn wound depth classification system based on support vector machine - Google Patents
- Publication number
- CN112990199A (application number CN202110344456.XA)
- Authority
- CN
- China
- Prior art keywords
- image
- burn
- burn wound
- module
- svm
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/25—Determination of region of interest [ROI] or a volume of interest [VOI]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2411—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/25—Fusion techniques
- G06F18/253—Fusion techniques of extracted features
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N20/00—Machine learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
Abstract
The invention discloses a burn wound depth classification system based on a support vector machine, comprising: a burn wound image acquisition module, a burn wound image preprocessing module, a burn wound image feature extraction module, an SVM training module and a burn image prediction module. The burn wound image acquisition module acquires a burn wound image of a burn patient; the burn wound image preprocessing module preprocesses the acquired burn wound image; the burn wound image feature extraction module extracts features from the preprocessed burn wound image; the SVM training module divides the burn images with extracted features into a training set, a verification set and a test set for training an SVM classification model; and the burn image prediction module uses the SVM model that performs best on the verification set and the test set to predict the burn depth of unlabeled burn wound images and obtain the depth classification result.
Description
Technical Field
The invention relates to the field of image processing, in particular to a burn wound depth classification system based on a support vector machine.
Background
Burns are a common traumatic disease with high morbidity and mortality. Treating burns first requires an accurate and reliable diagnosis of the depth of the burn wound; early effective treatment can reduce the burden on the patient and, in some cases, save the patient's life. The depth of a burn wound is diagnosed with the three-degree, four-category method, which classifies burn wounds into first-degree, superficial second-degree, deep second-degree and third-degree burns according to the severity and clinical manifestations of the burn. First-degree burns are the least severe, and superficial second-degree, deep second-degree and third-degree burns increase in severity in that order. A first-degree burn, also known as an erythematous burn, damages only part of the epidermis and heals in about 3-5 days without leaving scars. Superficial second-degree burns injure the whole epidermis and part of the papillary layer; without secondary infection, the wound generally heals in about 1-2 weeks. Deep second-degree burns extend below the dermal papillary layer, but part of the dermis and the skin appendages remain, and the wound generally needs 3-4 weeks to heal. Third-degree burns damage the full thickness of the skin: the epidermis, dermis and skin appendages are all destroyed, and repair of the burn wound depends on surgical skin grafting or flap repair.
Large-area burns cause various systems of the patient's body to exhibit varying degrees of morphological, functional and metabolic changes, and severe cases are complicated by visceral damage and organ failure. The healing and treatment of a burn wound is a complex and lengthy process, but early treatment can reduce the physical damage and medical burden on patients. Accurate assessment of burn wound depth is therefore of great importance to the clinical treatment of patients. In the prior art, however, the classification of burn wounds mostly depends on the experience of doctors, so classification efficiency is low, and the accuracy of machine classification or machine-assisted classification is also limited.
Disclosure of Invention
In order to remedy the defects of the prior art, achieve efficient classification of burn wounds and improve classification accuracy, the invention adopts the following technical scheme:
a depth classification system of a burn wound based on a support vector machine comprises: the system comprises a burn wound image acquisition module, a burn wound image preprocessing module, a burn wound image feature extraction module, an SVM (Support Vector Machine) training module and a burn image prediction module;
the burn wound image acquisition module is used for acquiring a burn wound image of a burn patient;
the burn wound image preprocessing module is used for preprocessing the acquired burn wound image;
the burn wound image feature extraction module is used for extracting features of the preprocessed burn wound image;
the SVM training module is used for dividing the burn images with the extracted features into a training set, a verification set and a test set for training an SVM classification model;
and the burn image prediction module uses the SVM model that performs best on the verification set and the test set to predict the burn depth of unlabeled burn wound surface images and obtain the depth classification result of the burn image.
Further, the SVM training module divides the features extracted from the burn pictures into a training set, a verification set and a test set; the training set is used to train the SVM classification model, and the verification set and the test set are used to evaluate its performance. The SVM classification model is expressed as follows:
f(x) = β₀ + Σ_{i∈S} α_i K(x, x_i)

where S is the set of all support vector observations, β₀ is the intercept term, α_i are the model parameters learned by the SVM, (x_i, x_{i′}) is a pair of support vector observations (two different vector samples in the burn training set), and K is a kernel function that measures the similarity of x_i and x_{i′}.
Further, the SVM training module creates different hyperplanes using different kernel functions for depth classification of the burn wound images, the kernel functions including a linear kernel function, a polynomial kernel function, and a radial basis kernel function.
Further, the linear kernel function is formulated as:
K(x_i, x_{i′}) = Σ_{j=1}^{p} x_{ij} x_{i′j}

where p is the number of features; this kernel produces an essentially linear hyperplane.
Further, the polynomial kernel function has the formula:
K(x_i, x_{i′}) = (1 + Σ_{j=1}^{p} x_{ij} x_{i′j})^d

where d is the degree of the polynomial kernel; for d > 1 this yields a nonlinear decision boundary.
Further, the radial basis kernel function is formulated as:
K(x_i, x_{i′}) = exp(−γ Σ_{j=1}^{p} (x_{ij} − x_{i′j})²)

where γ > 0 is a hyperparameter; this kernel yields a nonlinear decision boundary.
Further, the burn wound image preprocessing module comprises: a cropping unit, a conversion unit, a scaling unit, a histogram equalization unit and a marking unit;
the cropping unit is used for cropping the effective burn wound area of the burn wound image to obtain a cropped image;
the conversion unit converts the color space of the cropped image from RGB to Lab, where R represents red, G represents green, B represents blue, L represents lightness, and a and b represent the color-opponent dimensions; the conversion formulas are:
L=116f(Y/Yn)-16
a=500[f(X/Xn)-f(Y/Yn)]
b=200[f(Y/Yn)-f(Z/Zn)]
the picture in the RGB color space is first converted into the XYZ color space and then from XYZ into Lab, where R, G and B are the channel values of each pixel, in the range [0, 255];
the scaling unit unifies the sizes of the pictures after the color space is converted;
the histogram equalization unit performs histogram equalization on the scaled image; histogram equalization increases the global contrast of the burn wound image and distributes its brightness more evenly over the histogram, so it can be used to enhance the local contrast of the burn wound image without affecting the global contrast, and its formula is:
h(i) = round((L − 1) · Σ_{j=0}^{i} n_j / n)

where n_j is the number of occurrences of gray level j in a single color channel, n is the total number of pixels in the image, L is the number of gray levels in a single color channel, and i is the gray level being mapped;
the marking unit marks each burn wound image and classifies the burn wound images.
Furthermore, the burn wound image feature extraction module comprises a transformation unit, an extraction unit and a fusion unit;
the transformation unit performs feature extraction on the preprocessed burn wound image using the discrete cosine transform (DCT) to obtain DCT feature values; the DCT converts a spatial-domain signal into the frequency domain and has good decorrelation properties, and for the two-dimensional data of an image the two-dimensional DCT is used, with the formula:
F(u, v) = c(u) c(v) Σ_{i=0}^{N−1} Σ_{j=0}^{N−1} f(i, j) cos[(2i + 1)uπ / (2N)] cos[(2j + 1)vπ / (2N)]

where f(i, j) is the original pixel value at position (i, j), F(u, v) is the transformed value, N is the number of pixels along each dimension of the image, and c(u) is the compensation coefficient, with c(0) = √(1/N) and c(u) = √(2/N) for u > 0;
the extraction unit calculates the mean and standard deviation of each burn image and extracts them as features; the mean of a burn wound image reflects its average brightness, while the standard deviation reflects the dispersion of the pixel values around the mean, and a larger standard deviation indicates a higher-quality burn image; the mean and standard deviation are calculated as:
μ = (1/n) Σ_{i=1}^{n} x_i,  σ = √((1/n) Σ_{i=1}^{n} (x_i − μ)²)

where n is the total number of pixels per image, x_i is the value of each pixel in a single channel of the image, μ is the image mean, and σ is the image standard deviation;
the fusion unit fuses the DCT features with the image mean and standard deviation features; the two-dimensional DCT feature map is first flattened into a one-dimensional vector, and in the invention, since each burn image is scaled to 224 × 224 pixels, the DCT feature map is also 224 × 224 and flattens into a [1, 50176] vector; the mean and standard deviation of each image are then appended to the flattened DCT features to form a new [1, 50178] feature vector.
Further, the histogram equalization unit applies the formula to each pixel of each channel of the image to obtain a histogram equalized image.
Further, the marking unit divides the burn images into 5 categories: normal skin, first-degree burn wound, superficial second-degree burn wound, deep second-degree burn wound and third-degree burn wound, with corresponding label values 0, 1, 2, 3 and 4.
The invention has the advantages and beneficial effects that:
compared with the traditional clinical observation method, the burn wound depth classification system based on the support vector machine has the advantages that the SVM classification model has higher accuracy on the prediction result of the burn wound image of the patient, the prediction time is shorter, and timely and accurate burn depth information is provided for the clinical treatment of the patient.
Drawings
FIG. 1 is a system block diagram of the present invention.
FIG. 2 is a flow chart of the operation of the system of the present invention.
Detailed Description
The following detailed description of embodiments of the invention refers to the accompanying drawings. It should be understood that the detailed description and specific examples, while indicating the present invention, are given by way of illustration and explanation only, not limitation.
The burn wound depth classification system based on the support vector machine provided by the invention preprocesses the burn wound image of a patient and extracts features, then trains a support vector machine to perform classification prediction of the burn depth of the wound. Different burn depths can thus be rapidly distinguished, and the prediction results of the system have high accuracy, providing timely and accurate burn depth information for the clinical treatment of patients. The system preprocesses burn images by color-space conversion and histogram equalization, extracts picture features through the DCT, fuses them with the picture mean and standard deviation, trains SVMs with different kernel functions, and finally uses the SVM that performs best on the verification set and the test set for depth classification of burn wound images.
As shown in fig. 1 and 2, a burn wound depth classification system based on a support vector machine includes a burn wound image acquisition module, a burn wound image preprocessing module, a burn wound image feature extraction module, an SVM training module and a burn image prediction module;
the burn wound image acquisition module acquires a burn wound image of a burn patient through the camera equipment;
the burn wound image preprocessing module is used for carrying out primary preprocessing operation on the collected burn wound image;
the burn wound image feature extraction module is used for extracting features of the preprocessed burn wound image;
the SVM training module is used for dividing the burn image after the characteristics are extracted into a training set, a verification set and a test set for training an SVM classification model;
and the burn image prediction module uses the SVM model that performs best on the verification set and the test set to predict the burn depth of unlabeled burn wound surface images and obtain the depth classification result of the burn image.
Specifically, the burn wound image preprocessing module comprises the following preprocessing processes:
1) Crop the effective burn wound area of the burn wound image to obtain a cropped image.
2) The color space of the cropped image is converted from RGB (R, red; G, green; B, blue) to Lab, where L represents lightness and a and b represent the color-opponent dimensions. The conversion formulas are:
L=116f(Y/Yn)-16
a=500[f(X/Xn)-f(Y/Yn)]
b=200[f(Y/Yn)-f(Z/Zn)]
The picture in the RGB color space is first converted into the XYZ color space, and then from XYZ into Lab. R, G and B are the channel values of each pixel, in the range [0, 255], with X_n = 95.0489, Y_n = 100 and Z_n = 108.8840, and f is the standard CIELAB mapping f(t) = t^{1/3} for t > (6/29)³ and f(t) = (1/3)(29/6)² t + 4/29 otherwise.
3) The picture size after converting the color space is uniformly scaled to a pixel size of 224 × 224.
4) Perform histogram equalization on the scaled image. Histogram equalization increases the global contrast of the burn wound image and distributes its brightness more evenly over the histogram, so it can be used to enhance the local contrast of the burn wound image without affecting the global contrast. The formula for histogram equalization is:
h(i) = round((L − 1) · Σ_{j=0}^{i} n_j / n)

where n_j is the number of occurrences of gray level j in a single color channel, n is the total number of pixels in the image, L is the number of gray levels in a single color channel, and i is the gray level being mapped. Applying this formula to each pixel of each channel of the image yields the histogram-equalized image.
5) Label each burn wound image, dividing the burn images into 5 categories: normal skin, first-degree burn wound, superficial second-degree burn wound, deep second-degree burn wound and third-degree burn wound, with corresponding label values 0, 1, 2, 3 and 4.
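As an illustrative sketch of preprocessing steps 2) and 4), the RGB-to-Lab conversion and the per-channel histogram equalization can be written in Python with NumPy. The sRGB-to-XYZ matrix and the CIELAB compensation function f(t) are standard values assumed here, since the text above only names the conversion steps:

```python
import numpy as np

# D65 white point from the description: Xn = 95.0489, Yn = 100, Zn = 108.8840
XN, YN, ZN = 95.0489, 100.0, 108.8840

def f(t):
    # Standard CIELAB compensation function (assumed; not reproduced in the text).
    delta = 6.0 / 29.0
    return np.where(t > delta ** 3, np.cbrt(t), t / (3 * delta ** 2) + 4.0 / 29.0)

def rgb_to_lab(rgb):
    """Convert an H x W x 3 array of 8-bit RGB values to Lab (step 2)."""
    rgb = rgb.astype(np.float64) / 255.0              # scale [0, 255] -> [0, 1]
    m = np.array([[0.4124, 0.3576, 0.1805],           # common sRGB/D65 matrix
                  [0.2126, 0.7152, 0.0722],           # (assumed; the patent only
                  [0.0193, 0.1192, 0.9505]])          # says RGB -> XYZ -> Lab)
    xyz = rgb @ m.T * 100.0                           # XYZ on a 0-100 scale
    x, y, z = xyz[..., 0] / XN, xyz[..., 1] / YN, xyz[..., 2] / ZN
    L = 116.0 * f(y) - 16.0
    a = 500.0 * (f(x) - f(y))
    b = 200.0 * (f(y) - f(z))
    return np.stack([L, a, b], axis=-1)

def equalize_channel(channel, levels=256):
    """Histogram-equalize one 8-bit channel via the cumulative mapping (step 4)."""
    n = channel.size                                       # total pixel count
    hist = np.bincount(channel.ravel(), minlength=levels)  # n_j per gray level j
    cdf = np.cumsum(hist)                                  # sum of n_j for j <= i
    mapping = np.round((levels - 1) * cdf / n).astype(np.uint8)
    return mapping[channel]
```

Applying `equalize_channel` to each of the three channels reproduces the per-channel equalization described above; for example, `rgb_to_lab` maps a pure white pixel to L ≈ 100 with a and b near 0.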
Specifically, the feature extraction process of the burn wound image feature extraction module is as follows:
1) Perform feature extraction on the preprocessed burn wound surface image using the discrete cosine transform to obtain DCT feature values. The DCT converts a spatial-domain signal into the frequency domain and has good decorrelation properties. For the two-dimensional data of an image, the two-dimensional DCT is used. Its formula is:
F(u, v) = c(u) c(v) Σ_{i=0}^{N−1} Σ_{j=0}^{N−1} f(i, j) cos[(2i + 1)uπ / (2N)] cos[(2j + 1)vπ / (2N)]

where f(i, j) is the original pixel value at position (i, j), F(u, v) is the transformed value, N is the number of pixels along each dimension of the image, and c(u) is the compensation coefficient, with c(0) = √(1/N) and c(u) = √(2/N) for u > 0.
2) Calculate the mean and standard deviation of each burn image and extract them as features. The mean of a burn wound image reflects its average brightness; the standard deviation reflects the dispersion of the pixel values around the mean, and a larger standard deviation indicates a higher-quality burn image. The mean and standard deviation are calculated as:
μ = (1/n) Σ_{i=1}^{n} x_i,  σ = √((1/n) Σ_{i=1}^{n} (x_i − μ)²)

where n is the total number of pixels per image, x_i is the value of each pixel in a single channel of the image, μ is the image mean, and σ is the image standard deviation.
3) Fuse the DCT features with the image mean and standard deviation features. The two-dimensional DCT feature map is first flattened into a one-dimensional vector. In the invention, since each burn image is scaled to 224 × 224 pixels, the DCT feature map is also 224 × 224; it flattens into a [1, 50176] vector, and the mean and standard deviation of each image are then appended to the flattened DCT features to form a [1, 50178] feature vector.
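Feature extraction steps 1)-3) can be sketched as follows; `dct2` implements the two-dimensional DCT-II directly from its definition (orthonormal compensation coefficients assumed), and `fuse_features` is an illustrative helper name for the flatten-and-append fusion:

```python
import numpy as np

def dct2(block):
    """Naive two-dimensional DCT-II of an N x N block (step 1)."""
    n = block.shape[0]
    i = np.arange(n)
    c = np.full(n, np.sqrt(2.0 / n))    # compensation coefficients c(u)
    c[0] = np.sqrt(1.0 / n)
    # basis[u, i] = cos((2i + 1) * u * pi / (2N))
    basis = np.cos(np.pi * np.outer(np.arange(n), 2 * i + 1) / (2 * n))
    return (c[:, None] * c[None, :]) * (basis @ block @ basis.T)

def fuse_features(image_channel, dct_features):
    """Flatten the DCT map and append the image mean and std (steps 2-3)."""
    mu = float(image_channel.mean())            # average brightness
    sigma = float(image_channel.std())          # dispersion around the mean
    flat = np.asarray(dct_features).reshape(1, -1)                  # [1, 50176]
    return np.concatenate([flat, np.array([[mu, sigma]])], axis=1)  # [1, 50178]
```

For a 224 × 224 image the fused vector has shape [1, 50178], matching the dimensions given above.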
Specifically, the training process of the SVM training module is as follows:
First, the features extracted from the burn pictures are divided into a training set, a verification set and a test set; the training set is then used to train the SVM classification model, and the verification set and the test set are used to evaluate its performance. The SVM classification model can be expressed as
f(x) = β₀ + Σ_{i∈S} α_i K(x, x_i)

where S is the set of all support vector observations, β₀ is the intercept term, α_i are the model parameters learned by the SVM, (x_i, x_{i′}) is a pair of support vector observations (two different vector samples in the burn training set), and K is a kernel function that measures the similarity of x_i and x_{i′}.
Different hyperplanes are created by using different kernel functions for depth classification of burn wound images. In the invention, 3 kernel functions are adopted to train the SVM classification model:
the first uses a linear kernel function, the formula is:
K(x_i, x_{i′}) = Σ_{j=1}^{p} x_{ij} x_{i′j}

where p is the number of features; this kernel produces an essentially linear hyperplane.
The second uses a polynomial kernel, the formula being:
K(x_i, x_{i′}) = (1 + Σ_{j=1}^{p} x_{ij} x_{i′j})^d

where d is the degree of the polynomial kernel; for d > 1 this yields a nonlinear decision boundary.
The third uses the radial basis kernel function, and the formula is:
K(x_i, x_{i′}) = exp(−γ Σ_{j=1}^{p} (x_{ij} − x_{i′j})²)

where γ > 0 is a hyperparameter; this kernel yields a nonlinear decision boundary.
The invention trains the burn wound depth classification model with SVMs using these 3 different kernel functions, selects the SVM that performs best on the verification set and the test set as the optimal classification model, and trains for at most 1000 iterations.
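A sketch of this selection procedure using scikit-learn's `SVC` (assuming that library is used; the split ratios and default hyperparameters are illustrative, while the three kernels and the 1000-iteration cap follow the text above):

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

def train_best_svm(features, labels):
    """Train linear, polynomial and RBF SVMs; keep the one that scores best
    on the verification and test sets (averaged here as a simple criterion)."""
    x_train, x_rest, y_train, y_rest = train_test_split(
        features, labels, test_size=0.4, random_state=0)
    x_val, x_test, y_val, y_test = train_test_split(
        x_rest, y_rest, test_size=0.5, random_state=0)
    best_model, best_score = None, -1.0
    for kernel in ("linear", "poly", "rbf"):
        model = SVC(kernel=kernel, gamma="scale", max_iter=1000)
        model.fit(x_train, y_train)
        score = (model.score(x_val, y_val) + model.score(x_test, y_test)) / 2
        if score > best_score:
            best_model, best_score = model, score
    return best_model, best_score
```

The returned model can then be applied to unlabeled feature vectors with `best_model.predict(...)`, as in the prediction module.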
Specifically, the burn image prediction module performs the following prediction process:
and predicting the burn depth of the burn wound images without marks by using the SVM classification model which is optimally represented on the verification set and the test set, so as to obtain the depth classification result of the burn images.
The above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present invention.
Claims (10)
1. A burn wound depth classification system based on a support vector machine, comprising: a burn wound image acquisition module, a burn wound image preprocessing module, a burn wound image feature extraction module, an SVM training module and a burn image prediction module, characterized in that:
the burn wound image acquisition module is used for acquiring a burn wound image of a burn patient;
the burn wound image preprocessing module is used for preprocessing the acquired burn wound image;
the burn wound image feature extraction module is used for extracting features of the preprocessed burn wound image;
the SVM training module is used for dividing the burn images with the extracted features into a training set, a verification set and a test set for training an SVM classification model;
and the burn image prediction module uses the SVM model that performs best on the verification set and the test set to predict the burn depth of unlabeled burn wound surface images and obtain the depth classification result of the burn image.
2. The system of claim 1, wherein the SVM training module classifies the features extracted from the burn image into a training set, a verification set and a test set, performs training of the SVM classification model using the training set, and evaluates the performance of the SVM classification model using the verification set and the test set, and the SVM classification model is expressed as:
f(x) = β₀ + Σ_{i∈S} α_i K(x, x_i)

where S is the set of all support vector observations, β₀ is the intercept term, α_i are the model parameters learned by the SVM, (x_i, x_{i′}) is a pair of support vector observations (two different vector samples in the burn training set), and K is a kernel function that measures the similarity of x_i and x_{i′}.
3. The system of claim 2, wherein the SVM training module creates different hyperplanes using different kernel functions for depth classification of the burn wound image, the kernel functions comprising linear kernel functions, polynomial kernel functions, and radial basis kernel functions.
7. The system of claim 1, wherein the burn wound image preprocessing module comprises: a cropping unit, a conversion unit, a scaling unit, a histogram equalization unit and a marking unit;
the cropping unit is used for cropping the effective burn wound area of the burn wound image to obtain a cropped image;
the conversion unit converts the color space of the cropped image from RGB to Lab, where R represents red, G represents green, B represents blue, L represents lightness, and a and b represent the color-opponent dimensions; the conversion formulas are:
L=116f(Y/Yn)-16
a=500[f(X/Xn)-f(Y/Yn)]
b=200[f(Y/Yn)-f(Z/Zn)]
the picture in the RGB color space is first converted into the XYZ color space and then from XYZ into Lab, where R, G and B are the channel values of each pixel, in the range [0, 255];
the scaling unit unifies the sizes of the pictures after the color space is converted;
the histogram equalization unit performs histogram equalization on the zoomed image, and the formula of the histogram equalization is as follows:
h(i) = round((L − 1) · Σ_{j=0}^{i} n_j / n)

where n_j is the number of occurrences of gray level j in a single color channel, n is the total number of pixels in the image, L is the number of gray levels in a single color channel, and i is the gray level being mapped;
the marking unit marks each burn wound image and classifies the burn wound images.
8. The system of claim 1, wherein the burn wound image feature extraction module comprises a transformation unit, an extraction unit, and a fusion unit;
the transformation unit is used for performing feature extraction on the preprocessed burn wound surface image using the discrete cosine transform (DCT) to obtain DCT feature values; for the two-dimensional data of the image, the two-dimensional DCT is used, with the formula:
F(u, v) = c(u) c(v) Σ_{i=0}^{N−1} Σ_{j=0}^{N−1} f(i, j) cos[(2i + 1)uπ / (2N)] cos[(2j + 1)vπ / (2N)]

where f(i, j) is the original pixel value at position (i, j), F(u, v) is the transformed value, N is the number of pixels along each dimension of the image, and c(u) is the compensation coefficient, with c(0) = √(1/N) and c(u) = √(2/N) for u > 0;
the extraction unit calculates the mean and standard deviation of each burn image and extracts them as features; the mean and standard deviation are calculated as:
μ = (1/n) Σ_{i=1}^{n} x_i,  σ = √((1/n) Σ_{i=1}^{n} (x_i − μ)²)

where n is the total number of pixels per image, x_i is the value of each pixel in a single channel of the image, μ is the image mean, and σ is the image standard deviation;
the fusion unit performs feature fusion on the image features after DCT and the mean value and standard deviation features of the image, firstly expands the two-dimensional image features after DCT into one-dimensional vectors, and then adds the expanded DCT features to the mean value features and the standard deviation features of each image to form a new feature vector.
9. The system of claim 7, wherein the histogram equalization unit applies the formula to each pixel of each channel of the image to obtain a histogram equalized image.
10. The system of claim 7, wherein the marking unit divides the burn images into 5 categories: normal skin, first-degree burn wound, superficial second-degree burn wound, deep second-degree burn wound and third-degree burn wound, with corresponding label values 0, 1, 2, 3 and 4.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110344456.XA CN112990199B (en) | 2021-03-29 | 2021-03-29 | Burn wound surface depth classification system based on support vector machine |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112990199A true CN112990199A (en) | 2021-06-18 |
CN112990199B CN112990199B (en) | 2024-04-26 |
Family
ID=76338561
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110344456.XA Active CN112990199B (en) | 2021-03-29 | 2021-03-29 | Burn wound surface depth classification system based on support vector machine |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112990199B (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107067017A (en) * | 2016-11-29 | 2017-08-18 | 吴军 | The depth of burn forecasting system of near infrared spectrum based on CAGA and SVM |
CN108198167A (en) * | 2017-12-23 | 2018-06-22 | 西安交通大学 | A kind of burn intelligent measurement identification device and method based on machine vision |
CN110246134A (en) * | 2019-06-24 | 2019-09-17 | 株洲时代电子技术有限公司 | A kind of rail defects and failures sorter |
CN110415207A (en) * | 2019-04-30 | 2019-11-05 | 杭州电子科技大学 | A method of the image quality measure based on image fault type |
- 2021-03-29 CN CN202110344456.XA patent/CN112990199B/en active Active
Also Published As
Publication number | Publication date |
---|---|
CN112990199B (en) | 2024-04-26 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US8131054B2 (en) | Computerized image analysis for acetic acid induced cervical intraepithelial neoplasia | |
CN116503392B (en) | Follicular region segmentation method for ovarian tissue analysis | |
CN108830149B (en) | Target bacterium detection method and terminal equipment | |
CN110772286A (en) | System for discernment liver focal lesion based on ultrasonic contrast | |
CN111798440A (en) | Medical image artifact automatic identification method, system and storage medium | |
Kuan et al. | A comparative study of the classification of skin burn depth in human | |
Jaworek-Korjakowska | A deep learning approach to vascular structure segmentation in dermoscopy colour images | |
CN113269191A (en) | Crop leaf disease identification method and device and storage medium | |
CN116524224A (en) | Machine vision-based method and system for detecting type of cured tobacco leaves | |
Sarrafzade et al. | Skin lesion detection in dermoscopy images using wavelet transform and morphology operations | |
CN112990199B (en) | Burn wound surface depth classification system based on support vector machine | |
CN112085742B (en) | NAFLD ultrasonic video diagnosis method based on context attention | |
Isa et al. | Contrast enhancement image processing technique on segmented pap smear cytology images | |
CN115359066B (en) | Focus detection method and device for endoscope, electronic device and storage medium | |
CN109886325B (en) | Template selection and accelerated matching method for nonlinear color space classification | |
CN116152168A (en) | Medical lung image lesion classification method and classification device | |
CN111640126B (en) | Artificial intelligent diagnosis auxiliary method based on medical image | |
Zhou et al. | Wireless capsule endoscopy video automatic segmentation | |
CN113052813A (en) | Dyeing method based on StrainNet | |
Savakar et al. | Hidden Markov model for identification of different marks on human body in forensic perspective | |
CN113205484A (en) | Mammary tissue classification and identification method based on transfer learning | |
Song et al. | Automatic vaginal bacteria segmentation and classification based on superpixel and deep learning | |
Salah et al. | Hidden Markov Model-based face recognition using selective attention | |
CN111914632B (en) | Face recognition method, device and storage medium | |
CN116452566B (en) | Doppler image identification method, system, electronic equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||