CN111508016B - Vitiligo region chromaticity value and area calculation method based on image processing - Google Patents


Info

Publication number: CN111508016B
Authority
CN
China
Prior art keywords: vitiligo, area, image, region, value
Prior art date
Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Application number: CN202010287910.8A
Other languages: Chinese (zh)
Other versions: CN111508016A
Inventor: 吴嘉仪
Current assignee: Nanjing Hongtu Artificial Intelligence Technology Research Institute Co., Ltd. (the listed assignee may be inaccurate)
Original assignee: Nanjing Hongtu Artificial Intelligence Technology Research Institute Co., Ltd.
Application filed by Nanjing Hongtu Artificial Intelligence Technology Research Institute Co., Ltd.
Priority application: CN202010287910.8A
Publications: CN111508016A (application), CN111508016B (grant)

Classifications

    • G06T 7/62: Analysis of geometric attributes of area, perimeter, diameter or volume
    • G06T 7/11: Region-based segmentation
    • G06T 7/181: Segmentation; edge detection involving edge growing or edge linking
    • G06N 3/045: Neural networks; combinations of networks
    • G06N 3/08: Neural networks; learning methods
    • G06T 2207/10004: Image acquisition modality; still image, photographic image
    • G06T 2207/10024: Image acquisition modality; color image
    • G06T 2207/20081: Special algorithmic details; training, learning
    • G06T 2207/20088: Special algorithmic details; trinocular vision calculations, trifocal tensor
    • G06T 2207/30088: Biomedical image processing; skin, dermal
    • Y02P 90/30: Computing systems specially adapted for manufacturing


Abstract

The invention relates to a method for calculating the chromaticity value and area of a vitiligo region based on computer image processing. The chromaticity value and area ratio calculated by the method can effectively help doctors judge the patient's condition more intuitively and accurately and evaluate the curative effect after treatment.

Description

Vitiligo region chromaticity value and area calculation method based on image processing
Technical Field
The invention relates to a method for calculating chromaticity values and areas of vitiligo areas based on image processing, and belongs to the fields of computer image processing and intelligent medical treatment.
Background
Vitiligo is an idiopathic depigmentation disease of the skin and mucosa, affecting about 1% of the world population. Its pathogenesis remains uncertain, and the extent and color depth of the white patches vary widely across clinical types (segmental, non-segmental, and unclassified) and disease stages (progressive, stable, and recovering). After treatment, melanogenesis islands may appear in the original white patches, while new depigmented patches may emerge in progressive-stage vitiligo. Physicians use the chromaticity value and area of the vitiligo region as criteria for evaluating the severity of the disease; accurately calculating them is therefore a challenging and important task.
At present, hospital assessment of the chromaticity value and area of vitiligo regions relies mainly on the experience of dermatology specialists. This approach is subjective and ambiguous: the examination result depends heavily on the individual doctor, and judging the chromaticity value and area of a vitiligo region requires a great amount of clinical experience. The traditional experience-based approach therefore struggles to meet the requirement of judging these quantities accurately. With the development of computer technology, combining artificial-intelligence deep learning with traditional computer image processing can meet modern medicine's demand for accurate calculation of vitiligo chromaticity values and areas.
Disclosure of Invention
Technical problem: the traditional method of judging the chromaticity value and area of a vitiligo region by relying on doctors' professional knowledge and extensive clinical experience is not accurate enough. The invention therefore provides an image-processing-based method for calculating the chromaticity value and area of a vitiligo region. First, the skin-lesion area is detected and segmented with a vitiligo region detection and segmentation model trained by deep learning; next, the chromaticity value of the lesion area is computed with the vitiligo chromaticity calculation method; finally, the area ratio of the vitiligo region is obtained from the number of vitiligo-region pixels and the number of skin-region pixels in the whole image. This provides accurate auxiliary information for diagnosing the patient's condition and judging the therapeutic effect after treatment.
The technical scheme is as follows: in order to achieve the above object, the present invention provides a method for calculating chromaticity value and area of a vitiligo area based on image processing, the method comprising the steps of:
(1) Acquiring a focus area image of vitiligo;
(2) Extracting a vitiligo region by using a vitiligo region detection and segmentation model;
(3) Calculating the chromaticity value of the region according to the extracted vitiligo region;
(4) Calculating the area ratio of the vitiligo area from the number of pixel points of the vitiligo area and of the focus area image.
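The four steps above can be sketched end to end as follows. This is a minimal runnable illustration, not the patented method: the trained detection/segmentation model of step (2) is replaced by a naive brightness threshold, step (3) uses a plain RGB mean as a stand-in for the Lab-based chromaticity value, and all function names are assumptions.

```python
import numpy as np

def segment_vitiligo(image):
    # Step (2) stand-in: the patent uses a trained deep learning model;
    # here a naive brightness threshold marks candidate depigmented
    # pixels just to make the pipeline runnable.
    return image.mean(axis=2) > 200

def region_mean_rgb(image, mask):
    # Step (3) placeholder: mean RGB over the segmented region
    # (the patent computes a Lab-based chromaticity value instead).
    return image[mask].mean(axis=0) if mask.any() else None

def area_ratio(mask):
    # Step (4): vitiligo pixels over all pixels of the lesion image.
    return mask.sum() / mask.size

def analyze(image):
    mask = segment_vitiligo(image)                           # step (2)
    return region_mean_rgb(image, mask), area_ratio(mask)    # steps (3)-(4)
```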
Further, in the step (2), the vitiligo region detection and segmentation model is obtained by the following method:
(2.1) marking the vitiligo area on the skin image in advance by adopting a rectangular frame marking and pixel level marking mode;
(2.2) acquiring, as a training set, a preset number of vitiligo area images annotated in the above manner;
(2.3) inputting the images marked in the training set, the labels recorded by the marking files and the coordinate information into a deep learning neural network to calculate the characteristic value of the vitiligo area;
(2.4) generating a model file recording the vitiligo regions and their corresponding feature values, thereby obtaining the vitiligo region detection and segmentation model.
Further, in step (2.3), all vitiligo area images are loaded into the neural network. The input images are first normalized and resized to a preset size, and the image coordinates are normalized in proportion to the transformed image. During learning, each image is iteratively sampled 5 times, after which the convolutional neural network calculates the feature value of each final sampling area.
Further, when the model is applied to a vitiligo image, the image is input into the vitiligo region detection and segmentation model, reset to the preset size, and sampled so that the feature value of each sampling area is calculated. The calculated feature values are matched against the feature values of all vitiligo sampling areas recorded in the trained model file and sorted from large to small. If the maximum value exceeds a preset threshold, the sampling area is marked and labeled as a vitiligo area; otherwise it is a non-vitiligo area. The vitiligo area corresponding to the maximum value is the one most similar to the sampling area. Adjacent vitiligo areas are then merged into a prediction region, and finally the corresponding coordinates in the original image are recovered by inverting the normalization.
Further, when the vitiligo region is extracted, the image of the vitiligo region is segmented from the original image according to the corresponding coordinates of the estimated prediction region in the original image.
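Segmenting the region image out of the original by its predicted coordinates reduces to an array crop. A minimal sketch (the function name and corner-coordinate box format are assumptions, not from the patent):

```python
import numpy as np

def crop_region(image, box):
    # box = (x1, y1, x2, y2): corner coordinates of the predicted
    # vitiligo rectangle, already mapped back to original-image pixels.
    x1, y1, x2, y2 = box
    return image[y1:y2, x1:x2]
```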
Further, the chromaticity value of the region is calculated from the extracted vitiligo region by the following steps:
(4.1) reading the RGB values of each pixel point in the vitiligo area image, and converting the RGB channels of the image into Lab channels by way of the XYZ color space:
In formula (1) above, X_r, Y_g and Z_b represent the three RGB channel values of an image pixel; in formula (2), X_n, Y_n and Z_n are constant values;
(4.2) obtaining the L, a and b values of each pixel point from the above formulas, averaging the L, a and b values over all pixel points, and calculating the vitiligo chromaticity value from the averaged L, a and b, wherein the calculation formula is as follows:
further, the area ratio of the vitiligo area is calculated according to the number of pixel points of the vitiligo area and the focus area image, and the formula is as follows:
wherein v represents the number of pixels in the vitiligo area, and a represents the number of pixels in the focus area image of the whole vitiligo.
Beneficial effects: compared with the prior art, the technical scheme of the invention has the following advantages.
The invention innovatively provides a method for automatically calculating the chromaticity value and area of a vitiligo region with a computer. By combining a trained deep convolutional neural network model with computer image processing, the accurate chromaticity value and area of a patient's vitiligo region are calculated automatically, which greatly helps doctors evaluate the severity of vitiligo. The method effectively applies computer and image-processing technology to clinical medical diagnosis, can be widely used in the dermatology clinics of hospitals, and provides accurate auxiliary analysis for dermatologists.
Drawings
Fig. 1 is a flow chart of an automatic calculation method for chromaticity values of vitiligo areas.
Fig. 2 is a schematic diagram of the result of calculating the chromaticity value of the vitiligo area.
Figs. 3-4 are block diagrams of the vitiligo detection and segmentation networks.
Detailed Description
The method comprises the following steps:
(1) Acquiring a focus area image of a patient suffering from vitiligo;
(2) Extracting a vitiligo region by using a vitiligo region detection and segmentation model;
(3) Inputting the image data of the vitiligo area into an IWA (vitiligo chromaticity value) calculation formula to calculate the vitiligo chromaticity value;
(4) Counting the number of pixel points of the vitiligo area and the whole skin image, and calculating the area ratio of the vitiligo area.
The following specifically describes the method for calculating the chromaticity value of the vitiligo area in the invention:
1. vitiligo region detection and segmentation model extraction principle
The vitiligo detection and segmentation model used here is trained with a deep-learning-based image processing method and consists of a detection model and a segmentation model. The detection model detects vitiligo areas in the image and automatically marks them with rectangular boxes; the segmentation model then accurately delineates the vitiligo boundary within each detected rectangular box.
Training the detection and segmentation model requires manually annotating the vitiligo areas on skin images with rectangular boxes and pixel-level labels (marking every pixel point of the vitiligo area). A dataset of 1,000 vitiligo images was used for training. The annotated images and the information recorded in the annotation files are input into a deep learning neural network (the invention uses the Darknet-53 network structure) to compute the feature values of the vitiligo areas. Each annotation file records the label and coordinate information of an annotated region: the coordinates comprise the center x (horizontal axis) and y (vertical axis), the width w and the height h of the rectangular box, and the label identifies the region, in this invention the mark name "vit" for vitiligo.
During training, the input data are the 1,000 annotated skin images containing vitiligo areas together with the label file corresponding to each image. In each of the 50,000 training iterations, the network randomly reads M images (M = 16 in this model) and loads them; each image carries four coordinate values (x, y, w, h) and one label (vit). Once all images are loaded into the network, normalization is performed first: every image is resized to 608 × 608, and the coordinates are normalized in proportion to the resized image (for example, if the center of a vitiligo bounding box is [x, y], its width and height are [w, h], and the image resolution is [u, v], then the original coordinates [x, y, w, h] normalize to [x/u, y/v, w/u, h/v]). During learning, each image is iteratively sampled 5 times, i.e. 5 levels of image-information sampling; the convolutional neural network then computes the feature value of each final sampling area, and the feature value of the manually annotated vitiligo area is matched with the vitiligo label. The matched feature values and labels are stored in the model file. When the whole training finishes, a model file recording the vitiligo areas and their feature values is generated.
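The normalization and its inverse described in the example above can be written directly (a sketch under the stated [x/u, y/v, w/u, h/v] convention; function names are illustrative):

```python
def normalize_box(x, y, w, h, u, v):
    # Map a center-format box on a u-wide, v-high image to
    # resolution-independent fractions, as in the example above.
    return x / u, y / v, w / u, h / v

def denormalize_box(nx, ny, nw, nh, u, v):
    # Inverse mapping, used after prediction to recover coordinates
    # in the original image.
    return nx * u, ny * v, nw * u, nh * v
```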
When the model detects a vitiligo image, the model file is loaded into the network, the input image is reset to 608 × 608, and sampling is performed to calculate the feature value of each sampling area. The calculated feature values are matched against all vitiligo-area feature values recorded in the trained model file and sorted from large to small. If the maximum value exceeds a preset threshold, the sampling area is marked and labeled as a vitiligo area; otherwise it is a non-vitiligo area. The vitiligo area corresponding to the maximum value is the one most similar to the sampling area.
Finally, adjacent vitiligo areas are merged into a prediction region, yielding a detection result of coordinates, a label and a confidence (the similarity) for each vitiligo area, and the coordinates in the original image are recovered by inverting the normalization. This completes the coordinate detection of the vitiligo.
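The patent does not specify its exact rule for merging adjacent vitiligo areas into one prediction region; as a hypothetical stand-in, a greedy union of overlapping corner-format boxes could look like this:

```python
def boxes_overlap(a, b):
    # a, b are (x1, y1, x2, y2) corner-format boxes.
    return a[0] < b[2] and b[0] < a[2] and a[1] < b[3] and b[1] < a[3]

def merge_adjacent(boxes):
    # Greedily union every box with any already-merged box it overlaps,
    # a simplified stand-in for merging adjacent vitiligo regions.
    merged = []
    for box in boxes:
        box = list(box)
        changed = True
        while changed:
            changed = False
            for i, m in enumerate(merged):
                if boxes_overlap(box, m):
                    box = [min(box[0], m[0]), min(box[1], m[1]),
                           max(box[2], m[2]), max(box[3], m[3])]
                    del merged[i]
                    changed = True
                    break
        merged.append(tuple(box))
    return merged
```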
When the detection and segmentation model is used, the model file is first read and loaded into the constructed deep learning neural network. The image to be detected and segmented is then input into the network, which computes the predicted coordinates of the vitiligo area and the segmented vitiligo-area image.
2. Leucoderma area image chromaticity value and area calculation formula principle
After the extracted vitiligo-area image is input into the calculation program, the RGB value of each pixel point is read and the image is converted from RGB channels to Lab channels. The RGB color space cannot be converted into the Lab color space directly: it is first converted into the XYZ color space, and XYZ is then converted into Lab. The RGB, XYZ and Lab color spaces are related by formulas (1) and (2), where formula (1) is a linear transform from the RGB channel values of a pixel (denoted X_r, Y_g, Z_b) to X, Y, Z, and formula (2) converts X, Y, Z to L, a, b using the constants X_n = 95.047, Y_n = 100.0 and Z_n = 108.883 (the D65 reference white), in the standard form:

L = 116·f(Y/Y_n) − 16
a = 500·[f(X/X_n) − f(Y/Y_n)]
b = 200·[f(Y/Y_n) − f(Z/Z_n)]

The L, a and b values of each pixel point are obtained from these formulas and averaged over all pixel points, and the IWA (vitiligo chromaticity value) is then calculated from the averaged L, a and b. The IWA represents the chromaticity of the vitiligo region: the larger the value, the more severe the vitiligo, and it generally ranges between 0.8 and 1.5.
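A runnable sketch of the conversion chain follows. The X_n, Y_n, Z_n constants are taken from the description; the RGB-to-XYZ matrix of formula (1) is not reproduced in the available text, so the common sRGB/D65 matrix applied to linear RGB in [0, 1] is assumed. The IWA formula itself is the patent's own and is not reproduced here; the sketch stops at the averaged L, a, b values it takes as input.

```python
import numpy as np

# D65 reference white, as given in the description.
Xn, Yn, Zn = 95.047, 100.0, 108.883

# Assumed sRGB -> XYZ matrix (the patent's exact matrix is not
# reproduced in this text).
M = np.array([[0.4124, 0.3576, 0.1805],
              [0.2126, 0.7152, 0.0722],
              [0.0193, 0.1192, 0.9505]])

def f(t):
    # CIE Lab companding function.
    return np.where(t > (6 / 29) ** 3,
                    np.cbrt(t),
                    t / (3 * (6 / 29) ** 2) + 4 / 29)

def rgb_to_lab(rgb):
    # rgb: float array (..., 3), linear RGB in [0, 1].
    xyz = rgb @ M.T * 100.0  # scale to the 0..100 convention of Xn, Yn, Zn
    fx = f(xyz[..., 0] / Xn)
    fy = f(xyz[..., 1] / Yn)
    fz = f(xyz[..., 2] / Zn)
    L = 116 * fy - 16
    a = 500 * (fx - fy)
    b = 200 * (fy - fz)
    return L, a, b

def mean_lab(rgb_pixels):
    # Average L, a, b over all lesion pixels, as in step (4.2).
    L, a, b = rgb_to_lab(rgb_pixels)
    return float(np.mean(L)), float(np.mean(a)), float(np.mean(b))
```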
Finally, the numbers of pixel points in the extracted vitiligo area and in the skin region of the whole image are counted, and the area ratio of the vitiligo area is calculated as ratio = v / a, where v is the number of vitiligo-area pixels and a is the number of pixels of the whole image's skin region.
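The area-ratio step reduces to a pixel count over two boolean masks (names are illustrative):

```python
import numpy as np

def vitiligo_area_ratio(vitiligo_mask, focus_mask):
    # ratio = v / a: v = pixels segmented as vitiligo,
    # a = pixels of the whole focus-area (skin) image.
    v = int(np.count_nonzero(vitiligo_mask))
    a = int(np.count_nonzero(focus_mask))
    return v / a if a else 0.0
```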

Claims (5)

1. A method for calculating chromaticity value and area of a vitiligo area based on image processing, which is characterized by comprising the following steps:
(1) Acquiring a focus area image of vitiligo;
(2) Extracting a vitiligo region by using a vitiligo region detection and segmentation model;
(3) Calculating the chromaticity value of the region according to the extracted vitiligo region;
(4) Calculating the area occupation ratio of the vitiligo area according to the number of pixel points of the vitiligo area and the focus area image;
in the step (2), the vitiligo region detection and segmentation model is obtained by the following method:
(2.1) marking the vitiligo area on the skin image in advance by adopting a rectangular frame marking and pixel level marking mode;
(2.2) acquiring, as a training set, a preset number of vitiligo area images annotated on skin images in the above manner;
(2.3) inputting the images marked in the training set, the labels recorded by the marking files and the coordinate information into a deep learning neural network to calculate the characteristic value of the vitiligo area;
(2.4) generating a model file for recording the vitiligo region and the corresponding characteristic value, thereby obtaining a vitiligo region detection and segmentation model;
in the step (2.3), all vitiligo area images are loaded into a neural network; the input images are first normalized and converted into images of a preset size, and the image coordinates are normalized in proportion to the transformed image; during learning, each image is iteratively sampled 5 times, after which the convolutional neural network calculates the feature value of the final sampling area, and if the sampling area carries a label, the feature value is correspondingly matched with the vitiligo-area label.
2. The method according to claim 1, wherein, when the model detects a vitiligo image, the image is input into the vitiligo area detection and segmentation model, reset to an image of a preset size, and sampled to calculate the feature value of each sampling area; the calculated feature values are matched with the feature values of all vitiligo sampling areas recorded in the trained model file and sorted from large to small; if the maximum value is larger than a preset threshold value, the sampling area is marked and labeled as a vitiligo area, otherwise it is a non-vitiligo area; the vitiligo area corresponding to the maximum value is the area most similar to the sampling area; adjacent vitiligo areas are finally merged into a prediction area, and the corresponding coordinates in the original image are finally calculated back according to the normalization method.
3. The method according to claim 2, wherein, when the vitiligo region is extracted, the image of the vitiligo region is segmented from the original image according to the coordinates of the prediction region in the original image.
4. The method for calculating the chromaticity value and area of a vitiligo area based on image processing according to claim 1, 2 or 3, wherein calculating the chromaticity value of the region from the extracted vitiligo region comprises the following steps:
(4.1) reading RGB values of each pixel point in the vitiligo area image, and converting the RGB channel of the image into a Lab channel by means of an XYZ color space:
In formula (1) above, X_r, Y_g and Z_b represent the three RGB channel values of an image pixel; in formula (2), X_n, Y_n and Z_n are constant values;
(4.2) obtaining the L, a and b values of each pixel point from the above formulas, averaging the L, a and b values over all pixel points, and calculating the vitiligo chromaticity value from the averaged L, a and b, wherein the calculation formula is as follows:
5. The method for calculating the chromaticity value and area of a vitiligo area based on image processing according to claim 1, 2 or 3, wherein the area ratio of the vitiligo area is calculated from the number of pixel points of the vitiligo area and of the focus area image as:

ratio = v / a

where v represents the number of pixels in the vitiligo area and a represents the number of pixels in the whole vitiligo focus-area image.
CN202010287910.8A 2020-04-14 2020-04-14 Vitiligo region chromaticity value and area calculation method based on image processing Active CN111508016B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010287910.8A CN111508016B (en) 2020-04-14 2020-04-14 Vitiligo region chromaticity value and area calculation method based on image processing


Publications (2)

Publication Number Publication Date
CN111508016A (en) 2020-08-07
CN111508016B (en) 2023-11-17

Family

ID: 71875974

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010287910.8A Active CN111508016B (en) 2020-04-14 2020-04-14 Vitiligo region chromaticity value and area calculation method based on image processing

Country Status (1)

Country Link
CN (1) CN111508016B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112669959B (en) * 2020-12-17 2024-03-29 中国医学科学院皮肤病医院(中国医学科学院皮肤病研究所) Automatic evaluation method for vitiligo conditions based on images
CN112420199A (en) * 2020-12-17 2021-02-26 中国医学科学院皮肤病医院(中国医学科学院皮肤病研究所) Curative effect evaluation method based on vitiligo chromaticity
CN114190894A (en) * 2021-12-09 2022-03-18 林丹柯 Color spot non-contact measuring method, device, processor and storage medium
CN114757951B (en) * 2022-06-15 2022-11-01 深圳瀚维智能医疗科技有限公司 Sign data fusion method, data fusion equipment and readable storage medium
CN116269217B (en) * 2023-02-10 2024-04-26 安徽医科大学 Vitiligo treatment effect quantitative evaluation method based on deep learning

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107049263A (en) * 2017-06-14 2017-08-18 武汉理工大学 Leucoderma condition-inference and cosmetic effect evaluating method and system based on image procossing
CN108154503A (en) * 2017-12-13 2018-06-12 西安交通大学医学院第附属医院 A kind of leucoderma state of an illness diagnostic system based on image procossing
CN109741336A (en) * 2018-12-06 2019-05-10 东南大学 A kind of leucoderma region segmentation method based on pixel cluster and segmentation threshold


Also Published As

Publication number Publication date
CN111508016A (en) 2020-08-07


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant