CN113053524B - Online auxiliary diagnosis and treatment system based on skin images - Google Patents

Online auxiliary diagnosis and treatment system based on skin images

Info

Publication number
CN113053524B
CN113053524B (application CN202110615272.2A)
Authority
CN
China
Prior art keywords
skin, skin damage, data, historical, damage data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110615272.2A
Other languages
Chinese (zh)
Other versions
CN113053524A (en)
Inventor
张靖
张伟
张�育
崔涛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Yongliu Technology Co ltd
Original Assignee
Hangzhou Yongliu Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Yongliu Technology Co ltd filed Critical Hangzhou Yongliu Technology Co ltd
Priority to CN202110615272.2A
Publication of CN113053524A
Application granted
Publication of CN113053524B
Legal status: Active

Classifications

    • G16H 50/20 — ICT specially adapted for medical diagnosis: computer-aided diagnosis, e.g. based on medical expert systems
    • G06F 18/25 — Pattern recognition; analysing; fusion techniques
    • G06T 7/0012 — Image analysis; inspection of images; biomedical image inspection
    • G06T 7/62 — Analysis of geometric attributes of area, perimeter, diameter or volume
    • G16H 10/60 — ICT for patient-specific data, e.g. electronic patient records
    • G16H 30/00 — ICT specially adapted for the handling or processing of medical images
    • G06T 2207/20081 — Training; learning
    • G06T 2207/20084 — Artificial neural networks [ANN]
    • G06T 2207/30088 — Skin; dermal
    • G06V 2201/03 — Recognition of patterns in medical or anatomical images


Abstract

The invention relates to an online auxiliary diagnosis and treatment system based on skin images, comprising a data acquisition module, a human body part acquisition module, a user attribute determination module, a labeling module, an analysis module and a result determination module. The data acquisition module acquires skin damage data; the human body part acquisition module acquires the human body part corresponding to the skin damage data; the user attribute determination module determines the user attribute corresponding to the skin damage data; the labeling module labels the human body part and the user attribute onto the skin damage data as label content; the analysis module analyzes the labeled skin damage data; and the result determination module obtains an auxiliary inquiry result based on the analysis result. The system thus labels the skin damage data, analyzes the labeled data and derives an auxiliary inquiry result from the analysis, providing an auxiliary implementation scheme for telemedicine.

Description

Online auxiliary diagnosis and treatment system based on skin images
Technical Field
The invention relates to the technical field of telemedicine, in particular to an online auxiliary diagnosis and treatment system based on skin images.
Background
With the pace of human progress, the living environment on which people depend is constantly changing. Increasingly serious air pollution keeps driving up the incidence of skin diseases, and their pathogenic factors keep evolving. The WHO has announced that skin disease is among the diseases with the highest incidence, highest disability rate and strongest infectivity facing humanity in the 21st century. Dermatosis is a common and frequently occurring disease in medicine, characterized by a wide disease spectrum, many disease types and long treatment times. According to 2019 statistics of China's National Health Commission, the national population exceeds 1.3 billion and dermatology departments receive about 124 million outpatient visits per year; on average each doctor handles 3,300 outpatient visits, yet a doctor can treat at most 50 patients per day. With limited medical resources and a large population, the incidence of skin diseases rises rapidly year by year, and the management of chronic skin diseases faces challenges: hospitals cannot effectively provide chronic disease treatment for patients with skin disorders.
Disclosure of Invention
Technical problem to be solved
In view of the above-mentioned drawbacks and deficiencies of the prior art, the present invention provides an online diagnosis and treatment assisting system based on skin images.
(II) technical scheme
To achieve the above purpose, the invention mainly adopts the following technical scheme:
an online diagnosis and treatment assisting system based on skin images, the system comprising: the system comprises a data acquisition module, a human body part acquisition module, a user attribute determination module, a labeling module, an analysis module and a result determination module;
the data acquisition module is used for acquiring skin damage data;
the human body part acquisition module is used for acquiring a human body part corresponding to the skin lesion data;
the user attribute determining module is used for determining the user attribute corresponding to the skin damage data;
the marking module is used for marking the human body part and the user attribute as marking content on the skin damage data;
the analysis module is used for analyzing the marked skin damage data;
the result determining module is used for obtaining an auxiliary inquiry result based on the analysis result;
wherein, the human body part is one of the following parts: head and neck type parts, trunk type parts, upper limb type parts and lower limb type parts;
the head and neck part is specifically one of the following parts: frontal plane, anterior cervical, parietal, posterior cervical, right ear, left ear;
the trunk part is specifically one of the following parts: thorax, abdomen, back, perineum;
the upper limb part is specifically one of the following parts: the left arm is arranged at the left side of the left arm;
the lower limb part is specifically one of the following parts: front thigh, back thigh, hip, front outer leg, back leg, instep, and sole.
(III) advantageous effects
The system labels the skin damage data, analyzes the labeled skin damage data, and obtains an auxiliary inquiry result based on the analysis result, thereby providing an auxiliary implementation scheme for telemedicine.
Drawings
Fig. 1 is a schematic structural diagram of an online auxiliary diagnosis and treatment system based on skin images according to an embodiment of the present invention;
fig. 2 is a schematic view of a head and neck region according to an embodiment of the present invention;
FIG. 3 is a schematic view of a torso-like portion according to an embodiment of the present invention;
fig. 4 is a schematic diagram of an upper limb part according to an embodiment of the present invention;
fig. 5 is a schematic view of a lower limb portion according to an embodiment of the present invention;
FIG. 6 is a schematic illustration of a label according to an embodiment of the present invention;
FIG. 7 is a diagram illustrating a first scoring result according to an embodiment of the present invention;
FIG. 8 is a diagram illustrating a second scoring result according to an embodiment of the present invention;
FIG. 9 is a diagram illustrating a first trend result according to an embodiment of the present invention;
FIG. 10 is a diagram illustrating a second trend result according to an embodiment of the present invention;
FIG. 11 is a diagram illustrating a first report result according to an embodiment of the present invention;
FIG. 12 is a diagram illustrating a second report result according to an embodiment of the present invention;
fig. 13 is a schematic diagram illustrating an execution result of the system according to an embodiment of the present invention.
Detailed Description
For the purpose of better explaining the present invention and to facilitate understanding, the present invention will be described in detail by way of specific embodiments with reference to the accompanying drawings.
Against the background described above, the invention provides an online auxiliary diagnosis and treatment system based on skin images, which: acquires skin damage data; labels the skin damage data; analyzes the labeled skin damage data; and obtains an auxiliary inquiry result based on the analysis result.
Referring to fig. 1, the online diagnosis and treatment assisting system based on skin images according to the present embodiment includes: the system comprises a data acquisition module 101, a human body part acquisition module 102, a user attribute determination module 103, a labeling module 104, an analysis module 105 and a result determination module 106.
And the data acquisition module 101 is used for acquiring the skin damage data.
The human body part obtaining module 102 is configured to obtain a human body part corresponding to the skin lesion data.
And the user attribute determining module 103 is used for determining the user attribute corresponding to the skin damage data.
And the marking module 104 is used for marking the human body part and the user attribute as marking content on the skin damage data.
And the analysis module 105 is used for analyzing the marked skin damage data.
And the result determination module 106 is used for obtaining an auxiliary inquiry result based on the analysis result.
Wherein, the human body part is one of the following parts: head and neck parts, trunk parts, upper limb parts and lower limb parts.
The head and neck part is specifically one of the following parts: front of the face, front of the neck, top of the head, back of the neck, right ear, left ear.
The trunk part is specifically one of the following parts: thorax-abdomen, back, perineum.
The upper limb part is specifically one of the following parts: front of the right arm, front of the left arm, back of the right arm, back of the left arm, palms of both hands, backs of both hands.
The lower limb part is specifically one of the following parts: front of the thigh, back of the thigh, hip, front-outer side of the lower leg, back of the lower leg, instep, sole.
Specifically, the execution of the on-line diagnosis and treatment support system based on skin images shown in fig. 1 will be specifically described as follows.
S101, the data acquisition module acquires skin damage data.
In this step, skin lesion data of the patient is acquired by the following method:
s101-1, acquiring an image of the skin lesion part through the image acquisition equipment, and acquiring the distance between the image acquisition equipment and the skin lesion part and the current focal length of the image acquisition equipment.
The image acquisition equipment can be the camera of the user's mobile phone or a special skin damage data acquisition device; in either case, the equipment needs to carry a distance sensor for detecting the distance between the image acquisition equipment and the skin lesion site.
The current focal length of the image acquisition device can be obtained by acquiring the output parameters of the image acquisition device. And, the current focal length unit is mm (millimeters).
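By way of illustration only, if the captured picture carries EXIF metadata, the focal length can be read with the Pillow library in Python; the file name here is an assumption:

from PIL import Image

# EXIF tag 37386 is FocalLength, expressed in millimeters
exif = Image.open('lesion.jpg').getexif()
focal_mm = float(exif.get(37386, 0))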
S101-2, determining the shooting angle of the image acquisition equipment according to the current focal length.
The implementation process of the step is as follows:
S101-2-1, in a preset standard focal length set, determining the first element, i.e. the element with the smallest difference from the current focal length, and the second element, i.e. the element with the second-smallest difference.
The elements of the standard focal length set are 15 mm, 17 mm, 20 mm, 24 mm, 28 mm, 35 mm, 55 mm and 58 mm.
For example, if the current focal length is 15 mm, then within the preset standard focal length set the element with the smallest difference is 15 mm (difference 0) and the element with the second-smallest difference is 17 mm (difference 2 mm); the first element is therefore 15 mm and the second element 17 mm.
S101-2-2, determining a first shooting angle corresponding to the first element and a second shooting angle corresponding to the second element according to the corresponding relation between the preset focal length and the shooting angle.
The preset correspondence between focal length and shooting angle is as follows:
at a focal length of 15 mm, the shooting angle is 111 degrees;
at 17 mm, 104 degrees;
at 20 mm, 94 degrees;
at 24 mm, 84 degrees;
at 28 mm, 75 degrees;
at 35 mm, 64 degrees;
at 55 mm, 43 degrees;
at 58 mm, 41 degrees.
Continuing the above example, the first element is 15 mm with a first shooting angle of 111 degrees, and the second element is 17 mm with a second shooting angle of 104 degrees.
And S101-2-3, obtaining a shooting angle according to the difference between the current focal length and the first element, the difference between the current focal length and the second element, the first shooting angle and the second shooting angle.
In particular, the method comprises the following steps of,
1) If the absolute value of the difference between the current focal length and the first element is 0, the shooting angle = the first shooting angle.
Continuing the above example, the first element is 15 mm and the absolute value of the difference between the current focal length and the first element is 0; therefore the first shooting angle, 111 degrees, is the shooting angle.
2) If the absolute value of the difference between the current focal length and the first element is not 0, then shooting angle = first shooting angle − a × [ |current focal length − first element| × |(second shooting angle − first shooting angle) / (second element − first element)| + b ].
Here a is an adjustment coefficient: a = 1 if the difference between the current focal length and the first element is positive, and a = −1 if it is negative. b is an error adjustment coefficient obtained from the difference between the current focal length and the first element and the difference between the current focal length and the second element.
For example, b can be determined as follows:
if the difference between the current focal length and the first element and the difference between the current focal length and the second element are both positive or both negative, b = (current focal length − first element) / (current focal length − second element);
if one of the two differences is positive and the other negative, b = |(current focal length − first element) / (first element − second element)|.
If the absolute value of the difference between the current focal length and the first element is 0, it can be considered that an accurate photographing angle has been defined in advance, and thus the first photographing angle is determined as the photographing angle. If the absolute value of the difference between the current focal length and the first element is not 0, it indicates that the accurate shooting angle is not predefined, and the final shooting angle can only be determined by the defined shooting angle.
In this way, the current shooting angle of the image acquisition device is determined, and the size of the current shooting object can be known based on the shooting angle.
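By way of illustration only, the angle determination of S101-2 can be sketched in Python as follows; the function and variable names are assumptions rather than part of the invention:

# Sketch of S101-2: estimate the shooting angle from the current focal length.
STD_FOCALS = [15, 17, 20, 24, 28, 35, 55, 58]    # preset standard focal lengths, mm
STD_ANGLES = [111, 104, 94, 84, 75, 64, 43, 41]  # corresponding shooting angles, degrees

def shooting_angle(focal_mm: float) -> float:
    # first element: smallest |difference|; second element: second-smallest
    by_diff = sorted(STD_FOCALS, key=lambda f: abs(f - focal_mm))
    f1, f2 = by_diff[0], by_diff[1]
    a1 = STD_ANGLES[STD_FOCALS.index(f1)]
    a2 = STD_ANGLES[STD_FOCALS.index(f2)]
    d1, d2 = focal_mm - f1, focal_mm - f2
    if d1 == 0:
        return a1                    # exact match: angle is predefined
    a = 1 if d1 > 0 else -1          # adjustment coefficient
    if (d1 > 0) == (d2 > 0):         # error adjustment coefficient b
        b = d1 / d2
    else:
        b = abs(d1 / (f1 - f2))
    slope = abs((a2 - a1) / (f2 - f1))
    return a1 - a * (abs(d1) * slope + b)

print(shooting_angle(15))  # 111.0
print(shooting_angle(18))  # interpolated between the 17 mm and 20 mm entries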
And S101-3, determining the length and width of the skin damage part according to the shooting angle and the distance.
The implementation process of the step is as follows:
s101-3-1, determining the length of the damaged part in the skin damage part image, and taking the length as an initial length value. The width of the lesion in the lesion image is determined and used as an initial width value.
S101-3-2, halving the shooting angle to obtain a half shooting angle.
S101-3-3, multiplying the tangent of the half shooting angle by the distance to obtain a comparison value.
The distance here is the distance between the image acquisition equipment and the skin lesion site acquired in S101-1, and the tangent is the tangent function (tan) of the angle.
And S101-3-4, determining the length and the width of the skin damage part according to the comparison value, the length of the image of the skin damage part, the width of the image of the skin damage part, the initial length value and the initial width value.
There are many ways to implement S101-3-4, for example: length of the lesion site = 2 × comparison value × initial length value / length of the lesion site image; width of the lesion site = 2 × comparison value × initial width value / width of the lesion site image.
For another example: determine r = length of the lesion site image / width of the lesion site image. If r = 1, the length and the width are computed as above. If r > 1, the length is computed as above while width = 2 × comparison value × initial width value × r × b / width of the lesion site image. If r < 1, length = 2 × comparison value × initial length value × r × b / length of the lesion site image while the width is computed as above.
And S101-4, taking the length, the width, the image of the skin damage part and the user identification of the skin damage part as skin damage data.
When different patients photograph the same lesion site, differences in angle and in the distance between the image acquisition equipment and the lesion make the apparent size of the lesion in the resulting images (i.e., the pictures) vary considerably, so comparing and analyzing the raw lesion images alone is not meaningful. To make images acquired by different patients comparable, steps S101-2 and S101-3 normalize the length and width of the lesion site, so that subsequent lesion images can be compared. Therefore, in S101-4 the length, the width, the image of the lesion site and the identifier of the user to whom the lesion belongs are all used as the skin damage data.
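By way of illustration only, the normalization of S101-3 (first variant of S101-3-4) can be sketched in Python; all names here are assumptions:

import math

def lesion_real_size(distance_mm, shooting_angle_deg,
                     img_len_px, img_wid_px, lesion_len_px, lesion_wid_px):
    # S101-3-2: half shooting angle; S101-3-3: comparison value
    half_angle = shooting_angle_deg / 2.0
    compare = distance_mm * math.tan(math.radians(half_angle))
    # S101-3-4 (first variant): scale the pixel measurements to real lengths
    real_len = 2 * compare * lesion_len_px / img_len_px
    real_wid = 2 * compare * lesion_wid_px / img_wid_px
    return real_len, real_wid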
S102, the human body part acquisition module acquires the human body part corresponding to the skin lesion data.
The human body part includes, but is not limited to, one of the following: head and neck parts, trunk parts, upper limb parts and lower limb parts.
The head and neck part is specifically one of the following parts: front of the face, front of the neck, top of the head, back of the neck, right ear, left ear, as shown in fig. 2.
The trunk part is specifically one of the following parts: thorax-abdomen, back, perineum, as shown in fig. 3.
The upper limb part is specifically one of the following parts: front of the right arm, front of the left arm, back of the right arm, back of the left arm, palms of both hands, backs of both hands, as shown in fig. 4.
The lower limb part is specifically one of the following parts: front of the thigh, back of the thigh, hip, front-outer side of the lower leg, back of the lower leg, instep, sole, as shown in fig. 5.
S103, the user attribute determining module determines the user attribute corresponding to the skin damage data.
The implementation process of the step is as follows:
s103-1, obtaining the identification of the user to which the skin damage data belongs and historical skin damage data.
The identifier of the user to which the skin damage data belongs may be an ID of the patient corresponding to the skin damage data, such as an identification number of the patient, or a user name of the patient, or a medical insurance card number of the patient, or an ID number of the patient in a corresponding hospital (the corresponding hospital here may be set in a user setting, or may be a hospital at the time of first inquiry, and is not limited here), and the like.
In addition, not all the historical skin damage data are acquired in this step, but only those satisfying a preset relationship. The preset relationship is: for any acquired historical skin damage datum D1, (i) the human body part to which D1 belongs is the same as the human body part acquired by the human body part acquisition module in S102; (ii) 0.8 < (skin damage area in D1) / (skin damage area in the skin damage data) < 1.2; and (iii) 0.8 < (maximum gray value of all lesion pixels in D1 − minimum gray value of all lesion pixels in D1) / (maximum gray value of all lesion pixels in the skin damage data − minimum gray value of all lesion pixels in the skin damage data) < 1.2.
The damage data here is the damage data acquired by the data acquisition module in S101.
That is, a historical skin damage datum D2 is acquired in this step if its labeled human body part is the same as the part obtained by the human body part acquisition module in S102 (for example, both are the right foot), the ratio of its skin damage area to the skin damage area in the current skin damage data lies strictly between 0.8 and 1.2, and the ratio of (maximum gray value of its lesion pixels − minimum gray value of its lesion pixels) to (maximum gray value of the lesion pixels in the current skin damage data − minimum gray value of the lesion pixels in the current skin damage data) also lies strictly between 0.8 and 1.2.
Through this preset relationship, historical skin damage data whose lesion area and pixel values are similar to those of the skin damage data obtained by the data acquisition module in S101 are selected.
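By way of illustration only, the selection of historical data can be sketched in Python; the record layout (dict keys) is an assumption:

def matches_preset_relation(hist, current):
    # 'part': body part; 'area': skin damage area;
    # 'gray_max'/'gray_min': max/min gray value over the lesion pixels
    if hist['part'] != current['part']:
        return False
    area_ratio = hist['area'] / current['area']
    range_ratio = ((hist['gray_max'] - hist['gray_min']) /
                   (current['gray_max'] - current['gray_min']))
    return 0.8 < area_ratio < 1.2 and 0.8 < range_ratio < 1.2

history = [h for h in all_history if matches_preset_relation(h, current)]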
And S103-2, obtaining the weight of the human body part obtained by the human body part obtaining module in S102 according to the historical skin damage data.
The weights in this step are used to characterize the likelihood of skin disorders at the body site.
Specifically, one implementation manner of this step is:
1.1 determining the skin damage area in each historical skin damage data and the gray value of each pixel point related to the skin damage.
1.2 determine the weight from all skin lesion areas and all gray values.
Since each historical skin damage datum contains the length, width and lesion site image of the lesion, the skin damage area can be obtained from the length and width (e.g., length × width, or 3.1415926 × length × width / 4 for an elliptical approximation), and the gray value of each pixel point can be read from the lesion site image.
At this time, the implementation of 1.2, determining the weight from all the skin damage areas and all the gray values, is as follows: 1) calculate the gray index of each historical skin damage datum. 2) Sort all the historical skin damage data from earliest to latest acquisition time to obtain a sorted sequence. 3) Starting from the first historical datum of the sorted sequence, calculate the area difference and the gray index difference between each historical datum and the one sorted immediately after it. 4) Determine the weight from the area differences and the gray index differences, for example as: weight = (maximum area difference / |mean of all area differences − skin damage area of the last historical datum in the sorted sequence|) × (mean of the skin damage areas in all historical skin damage data / skin damage area in the skin damage data) × (|gray index of the skin damage data − gray index of the last historical datum| / mean of the gray index differences).
The gray index of any skin damage datum = the mean of the gray values of the lesion pixels in that datum; alternatively, the gray index = E × (ave − min) / (max − ave).
Any piece of skin damage data can be historical skin damage data, and can also be skin damage data obtained by the data acquisition module in the S101.
Here E is the standard deviation of the gray values of the lesion pixels in the datum, ave is their mean, min is their minimum and max is their maximum.
Taking the historical data as D2, D3, D4 and D5, and taking gray index = mean of the gray values of the lesion pixels as an example: the lesion area in D2 (denoted S21) and the gray value of each pixel are calculated, and the mean gray value of the pixels in D2 is taken as the gray index of D2 (denoted G21). Similarly, the lesion area in D3 (S31) is calculated and the mean gray value in D3 is taken as its gray index (G31); the lesion area in D4 (S41) is calculated and the mean gray value in D4 is taken as its gray index (G41); and the lesion area in D5 (S51) is calculated and the mean gray value in D5 is taken as its gray index (G51).
Arranging the D2, D3, D4 and D5 in the acquisition order, wherein the arrangement order is as follows: d5, D3, D2 and D4. Then the area differences S51-S31 between D5 and D3 (for convenience of description, the values of S51-S31 are denoted as DS 531), the area differences S31-S21 between D3 and D2 (for convenience of description, the values of S31-S21 are denoted as DS 321), and the area differences S21-S41 between D2 and D4 (for convenience of description, the values of S21-S41 are denoted as DS 241) are calculated. Gray scale index differences G51 to G31 between D5 and D3 (for convenience of description, values of G51 to G31 are denoted as DG 531), gray scale index differences G31 to G21 between D3 and D2 (for convenience of description, values of G31 to G21 are denoted as DG 321), and gray scale index differences G21 to G41 between D2 and D4 (values of G21 to G41 are denoted as DG241 for convenience of description) are calculated.
If the maximum value among DS531, DS321 and DS241 is DS321, the minimum is DS241 and the mean is (DS531 + DS321 + DS241)/3 (denoted ES1), and if the maximum among DG531, DG321 and DG241 is DG531, the minimum is DG321 and the mean is (DG531 + DG321 + DG241)/3 (denoted EG1), then weight = (DS321 / |ES1 − S41|) × (mean of S21, S31, S41, S51 / skin damage area in the skin damage data) × (|gray index of the skin damage data obtained by the data acquisition module in S101 − G51| / EG1).
Here, 1) |mean of all the area differences − skin damage area of the last historical datum in the sorted sequence| describes the gap between the most recent historical skin damage datum and the average variation; 2) mean of the skin damage areas in all historical skin damage data / skin damage area in the skin damage data describes the relationship between the historical lesion areas and the lesion area of the skin damage data obtained by the data acquisition module in S101; and 3) |gray index of the skin damage data − gray index of the most recent historical datum| / mean of the gray index differences describes how far the current gray index deviates from the most recent historical one relative to the average gray index difference in the history. These three indexes reflect the similarity between the skin damage data obtained in S101 and the historical skin damage data; combined with the final confirmed conditions of the historical data, they indicate how likely the current skin damage data are to represent disease, so the weight can be used to characterize the likelihood of a skin disease.
In addition, since only historical data of the same human body part are selected, the weight can be used to characterize the likelihood that this human body part has a skin disease.
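By way of illustration only, the gray index and the weight of steps 1)-4) above can be sketched in Python; the record layout ('area', 'pixels') is an assumption:

import statistics

def gray_index(pixels):
    # first definition above: mean gray value over the lesion pixels
    return statistics.mean(pixels)

def part_weight(history, current):
    # history: records sorted from earliest to latest acquisition time
    areas = [h['area'] for h in history]
    gidx = [gray_index(h['pixels']) for h in history]
    area_diff = [areas[i] - areas[i + 1] for i in range(len(areas) - 1)]
    gray_diff = [gidx[i] - gidx[i + 1] for i in range(len(gidx) - 1)]
    term1 = max(area_diff) / abs(statistics.mean(area_diff) - areas[-1])
    term2 = statistics.mean(areas) / current['area']
    term3 = (abs(gray_index(current['pixels']) - gidx[-1])
             / statistics.mean(gray_diff))
    return term1 * term2 * term3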
Besides the above implementation manner, the step can be implemented in the following manner, and another implementation manner of the step is as follows:
2.1 determining the skin damage area in each historical skin damage datum, the gray value of each lesion pixel, and the confirmed skin disease diagnosis conclusion.
The skin disease diagnosis conclusion is either "no skin disease" or the name of a skin disease.
2.2 classifying the historical skin damage data corresponding to the same skin disease diagnosis conclusion into one category.
2.3 determining the weight corresponding to each class from all the skin damage areas and all the gray values, to form a weight vector.
Each element of the vector is a correspondence between the skin disease diagnosis conclusion of a class and the weight of that class.
2.4, the weight vector is used as the weight of the human body part acquired by the human body part acquisition module in S102.
The process of determining the weight corresponding to each class from all the skin damage areas and all the gray values in 2.3 is as follows. For each class: 1) calculate the gray index of each historical skin damage datum in the class. 2) Sort all the historical skin damage data from earliest to latest acquisition time to obtain a sorted sequence. 3) Starting from the first historical datum of the sorted sequence, calculate the area difference and the gray index difference between each historical datum and the one sorted immediately after it. 4) Determine the weight corresponding to the class as: (number of historical skin damage data in the class / total number of historical skin damage data) × (maximum area difference / |mean of all area differences − skin damage area of the last historical datum in the sorted sequence|) × (mean of the skin damage areas in all historical skin damage data / skin damage area in the skin damage data) × (|gray index of the skin damage data − gray index of the last historical datum| / mean of the gray index differences).
The gray index of any skin damage datum = the mean of the gray values of the lesion pixels in that datum; alternatively, the gray index = E × (ave − min) / (max − ave).
Any piece of skin damage data can be historical skin damage data, and can also be skin damage data obtained by the data acquisition module in the S101.
Here E is the standard deviation of the gray values of the lesion pixels in the datum, ave is their mean, min is their minimum and max is their maximum.
Taking the historical data of a certain class as D20, D30, D40 and D50, and taking gray index = mean of the gray values of the lesion pixels as an example: the lesion area in D20 (denoted S22) and the gray value of each pixel are calculated, and the mean gray value of the pixels in D20 is taken as the gray index of D20 (denoted G22). Similarly, the lesion area in D30 (S32) and its gray index (G32), the lesion area in D40 (S42) and its gray index (G42), and the lesion area in D50 (S52) and its gray index (G52) are calculated.
Arranging the D20, D30, D40 and D50 in the acquisition order, wherein the arrangement order is as follows: d50, D30, D20 and D40. Then the area differences S52-S32 between D50 and D30 (for convenience of description, the values of S52-S32 are denoted as DS 532), the area differences S32-S22 between D30 and D20 (for convenience of description, the values of S32-S22 are denoted as DS 322), and the area differences S22-S42 between D20 and D40 (for convenience of description, the values of S22-S42 are denoted as DS 242) are calculated. Gray scale index differences G52 to G32 between D50 and D30 (for convenience of description, values of G52 to G32 are denoted as DG 532), gray scale index differences G32 to G22 between D30 and D20 (for convenience of description, values of G32 to G22 are denoted as DG 322), and gray scale index differences G22 to G42 between D20 and D40 (values of G22 to G42 are denoted as DG 242) are calculated.
If the maximum value among DS532, DS322 and DS242 is DS322, the minimum is DS242 and the mean is (DS532 + DS322 + DS242)/3 (denoted ES2), and if the maximum among DG532, DG322 and DG242 is DG532, the minimum is DG322 and the mean is (DG532 + DG322 + DG242)/3 (denoted EG2), then weight = (number of historical skin damage data in this class / total number of historical skin damage data acquired in S103-1) × (DS322 / |ES2 − S42|) × (mean of S22, S32, S42, S52 / skin damage area in the skin damage data) × (|gray index of the skin damage data obtained by the data acquisition module in S101 − G52| / EG2).
Here, for any class: 1) number of historical skin damage data in the class / total number of historical skin damage data obtained in S103-1 describes the proportion of the historical data falling into this class; 2) |mean of all the area differences − skin damage area of the last historical datum in the sorted sequence| describes the gap between the most recent historical datum and the average variation; 3) mean of the skin damage areas in all historical skin damage data / skin damage area in the skin damage data relates the historical lesion areas to the lesion area of the skin damage data obtained in S101; and 4) |gray index of the skin damage data − gray index of the last historical datum| / mean of the gray index differences relates the current gray index to the most recent historical one. These indexes reflect both the similarity between the skin damage data obtained in S101 and the historical data and the prevalence of this class, so the weight can be used to characterize the likelihood of the skin disease corresponding to the class. Moreover, since only historical data of the same human body part are selected, the weight characterizes the likelihood that this human body part has the corresponding skin disease.
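By way of illustration only, the per-class weight vector of 2.1-2.4 can be sketched in Python, reusing part_weight from the previous sketch; the 'diagnosis' key is an assumption:

from collections import defaultdict

def class_weight_vector(history, current):
    groups = defaultdict(list)
    for h in history:                  # 2.2: one group per diagnosis conclusion
        groups[h['diagnosis']].append(h)
    total = len(history)
    # 2.3: map each diagnosis conclusion to its class weight,
    # scaled by the class's share of the historical data
    return {diag: (len(recs) / total) * part_weight(recs, current)
            for diag, recs in groups.items()}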
And S103-3, determining the identification and the weight as the user attribute.
And S104, the marking module marks the human body part and the user attribute as marking contents on the skin damage data.
For example, if the human body part is the face, the face, the user ID and the weight are labeled on the skin damage data as the label content.
Besides the human body part and the user attribute, the label content may also include other attributes, for example, information labeled in the following three dimensions:
the three dimensions are: picture type, location characteristics, skin damage characteristics.
The picture type index distinguishes clinical general pictures, dermoscopy pictures, pathological pictures, ultrasonic pictures and the like. The site characteristics and skin lesion characteristics differ by disease: for example, psoriasis uses 20 body site divisions as site characteristics and 4 skin lesion characteristics (area / erythema / infiltration / scaling), while atopic dermatitis uses 19 body site divisions and 6 skin lesion characteristics (erythema / papules or edema / exudation or crusting / epidermal exfoliation / lichenification / dry skin).
For example, the psoriasis skin lesion characteristic index is as follows:
area of rash:
1:1~10%
2:10~29%
3:30~49%
4:50~69%
5:70~89%
6:90~100%
Degree of erythema:
0: none (no visible erythema)
1: mild (light red)
2: moderate (red)
3: severe (dark red)
4: very severe (very dark red)
Degree of scale:
0: none (no visible scale on the surface)
1: mild (part of the lesion surface covered with scales, mainly fine scales)
2: moderate (most of the lesion surface completely or incompletely covered with flaky scales)
3: severe (almost the entire lesion surface covered with a thick layer of scales)
4: very severe (the entire lesion surface covered with very thick, layered scales)
Degree of infiltration:
0: none (lesion level with the normal skin)
1: mild (lesion slightly raised above the normal skin surface)
2: moderate (moderately raised, plaque edge rounded or slope-shaped)
3: severe (lesion markedly thickened and raised)
4: very severe (lesion very thick and prominently raised)
For another example, the skin damage characteristic index of atopic dermatitis is as follows:
area of skin damage:
0: without skin damage
1:1-10%
2:11-20%
3:21-30%
4:31-40%
5:41-50%
6:51-60%
7:61-70%
8:71-80%
9:81-90%
10:91-100%
Erythema:
0: no erythema
1: mild erythema, lighter color
2: patchy erythema, darker color
3: extensive erythema, bright red or purplish red, with raised skin temperature
Papules or edema:
0: no papules or edema
1: a few papules or mild edema
2: more papules or moderate edema
3: pronounced papules or marked edema
Oozing or crusting:
0: no exudation or scabbing
1: slight exudation or slight incrustation
2: obvious exudation or a certain degree of crusting
3: marked exudation or large areas of yellow and black crusts
Epidermal exfoliation:
0: no peeling of epidermis
1: slight exfoliation of the epidermis
2: moderate exfoliation of epidermis
3: severe exfoliation of epidermis
Lichenification:
0: no lichenification of the lesion
1: slightly hypertrophic lesion
2: obvious thickening of the lesion, proliferated skin lines, slightly raised ridges
3: marked thickening, markedly proliferated skin lines, raised ridges
Skin dryness:
0: skin moist
1: slightly dry skin
2: dry skin with mild desquamation
3: markedly dry skin with desquamation
The three dimensions are labeled as shown in fig. 6.
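By way of illustration only, the label content attached to one piece of skin damage data could be organized as follows; every field name here is an assumption:

label = {
    "body_part": "front of right arm",            # from S102
    "user_id": "patient-00042",                   # identifier from S103-1
    "weight": 0.87,                               # from S103-2
    # the three additional labeling dimensions described above
    "picture_type": "clinical general picture",
    "site_feature": "right arm (20-part psoriasis division)",
    "lesion_features": {"area": 2, "erythema": 3, "infiltration": 1, "scale": 2},
}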
Labeling can be performed by a trained labeling model. The training process of the model is as follows:
Data conversion is performed on the labeled content to obtain the historical training data in the formats required for model training (training_data.csv, training_data.xml); the labeled content is needed for training both picture disease classification and skin lesion feature detection, and is then used as a parameter for model training and tuning. For example, training a psoriasis feature model generally comprises the following steps:
1. Training data: convert the existing psoriasis classification data set (with its corresponding labeled content) from xls format into csv/txt format as required. Then randomly shuffle the picture order, take 90% of the pictures as training data (train.txt) and use the rest as validation/test data (val/test.txt).
As done by:
import random

# img_num: total number of pictures; img_con['0']: the column of picture URLs
all_index = list(range(img_num))
random.shuffle(all_index)
train_index = set(all_index[:int(img_num * 0.9)])  # 90% of the shuffled indices
for i, img_url in zip(all_index, img_con['0']):
    if i in train_index:
        setTrainSet(i)  # helper assumed to register picture i as training data
    else:
        setTestSet(i)   # helper assumed to register picture i as test data
2. Pre-training model: select an existing common classification model such as the resnet series, together with its pre-training weights on ImageNet, to obtain a baseline model structure and initial training weights.
As done by:
import torchvision
from torch.hub import load_state_dict_from_url

# download the ImageNet pre-trained weights and load them into resnet18
state_dict = load_state_dict_from_url('https://download.pytorch.org/models/resnet18-5c106cde.pth', model_dir='.')
model = torchvision.models.resnet18()
model.load_state_dict(state_dict)
3. Model ablation and tuning experiments: adjust the model input size, the optimizer, the learning-rate decay schedule, the picture augmentation methods (mirroring, cropping, color gamut and the like), different loss calculation methods, and so on.
As done by:
import torch
import torch.nn.functional as F
from torchvision import transforms

# resize the input, set the optimizer, normalize, and weight the loss per class
img = F.interpolate(img.unsqueeze(0), size=(416, 416), mode="nearest").squeeze(0)
optimizer = torch.optim.Adam(model.parameters(), lr=lr)
img = transforms.Normalize(mean=[0.502236, 0.50259746, 0.5104974], std=[0.22640832, 0.18960338, 0.20399043])(img)
weight = torch.as_tensor([1., 1., 1.]).cuda()  # replace the 1s with 1/(picture count) per class if the classes are imbalanced
loss_func = torch.nn.CrossEntropyLoss(weight)
4. Model output: record the performance of the model from step 3 on the test set and select the best-performing model.
# P and R (precision and recall) of each class
for j in range(class_num):
    tp_j[j] += (pred[pred == batch_y] == j).sum()
    target_j[j] += (batch_y == j).sum()
    pred_j[j] += (pred == j).sum()
    print('P', tp_j[j] / (pred_j[j] + 1e-4), 'R', tp_j[j] / (target_j[j] + 1e-4))
tp_sum = 0
for t in tp_j:
    tp_sum += t
Recall = tp_sum / (class_num * len(test_loader) * batch_x.shape[0])
if Recall > best_Recall:
    # save the model with the best recall on the test set so far
    best_Recall = Recall
5. Model deployment: convert the model obtained in step 4. The first of three common routes is PyTorch -> ONNX (independent of OpenCV and the GPU):
import torch
import torchvision

model = torchvision.models.resnet18()  # load the trained weights in practice
x = torch.randn((batch_size, 3, net_h, net_w))
onnx_file_name = "piyan_{}_{}_{}_static.onnx".format(batch_size, net_h, net_w)
dynamic_axes = None  # static shapes; set e.g. {'input': {0: 'batch'}} for a dynamic batch
torch.onnx.export(model,
                  x,
                  onnx_file_name,
                  export_params=True,
                  opset_version=12,
                  do_constant_folding=True,
                  input_names=["input"],
                  output_names=["output"],
                  dynamic_axes=dynamic_axes)
The exported ONNX model can then be built into a TensorRT engine:
import tensorrt as trt

EXPLICIT_BATCH = 1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
with trt.Builder(TRT_LOGGER) as builder, builder.create_network(
        EXPLICIT_BATCH) as network, trt.OnnxParser(network, TRT_LOGGER) as parser:
    with open(onnx_file_name, 'rb') as f:
        parser.parse(f.read())  # parse the exported ONNX model
    config = builder.create_builder_config()
    builder.max_batch_size = 1
    config.max_workspace_size = 1 << 30
    if trt_type == '_FP16':
        config.set_flag(trt.BuilderFlag.FP16)
    if trt_type == '_INT8':
        config.set_flag(trt.BuilderFlag.INT8)
        # YOLOEntropyCalibrator: user-provided INT8 calibrator fed with sample pictures
        config.int8_calibrator = YOLOEntropyCalibrator(
            '/home/cmv/PycharmProjects/YOLOv4-PyTorch/data/wenyi/test',
            (cfg.h, cfg.w), 'calib_yolov4.bin')
    print('Parsing done; building TensorRT engine {}, this may take a while...'.format(engine_file_path))
    engine = builder.build_engine(network, config)
    with open(engine_file_path, "wb") as t:
        t.write(engine.serialize())
    print("TensorRT engine construction complete")
The final deployment model is obtained according to the specific deployment environment: PyTorch -> ONNX -> TensorRT (Nvidia GPU), or PyTorch -> ONNX -> OpenVINO (Intel CPU).
And S105, analyzing the marked skin damage data by an analysis module.
For example, the analysis module analyzes the labeled skin damage data through a convolutional neural network, determines the association between the skin damage data and each skin disease, finds the most similar skin diseases, and outputs them as the final analysis result.
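By way of illustration only (the embodiment does not fix a particular network), the analysis could run the trained classification model on a labeled image and rank the diseases by probability; function and variable names are assumptions, and the preprocessing reuses the input size and normalization constants from the training steps above:

import torch
import torchvision.transforms as T
from PIL import Image

def analyze(model, image_path, class_names, device="cpu"):
    tf = T.Compose([
        T.Resize((416, 416)),
        T.ToTensor(),
        T.Normalize(mean=[0.502236, 0.50259746, 0.5104974],
                    std=[0.22640832, 0.18960338, 0.20399043]),
    ])
    x = tf(Image.open(image_path).convert("RGB")).unsqueeze(0).to(device)
    model.eval()
    with torch.no_grad():
        probs = torch.softmax(model(x), dim=1)[0]
    top = torch.topk(probs, k=3)  # the three most similar diseases
    return [(class_names[int(i)], float(p))
            for p, i in zip(top.values, top.indices)]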
And S106, the result determining module obtains an auxiliary inquiry result based on the analysis result.
The auxiliary inquiry results can take various forms, such as the scoring results shown in figs. 7 and 8, the trend results shown in figs. 9 and 10, or the report results shown in figs. 11 and 12.
In specific implementation, the scheme provided by the embodiment can be used for analysis and diagnosis through a trained model.
After the human body part acquisition module, the user attribute determination module, the labeling module, the analysis module and the result determination module complete S102-S106 with the trained model, the three most probable diseases are obtained, as shown in FIG. 13 (here erythema nodosum, verruca plana and urticaria); the user can swipe left and right to view detailed information on each disease, including its characteristics and diagnosis and treatment cases.
The system provided by the embodiment labels the skin damage data, analyzes the labeled skin damage data, obtains an auxiliary inquiry result based on the analysis result, and provides an auxiliary implementation scheme for remote medical treatment.
In order to better understand the above technical solutions, exemplary embodiments of the present invention will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the invention are shown in the drawings, it should be understood that the invention can be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the invention to those skilled in the art.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions.
It should be noted that in the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In claims enumerating several means, several of these means may be embodied by one and the same item of hardware. The use of the terms first, second, third and the like is for convenience only and does not denote any order; these words are to be understood as part of the name of the component.
Furthermore, it should be noted that in the description of the present specification, the description of the term "one embodiment", "some embodiments", "examples", "specific examples" or "some examples", etc., means that a specific feature, structure, material or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present invention. In this specification, the schematic representations of the terms used above are not necessarily intended to refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
While preferred embodiments of the present invention have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, the claims should be construed to include preferred embodiments and all changes and modifications that fall within the scope of the invention.
It will be apparent to those skilled in the art that various modifications and variations can be made in the present invention without departing from the spirit or scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention should also include such modifications and variations.

Claims (8)

1. An online auxiliary diagnosis and treatment system based on skin images, characterized by comprising: a data acquisition module, a human body part acquisition module, a user attribute determination module, a labeling module, an analysis module, and a result determination module;
the data acquisition module is used for acquiring skin damage data;
the human body part acquisition module is used for acquiring the human body part corresponding to the skin damage data;
the user attribute determining module is used for determining the user attribute corresponding to the skin damage data;
the labeling module is used for labeling the skin damage data with the human body part and the user attribute as labeling content;
the analysis module is used for analyzing the marked skin damage data;
the result determining module is used for obtaining an auxiliary inquiry result based on the analysis result;
wherein the human body part belongs to one of the following types: head and neck parts, trunk parts, upper limb parts, and lower limb parts;
the head and neck part is specifically one of the following parts: frontal region, front of the neck, top of the head, back of the neck, right ear, left ear;
the trunk part is specifically one of the following parts: thorax, abdomen, back, perineum;
the upper limb part is specifically one of the following parts: left arm, right arm, left hand, right hand;
the lower limb part is specifically one of the following parts: front of the thigh, back of the thigh, hip, front-outer side of the lower leg, back of the lower leg, instep, and sole of the foot;
the determining of the user attribute corresponding to the skin damage data specifically includes:
S103-1, acquiring an identifier of the user to whom the skin damage data belongs, and historical skin damage data;
S103-2, obtaining, according to the historical skin damage data, the weight of the human body part acquired by the human body part acquisition module, wherein the weight is used for representing the likelihood that the human body part develops a skin disease;
S103-3, determining the identifier and the weight as the user attributes;
the acquired historical skin damage data satisfy the following preset relationship: for any acquired historical skin damage data D1, the human body part to which D1 belongs is the same as the human body part acquired by the human body part acquisition module, and at the same time both of the following hold:
0.8 < (skin damage area in D1) / (skin damage area in the skin damage data) < 1.2, and
0.8 < (maximum gray value of the pixels related to the skin damage in D1 − minimum gray value of the pixels related to the skin damage in D1) / (maximum gray value of the pixels related to the skin damage in the skin damage data − minimum gray value of the pixels related to the skin damage in the skin damage data) < 1.2;
the weight is obtained according to the skin damage area in the historical skin damage data and the gray value of each pixel point related to the skin damage, or the weight is obtained according to the skin damage area in the historical skin damage data, the gray value of each pixel point related to the skin damage and a skin disease diagnosis conclusion.
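For concreteness, the screening rule in the preset relationship above can be sketched in Python as follows. The record structure and all names (LesionRecord, matches_current) are illustrative assumptions, not elements of the claims:

```python
# A minimal sketch of the claim-1 screening rule for historical data.
# LesionRecord and matches_current are hypothetical names; the claims
# do not prescribe any particular data structure.
from dataclasses import dataclass
from typing import List

@dataclass
class LesionRecord:
    body_part: str        # labeled human body part, e.g. "abdomen"
    area: float           # skin damage area measured in the image
    grays: List[int]      # gray values of the pixels related to the skin damage

def matches_current(hist: LesionRecord, cur: LesionRecord) -> bool:
    """Preset relationship: same body part, and both the area ratio and
    the gray-range ratio lie strictly between 0.8 and 1.2."""
    if hist.body_part != cur.body_part:
        return False
    area_ratio = hist.area / cur.area
    range_ratio = (max(hist.grays) - min(hist.grays)) / \
                  (max(cur.grays) - min(cur.grays))
    return 0.8 < area_ratio < 1.2 and 0.8 < range_ratio < 1.2
```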
2. The system according to claim 1, wherein S103-2 specifically comprises:
determining the skin damage area in each historical skin damage data and the gray value of each pixel point related to the skin damage;
the weight is determined according to all the skin damage areas and all the gray values.
3. The system of claim 2, wherein determining the weight according to all the skin damage areas and all the gray values comprises:
calculating the gray index of each historical skin damage data;
sorting all the historical skin damage data in chronological order of acquisition time, earliest first, to obtain a sorted sequence;
starting from the first historical skin damage data in the sorted sequence, calculating the area difference and the gray index difference between each historical skin damage data and the historical skin damage data immediately following it;
and determining the weight according to the area difference and the gray index difference.
4. The system according to claim 3, wherein determining the weight according to the area differences and the gray index differences specifically comprises:
determining the weight as:
weight = (maximum value of the area differences / |mean value of all the area differences − skin damage area of the last historical skin damage data in the sorted sequence|) × (mean value of the skin damage areas in all the historical skin damage data / skin damage area in the skin damage data) × (|gray index of the skin damage data − gray index of the last historical skin damage data| / mean value of the gray index differences).
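Read together with the gray index of claim 7, claims 3 and 4 admit a direct implementation. The following Python sketch is one possible reading; the function names (gray_index, claim4_weight) and the tuple layout are assumed for illustration:

```python
# Sketch of the claims-3/4 weight computation; gray index per claim 7:
# E * (ave - min) / (max - ave), with E the standard deviation of the
# gray values of the lesion pixels. Assumes at least two historical
# records so that consecutive differences exist.
from statistics import mean, pstdev

def gray_index(grays):
    ave = mean(grays)
    return pstdev(grays) * (ave - min(grays)) / (max(grays) - ave)

def claim4_weight(history, current):
    """history: iterable of (acquisition_time, area, grays) tuples;
    current: (area, grays) of the newly acquired skin damage data."""
    history = sorted(history, key=lambda r: r[0])   # earliest first
    areas = [a for _, a, _ in history]
    gidx = [gray_index(g) for _, _, g in history]
    # differences between each record and the one immediately after it
    d_area = [abs(areas[i + 1] - areas[i]) for i in range(len(areas) - 1)]
    d_gray = [abs(gidx[i + 1] - gidx[i]) for i in range(len(gidx) - 1)]
    cur_area, cur_grays = current
    return (max(d_area) / abs(mean(d_area) - areas[-1])) \
         * (mean(areas) / cur_area) \
         * (abs(gray_index(cur_grays) - gidx[-1]) / mean(d_gray))
```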
5. The system according to claim 1, wherein S103-2 specifically comprises:
determining the skin damage area in each historical skin damage data, the gray value of each pixel point related to the skin damage, and the skin disease diagnosis conclusion; the skin disease diagnosis conclusion is either that no skin disease exists or the name of the skin disease suffered;
classifying the historical skin damage data corresponding to the same skin disease diagnosis conclusion into one class;
determining the weight corresponding to each class according to all the skin damage areas and all the gray values to form a weight vector, wherein each element of the vector is a correspondence between the skin disease diagnosis conclusion of a class and the weight corresponding to that class;
and taking the weight vector as the weight of the human body part acquired by the human body part acquisition module.
6. The system of claim 5, wherein determining the weight corresponding to each class according to all the skin damage areas and all the gray values comprises:
for each of the classes:
calculating the gray index of each historical skin damage data in the class;
sorting all the historical skin damage data in the class in chronological order of acquisition time, earliest first, to obtain a sorted sequence;
starting from the first historical skin damage data in the sorted sequence, calculating the area difference and the gray index difference between each historical skin damage data and the historical skin damage data immediately following it;
determining the weight corresponding to the class as:
weight = (number of historical skin damage data in the class / total number of historical skin damage data) × (maximum value of the area differences / |mean value of all the area differences − skin damage area of the last historical skin damage data in the sorted sequence|) × (mean value of the skin damage areas in all the historical skin damage data / skin damage area in the skin damage data) × (|gray index of the skin damage data − gray index of the last historical skin damage data| / mean value of the gray index differences).
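Claims 5 and 6 repeat the same computation per diagnosis class and scale it by the class share. A sketch building on the claim4_weight helper above (the grouping key and tuple layout are again assumptions):

```python
# Sketch of the claims-5/6 weight vector: one weight per skin disease
# diagnosis conclusion, scaled by that class's share of the history.
# Reuses gray_index / claim4_weight from the previous sketch; assumes
# at least two records per class.
from collections import defaultdict

def class_weights(history, current):
    """history: iterable of (time, area, grays, diagnosis) tuples;
    returns {diagnosis: weight}, the claim-5 weight vector."""
    by_dx = defaultdict(list)
    for t, a, g, dx in history:
        by_dx[dx].append((t, a, g))
    total = len(history)
    return {dx: (len(recs) / total) * claim4_weight(recs, current)
            for dx, recs in by_dx.items()}
```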
7. The system according to any one of claims 3, 4, and 6, wherein the gray index of any skin damage data is E × (ave − min) / (max − ave);
E is the standard deviation of the gray values of the pixels related to the skin damage in the skin damage data, ave is the mean of those gray values, min is their minimum value, and max is their maximum value.
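As a quick sanity check of the claim-7 formula on made-up values (not from the patent): for gray values [10, 20, 30], ave = 20, min = 10, max = 30, and E = pstdev ≈ 8.165, so the gray index is 8.165 × 10 / 10 ≈ 8.165, which the gray_index helper sketched after claim 4 confirms:

```python
# Worked example of the claim-7 gray index on illustrative values,
# using the gray_index helper from the earlier sketch.
grays = [10, 20, 30]
print(gray_index(grays))   # ≈ 8.165
```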
8. The system of claim 1, wherein the analysis module is configured to analyze the labeled skin damage data via a convolutional neural network.
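Claim 8 does not fix a network architecture. One possible shape for the analysis module, fusing the labeling content (for example, a body-part id and the claim-1 weight) with image features, is sketched below in PyTorch; every architectural choice here is an assumption:

```python
# Hypothetical claim-8 analysis module: a small CNN over the skin
# damage image whose features are fused with the labeling content
# before classification. Nothing here is mandated by the patent.
import torch
import torch.nn as nn

class LesionCNN(nn.Module):
    def __init__(self, n_classes: int, n_label_features: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        # labeling content (e.g. body-part id, weight) enters as a small
        # feature vector concatenated with the pooled image features
        self.head = nn.Linear(32 + n_label_features, n_classes)

    def forward(self, image: torch.Tensor, labels: torch.Tensor):
        x = self.features(image).flatten(1)        # (batch, 32)
        return self.head(torch.cat([x, labels], dim=1))
```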
CN202110615272.2A 2021-06-02 2021-06-02 Online auxiliary diagnosis and treatment system based on skin images Active CN113053524B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110615272.2A CN113053524B (en) 2021-06-02 2021-06-02 Online auxiliary diagnosis and treatment system based on skin images


Publications (2)

Publication Number Publication Date
CN113053524A CN113053524A (en) 2021-06-29
CN113053524B (en) 2021-08-27

Family

ID=76518667

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110615272.2A Active CN113053524B (en) 2021-06-02 2021-06-02 Online auxiliary diagnosis and treatment system based on skin images

Country Status (1)

Country Link
CN (1) CN113053524B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114947756B (en) * 2022-07-29 2022-11-22 杭州咏柳科技有限公司 Atopic dermatitis severity intelligent evaluation decision-making system based on skin image


Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104490361A (en) * 2014-12-05 2015-04-08 深圳市共创百业科技开发有限公司 Remote dermatosis screening system and method based on network hospitals
GB201715447D0 (en) * 2017-09-25 2017-11-08 Deb Ip Ltd Preclinical evaluation of skin condition and evaluation
CN108648825B (en) * 2018-05-30 2019-07-12 江苏大学附属医院 A kind of leucoderma hickie appraisal procedure based on image recognition
US10878567B1 (en) * 2019-09-18 2020-12-29 Triage Technologies Inc. System to collect and identify skin conditions from images and expert knowledge

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107049263A (en) * 2017-06-14 2017-08-18 武汉理工大学 Leucoderma condition-inference and cosmetic effect evaluating method and system based on image procossing
CN110648318A (en) * 2019-09-19 2020-01-03 泰康保险集团股份有限公司 Auxiliary analysis method and device for skin diseases, electronic equipment and storage medium
CN110648751A (en) * 2019-10-30 2020-01-03 中南大学湘雅三医院 System and method for delineating possible diseases by utilizing skin CT
CN110755045A (en) * 2019-10-30 2020-02-07 湖南财政经济学院 Skin disease comprehensive data analysis and diagnosis auxiliary system and information processing method

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Deep Learning in Skin Disease Image Recognition: A Review; Ling-Fang Li et al.; IEEE Access; 2020-11-11; Vol. 8, pp. 208264-208280 *
Development and Application of a Series of AI Products for Skin Diseases Based on Skin Image Big Data; Shen Changbing et al.; China Digital Medicine (中国数字医学); 2019-03-31; Vol. 14, No. 3, pp. 22-25 *


Similar Documents

Publication Publication Date Title
US10219736B2 (en) Methods and arrangements concerning dermatology
CN111292839B (en) Image processing method, image processing device, computer equipment and storage medium
CN113011485A (en) Multi-mode multi-disease long-tail distribution ophthalmic disease classification model training method and device
US20210174505A1 (en) Method and system for imaging and analysis of anatomical features
US20140313303A1 (en) Longitudinal dermoscopic study employing smartphone-based image registration
WO2014172671A1 (en) Physiologic data acquisition and analysis
CN109948671B (en) Image classification method, device, storage medium and endoscopic imaging equipment
CN110338759B (en) Facial pain expression data acquisition method
CN113053524B (en) Online auxiliary diagnosis and treatment system based on skin images
CN114947756B (en) Atopic dermatitis severity intelligent evaluation decision-making system based on skin image
Ghaznavi Bidgoli et al. Automatic diagnosis of dental diseases using convolutional neural network and panoramic radiographic images
CN114287915A (en) Noninvasive scoliosis screening method and system based on back color image
CN112863699B (en) ESD preoperative discussion system based on mobile terminal
Silva et al. A two-phase learning approach for the segmentation of dermatological wounds
Li et al. Application of UNETR for automatic cochlear segmentation in temporal bone CTs
CN113160151A (en) Panoramic film dental caries depth identification method based on deep learning and attention mechanism
CN113257391B (en) Course of disease management system of skin disease
WO2023178972A1 (en) Intelligent medical film reading method, apparatus, and device, and storage medium
Fadzil et al. Independent component analysis for assessing therapeutic response in vitiligo skin disorder
CN113066119B (en) Analytical system of skin damage information
Luo et al. Identification of nitrogen nutrition in rice based on BP neural network optimized by genetic algorithms.
CN115222675A (en) Hysteromyoma automatic typing method and device based on deep learning
Thatcher et al. Clinical investigation of a rapid non-invasive multispectral imaging device utilizing an artificial intelligence algorithm for improved burn assessment
TW202005609A (en) Oral image analysis system and method
CN113598756B (en) Spinal health condition monitoring method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant