CN111150369A - Medical assistance apparatus, medical assistance detection apparatus and method - Google Patents


Info

Publication number
CN111150369A
Authority
CN
China
Prior art keywords
image
skin
illumination
medical assistance
light
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010001003.2A
Other languages
Chinese (zh)
Inventor
王斯凡
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
BOE Technology Group Co Ltd
Original Assignee
BOE Technology Group Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by BOE Technology Group Co Ltd filed Critical BOE Technology Group Co Ltd
Priority to CN202010001003.2A
Publication of CN111150369A
Legal status: pending

Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00: Measuring for diagnostic purposes; Identification of persons
    • A61B 5/44: Detecting, measuring or recording for evaluating the integumentary system, e.g. skin, hair or nails
    • A61B 5/441: Skin evaluation, e.g. for skin disorder diagnosis
    • A61B 5/0059: Measuring for diagnostic purposes using light, e.g. diagnosis by transillumination, diascopy, fluorescence
    • A61B 5/0077: Devices for viewing the surface of the body, e.g. camera, magnifying lens

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Medical Informatics (AREA)
  • Biophysics (AREA)
  • Pathology (AREA)
  • Engineering & Computer Science (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Physics & Mathematics (AREA)
  • Molecular Biology (AREA)
  • Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Dermatology (AREA)
  • Measuring And Recording Apparatus For Diagnosis (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

The application provides a medical assistance apparatus, a medical assistance detection apparatus, and a method. The medical assistance apparatus comprises an image acquisition unit, an ultraviolet illumination unit, and an output unit. The ultraviolet illumination unit emits ultraviolet light of a set waveband; the image acquisition unit acquires a first image under illumination by the ultraviolet illumination unit and a second image under natural-light illumination; and the output unit outputs the first image and the second image so that skin information can be identified from them. By combining the picture acquired under natural light with the picture acquired under irradiation by the ultraviolet illumination unit to identify skin information, the apparatus improves the comprehensiveness of the skin information obtained and the accuracy of identification, solving the technical problem of poor skin-information recognition in the prior art.

Description

Medical assistance apparatus, medical assistance detection apparatus and method
Technical Field
The application relates to the technical field of image recognition, in particular to medical auxiliary equipment, medical auxiliary detection equipment and a medical auxiliary detection method.
Background
In recent years, as the pace of life has quickened, many people have developed unhealthy living and eating habits; combined with problems such as environmental pollution, the prevalence of skin diseases has risen year by year. However, because skin problems do not affect daily life in their early stages, and some patients are unwilling to spend time being examined at a hospital, facial conditions are often mistaken for problems with skin-care products; treatment is delayed and the condition worsens. People therefore attach growing importance to convenient, rapid forms of diagnosis.
In the prior art, a picture of the affected area taken with the user's mobile phone is uploaded to the cloud for intelligent analysis. However, this way of obtaining information yields relatively little information, so further diagnosis based on it is poor, and the range of application is narrow.
Disclosure of Invention
The present application is directed to solving, at least to some extent, one of the technical problems in the related art.
Therefore, a first objective of the present application is to provide a medical assistance apparatus that combines the picture collected under natural light by the image acquisition unit with the picture collected under irradiation by the ultraviolet illumination unit to identify skin information, thereby improving both the comprehensiveness of the skin information obtained and the accuracy of identification.
A second object of the present application is to propose a medical assistance detection device.
A third object of the present application is to provide a medical auxiliary detection method.
A fourth object of the present application is to propose a non-transitory computer-readable storage medium.
To achieve the above object, a first aspect of the present application provides a medical assistance device, including: the device comprises an image acquisition unit, an ultraviolet illumination unit and an output unit;
the ultraviolet illumination unit is used for emitting ultraviolet light with a set waveband;
the image acquisition unit is used for acquiring a first image under the illumination of the ultraviolet illumination unit and acquiring a second image under the illumination of natural light;
the output unit is used for outputting the first image and the second image so as to identify and obtain skin information.
To achieve the above object, a second aspect of the present application provides a medical assistance detection apparatus, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the program, performs:
acquiring a first image and a second image; the first image is an image acquired under the illumination of ultraviolet light with a set waveband; the second image is an image acquired under natural light illumination;
and identifying and obtaining skin information according to the first image and the second image.
In order to achieve the above object, a third aspect of the present application provides a medical assistance detection method, including:
acquiring a first image and a second image; the first image is an image acquired under the illumination of ultraviolet light with a set waveband; the second image is an image acquired under natural light illumination;
and identifying and obtaining skin information according to the first image and the second image.
To achieve the above object, a fourth aspect of the present application provides a non-transitory computer-readable storage medium, having a computer program stored thereon, where the computer program, when executed by a processor, implements:
acquiring a first image and a second image; the first image is an image acquired under the illumination of ultraviolet light with a set waveband; the second image is an image acquired under natural light illumination;
and identifying and obtaining skin information according to the first image and the second image.
The technical scheme provided by the embodiment of the application can have the following beneficial effects:
the medical auxiliary equipment comprises an image acquisition unit, an ultraviolet illumination unit and an output unit, wherein the ultraviolet illumination unit is used for emitting ultraviolet light with a set waveband, the image acquisition unit is used for acquiring a first image under illumination of the ultraviolet illumination unit and acquiring a second image under illumination of natural light, the output unit is used for outputting the first image and the second image to identify and obtain skin information, the image under the natural light acquired by the image acquisition unit is combined with the image under illumination of the ultraviolet illumination unit in the application to identify the skin information, the comprehensiveness of skin information acquisition is improved, and the accuracy of identification is improved.
Additional aspects and advantages of the present application will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the present application.
Drawings
The foregoing and/or additional aspects and advantages of the present application will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
fig. 1 is a schematic structural diagram of a medical auxiliary device provided in an embodiment of the present application;
FIG. 2 is a schematic structural diagram of another medical assistance apparatus provided in an embodiment of the present application;
FIG. 3 is a schematic structural diagram of another medical assistance apparatus provided in an embodiment of the present application;
fig. 4 is a schematic structural diagram of a medical auxiliary detection device provided in an embodiment of the present application;
FIG. 5 is a schematic flow chart of a medical assistance detection method provided by the present application; and
fig. 6 is a schematic flow chart of another medical assistance detection method provided by the present application.
Detailed Description
Reference will now be made in detail to embodiments of the present application, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the drawings are exemplary and intended to be used for explaining the present application and should not be construed as limiting the present application.
A medical assistance apparatus, a medical assistance detection apparatus, and a method of the embodiments of the present application are described below with reference to the drawings.
Fig. 1 is a schematic structural diagram of a medical auxiliary device according to an embodiment of the present application.
As shown in fig. 1, the medical assistance apparatus 100 includes: an image acquisition unit 10, an ultraviolet illumination unit 20, and an output unit 30.
And an ultraviolet illuminating unit 20 for emitting ultraviolet light of a set waveband. For example, the ultraviolet illuminating unit 20 may be a UV lamp that obtains long-wave ultraviolet light of 320 nm to 400 nm through a filter containing nickel oxide. When the long-wave ultraviolet light irradiates the affected area, reduced melanin produces strong refraction and a light color, whereas increased melanin produces weak refraction and a dark color. Using the ultraviolet illumination unit 20 to illuminate the affected area thus assists the detection of skin diseases, such as the presence or absence of fungal infection, psoriasis, vitiligo, acne, or skin cancer, aiding the diagnosis and treatment of such skin diseases.
And the image acquisition unit 10 is used for acquiring a first image under the illumination of the ultraviolet illumination unit 20 and acquiring a second image under the illumination of natural light.
And an output unit 30 for outputting the first image and the second image to identify skin information.
Specifically, affected areas of the skin appear differently under irradiation by the ultraviolet illumination unit 20, so the collected first image can aid the diagnosis and treatment of these skin diseases, while the overall skin tone under natural light can also help a doctor analyze certain latent conditions. In this embodiment, therefore, the image acquisition unit 10 collects the first image under irradiation by the ultraviolet illumination unit 20 and the second image under natural light, and the two collected images are combined for the diagnosis of skin diseases, improving diagnostic accuracy.
As a possible implementation manner, as shown in fig. 2, the medical auxiliary device 100 further includes a light supplement unit 40 and a light sensor 50.
And a supplementary lighting unit 40 for emitting natural light to improve the image brightness of the second image.
And a light sensor 50 for monitoring the ambient brightness.
Furthermore, the supplementary lighting unit 40 is specifically configured to adjust the intensity of the natural light it emits according to the ambient brightness monitored by the light sensor 50, providing intelligent fill light so that the skin always remains under a standard light source, which improves the consistency and accuracy of skin-state judgments.
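The fill-light adjustment described above can be sketched as a simple closed-loop rule: the unit adds just enough light so that ambient plus supplementary illumination reaches a fixed "standard" level. The target and maximum values below are illustrative assumptions, not figures from the patent.

```python
TARGET_LUX = 300.0      # assumed "standard light source" level at the skin
MAX_OUTPUT_LUX = 500.0  # assumed maximum the fill light can contribute

def fill_light_output(ambient_lux: float) -> float:
    """Return the fill-light intensity (lux at the skin) needed to bring
    total illumination up to TARGET_LUX, clamped to the unit's range."""
    needed = TARGET_LUX - ambient_lux
    return max(0.0, min(MAX_OUTPUT_LUX, needed))

# Bright room: no supplement needed; dim room: partial supplement.
assert fill_light_output(400.0) == 0.0
assert fill_light_output(100.0) == 200.0
```

In a real device the mapping from sensor reading to lamp drive current would be calibrated rather than linear, but the clamping structure is the same.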
In order to illustrate a medical auxiliary device according to an embodiment of the present application, this embodiment provides a possible implementation manner of the medical auxiliary device, and fig. 3 is a schematic structural diagram of another medical auxiliary device according to an embodiment of the present application.
The medical assistance apparatus 100 may be fixed to the upper end of a mobile terminal, such as a smart phone, by a fixing frame, and communicates with the mobile terminal through the communication interface 70 to transmit the acquired first image and second image to the mobile terminal for identification of skin information.
Specifically, the image acquisition unit 10 is aligned with a camera of the mobile terminal to enhance the shooting effect. Affected areas of the skin appear differently under irradiation by the ultraviolet illumination unit 20, so the collected image can be used in the diagnosis and treatment of certain skin diseases, for example to detect fungal infection or acne, while the overall skin tone under natural light can also help a doctor analyze certain latent conditions such as jaundice. In this embodiment, therefore, the picture collected by the image acquisition unit 10 under natural light is combined with the picture collected under irradiation by the ultraviolet illumination unit 20 for the diagnosis of skin diseases, improving diagnostic accuracy. In practical scenarios the ambient brightness is not uniform; when the supplementary lighting unit 40 serves as the ambient light source, its intensity is intelligently adjusted according to the light sensor 50 to achieve intelligent fill light, ensuring that the skin is always under a standard light source, which improves the accuracy of skin-state judgment and hence of diagnosis.
In the medical assistance apparatus of this embodiment, the ultraviolet illumination unit emits ultraviolet light of a set waveband, the image acquisition unit acquires a first image under illumination by the ultraviolet illumination unit and a second image under natural-light illumination, and the output unit outputs the first image and the second image so that skin information can be identified; combining the two images improves the comprehensiveness of the skin information obtained and the accuracy of identification.
Based on the previous embodiment, this embodiment provides a medical auxiliary detection device, and fig. 4 is a schematic structural diagram of the medical auxiliary detection device provided in the embodiment of the present application.
As shown in fig. 4, the medical assistance detection apparatus 110 comprises a memory 111, a processor 112 and a computer program 1110 stored on the memory 111 and executable on the processor 112.
As a possible implementation, the computer program 1110, when run on the processor 112, performs the following steps:
step 401, acquiring a first image and a second image, wherein the first image is an image acquired under ultraviolet light illumination of a set waveband, and the second image is an image acquired under natural light illumination.
Specifically, the first image and the second image are acquired with a medical assistance apparatus, which may be fixed to the upper end of an electronic device. Affected areas of the skin appear differently under irradiation by ultraviolet light of a set waveband, for example long-wave ultraviolet light of 320 nm to 400 nm: reduced melanin produces strong refraction and a light color, while increased melanin produces weak refraction and a dark color, so the first image acquired under ultraviolet illumination of a set waveband helps to identify skin-change information. Under natural light the skin shows its overall tone, the brightness of the skin color, the distribution of dark areas, and so on, which can also help a doctor analyze certain latent conditions. In this embodiment, therefore, the second image taken under natural light is combined with the first image acquired under ultraviolet illumination of a set waveband to obtain comprehensive skin information for diagnosing skin diseases, improving the accuracy of skin-disease diagnosis.
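The ultraviolet-imaging cue described above (reduced melanin appears bright, increased melanin appears dark) can be illustrated with a coarse per-pixel rule. The brightness thresholds below are illustrative assumptions only; real diagnosis would use trained models rather than fixed cut-offs.

```python
def melanin_label(brightness: int, low: int = 80, high: int = 180) -> str:
    """Map a greyscale UV-image pixel value in [0, 255] to a coarse
    melanin label, per the bright/dark cue described in the text."""
    if brightness >= high:
        return "melanin reduced"    # strong refraction, light colour
    if brightness <= low:
        return "melanin increased"  # weak refraction, dark colour
    return "normal"

assert melanin_label(220) == "melanin reduced"
assert melanin_label(40) == "melanin increased"
assert melanin_label(120) == "normal"
```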
Step 402, identifying and obtaining skin information according to the first image and the second image.
Specifically, image features are extracted from the first image and the second image to obtain input features, such as skin-texture features, skin-color features, and shape features, and the input features are fed into a recognition model to obtain skin information indicating skin diseases. The recognition model is trained on a training sample set, comprising a number of lesion images each labeled with the corresponding skin disease, to learn the mapping between input features and the various skin diseases. Combining the image collected under natural light with the image collected under the ultraviolet illumination unit to obtain the skin information improves the comprehensiveness of the information acquired and thus the accuracy of disease diagnosis. In the present application, a plurality of pre-trained recognition models are used to recognize a plurality of skin diseases from the obtained skin information, further improving the accuracy of skin-disease identification. The skin information may also include cases that cannot be identified as a skin disease; that part of the skin information serves as an intermediate result, and other medical detection means are further used for screening to confirm whether a specific skin disease, an autoimmune disease, or the like is present, improving the accuracy and reliability of diagnosis.
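The feature-extraction step above can be sketched as follows, assuming each image is a nested list of RGB pixels with channels in [0, 255]. The three statistics used here (mean colour, colour variance as a rough texture proxy, and bright-area ratio as a rough shape proxy) are illustrative stand-ins for the patent's skin-texture, skin-colour, and shape features, not its actual extractors.

```python
def extract_features(image):
    """Reduce an RGB image (nested lists) to a small feature vector."""
    pixels = [ch for row in image for px in row for ch in px]
    n = len(pixels)
    mean = sum(pixels) / n
    var = sum((p - mean) ** 2 for p in pixels) / n      # crude texture proxy
    bright_ratio = sum(1 for p in pixels if p > 200) / n  # crude shape proxy
    return [mean, var, bright_ratio]

# Features from the UV image and the natural-light image are concatenated
# before being fed to the recognition model.
uv_img = [[[10, 10, 10], [250, 250, 250]]]
nat_img = [[[128, 128, 128], [128, 128, 128]]]
input_features = extract_features(uv_img) + extract_features(nat_img)
assert len(input_features) == 6
```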
It should be noted that the medical assistance detection apparatus in this embodiment performs data processing based on the first image and the second image acquired by the medical assistance apparatus, which may be part of the medical assistance detection apparatus or independent of it. In the latter case the medical assistance detection apparatus further includes a communication interface for communicating with the medical assistance apparatus of the embodiments corresponding to FIG. 1 to FIG. 3, so as to acquire the first image and the second image it collects.
In the medical assistance detection apparatus of this embodiment, a first image and a second image are acquired by the medical assistance apparatus, the first image being acquired under ultraviolet illumination of a set waveband and the second image under natural-light illumination, and skin information is obtained by recognition from the two images, improving the comprehensiveness of the skin information acquired and the accuracy of diagnosis.
As another possible implementation, the computer program 1110 running on the processor 112 in the medical assistance detection device 110 may further perform the following steps:
step 501, acquiring a first image and a second image, wherein the first image is an image acquired under ultraviolet light illumination of a set waveband, and the second image is an image acquired under natural light illumination.
Specifically, reference may be made to step 401, which has the same principle and is not described herein again.
Step 502, performing face recognition on the first image, determining whether a face is recognized, if so, executing step 504, and if not, executing step 503.
Specifically, after the first image is acquired, face recognition is performed on it. If no face is recognized, step 503 is executed to obtain skin information indicating skin diseases; if a face is recognized, step 504 is executed to generate a facial skin status report.
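The branch between steps 503 and 504 can be sketched as a small dispatch function. The `detect_face` callable here is a trivial placeholder for the real face detector (the text later mentions a VGG/Inception-style classifier); all names are illustrative.

```python
def route_image(image, detect_face, diagnose, face_report):
    """If a face is found, produce a facial skin report (step 504);
    otherwise run disease recognition (step 503)."""
    if detect_face(image):
        return ("face_report", face_report(image))
    return ("diagnosis", diagnose(image))

kind, _ = route_image("img",
                      detect_face=lambda i: True,
                      diagnose=lambda i: "skin info",
                      face_report=lambda i: "report")
assert kind == "face_report"
```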
Step 503, performing image feature extraction on the first image and the second image to obtain an input feature, and inputting the input feature into the recognition model to obtain skin information for indicating the skin disease.
The input features include features obtained by image feature extraction from the first image and the second image, such as skin-texture features, skin-color features, and shape features, together with semantic features of a symptom description. The symptom description is obtained in response to an input operation, that is, a description of symptoms entered by the user, such as whether there is pain or pruritus.
In this embodiment, the classification model used to distinguish face from non-face images is a deep-learning convolutional-neural-network classification model based on a Visual Geometry Group (VGG) architecture, for example an Inception_v4 model. As a possible implementation, there are multiple recognition models, each used to determine the probability that a corresponding skin disease is present and trained on the lesion images in the training sample set labeled with that skin disease. The input features are fed into each recognition model to obtain the probability of the corresponding skin disease, and skin information is generated from the probabilities output by the multiple recognition models, the skin information including the skin diseases whose probability exceeds a threshold. By providing multiple recognition models, for example 20, each able to recognize the probability of one common skin disease, common skin diseases are screened and identified across the models, improving the accuracy of skin-disease identification.
As a possible implementation, the recognition model applied to the first image is an Inception_v4 model, and the recognition models applied to the second image for identifying more than twenty skin diseases are likewise Inception_v4 models. Each recognition model judges the probability of one skin disease and is trained on the lesion images in the training sample set labeled with the corresponding skin-disease category. The input features are fed into each recognition model to obtain the probability of each type of skin disease, and the skin information is generated from the skin diseases whose output probability exceeds a threshold together with the three most probable skin diseases.
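The aggregation rule just described can be sketched as follows, assuming `probs` maps each of the ~20 disease categories to the probability its dedicated model output: keep every disease above the threshold, and always include the three most probable. The threshold value and category names are illustrative assumptions.

```python
def aggregate(probs, threshold=0.5, top_k=3):
    """Combine per-disease model outputs into skin information:
    diseases above `threshold`, plus the `top_k` most probable ones."""
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)
    keep = {d for d, p in probs.items() if p > threshold}
    keep.update(d for d, _ in ranked[:top_k])
    return {d: probs[d] for d in sorted(keep, key=probs.get, reverse=True)}

probs = {"acne": 0.8, "psoriasis": 0.4, "vitiligo": 0.3, "eczema": 0.1}
result = aggregate(probs)
assert set(result) == {"acne", "psoriasis", "vitiligo"}
```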
It should be noted that the skin information obtained in the present application includes the probabilities of suffering from various skin diseases, but it still cannot serve as a final diagnosis; the suspected conditions need to be confirmed by the relevant detection means. For example, lupus erythematosus appears to be a skin disease but is in fact an immune-system disease. The method thus helps the user determine the direction of diagnosis and guides the user to seek medical attention in time, avoiding delay past the optimal treatment window.
Furthermore, a skin-disease diagnosis report can be generated from the recognition result. The report may contain a judgment of the disease, an analysis of its possible causes, recommended treatments, recommendations of nearby hospitals, and reminders of follow-up visit times, giving the user professional analysis and advice and improving the user's satisfaction with the platform.
At step 504, a facial skin report is generated.
Specifically, after the face region is identified in the first image collected under natural light, a facial skin report is generated. To explain more clearly how the facial skin report is generated, step 504 may include the following sub-steps:
and 5041, identifying the relative position relations of the plurality of key points in the face area to obtain similar faces matched with the relative position relations, and showing the users corresponding to the similar faces as recommended friends.
Specifically, after the face region is identified in the first image collected under natural light, key facial features, including the face shape and the shapes of the eyebrows, ears, mouth, and nose, are extracted according to the relative positional relations of a plurality of key points of the facial features to form a face-feature vector. The obtained face features are matched against the face features of other users stored in the cloud to find the face most similar to the user's, and the users corresponding to similar faces are displayed as recommended friends, promoting mutual attention among users, strengthening the social function, and increasing the activity of the user platform.
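The similar-face matching above can be sketched by treating the key-point relations as a numeric feature vector and finding the stored user whose vector is closest (here by Euclidean distance). The feature layout, user IDs, and stored values are illustrative assumptions; a production system would use learned embeddings.

```python
import math

def most_similar(query, stored):
    """stored: dict mapping user_id -> feature vector of key-point
    relations. Return the user whose vector is nearest to `query`."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(stored, key=lambda uid: dist(query, stored[uid]))

stored = {"user_a": [0.30, 0.52, 0.41], "user_b": [0.10, 0.90, 0.20]}
assert most_similar([0.31, 0.50, 0.40], stored) == "user_a"
```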
Step 5042, according to the relative position of the face region in the first image, determining a target region corresponding to the relative position from the second image, and determining whether the target region has the set lesion color.
Specifically, a target region is determined in the second image, obtained under ultraviolet irradiation, according to the relative position of the identified face region in the first image, and it is identified whether a set lesion color appears in the target region, for example whether the skin has a fungal infection.
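The region transfer described above can be sketched as scaling a bounding box by relative position, since the two images may have different resolutions. The `(x, y, w, h)` pixel box format is an assumption for illustration.

```python
def map_region(box, src_size, dst_size):
    """Transfer a bounding box from the source image to the target image
    by preserving its relative position and size."""
    x, y, w, h = box
    sw, sh = src_size
    dw, dh = dst_size
    sx, sy = dw / sw, dh / sh
    return (round(x * sx), round(y * sy), round(w * sx), round(h * sy))

# A face at (100, 50) in a 1000x800 frame maps to (50, 25) in a 500x400 frame.
assert map_region((100, 50, 200, 160), (1000, 800), (500, 400)) == (50, 25, 100, 80)
```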
Step 5043, identifying skin color based on the chromaticity of the face region.
Specifically, skin brightness is extracted in the HSV color space and dark areas are calculated; skin-color values are extracted in the RGB color space and the overall skin tone is calculated. The skin state of the face is identified on the basis of the skin color, which makes it convenient to give beautifying suggestions later.
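The colour analysis above can be sketched with the standard library: each RGB pixel is converted to HSV to read brightness (the V channel), dark regions are counted below a brightness threshold, and the overall tone is the mean RGB. The dark-pixel threshold is an illustrative assumption.

```python
import colorsys

def skin_colour_stats(pixels, dark_v=0.3):
    """pixels: list of (r, g, b) tuples with channels in [0, 255].
    Returns mean brightness, dark-area ratio, and mean RGB tone."""
    vs = [colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)[2]
          for r, g, b in pixels]
    dark_ratio = sum(1 for v in vs if v < dark_v) / len(vs)
    mean_rgb = tuple(sum(p[i] for p in pixels) / len(pixels) for i in range(3))
    return {"mean_v": sum(vs) / len(vs),
            "dark_ratio": dark_ratio,
            "mean_rgb": mean_rgb}

stats = skin_colour_stats([(200, 160, 140), (30, 20, 20)])
assert stats["dark_ratio"] == 0.5
```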
Step 5044, identifying skin flatness according to texture features contained in the face region.
Specifically, a Canny edge detection operator is used to detect local edges in the image: the gradient around each pixel is examined to determine whether the pixel lies on an edge, and the presence of wrinkles, i.e. texture features, is determined from these edges, thereby recognizing the flatness of the face region. As another possible implementation, skin flatness is calculated using conventional image-processing techniques, including filtering, contrast enhancement, mathematical morphology, and other algorithms. Based on the flatness it can be determined whether the facial skin shows an abnormality such as acne, or the facial texture can be derived from the flatness and used to identify the skin age of the face: for example, a more uneven skin surface indicates more wrinkles and more severe skin aging.
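The flatness measure above can be sketched with a simple horizontal/vertical gradient in place of a full Canny detector: the fewer strong gradients a greyscale skin patch contains, the flatter (less wrinkled) it is judged. The edge threshold is an illustrative assumption.

```python
def flatness(gray, edge_thresh=30):
    """gray: 2-D list of greyscale values in [0, 255].
    Returns the fraction of pixels with no strong local gradient."""
    h, w = len(gray), len(gray[0])
    edges = 0
    for y in range(h - 1):
        for x in range(w - 1):
            gx = abs(gray[y][x + 1] - gray[y][x])  # horizontal gradient
            gy = abs(gray[y + 1][x] - gray[y][x])  # vertical gradient
            if max(gx, gy) > edge_thresh:
                edges += 1
    total = (h - 1) * (w - 1)
    return 1 - edges / total

smooth = [[100, 100], [100, 100]]
wrinkled = [[0, 200], [200, 0]]
assert flatness(smooth) == 1.0
assert flatness(wrinkled) == 0.0
```

A high flatness score corresponds to smooth skin; a low score suggests wrinkles or other texture, matching the interpretation in the text.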
Further, a facial skin status report for the face is generated by analyzing the similar face matched in the preceding steps, whether the face shows the set lesion color, the skin color, and the flatness of the face. The facial skin status report may contain the skin quality of the face, the skin age of the face, makeup suggestions, or skin-care product recommendations, to increase user engagement.
When generating the facial skin report, one or more of steps 5041 to 5044 in this embodiment may be executed. Moreover, steps 5041 to 5044 need not be performed in the order given; those skilled in the art can arrange them flexibly as required.
Optionally, instead of the intelligent diagnosis above, the user may select a doctor-diagnosis mode. As one possible implementation, the user selects a doctor from a doctor list, submits the first image acquired under illumination by the ultraviolet illumination unit, the second image acquired under natural light, and a description of the condition, and waits for the doctor's feedback. The doctor may give a diagnosis within 24 hours, including the disease type, medication recommendations, nearby-pharmacy recommendations, and other cautions, so that the user understands the condition and can choose to buy medicine at a pharmacy or have it delivered to the home according to the medication and pharmacy recommendations. If the selected doctor declines to diagnose, the system reminds the user to choose another doctor on the platform, avoiding user complaints and increasing the user's satisfaction with, and stickiness to, the platform.
As another possible implementation, the user may select an online consultation for remote diagnosis: the user books a doctor's diagnosis time, a remote video connection is made at the appointed time, and the diagnosis is carried out with the mobile-phone camera as the medium. After the consultation, the doctor gives a diagnosis report including the disease type, medication recommendations, nearby-pharmacy recommendations, and other cautions, so that the user understands the condition and can choose to buy medicine at a pharmacy or have it delivered to the home, increasing the user's satisfaction with, and stickiness to, the platform.
In the medical auxiliary detection device of this embodiment, a first image and a second image are acquired by the medical assistance device, where the first image is acquired under ultraviolet light illumination of a set waveband and the second image is acquired under natural light illumination. A plurality of pre-trained recognition models are used, each of which can recognize one skin disease; multiple skin diseases are recognized from the first image and the second image using these models, and skin information is obtained by recognition, improving the pertinence of recognition and the accuracy of diagnosis.
Based on the foregoing embodiments, the present embodiment provides a medical auxiliary detection method, and fig. 5 is a schematic flow chart of the medical auxiliary detection method provided in the embodiments of the present application.
As shown in fig. 5, the method may include the steps of:
step 601, acquiring a first image and a second image, wherein the first image is an image acquired under ultraviolet light illumination of a set waveband, and the second image is an image acquired under natural light illumination.
Specifically, the first image and the second image are acquired using the medical assistance device, which may be fixed to the upper end of the electronic device. An affected skin area appears differently under ultraviolet light of a set waveband: for example, under long-wave ultraviolet light of 320 nm to 400 nm, reduced melanin refracts light strongly and appears light in color, while increased melanin refracts light weakly and appears dark. The first image acquired under ultraviolet illumination of the set waveband can therefore assist in the diagnosis and treatment of skin diseases. Under natural light, the skin presents its overall skin color, the brightness of the skin color, the distribution of dark areas and the like, which also helps the doctor analyze potential diseases. Therefore, in this embodiment, the second image acquired under natural light is combined with the first image acquired under ultraviolet illumination of the set waveband to diagnose skin diseases, improving the accuracy of diagnosis.
Step 602, identifying and obtaining skin information according to the first image and the second image.
Specifically, image feature extraction is performed on the first image and the second image to obtain input features, such as skin texture features, skin color features and shape features, and the input features are input into a recognition model to obtain skin information indicating skin diseases. The recognition model is trained on a training sample set, learning the mapping relationship between the input features and various skin diseases; the training sample set includes a plurality of pathological images, each labeled with the corresponding skin disease. Combining the first image acquired under the ultraviolet illumination unit with the second image acquired under natural light for diagnosing skin diseases improves the diagnosis accuracy, and using a plurality of pre-trained recognition models to recognize a plurality of skin diseases further improves the accuracy of skin disease recognition.
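The feature-extraction and recognition step above can be sketched as follows. The toy feature extractor, the logistic "recognition model" and the random stand-in images are illustrative assumptions, not the patent's actual trained model:

```python
import numpy as np

def extract_features(image: np.ndarray) -> np.ndarray:
    """Toy feature extractor: per-channel mean colour, colour spread, and a
    crude texture measure (mean absolute horizontal gradient)."""
    mean = image.mean(axis=(0, 1))
    std = image.std(axis=(0, 1))
    texture = np.abs(np.diff(image.astype(float), axis=1)).mean(axis=(0, 1))
    return np.concatenate([mean, std, texture])

def predict(features: np.ndarray, weights: np.ndarray, bias: float) -> float:
    """Logistic stand-in for a recognition model: features -> probability."""
    return 1.0 / (1.0 + np.exp(-(features @ weights + bias)))

# Stand-ins for the first image (UV illumination) and second image (natural light).
rng = np.random.default_rng(0)
first = rng.integers(0, 256, size=(32, 32, 3))
second = rng.integers(0, 256, size=(32, 32, 3))

# Features of both images are concatenated into one input vector.
x = np.concatenate([extract_features(first), extract_features(second)])
weights = rng.normal(scale=0.01, size=x.shape)  # hypothetical learned weights
p = predict(x, weights, bias=0.0)               # probability of one skin disease
```

In a real system the logistic stand-in would be replaced by the trained model and the random arrays by the captured images; only the shape of the pipeline is shown here.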
In the medical auxiliary detection method of this embodiment, a first image and a second image are acquired by the medical assistance device, where the first image is acquired under ultraviolet light illumination of a set waveband and the second image is acquired under natural light illumination, and skin information is obtained by recognition from the first image and the second image, improving the accuracy of diagnosis.
Based on the above embodiments, the present application further provides another possible implementation of the medical auxiliary detection method. Fig. 6 is a schematic flow chart of this medical auxiliary detection method; as shown in fig. 6, the method includes the following steps:
step 701, acquiring a first image and a second image, wherein the first image is an image acquired under ultraviolet light illumination of a set waveband, and the second image is an image acquired under natural light illumination.
Specifically, reference may be made to step 601 in the previous embodiment, which has the same principle and is not described herein again.
Step 702, performing face recognition on the first image, determining whether a face is recognized, if so, executing step 704, and if not, executing step 703.
Specifically, after the first image is acquired, face recognition is performed. If no face is recognized, step 703 is executed to obtain skin information indicating skin diseases; if a face is recognized, step 704 is executed to generate a facial skin status report.
And 703, extracting image features of the first image and the second image to obtain input features, and inputting the input features into the identification model to obtain skin information for indicating skin diseases.
The input features include the features obtained by image feature extraction on the first image and the second image, such as skin texture features, skin color features and shape features, as well as semantic features of a symptom description, where the symptom description is acquired in response to an input operation, that is, a description of symptoms entered by the user, such as whether there is pain or pruritus.
The recognition model in this embodiment is, for example, a classification model based on a Visual Geometry Group (VGG) deep learning convolutional neural network. There are a plurality of recognition models; each is used to judge the probability that a corresponding skin disease exists, and each is trained on the lesion maps in the training sample set labeled with that skin disease. The input features are input into each recognition model to obtain the probability of the corresponding skin disease, and skin information is generated from the probabilities output by the plurality of recognition models, the skin information including the skin diseases whose probability is greater than a threshold. By setting a plurality of recognition models, for example 20, each able to recognize the probability of one common skin disease, common skin diseases are screened and identified by the plurality of models, improving the accuracy of skin disease recognition.
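The per-disease screening described above can be sketched as follows. The disease names, the fixed probabilities returned by the stand-in models and the 0.5 threshold are illustrative assumptions:

```python
def screen(models, features, threshold=0.5):
    """Run every per-disease model and keep diseases above the threshold."""
    probs = {name: model(features) for name, model in models.items()}
    return {name: p for name, p in probs.items() if p > threshold}

# Each "model" is a callable returning the probability of one skin disease;
# fixed values stand in for real trained classifiers.
models = {
    "eczema": lambda f: 0.82,
    "psoriasis": lambda f: 0.31,
    "tinea": lambda f: 0.64,
}
skin_info = screen(models, features=None)
# skin_info keeps only the diseases whose probability exceeds 0.5
```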
Furthermore, a skin disease diagnosis report can be generated from the recognition result. The report includes the disease judgment, an analysis of possible causes of the disease, a recommended treatment, recommendations for nearby hospitals, and reminders of follow-up visit times, giving the user professional analysis and suggestions and improving user satisfaction with the platform.
Step 704, a facial skin report is generated.
Specifically, after the face region is identified in the first image, a facial skin report is generated. To further clarify how the facial skin report is generated, step 704 may include the following sub-steps:
step 7041, the relative position relationship of the plurality of key points is identified for the face region to obtain a similar face matched with the relative position relationship, and the user corresponding to the similar face is displayed as a recommended friend.
Specifically, after a face region is identified in the first image, it is analyzed according to the relative position relationship of a plurality of key points of the facial features: key facial features including the face shape, eyebrow shape, ear shape, mouth shape and nose shape are extracted to form the face features, and the obtained face features are matched against the face features of other users stored in the cloud to find the face most similar to the user's. The users corresponding to similar faces are displayed as recommended friends to promote mutual attention among users, enhancing the social function and the activity of the user platform.
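The key-point matching of step 7041 might be sketched as below. The five key points, the stored users and the normalised pairwise-distance signature are hypothetical simplifications of the cloud-side matching, not the patent's actual procedure:

```python
import math

def signature(keypoints):
    """Normalised pairwise distances between key points, so the signature is
    invariant to translation and overall face size."""
    dists = [math.dist(a, b) for i, a in enumerate(keypoints)
             for b in keypoints[i + 1:]]
    scale = max(dists)
    return [d / scale for d in dists]

def most_similar(query_points, stored):
    """Return the stored user whose signature is closest (squared L2)."""
    q = signature(query_points)
    def gap(sig):
        return sum((a - b) ** 2 for a, b in zip(q, sig))
    return min(stored, key=lambda user: gap(stored[user]))

# Hypothetical key points: two eyes, nose tip, two mouth corners.
query = [(30, 40), (70, 40), (50, 60), (38, 80), (62, 80)]
stored = {
    "user_a": signature([(31, 41), (69, 40), (50, 61), (39, 79), (61, 80)]),
    "user_b": signature([(20, 30), (80, 30), (50, 70), (30, 95), (70, 95)]),
}
friend = most_similar(query, stored)  # user_a's key points nearly match
```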
Step 7042, determining, according to the relative position of the face region in the first image, a target region at the corresponding relative position in the second image, and judging whether the target region presents a set lesion color.
Specifically, a target area is determined in the second image according to the relative position of the identified face region in the first image, and it is identified whether the set lesion color appears in the target area, for example, whether the skin shows a fungal infection.
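Mapping the face region's relative position into the other image and checking its colour, as step 7042 describes, can be sketched as follows. The bounding-box format, the greenish "fluorescence" colour and the tolerance are illustrative assumptions:

```python
import numpy as np

def map_region(box, shape_a, shape_b):
    """Map a (y0, y1, x0, x1) box from image A to image B via relative coords."""
    y0, y1, x0, x1 = box
    ha, wa = shape_a[:2]
    hb, wb = shape_b[:2]
    return (int(y0 / ha * hb), int(y1 / ha * hb),
            int(x0 / wa * wb), int(x1 / wa * wb))

def has_lesion_color(region, target_rgb, tol=40.0):
    """Flag the region if its mean colour is within `tol` of the target colour."""
    mean = region.reshape(-1, 3).mean(axis=0)
    return bool(np.linalg.norm(mean - np.asarray(target_rgb, float)) < tol)

first = np.zeros((100, 100, 3), np.uint8)    # stand-in for the face image
second = np.zeros((200, 200, 3), np.uint8)   # stand-in for the other image
second[40:80, 60:120] = (90, 200, 90)        # hypothetical greenish fluorescence

box = map_region((20, 40, 30, 60), first.shape, second.shape)
target = second[box[0]:box[1], box[2]:box[3]]
flag = has_lesion_color(target, target_rgb=(90, 200, 90))
```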
Step 7043, identifying the skin color according to the chromaticity of the face region.
Specifically, the skin brightness is extracted in the HSV (hue, saturation, value) color space and the dark areas are calculated; the skin color value is extracted in the RGB color space and the overall skin color is calculated. The skin state of the face is then identified based on the skin color, to facilitate subsequent beauty guidance or beauty product recommendation.
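A minimal sketch of step 7043's HSV brightness, dark-area and RGB skin-tone computation, assuming for simplicity a flat list of RGB pixels rather than a full image; the 0.25 dark-pixel cut-off is an illustrative assumption:

```python
import colorsys

def skin_metrics(pixels, dark_v=0.25):
    """Return mean brightness (HSV V), dark-area fraction, and mean RGB tone."""
    vs = [colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)[2]
          for r, g, b in pixels]
    mean_v = sum(vs) / len(vs)
    dark_frac = sum(v < dark_v for v in vs) / len(vs)
    mean_rgb = tuple(sum(channel) / len(pixels) for channel in zip(*pixels))
    return mean_v, dark_frac, mean_rgb

# Hypothetical skin sample: 8 lit pixels and 2 shadowed ones (20% dark area).
pixels = [(220, 180, 160)] * 8 + [(40, 30, 25)] * 2
mean_v, dark_frac, mean_rgb = skin_metrics(pixels)
```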
Step 7044, identifying the skin flatness according to texture features contained in the face region.
Specifically, a Canny edge detection operator is used to detect local edges in the image: the gradient of each pixel relative to its directly adjacent pixels is obtained by detection, it is determined whether the pixel lies on an edge, and the presence of wrinkles, i.e., the flatness of the face region, is identified based on these gradients. As another possible implementation, the skin flatness is calculated using conventional image processing techniques including filtering, contrast enhancement, mathematical morphology and other algorithms; based on the flatness, it is determined whether there is an abnormality in the facial skin, such as acne, or the facial texture is determined based on the flatness and the skin age of the face is identified from the facial texture. For example, more uneven skin indicates more wrinkles and more severe skin aging.
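A simple gradient-based stand-in for the edge-based flatness measure of step 7044 (not the actual Canny operator; the threshold and the synthetic smooth and wrinkled patches are illustrative assumptions):

```python
import numpy as np

def flatness(gray: np.ndarray, edge_thresh: float = 30.0) -> float:
    """Flatness = 1 - fraction of edge pixels, where an edge pixel has a large
    gradient magnitude w.r.t. its right and lower neighbours."""
    gray = gray.astype(float)
    gx = np.abs(np.diff(gray, axis=1))[:-1, :]   # horizontal gradient
    gy = np.abs(np.diff(gray, axis=0))[:, :-1]   # vertical gradient
    mag = np.hypot(gx, gy)
    return 1.0 - float((mag > edge_thresh).mean())

smooth = np.full((20, 20), 128, np.uint8)   # perfectly flat skin patch
wrinkled = smooth.copy()
wrinkled[::4, :] = 250                      # bright "wrinkle" lines every 4 rows

f_smooth = flatness(smooth)
f_wrinkled = flatness(wrinkled)             # lower: more edges, more wrinkles
```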
Further, a facial skin status report is generated by analyzing the similar faces matched in the above steps, whether the face presents the set lesion color, the skin color, and the flatness of the face. The facial skin status report may include the skin type of the face, the skin age of the face, makeup suggestions or skin care product recommendations and the like, improving user satisfaction.
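Assembling the sub-step outputs into a facial skin status report might look like the sketch below; all field names and the flatness cut-off for the skin-age hint are hypothetical:

```python
def facial_skin_report(similar_user, lesion_flag, mean_v, dark_frac, flat):
    """Combine the sub-step results into a simple facial skin status report."""
    report = {
        "recommended_friend": similar_user,       # from step 7041
        "possible_fungal_infection": lesion_flag, # from step 7042
        "skin_brightness": round(mean_v, 2),      # from step 7043
        "dark_area_fraction": round(dark_frac, 2),
        "skin_flatness": round(flat, 2),          # from step 7044
    }
    # Hypothetical rule: rougher skin suggests more wrinkles, older skin age.
    report["skin_age_hint"] = "older" if flat < 0.8 else "younger"
    return report

report = facial_skin_report("user_a", False, 0.72, 0.20, 0.9)
```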
When generating the facial skin report, one or more of steps 7041 to 7044 in this embodiment may be performed. In addition, steps 7041 to 7044 need not be executed in a fixed order; those skilled in the art can flexibly set the order as required.
Optionally, the user may select a diagnosis mode when using the above intelligent diagnosis. As one possible implementation, the user selects a doctor for diagnosis from a doctor list, submits the first image acquired under illumination of the ultraviolet illumination unit, the second image acquired under natural light illumination, and a description of the condition, and waits for the doctor to feed back a diagnosis result. The doctor gives the diagnosis result within 24 hours; the result includes a disease type diagnosis, a medication recommendation, a nearby pharmacy recommendation and other precautions, so that the user understands the condition and, according to the medication and pharmacy recommendations, may choose to purchase medicine at the pharmacy or have the medicine delivered to the home. If the selected doctor refuses to diagnose, the system reminds the user to select another doctor, and the user can continue to choose other doctors on the platform, avoiding user complaints about the platform and increasing user satisfaction with, and stickiness to, the platform.
As another possible implementation, the user may select an online consultation to obtain a remote diagnosis service. The user reserves a diagnosis time with a doctor, a remote video connection is established at the appointed time, and the diagnosis is performed using the mobile phone camera as the medium. After the diagnosis is completed, the doctor issues a diagnosis report; the diagnosis result includes a disease type diagnosis, a medication recommendation, a nearby pharmacy recommendation and other precautions, so that the user understands the condition and, according to the medication and pharmacy recommendations, may choose to purchase medicine at the pharmacy or have the medicine delivered to the home, increasing user satisfaction with, and stickiness to, the platform.
In the medical auxiliary detection method of this embodiment, a first image and a second image are acquired by the medical assistance device, where the first image is acquired under ultraviolet light illumination of a set waveband and the second image is acquired under natural light illumination. A plurality of pre-trained recognition models are used, each of which can recognize one skin disease; multiple skin diseases are recognized from the first image and the second image using these models, and skin information is obtained by recognition, improving the pertinence of recognition and the accuracy of diagnosis.
To implement the above embodiments, the present application also provides a non-transitory computer-readable storage medium on which a computer program is stored, the program, when executed by a processor, implementing:
acquiring a first image and a second image; the first image is an image acquired under the illumination of ultraviolet light with a set waveband; the second image is an image acquired under natural light illumination;
and identifying and obtaining skin information according to the first image and the second image.
In the description herein, reference to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the application. In this specification, the schematic representations of the terms used above are not necessarily intended to refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present application, "plurality" means at least two, e.g., two, three, etc., unless specifically limited otherwise.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing steps of a custom logic function or process, and alternate implementations are included within the scope of the preferred embodiment of the present application in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the present application.
The logic and/or steps represented in the flowcharts or otherwise described herein, e.g., an ordered listing of executable instructions that can be considered to implement logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium would include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CDROM). Additionally, the computer-readable medium could even be paper or another suitable medium upon which the program is printed, as the program can be electronically captured, via for instance optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
It should be understood that portions of the present application may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the various steps or methods may be implemented in software or firmware stored in memory and executed by a suitable instruction execution system. If implemented in hardware, as in another embodiment, any one or combination of the following techniques, which are known in the art, may be used: a discrete logic circuit having a logic gate circuit for implementing a logic function on a data signal, an application specific integrated circuit having an appropriate combinational logic gate circuit, a Programmable Gate Array (PGA), a Field Programmable Gate Array (FPGA), or the like.
It will be understood by those skilled in the art that all or part of the steps carried by the method for implementing the above embodiments may be implemented by hardware related to instructions of a program, which may be stored in a computer readable storage medium, and when the program is executed, the program includes one or a combination of the steps of the method embodiments.
In addition, functional units in the embodiments of the present application may be integrated into one processing module, or each unit may exist alone physically, or two or more units are integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode. The integrated module, if implemented in the form of a software functional module and sold or used as a stand-alone product, may also be stored in a computer readable storage medium.
The storage medium mentioned above may be a read-only memory, a magnetic or optical disk, etc. Although embodiments of the present application have been shown and described above, it is understood that the above embodiments are exemplary and should not be construed as limiting the present application, and that variations, modifications, substitutions and alterations may be made to the above embodiments by those of ordinary skill in the art within the scope of the present application.

Claims (12)

1. A medical assistance device, comprising: the device comprises an image acquisition unit, an ultraviolet illumination unit and an output unit;
the ultraviolet illumination unit is used for emitting ultraviolet light with a set waveband;
the image acquisition unit is used for acquiring a first image under the illumination of the ultraviolet illumination unit and acquiring a second image under the illumination of natural light;
the output unit is used for outputting the first image and the second image so as to identify and obtain skin information.
2. The medical assistance device of claim 1, further comprising: a light supplement unit;
the light supplementing unit is used for emitting natural light to improve the image brightness of the second image.
3. The medical assistance device of claim 2, further comprising a light sensor;
the optical sensor is used for monitoring the ambient brightness;
the light supplementing unit is specifically used for adjusting the light intensity of the emitted natural light according to the ambient brightness monitored by the light sensor.
4. A medical assistance detection apparatus comprising a memory, a processor and a computer program stored on the memory and executable on the processor to perform:
acquiring a first image and a second image; the first image is an image acquired under the illumination of ultraviolet light with a set waveband; the second image is an image acquired under natural light illumination;
and identifying and obtaining skin information according to the first image and the second image.
5. The medical assistance detection apparatus according to claim 4, wherein said identifying skin information from the first image and the second image comprises:
performing image feature extraction on the first image and the second image to obtain input features;
inputting the input features into a recognition model to obtain skin information used for indicating skin diseases;
the identification model is trained by adopting a training sample set, and the mapping relation between the input characteristics and various skin diseases is obtained by learning; the training sample set comprises a plurality of pathological images, and each pathological image is marked with a corresponding skin disease.
6. The medical auxiliary detection device according to claim 5, wherein the number of the recognition models is plural, and each recognition model is used for judging the probability of the existence of a corresponding skin disease and is obtained by training by using a lesion map marked with the corresponding skin disease in the training sample set;
inputting the input characteristics into each recognition model respectively to obtain the probability of a corresponding skin disease;
generating the skin information according to the probability of each skin disease output by a plurality of recognition models; wherein the skin information includes skin diseases having a probability greater than a threshold.
7. The medical assistance detection device of claim 5, wherein the input features further comprise semantic features of a symptom description; wherein the symptom description is acquired in response to an input operation.
8. The medical assistance detection apparatus according to claim 4, wherein before the image feature extraction of the first image and the second image to obtain the input feature, further comprising:
performing face recognition on the first image;
the image feature extraction of the first image and the second image to obtain the input features includes:
and if the face area in the first image is not identified, performing image feature extraction on the first image and the second image.
9. The medical assistance detection apparatus according to claim 8, further comprising, after the face recognition of the first image:
if the face area in the first image is identified, identifying the relative position relation of a plurality of key points for the face area to obtain a similar face matched with the relative position relation; displaying the user corresponding to the similar face as a recommended friend;
and/or if the face region in the first image is obtained through recognition, determining a target region which accords with the relative position from the second image according to the relative position of the face region in the first image; judging whether the target area presents a set lesion color or not;
and/or if the face region in the first image is obtained through identification, identifying skin color according to the chromaticity of the face region;
and/or if the face area in the first image is obtained through identification, identifying the skin flatness according to the texture features contained in the face area.
10. Medical assistance detection device according to any of claims 4-9, wherein the medical assistance detection device is provided with a communication interface for communication connection with a medical assistance device according to any of claims 1-3.
11. A medical assistance detection method, comprising:
acquiring a first image and a second image; the first image is an image acquired under the illumination of ultraviolet light with a set waveband; the second image is an image acquired under natural light illumination;
and identifying and obtaining skin information according to the first image and the second image.
12. A non-transitory computer readable storage medium having a computer program stored thereon, the program when executed by a processor implementing:
acquiring a first image and a second image; the first image is an image acquired under the illumination of ultraviolet light with a set waveband; the second image is an image acquired under natural light illumination;
and identifying and obtaining skin information according to the first image and the second image.
CN202010001003.2A 2020-01-02 2020-01-02 Medical assistance apparatus, medical assistance detection apparatus and method Pending CN111150369A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010001003.2A CN111150369A (en) 2020-01-02 2020-01-02 Medical assistance apparatus, medical assistance detection apparatus and method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010001003.2A CN111150369A (en) 2020-01-02 2020-01-02 Medical assistance apparatus, medical assistance detection apparatus and method

Publications (1)

Publication Number Publication Date
CN111150369A true CN111150369A (en) 2020-05-15

Family

ID=70561014

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010001003.2A Pending CN111150369A (en) 2020-01-02 2020-01-02 Medical assistance apparatus, medical assistance detection apparatus and method

Country Status (1)

Country Link
CN (1) CN111150369A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113299363A (en) * 2021-04-27 2021-08-24 Xi'an University of Technology YOLOv5-based dermatology over-the-counter medicine selling method
CN117379005A (en) * 2023-11-21 2024-01-12 欣颜时代(广州)技术有限公司 Skin detection control method, device, equipment and storage medium of beauty instrument

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2009145735A1 (en) * 2008-05-29 2009-12-03 National University Of Singapore Method of analysing skin images using a reference region to diagnose a skin disorder
US20130322711A1 (en) * 2012-06-04 2013-12-05 Verizon Patent And Licensing Inc. Mobile dermatology collection and analysis system
CN109064465A (en) * 2018-08-13 2018-12-21 上海试美网络科技有限公司 It is a kind of that labeling method is merged with the skin characteristic of natural light based on UV light
CN109949272A (en) * 2019-02-18 2019-06-28 四川拾智联兴科技有限公司 Identify the collecting method and system of skin disease type acquisition human skin picture
CN110298304A (en) * 2019-06-27 2019-10-01 维沃移动通信有限公司 A kind of skin detecting method and terminal

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2009145735A1 (en) * 2008-05-29 2009-12-03 National University Of Singapore Method of analysing skin images using a reference region to diagnose a skin disorder
US20130322711A1 (en) * 2012-06-04 2013-12-05 Verizon Patent And Licensing Inc. Mobile dermatology collection and analysis system
CN109064465A (en) * 2018-08-13 2018-12-21 上海试美网络科技有限公司 It is a kind of that labeling method is merged with the skin characteristic of natural light based on UV light
CN109949272A (en) * 2019-02-18 2019-06-28 四川拾智联兴科技有限公司 Identify the collecting method and system of skin disease type acquisition human skin picture
CN110298304A (en) * 2019-06-27 2019-10-01 维沃移动通信有限公司 A kind of skin detecting method and terminal

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113299363A (en) * 2021-04-27 2021-08-24 Xi'an University of Technology YOLOv5-based dermatology over-the-counter medicine selling method
CN117379005A (en) * 2023-11-21 2024-01-12 欣颜时代(广州)技术有限公司 Skin detection control method, device, equipment and storage medium of beauty instrument
CN117379005B (en) * 2023-11-21 2024-05-28 欣颜时代(广州)技术有限公司 Skin detection control method, device, equipment and storage medium of beauty instrument

Similar Documents

Publication Publication Date Title
US11699102B2 (en) System and method for multiclass classification of images using a programmable light source
CN113011485B (en) Multi-mode multi-disease long-tail distribution ophthalmic disease classification model training method and device
CN110211087B (en) Sharable semiautomatic marking method for diabetic fundus lesions
CN110689025B (en) Image recognition method, device and system and endoscope image recognition method and device
US20160228008A1 (en) Image diagnosis device for photographing breast by using matching of tactile image and near-infrared image and method for aquiring breast tissue image
CN109948671B (en) Image classification method, device, storage medium and endoscopic imaging equipment
KR102162683B1 (en) Reading aid using atypical skin disease image data
Rajathi et al. Varicose ulcer (C6) wound image tissue classification using multidimensional convolutional neural networks
CN114930407A (en) System, microscope system, method and computer program for training or using machine learning models
CN111150369A (en) Medical assistance apparatus, medical assistance detection apparatus and method
CN110960193A (en) Traditional Chinese medicine diagnosis analysis system and method based on face and tongue image acquisition
CN113205490A (en) Mask R-CNN network-based auxiliary diagnosis system and auxiliary diagnosis information generation method
CN114269231A (en) Determining a diagnosis based on a patient's skin tone using a set of machine-learned diagnostic models
CN112788200A (en) Method and device for determining frequency spectrum information, storage medium and electronic device
Gavrilov et al. Deep learning based skin lesions diagnosis
CN116188466B (en) Method and device for determining in-vivo residence time of medical instrument
CN117237351A (en) Ultrasonic image analysis method and related device
CN116912247A (en) Medical image processing method and device, storage medium and electronic equipment
Junayed et al. A transformer-based versatile network for acne vulgaris segmentation
CN115910300A (en) Medical virtual platform system based on artificial intelligence and information processing method
CN114092974A (en) Identity recognition method, device, terminal and storage medium
CN115035086A (en) Intelligent tuberculosis skin test screening and analyzing method and device based on deep learning
Ghodke et al. Novel Approach of Automatic Disease Prediction And Regular Check-Up System Using Ml/Dl
KR20080109425A (en) Methods and system for extracting facial features and verifying sasang constitution through image recognition
KR102165487B1 (en) Skin disease discrimination system based on skin image

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination