WO2009145735A1 - Method of analysing skin images using a reference region to diagnose a skin disorder - Google Patents

Method of analysing skin images using a reference region to diagnose a skin disorder

Info

Publication number
WO2009145735A1
WO2009145735A1 PCT/SG2009/000190
Authority
WO
WIPO (PCT)
Prior art keywords
pixels
image
instructions
squares
interest
Prior art date
Application number
PCT/SG2009/000190
Other languages
French (fr)
Inventor
Keng Hui Lim
Chee Leok Goh
Wee Kheng Leow
Xin-Wei Aw
Original Assignee
National University Of Singapore
National Skin Centre
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by National University Of Singapore, National Skin Centre filed Critical National University Of Singapore
Publication of WO2009145735A1 publication Critical patent/WO2009145735A1/en


Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B 5/0059 Measuring for diagnostic purposes; Identification of persons using light, e.g. diagnosis by transillumination, diascopy, fluorescence
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B 5/44 Detecting, measuring or recording for evaluating the integumentary system, e.g. skin, hair or nails
    • A61B 5/441 Skin evaluation, e.g. for skin disorder diagnosis
    • A61B 5/442 Evaluating skin mechanical properties, e.g. elasticity, hardness, texture, wrinkle assessment
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B 5/44 Detecting, measuring or recording for evaluating the integumentary system, e.g. skin, hair or nails
    • A61B 5/441 Skin evaluation, e.g. for skin disorder diagnosis
    • A61B 5/444 Evaluating skin marks, e.g. mole, nevi, tumour, scar
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B 5/72 Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B 5/7235 Details of waveform analysis
    • A61B 5/7264 Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G06T 7/0012 Biomedical image inspection
    • G06T 7/0014 Biomedical image inspection using an image reference approach
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/90 Determination of colour characteristics
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10024 Color image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20092 Interactive image processing based on input by user
    • G06T 2207/20104 Interactive definition of region of interest [ROI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30004 Biomedical image processing
    • G06T 2207/30088 Skin; Dermal
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 50/00 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H 50/20 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for computer-aided diagnosis, e.g. based on medical expert systems
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 2201/00 Indexing scheme relating to scanning, transmission or reproduction of documents or the like, and to details thereof
    • H04N 2201/0077 Types of the still picture apparatus
    • H04N 2201/0079 Medical imaging device

Definitions

  • This invention relates to computer-aided dermatological diagnosis and treatment of skin disorders. More particularly, this invention relates to a system that captures a suitable image of skin of a body part and provides processes for identifying skin disorders in the image. Still more particularly, this invention relates to a system that captures a suitable image of facial skin and provides processes for identifying skin disorders in a region of interest in the image using a reference region from the image.
  • Diagnosing and monitoring of skin disorders depend upon identifying the size and color of the afflicted area of skin. Any number of environmental factors may affect an image and an analysis performed using an image. These include, but are not limited to, the lighting of the room affecting the color of an image, and the angle and distance from which a camera captures the image affecting both the size and shape of the imaged area. Thus, conventional images do not offer an adequate basis for analysis. In the past, numerous systems have been proposed to try to provide uniform lighting, camera angle and distance with respect to the afflicted area to provide consistent images for use in analysis of the area.
  • A first group of prior art systems provides a hand-held device that places a cone over a small portion of skin to block ambient lighting and then illuminates the skin under the cone with a lighting system.
  • The device may then provide viewing optics to view the illuminated skin or a camera to capture an image of the skin.
  • A second group of prior art systems provides systems with a compartment that regulates the light applied to skin on a body part such as a face.
  • This second group of systems often provides a substantially enclosed compartment to block out ambient light and a lighting system in the compartment to provide the desired lighting for the skin, such as described in US patent number 6,993,167, entitled "System and Method for Examining, Recording, and Analyzing Dermatological Conditions", issued to Skladnev et al.
  • However, this system does not provide a camera for taking pictures from varying angles or a method for holding a head or other body part in substantially the same position for different examinations.
  • Some of these second group systems do provide a head rest and other devices for holding the head or other body part in place while the proper lighting is provided in the compartment.
  • However, the cameras provided in these systems are either fixed to capture images from one vantage point or connected to devices for moving the camera within the compartment.
  • The use of devices to position the camera often makes it difficult to capture images from substantially the same view points in subsequent visits.
  • The use of these devices also often makes movement of the camera time consuming and bothersome for a patient enclosed in the compartment.
  • A third group of prior art systems describes systems that receive images of skin, identify defects or blemishes, and recommend a product or treatment. Examples of these systems are disclosed in US patent 6,571,003, entitled "Skin Imaging and Analysis Systems and Methods", issued to Hillebrand et al.; and US Patent Publication 2004/0125996, entitled "Skin Diagnostic Imaging Method and Apparatus", on behalf of Eddowes et al.
  • However, these systems are often used to identify areas of a body part such as a face with blemishes or other defects, and they provide little or no analysis of the condition and no method for monitoring and analysing progress over multiple uses on a patient.
  • A first advantage of a system in accordance with this invention is that a compartment is provided that provides desired lighting for imaging areas of skin such as a facial area of a patient.
  • A second advantage of a system in accordance with this invention is that an array of mirrors and other easily moved components are provided to allow a single stationary camera to capture multiple views of the area in a quick and convenient manner. The array of mirrors can be replaced by an array of cameras to capture multiple views of the area in a quick and convenient manner.
  • A third advantage of a system in accordance with this invention is that applications executed by a computer system can detect, classify, and analyze skin disorders from the image to provide an objective measurement of the disorders and progress made during treatment of the disorder.
  • A skin analysis system may include an imaging compartment, a light source, a camera, and a computer system.
  • The imaging compartment includes a partially enclosed cavity for blocking ambient light.
  • The light source is inside the cavity and provides adequate lighting for the body part.
  • The light source may provide colored, infrared, polarized, ultraviolet, or white light depending on the ailments being detected.
  • A polarizer may be used to control the polarization state of the lighting for the captured image.
  • The polarizer may be movable via an actuator or other means to allow adjustment of the polarization of the light.
  • The camera is also in the cavity and captures images of the body part for analysis.
  • Optical filters or other devices may be used in conjunction with the camera to capture conventional, colored, infrared, polarized, or ultraviolet images depending on the types of skin disorders being detected. These filters may be movable by actuators or other means to allow selection of the type of image captured.
  • The computer system receives an image from the camera and executes instructions to analyse the image to detect skin disorders.
  • The analysis is provided in the following manner.
  • The system receives an image.
  • A reference region representing a natural skin color of the body part is determined from the image.
  • A region of interest, that is, an area of the image to be analyzed for skin disorders, is determined from the image.
  • The system determines one or more problem areas in the region of interest using the reference region as a reference for a natural skin color. Skin disorders in the problem areas are then classified and results of the analysis are generated.
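The analysis flow above can be sketched in a few lines. This is an illustrative sketch only: the Euclidean distance in RGB space, the fixed threshold value, and the use of boolean masks for the two regions are assumptions, not details fixed by the specification.

```python
import numpy as np

def analyse_image(image, reference_mask, roi_mask, threshold=20.0):
    """Compare each region-of-interest pixel against the natural skin
    color estimated from the reference region (illustrative sketch)."""
    # Natural skin color: mean color over the reference region.
    natural = image[reference_mask].mean(axis=0)
    # Per-pixel color difference (assumed: Euclidean distance in RGB).
    diff = np.linalg.norm(image.astype(float) - natural, axis=-1)
    # Problem pixels: large difference, restricted to the region of interest.
    problem = (diff > threshold) & roi_mask
    return natural, problem
```

In practice the masks would come either from user input or from automatic feature identification, as described later in the specification.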
  • The image compartment may allow a single camera to take single-view images of a body part, such as a face of a patient, and a split image of a left and right oblique of the body part.
  • The image compartment may include a split image device.
  • The split image device may be a prism device and mirrors in some embodiments.
  • The prism device may be a prism for refracting images from associated mirrors.
  • The prism device may include angled mirrors that reflect the image from the associated mirrors.
  • The prism device is configured in the cavity to be between the camera and the body part.
  • The mirrors are configured in the cavity to reflect images of the body part onto the prism device.
  • The camera is then positioned with respect to the prism device to capture a split image showing a first side and a second side of the body part.
  • The prism device and mirrors may be movable to allow both single-view and split images of the body part to be captured.
  • The prism and mirrors may be replaced by cameras to capture images showing a first side and a second side of the body part.
  • A rest stand may also be included in the cavity.
  • The rest stand is configured in the cavity to position the body part in a substantially constant position in the cavity with relation to the camera.
  • The rest stand may further include a chin rest defined as an indentation in the rest stand to allow a chin to rest in the indentation.
  • The image compartment includes a back side wall, a first side wall, and a second side wall.
  • A first mirror is affixed to the first side wall and a second mirror is affixed to the second side wall.
  • The positioning of the first and second mirrors is adjustable.
  • The positions of the mirrors are adjusted by moving the first and second side walls.
  • The side walls may be connected to the back side wall by hinges. These hinges may be movable by controlling an actuator connected to each hinge.
  • The image compartment may also include a top covering in accordance with the preferred embodiment.
  • The top cover at least partially encloses the cavity and may include movable panels that can be adjusted to block the ambient light.
  • A head guide may also be affixed to the cover and extend into the compartment to aid in aligning a head or other body part of a subject to capture images of a face of the subject.
  • The applications for analyzing an image may be configured in the following manner.
  • The system may determine the reference region and regions of interest in an image either by receiving an input of the regions from a user or by using conventional feature identification methods to find areas of an imaged body part likely to provide the reference region or a region of interest.
  • The process of detecting problem areas may be performed in the following manner in accordance with some embodiments of this invention.
  • The process determines a natural pixel color from the pixels in the reference region and computes a color difference from the natural pixel color for each pixel in a region of interest.
  • The entire image, or the reference region and/or region of interest, may be normalized prior to performing the calculations.
  • Pixels having a color difference greater than a threshold are identified.
  • Like identified groups, that is, groups of the identified pixels having similar color differences, are formed using connected component analysis. The process then identifies areas of pixels in sets of connected like identified groups.
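The connected component analysis step can be sketched as a breadth-first flood fill over a map of per-pixel labels (for example, labels distinguishing color-difference magnitudes). The choice of 4-connectivity is an assumption; the specification does not fix the connectivity.

```python
import numpy as np
from collections import deque

def connected_groups(labels):
    """Group 4-connected pixels that share the same non-zero label.
    Returns a list of groups, each a list of (y, x) coordinates."""
    h, w = labels.shape
    seen = np.zeros((h, w), dtype=bool)
    groups = []
    for y in range(h):
        for x in range(w):
            if labels[y, x] == 0 or seen[y, x]:
                continue
            # Breadth-first search collecting one connected component.
            group, queue = [], deque([(y, x)])
            seen[y, x] = True
            while queue:
                cy, cx = queue.popleft()
                group.append((cy, cx))
                for ny, nx in ((cy - 1, cx), (cy + 1, cx),
                               (cy, cx - 1), (cy, cx + 1)):
                    if (0 <= ny < h and 0 <= nx < w and not seen[ny, nx]
                            and labels[ny, nx] == labels[cy, cx]):
                        seen[ny, nx] = True
                        queue.append((ny, nx))
            groups.append(group)
    return groups
```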
  • The determining of like identified groups is performed in the following manner.
  • The process begins by constructing a color difference histogram for the pixels in the region of interest.
  • A distribution model is then fitted over the histogram.
  • High and low thresholds are then determined from the distribution model.
  • Each pixel in the region of interest having a color difference that is greater than the high threshold is identified as a large color difference pixel.
  • Each pixel in the region of interest having a color difference that is greater than the low threshold and less than the high threshold is identified as a medium color difference pixel and the remainder of the pixels are identified as small color difference pixels.
  • Each like identified group includes connected pixels having the same type of color difference.
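A minimal sketch of this three-way labelling follows, assuming a normal distribution fitted to the color differences and thresholds placed at one and two standard deviations above the mean; the specification names neither the distribution model nor the threshold rule, so both are assumptions.

```python
import numpy as np

def classify_differences(diff, roi_mask):
    """Label each region-of-interest pixel as a small (0), medium (1),
    or large (2) color difference pixel using thresholds derived from
    an assumed normal fit to the difference distribution."""
    values = diff[roi_mask]
    mu, sigma = values.mean(), values.std()
    low, high = mu + sigma, mu + 2 * sigma   # assumed threshold rule
    labels = np.zeros(diff.shape, dtype=np.uint8)   # 0 = small
    labels[(diff > low) & roi_mask] = 1             # medium
    labels[(diff > high) & roi_mask] = 2            # large
    return labels, low, high
```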
  • The process for classifying a skin disorder in a region is performed by dividing the area with connected like identified groups of pixels into sample squares and classifying each sample square.
  • The classifying of the sample squares is performed by extracting features from the sample squares and performing a classifier process on the extracted features.
  • The classifier may be a Bayesian classifier, a support vector machine, or another method.
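A Bayesian classifier of the simplest kind, Gaussian naive Bayes, can be sketched as follows. The feature vectors (for example, mean color and color variance per sample square) and training labels are assumed inputs, and this stands in for whichever classifier an implementation actually uses; the specification equally permits a support vector machine or other methods.

```python
import numpy as np

class NaiveBayes:
    """Minimal Gaussian naive-Bayes classifier for feature vectors
    extracted from sample squares (illustrative sketch only)."""

    def fit(self, X, y):
        self.classes = np.unique(y)
        # Per-class feature means, standard deviations, and priors.
        self.stats = {c: (X[y == c].mean(axis=0),
                          X[y == c].std(axis=0) + 1e-9,
                          np.mean(y == c)) for c in self.classes}
        return self

    def predict(self, X):
        scores = []
        for c in self.classes:
            mu, sd, prior = self.stats[c]
            # Log-likelihood under an independent Gaussian per feature.
            logp = -0.5 * (((X - mu) / sd) ** 2
                           + np.log(2 * np.pi * sd ** 2))
            scores.append(logp.sum(axis=1) + np.log(prior))
        return self.classes[np.argmax(scores, axis=0)]
```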
  • One process for determining the sample squares to classify skin disorders is performed in the following manner. First, a primary minimum bounding box including all pixels in the group is determined. A morphological operator is then applied to dilate the boundary of the primary minimum bounding box. Each group of neighboring medium color difference pixels in the primary bounding box is formed into a group of connected components. Each group of neighboring large color difference pixels in the primary bounding box is also formed into a group of connected components. A secondary minimum bounding box is then formed for each group of connected components. Each secondary minimum bounding box determined to be entirely within the primary minimum bounding box is returned as a sample square.
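The bounding-box steps of this first process might be sketched as follows, working on lists of pixel coordinates; the dilation radius and the representation of connected-component groups as coordinate lists are assumptions.

```python
def bounding_box(coords):
    """Minimum bounding box (y0, x0, y1, x1) of a list of (y, x) pixels."""
    ys, xs = zip(*coords)
    return min(ys), min(xs), max(ys), max(xs)

def sample_squares(primary_pixels, component_groups, dilate=2):
    """Dilate the primary minimum bounding box, then keep each
    component's secondary bounding box that lies entirely inside it.
    `dilate` is an assumed dilation radius."""
    y0, x0, y1, x1 = bounding_box(primary_pixels)
    y0, x0, y1, x1 = y0 - dilate, x0 - dilate, y1 + dilate, x1 + dilate
    squares = []
    for group in component_groups:
        gy0, gx0, gy1, gx1 = bounding_box(group)
        # Return only secondary boxes entirely within the dilated primary box.
        if gy0 >= y0 and gx0 >= x0 and gy1 <= y1 and gx1 <= x1:
            squares.append((gy0, gx0, gy1, gx1))
    return squares
```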
  • A second process for determining the sample squares in accordance with other embodiments of this invention is performed in the following manner. First, the process determines a primary minimum bounding box including pixels from the group. The primary bounding box is then divided into a set of squares of a particular size and a ratio of pixels of interest in each square in the set is determined. The process then identifies squares with a ratio greater than a threshold and returns the identified squares as sample squares. Depending on the embodiment, the pixels of interest may be large, medium, and/or small color difference pixels. The process is repeated for different sets of equal sized squares, with each set having squares of different areas, to provide sample squares of different sizes.
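This second process reduces to tiling a mask of pixels of interest with fixed-size squares and keeping those squares that are sufficiently covered; the 0.5 ratio threshold used here is an assumed value, and repeating the call with different `size` values yields the multi-scale sample squares described.

```python
import numpy as np

def grid_sample_squares(interest_mask, size, ratio_threshold=0.5):
    """Tile the mask (e.g. the primary bounding box region) with
    size x size squares; keep those whose fraction of pixels of
    interest exceeds the threshold."""
    h, w = interest_mask.shape
    squares = []
    for y in range(0, h - size + 1, size):
        for x in range(0, w - size + 1, size):
            patch = interest_mask[y:y + size, x:x + size]
            # mean() of a boolean patch is the ratio of pixels of interest.
            if patch.mean() > ratio_threshold:
                squares.append((y, x, size))
    return squares
```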
  • Figure 1 illustrating components of a skin analysis system in accordance with one embodiment of this invention
  • Figure 2 illustrating a block diagram of a processing system included in a computer in accordance with one embodiment of this invention
  • Figure 3 illustrating a perspective view of an image compartment in accordance with one embodiment of this invention
  • Figure 4 illustrating a view downward into an image compartment in accordance with one embodiment of this invention
  • Figure 5 illustrating a side view inside an image compartment in accordance with one embodiment of this invention
  • Figure 6 illustrating the manner in which a split view of a left and right oblique of a face is captured in one image in accordance with one embodiment of this invention
  • Figure 7 illustrating a frontal image of a face captured with a camera in accordance with one embodiment of this invention
  • Figure 8 illustrating a split view image of a left and right oblique of a face in accordance with one embodiment of this invention
  • Figure 9 illustrating a split view image of a left and a right oblique of a face with a reference region highlighted in accordance with one embodiment of this invention
  • Figure 10 illustrating a split view image of a left and a right oblique of a face with two different manners of region of interest being highlighted in accordance with one embodiment of this invention
  • Figure 11 illustrating a flow diagram of a process for analysing an image in accordance with one embodiment of this invention
  • Figure 12 illustrating a flow diagram of a process for capturing images of a face in accordance with one embodiment of this invention
  • Figure 13 illustrating a flow diagram of a process for reporting results in accordance with one embodiment of this invention
  • Figure 14 illustrating a flow diagram of a process for classifying skin disorders in an area of a region of interest in accordance with one embodiment of this invention
  • Figure 15 illustrating a flow diagram of a process for classifying pixels in the region of interest in accordance with one embodiment of this invention
  • Figure 16 illustrating a flow diagram for a first process for determining sample squares in the region of interest in accordance with one embodiment of this invention
  • Figure 17 illustrating a flow diagram for a second process for determining sample squares in the region of interest in accordance with another embodiment of this invention
  • Figure 18 illustrating a flow diagram of a process for classifying skin disorders in sample squares of a region of interest in accordance with one embodiment of this invention.
  • This invention relates to computer-aided dermatological diagnosis and treatment of skin disorders. More particularly, this invention relates to a system that captures a suitable image of skin of a body part and provides processes for identifying skin disorders in the image. Still more particularly, this invention relates to a system that captures a suitable image of facial skin and provides processes for identifying skin disorders in a region of interest in the image using a reference region from the image. For clarity, the same components shown in more than one figure are given the same reference numerals throughout this description.
  • This invention includes components for capturing images of a body that are then analyzed by applications executed by a processing system such as a computer.
  • The processes described herein are instructions stored in software, hardware or firmware that are executed by a processing system to perform the processes described and may be executed on any processing system connected to a network.
  • The exact processing system executing the applications and the exact connection of the processing system to a processing system used by a user are not important to this invention and are left as a design choice to those skilled in the art.
  • FIG. 1 illustrates skin analysis system 100 in accordance with one embodiment of this invention.
  • Skin analysis system 100 includes computer system 105 and image compartment 110.
  • Computer system 105 executes applications for receiving an image from a camera (Not shown in Figure 1 ) inside image compartment 110.
  • Computer system 105 may be any type of processing device having a processor and a memory that meet requirements for executing the software applications in accordance with this invention.
  • Computer system 105 may be connected to another processing system 120 via a network 115.
  • The applications of this invention may either be stored on or executed by the connected processing system 120 without departing from this invention.
  • The exact network configuration and connections of devices are also unimportant to this invention and are left as a design choice.
  • Image compartment 110 at least partially encloses a cavity in which images are captured in accordance with this invention.
  • Image compartment 110 blocks ambient light from the surrounding environment and provides proper lighting of an enclosed body part such as a head to provide images that may be analyzed by a process provided in accordance with this invention.
  • A more complete description of image compartment 110 is provided below with reference to Figure 3.
  • FIG. 2 illustrates an exemplary processing system 200 of computer 105 in accordance with an embodiment of this invention.
  • Processing system 200 includes the components needed to execute the applications from instructions stored in memory in accordance with this invention.
  • One skilled in the art will recognize that the exact configuration of each processing system may be different and the exact configuration executing processes in accordance with this invention will vary and the figure is given by way of example only.
  • Processing system 200 includes Central Processing Unit (CPU) 205.
  • CPU 205 is a processor, microprocessor, or any combination of processors and microprocessors that execute instructions to perform the processes in accordance with the present invention.
  • CPU 205 connects to memory bus 210 and Input/Output (I/O) bus 215.
  • Memory bus 210 connects CPU 205 to memories 220 and 225 to transmit data and instructions between the memories and CPU 205.
  • I/O bus 215 connects CPU 205 to peripheral devices to transmit data between CPU 205 and the peripheral devices.
  • I/O bus 215 and memory bus 210 may be combined into one bus or subdivided into many other busses and the exact configuration is left to those skilled in the art.
  • A non-volatile memory 220, such as a Read Only Memory (ROM), is connected to memory bus 210.
  • Non-volatile memory 220 stores instructions and data needed to operate various sub-systems of processing system 200 and to boot the system at start-up.
  • A volatile memory 225, such as Random Access Memory (RAM), is also connected to memory bus 210.
  • Volatile memory 225 stores the instructions and data needed by CPU 205 to perform software instructions for processes such as the processes for providing a system in accordance with this invention.
  • I/O device 230 is any device that transmits and/or receives data from CPU 205.
  • A digital camera is an I/O device 230 connected to processing system 200 in accordance with this invention. Those skilled in the art will recognize that any number of I/O devices 230 may be connected to processing system 200 without departing from this invention.
  • Keyboard 235 is a specific type of I/O device that receives user input and transmits the input to CPU 205.
  • Display 240 receives display data from CPU 205 and displays images on a screen for a user to see.
  • Memory 245 is a device that transmits and receives data to and from CPU 205 for storing data to a media.
  • Network device 250 connects CPU 205 to a network for transmission of data to and from other processing systems.
  • Figure 3 illustrates one embodiment of image compartment 110 in accordance with this invention.
  • Image compartment 110 includes back side wall 300.
  • First side wall 315 and second side wall 320 extend outward from the same surface of back side wall 300 and may be integral to back side wall 300. These side walls block ambient light from a body part, such as a face that is being imaged.
  • First side wall 315 and second side wall 320 are affixed to back side wall 300 by first and second hinges (Not Shown) to allow the first and second side walls to be rotated in relation to back side wall 300.
  • These hinges may be connected to one or more actuators (Not Shown) that may be controllable to adjust the angles of sidewalls as described below.
  • Back side wall 300, first side wall 315 and second side wall 320 may be placed on a base or may be free standing to rest on a platform such as a table, counter, or shelf.
  • Cover 340 may be provided over the enclosure, termed a cavity for this discussion, formed by back side wall 300, first side wall 315 and second side wall 320.
  • The cover further blocks ambient light.
  • The cover may partially or totally enclose the cavity.
  • Cover 340 may be made of moveable panels (Not Shown) to allow cover 340 to be adjusted to block the ambient light.
  • Camera 305 is either affixed to or placed proximate back side wall 300.
  • Camera 305 is placed at a height and levelled to capture both a full frontal picture of a body part, such as a face, and a split view image of the left oblique and right oblique of the body part.
  • Camera 305 is a digital imaging device that may directly transmit images to a connected computer.
  • Conventional and other types of imaging devices may be used in conjunction with other I/O devices and drivers to provide images to a computer without departing from this invention.
  • Camera 305 may also capture colored, infrared, polarized and/or ultraviolet images of the body part for detection of particular skin disorders without departing from this invention.
  • Optical filter 390 is positioned in between camera 305 and the body part being imaged.
  • Optical filter 390 may be used to allow specific wavelengths of light to enter a lens of camera 305.
  • Optical filter 390 may be a polarizer that controls the polarization of light that enters the lens of camera 305.
  • An actuator (Not shown) or some other means may be provided for adjusting the position of optical filter 390.
  • Thus, optical filter 390 may be moved out of a line of sight of camera 305 if necessary.
  • Light sources 345 are affixed to back side wall 300.
  • Light sources 345 are positioned and of sufficient luminescence to provide clear images of the body part for analysis. The exact positioning and luminescence are left to a designer of the system. Depending on the skin disorders being detected, light sources may also or alternatively provide colored, infrared, polarized, and/or ultraviolet lighting of the body part.
  • Rest 330 is preferably positioned at a set position with respect to back side wall 300, first side wall 315 and second side wall 320. The set position allows the body part to be positioned in substantially the same position so that images are taken from substantially the same view point in subsequent sessions.
  • The rest may also be free standing without departing from this invention.
  • An indentation in the top surface, such as a chin rest 335, may be provided to aid in properly aligning the body part and to provide comfort to a patient whose body part is being imaged.
  • A secondary rest such as head guide 505 (Shown in Figure 5) may be added for further comfort and/or alignment of the imaged body part.
  • Image compartment 110 includes a prism device 325 situated between rest 330, or the body part area of the cavity, and camera 305.
  • Prism device 325 may be a prism that refracts light, an array of angled mirrors, or any other device to alter the image captured by camera 305.
  • Prism device 325 is movable between a first position, where prism device 325 is positioned with respect to camera 305 to provide the split image, and a second position, where prism device 325 is positioned out of a view line of camera 305 to allow a frontal image of the body part to be captured.
  • Prism device 325 may include or be connected to an actuator (Not shown) to move between the first and second positions.
  • Mirrors 350 and 355 are affixed to first side wall 315 and second side wall 320 respectively.
  • The mirrors are positioned on the side walls to reflect images of first and second sides of a body part onto prism device 325 to provide the split view of the left oblique and right oblique of the body part.
  • The angle of mirrors 350 and 355 with respect to the body part and prism device 325 may be adjusted by movement of first side wall 315 and second side wall 320.
  • Alternatively, the positions of the side walls may be set and mirrors 350 and 355 may be movable on pivoting mounts connecting mirrors 350 and 355 to the first and second side walls.
  • Figure 4 illustrates an alternative configuration of components inside compartment 110.
  • Camera 405 is positioned proximate a back side wall.
  • Mirrors 410 are positioned on opposing sides of camera 405 and are angled to reflect a view of the left oblique and right oblique of head 425 onto prism 415.
  • Light sources 445 are positioned on opposing sides of each mirror 410 and positioned to provide a consistent luminescence to the surface of head 425. As stated above, light sources 445 may provide white, infrared, and/or ultraviolet lighting depending on the skin disorder being analyzed.
  • Prism device 415 is positioned in front of camera 405 between head 425 and camera 405 and is movable between the first and second positions as described above with respect to Figure 3.
  • prism device 415 is a prism device as described with respect to Figure 3 above (prism device 325) and is positioned to receive reflections from mirrors 410 and refract and/or reflect the reflections onto a focusing lens 450.
  • Focusing lens 450 is positioned between prism device 415 and camera 405 to direct the refracted images from prism 415 onto a lens of camera 405.
  • Optical filter 490 is between focusing lens 450 and prism device 415.
  • Optical filter 490 allows specific wavelengths of light to enter a lens of camera 405.
  • optical filter 490 may be a polarizer that controls the polarization of light that enters the lens of camera 405.
  • An actuator (not shown) or some other means may be provided for adjusting the position of optical filter 490 between the first and second positions as described with respect to Figure 3. Thus, optical filter 490 may be moved out of a line of sight of camera 405 if necessary.
  • prism device 415 and optical filter 490 are removed and mirrors 410 are replaced by cameras to capture the left oblique and right oblique views of head 425, while at the same time, camera 405 captures a frontal view of head 425.
  • Figure 5 illustrates a cross-sectional view of the configuration shown in Figure 4 with prism 415 and focusing lens 450 in a second position to allow a frontal image of head 425 to be captured.
  • a chin of head 425 may rest on a rest 510 and a head guide 505 may extend downward from a top cover and rest on a forehead to help position head 425 to capture an image of a single view of the body part from one view point, such as a frontal view of head 425.
  • Figure 6 illustrates the paths travelled by light in image compartment 110 when prism 415 and focusing lens 450 are in a first position to provide a split image of a left oblique and a right oblique of head 425.
  • An image of the right side of the face travels along path 605 to mirror 410.
  • Mirror 410 reflects the right image at an angle and the right image travels along path 615 to prism device 415.
  • Prism device 415 refracts and/or reflects the right image to cause the right image to travel along path 625 to a right side of optical filter 490.
  • Optical filter 490 then directs the right image along path 630 onto a right side of focusing lens 450 of camera 405.
  • the left image travels from a left side of the face of head 425 along path 610 to a second mirror 410 on the left side of head 425.
  • Second mirror 410 reflects the left image along path 620 onto prism device 415.
  • Prism device 415 refracts and/or reflects the left image along path 635 onto a left side of optical filter 490.
  • Optical filter 490 then directs the left image along path 640 onto a left side of focusing lens 450 of camera 405.
  • Camera 405 then captures a split image having both the left side image and right side image of the face of head 425.
  • Figure 7 illustrates first image 700 that may be captured by a camera in accordance with this invention.
  • Image 700 is a frontal view of face 705 that has two problem skin regions 710 and 715.
  • Image 700 is a conventional frontal image. It should be noted that image 700 should present face 705 at a sufficient size and magnification to optimize analysis of problem skin regions 710 and 715. The exact size and magnification are left as a design choice that may depend on the software being used, lighting conditions, and other environmental and/or system conditions.
  • Figure 8 illustrates a split image 800 captured by a camera in accordance with embodiments of this invention. In Figure 8, a left oblique and a right oblique of face 705 are shown in image 800. The right oblique clearly shows problem region 710 and the left oblique clearly shows problem region 715.
  • While split image 800 is used for illustrative purposes, one skilled in the art will recognize that any type of image and/or multiple images may be analyzed using the described processes in accordance with this invention.
  • the processes may be stored as instructions in software, hardware, or firmware of a processing system having sufficient processing and memory parameters to execute the processes.
  • the exact parameters are left as a design choice to those skilled in the art implementing a system in accordance with this invention.
  • the programming of these exact instructions may be done in any number of programming languages using any number of platforms without departing from this invention and are left as a design choice for those skilled in the art.
  • FIG 11 illustrates a flow diagram of process 1100 for analyzing skin disorders from images in accordance with an embodiment of this invention.
  • Process 1100 begins in step 1105 by receiving an image.
  • receiving of the image may include capturing the image or images with a digital camera and transmitting the image to a processing system.
  • a method for capturing the image is described in process 1200 shown in Figure 12.
  • the image may be received by transmission over a network or from a read operation performed on a memory either internal to or external of the processing system.
  • Figure 9 illustrates a reference region 905 provided in step 1110.
  • the reference region may be input by a user. The input may be made by "dragging and dropping" a preconfigured shape over a region of the image or by use of an I/O device to draw a shape around the region of interest.
  • a process may select a region of interest based upon either a feature finding process or a process that looks for a contiguous group of pixels having substantially the same pixel color.
  • the reference region should be of sufficient size in terms of the number of pixels in the image to give an adequate sample of the natural skin color of a patient in the image. The exact number of pixels needed for such a sample is left as a design choice to those skilled in the art.
  • regions of interest to analyze are received.
  • the regions of interest are particular regions of pixels in the image to be evaluated. Although it is possible in some embodiments to analyze the entire image, regions of interest are used to reduce the number of computations and steps of the processes that need to be performed during the analysis.
  • Figure 10 shows regions 1005 and 1010 selected as regions of interest. These regions of interest include problem regions 710 and 715 shown in Figure 8. In some embodiments, the regions of interest may be input by a user.
  • the input may be made by "dragging and dropping" a preconfigured shape (as shown by region of interest 1010, in which an ellipse is used) over a region of the image or by use of an I/O device to draw a shape around the region of interest (as shown by region 1005, which is an amorphous shape drawn around problem region 710).
  • a process may select a region of interest based upon either a feature finding process or a process that looks for contiguous groups of pixels having substantially different pixel color.
  • the regions of interest should be of sufficient size to contain most if not all of the problem patches of skin shown in an image. The exact number of pixels needed for such a sample is left as a design or implementation choice of those skilled in the art.
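The automatic selection of a region of interest from a contiguous group of pixels, as described above, can be sketched as a simple region-growing routine. This is an illustrative sketch only, not the patent's implementation: the function name, the seed input, the 4-connectivity, and the tolerance parameter are all assumptions, since the patent does not prescribe a particular algorithm.

```python
from collections import deque

def find_region_of_interest(pixels, seed, tolerance):
    """Grow a region of contiguous pixels whose values stay within
    `tolerance` of the seed pixel's value, using 4-connectivity.

    `pixels` is a list of rows of scalar color values; `seed` is a
    (row, col) tuple.  Returns the set of (row, col) coordinates in
    the grown region.
    """
    rows, cols = len(pixels), len(pixels[0])
    seed_value = pixels[seed[0]][seed[1]]
    region, frontier = {seed}, deque([seed])
    while frontier:
        r, c = frontier.popleft()
        # Visit the four axis-aligned neighbors of the current pixel.
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and (nr, nc) not in region
                    and abs(pixels[nr][nc] - seed_value) <= tolerance):
                region.add((nr, nc))
                frontier.append((nr, nc))
    return region
```

A feature-finding variant could supply the seed automatically; here the seed is a user-style input, matching the "drag and drop" alternative described above.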
  • Process 1100 then analyzes the region(s) of interest in step 1120.
  • the analysis performed detects and classifies various skin disorders, including but not limited to types of acne.
  • the types of acne that may be classified include, but are not limited to, papules; pustules; open and closed comedones; scars; and pores.
  • this analysis performed may also detect skin disorders that may be identified using prior art methods including but not limited to pigmentation variations; pores; wrinkles; color tone variation; blood and melanin distribution; sun damage; and skin cancer.
  • the process used to perform the analysis is discussed below in respect to Figures 14-18.
  • Process 1100 then ends after generating and storing the results of the analysis in step 1125. A complete description and method for displaying the results are described below with respect to Figure 13.
  • Figure 12 illustrates a flow diagram for capturing images in accordance with this invention.
  • the captured images are then transmitted to computer 100 from a digital imager such as a camera in step 1105 of process 1100.
  • Process 1200 begins in step 1205 by having a patient adjust a position of his or her face or other body part to cause the image presented by the camera to a display to align with a template or other alignment indicator also displayed. This may be done by making minor adjustments to a head or other body part placed in image compartment 110 in some embodiments of this invention.
  • Step 1210 may be performed by moving and/or adjusting optical filters or by changing the illumination provided by a light source.
  • the removal of specular reflections may be performed using two cross-polarizing filters oriented perpendicular to one another. If the skin surface features are to be analysed, the two polarizing filters are oriented parallel to one another. A frontal image of the face or other body part is then captured in step 1215.
  • In step 1220, the components are configured to capture a split image.
  • Step 1220 may include moving a prism device and optical filter into a first position and adjusting the position of mirrors.
  • a split image of the left oblique and right oblique of the face or other body part is captured in step 1225.
  • Process 1200 then ends.
  • FIG. 13 illustrates a process 1300 for reporting results in accordance with an embodiment of this invention.
  • Process 1300 begins in step 1305 with reporting of identified problem areas in the regions of interest. Preferably, this is done by a visual display with either indicia around identified problem areas or a color scheme of the image of the region of interest indicating various features such as problem areas and/or different types of problem areas.
  • the alignment may be an overlay of the current and previous images, color coded images distinguishing differences, or any other presentation that a designer may want to provide to convey the difference between images to a user and/or patient.
  • Process 1300 then ends after step 1327 when all of the presentation and/or images analyzed are stored to a connected memory for record keeping and future use.
  • FIG 14 illustrates process 1400 for classifying skin disorders in regions of interest in an image in accordance with embodiments of this invention.
  • Process 1400 begins in step 1402 by normalizing the image. This may be done by flattening the brightness or value V in the Hue-Saturation-Value (HSV) color space to a predefined value, such as, but not limited to, 0.8. After the image has been normalized, a natural pixel color is determined in step 1405 from the pixels in the reference region previously received or determined.
  • the color difference of each pixel from the natural pixel color is determined in step 1410.
  • the color difference of a pixel is the absolute value of the natural pixel color subtracted from the color value of the pixel.
  • In step 1415, the pixels having a color difference above an interest threshold are identified.
  • the interest threshold is a predefined value that is dependent upon the skin disorders being studied, the quality of the images, and quality of the disorder data and is therefore left as a design choice of a system in accordance with this invention.
  • One method for determining the threshold and identifying pixels is described with reference to Figure 15 below.
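The steps above, from normalization in step 1402 through identification in step 1415, can be sketched in a few lines. This is an illustrative sketch only: the mean reference color, the Euclidean distance metric, and all function names are assumptions, since the patent leaves the exact color difference computation and threshold open.

```python
import colorsys

def normalize_value(rgb, target_v=0.8):
    """Step 1402: flatten the V channel of one RGB pixel (components
    in [0, 1]) to a predefined value such as 0.8."""
    h, s, _ = colorsys.rgb_to_hsv(*rgb)
    return colorsys.hsv_to_rgb(h, s, target_v)

def natural_color(reference_pixels):
    """Step 1405: natural pixel color, taken here as the mean color
    of the reference region (an assumed choice of estimator)."""
    n = len(reference_pixels)
    return tuple(sum(p[i] for p in reference_pixels) / n for i in range(3))

def color_difference(pixel, natural):
    """Step 1410: per-pixel color difference.  Euclidean distance in
    RGB is one possible metric; the patent leaves the metric open."""
    return sum((a - b) ** 2 for a, b in zip(pixel, natural)) ** 0.5

def pixels_of_interest(region_pixels, natural, threshold):
    """Step 1415: pixels whose difference exceeds the interest threshold."""
    return [p for p in region_pixels
            if color_difference(p, natural) > threshold]
```

In practice these operations would be applied over every pixel of the region of interest; they are written per-pixel here to keep the correspondence with the numbered steps visible.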
  • the identified pixels are then grouped into groups of like identified pixels in step 1430.
  • the grouping into like identified groups may be performed using a conventional connected component analysis algorithm.
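The grouping in step 1430 can be illustrated with a minimal flood-fill labeling over identified pixel coordinates. The 8-connectivity and function name are assumptions; this stands in for any conventional connected component analysis algorithm.

```python
def connected_components(identified):
    """Group identified pixel coordinates into 8-connected components.

    `identified` is a set of (row, col) tuples of like identified
    pixels; returns a list of component sets.
    """
    remaining = set(identified)
    components = []
    while remaining:
        # Start a new component from any unvisited pixel.
        seed = remaining.pop()
        component, frontier = {seed}, [seed]
        while frontier:
            r, c = frontier.pop()
            # Absorb all 8-connected neighbors still unvisited.
            for dr in (-1, 0, 1):
                for dc in (-1, 0, 1):
                    neighbor = (r + dr, c + dc)
                    if neighbor in remaining:
                        remaining.remove(neighbor)
                        component.add(neighbor)
                        frontier.append(neighbor)
        components.append(component)
    return components
```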
  • sample squares are selected in step 1435 for analysis to determine the skin disorder shown. Methods for determining the sample squares are described with respect to Figures 16 and 17 below.
  • the skin disorders are then identified or classified from the sample squares in step 1440. A process for classifying the skin disorders from the sample squares is described with respect to Figure 18 below. Process 1400 then ends.
  • FIG. 15 illustrates a flow diagram of process 1500 for classifying pixels of interest in a region of interest in accordance with an embodiment of this invention.
  • Process 1500 begins in step 1505 by constructing a histogram of the color differences of the pixels in the region(s) of interest.
  • a distribution model is then fitted over the histogram.
  • a Gaussian distribution model is used.
  • other distribution models may be used without departing from this invention.
  • a high threshold and a low threshold are determined in step 1515.
  • the high and low thresholds may be set at predetermined numbers of standard deviations from the mean.
  • one skilled in the art can use any number of methods for determining these thresholds without departing from the invention.
  • each pixel having a pixel color difference greater than the high threshold is identified as a large color difference pixel in step 1520.
  • each pixel in the region(s) of interest having a pixel color difference that is greater than the low threshold and less than the high threshold is identified as a medium color difference pixel.
  • Each pixel having a pixel color difference less than the low threshold or each remaining pixel is then identified as a small color difference pixel in step 1530 and process 1500 ends.
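Process 1500 can be sketched as follows. Fitting a Gaussian to the color difference histogram reduces to estimating the mean and standard deviation of the differences; the standard-deviation multipliers `k_low` and `k_high` are assumed values, since the text only says the thresholds are a predetermined deviation from the mean.

```python
from statistics import mean, stdev

def classify_color_differences(diffs, k_low=1.0, k_high=2.0):
    """Sketch of process 1500 for a list of per-pixel color differences.

    Steps 1505-1510: fitting a Gaussian distribution model amounts to
    estimating its mean and standard deviation.
    Step 1515: low and high thresholds are set a predetermined number
    of standard deviations above the mean.
    Steps 1520-1530: each difference is labeled small, medium, or large.
    """
    mu, sigma = mean(diffs), stdev(diffs)
    low, high = mu + k_low * sigma, mu + k_high * sigma
    labels = []
    for d in diffs:
        if d > high:
            labels.append('large')
        elif d > low:
            labels.append('medium')
        else:
            labels.append('small')
    return labels
```

As the text notes, other distribution models and threshold rules could be substituted without changing the structure of the process.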
  • Figures 16 and 17 illustrate processes for selecting sample squares of pixels for analysis after process 1500 classifies the pixels in accordance with two embodiments of this invention.
  • Figure 16 illustrates process 1600 which is a first process for determining sample squares for analysis in accordance with an embodiment of this invention.
  • Process 1600 begins in step 1605 by defining a primary minimum bounding box (B) that encloses a group of pixels defined as a connected component as discussed in step 1430 of process 1400 (Figure 14). A morphological operator is then applied to the primary minimum bounding box in step 1610 to dilate the boundaries of the group.
  • In step 1615, all identified neighboring medium color difference pixels in the primary bounding box are connected into connected components using a conventional connected component analysis algorithm. All identified large color difference pixels in the primary bounding box are then connected into connected components using a conventional connected component analysis algorithm in step 1620.
  • a secondary minimum bounding box (b) is then formed for each connected component formed in steps 1615 and 1620.
  • a sample square is then generated and returned for each secondary minimum bounding box determined to be entirely within the primary bounding box B in step 1630. The sample square may be a difference image within a square proportional to the secondary minimum bounding box.
  • Process 1600 determines if there are other groups remaining in the image in step 1635 and process 1600 is repeated for each group until no more groups remain to process. Process 1600 then ends.
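The geometry of process 1600 can be sketched as below. The morphological dilation of step 1610 is approximated here by a fixed-margin box expansion, and the margin value and function names are assumptions; pixels are (row, col) tuples and boxes are (r0, c0, r1, c1) tuples.

```python
def bounding_box(pixels):
    """Minimum bounding box (r0, c0, r1, c1) of a set of (row, col) pixels."""
    rows = [r for r, _ in pixels]
    cols = [c for _, c in pixels]
    return min(rows), min(cols), max(rows), max(cols)

def dilate_box(box, margin):
    """Step 1610: the morphological dilation is approximated by
    expanding the box by `margin` pixels on every side."""
    r0, c0, r1, c1 = box
    return r0 - margin, c0 - margin, r1 + margin, c1 + margin

def box_inside(inner, outer):
    """The containment test applied in step 1630."""
    return (inner[0] >= outer[0] and inner[1] >= outer[1]
            and inner[2] <= outer[2] and inner[3] <= outer[3])

def sample_squares(group_pixels, components, margin=2):
    """Steps 1605-1630: secondary minimum bounding boxes of connected
    components that fall entirely within the dilated primary box are
    returned as sample squares."""
    primary = dilate_box(bounding_box(group_pixels), margin)
    return [bounding_box(c) for c in components
            if box_inside(bounding_box(c), primary)]
```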
  • FIG. 17 illustrates a flow diagram of a second process 1700 for determining sample squares in accordance with another embodiment of this invention.
  • Process 1700 begins in step 1705 by defining a minimum bounding box B that encloses a group of pixels defined as a connected component as discussed in step 1430 of process 1400 (Figure 14).
  • process 1700 places a square box of a predetermined size at each location in the minimum bounding box. It is left to those skilled in the art to determine locations for placement of the boxes. However, the location of the boxes may be determined as a predetermined set of pixels within the minimum bounding box, a calculated center of a connected component, a calculated center of a group of like components, or in any other manner.
  • the ratio of pixels of interest compared to total number of pixels in each square box is then determined in step 1715.
  • the pixels of interest may be pixels identified as large color difference pixels and/or pixels identified as medium color difference pixels. The exact pixels of interest are left as a design choice.
  • In step 1720, the square box b with the largest ratio is selected.
  • In step 1725, the ratio of the selected square is compared with a threshold.
  • In step 1727, if the ratio is greater than the threshold, then the difference image inside the square box b is output as a sample square, and the pixels in the square box are removed from the difference image within the minimum bounding box B. Thus, any pixels in a set of pixels shared with an overlapping square box are removed from the overlapping box.
  • process 1700 is repeated from step 1715 to determine new ratios for the square boxes. If the largest ratio is not greater than the threshold, process 1700 proceeds to step 1730 and determines whether another set of squares of a different predetermined size has yet to be processed.
  • If so, process 1700 progresses to step 1740 to set the new predetermined size and is repeated from step 1710 with a new set of squares. If not, process 1700 proceeds to step 1735 and determines if another connected component remains. If so, process 1700 repeats from step 1705 with that connected component. If not, process 1700 ends.
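The greedy loop of process 1700 can be sketched for a single square size as follows. Coordinates are (row, col) tuples, squares are placed only at locations lying fully inside the bounding box (one of the placement choices the text leaves open), and the function name and parameters are assumptions.

```python
def select_sample_squares(interest, box, size, ratio_threshold):
    """Sketch of steps 1710-1727 of process 1700 for one square size.

    `interest` is the set of (row, col) pixels of interest inside the
    minimum bounding box `box` = (r0, c0, r1, c1).  The square with
    the largest ratio of pixels of interest is accepted if its ratio
    beats the threshold; its pixels are then removed from the
    difference image and the search repeats.  Returns the top-left
    corners of the accepted squares.
    """
    r0, c0, r1, c1 = box
    remaining = set(interest)
    squares = []
    while True:
        best, best_ratio = None, 0.0
        # Place a size x size square at every location fully inside the box.
        for r in range(r0, r1 - size + 2):
            for c in range(c0, c1 - size + 2):
                hits = sum((pr, pc) in remaining
                           for pr in range(r, r + size)
                           for pc in range(c, c + size))
                ratio = hits / (size * size)
                if ratio > best_ratio:
                    best, best_ratio = (r, c), ratio
        if best is None or best_ratio <= ratio_threshold:
            return squares
        squares.append(best)
        # Step 1727: remove the accepted square's pixels, so overlapping
        # squares do not count them again.
        r, c = best
        remaining -= {(pr, pc) for pr in range(r, r + size)
                      for pc in range(c, c + size)}
```

Steps 1730-1740 would wrap this routine in an outer loop over the set of predetermined square sizes and over the remaining connected components.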
  • Figure 18 illustrates a flow diagram of process 1800 for classifying skin disorders in the sample squares.
  • Process 1800 begins in step 1805 by extracting features from a sample square. Some examples of features that may be extracted include, but are not limited to, color difference histograms and color difference co-occurrence matrices. The extracted features are then used to classify a disorder using a classifier algorithm in step 1810. Some examples of classifier algorithms that may be used include, but are not limited to, Bayesian classifiers and Support Vector Machines. After the sample square is classified, process 1800 determines whether there is another sample square to classify. If so, process 1800 is repeated from step 1805 for a new sample square. Otherwise, process 1800 ends.
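Process 1800 can be sketched as follows. A normalized color difference histogram serves as the extracted feature, and a nearest-centroid rule stands in for the Bayesian or Support Vector Machine classifiers named above; the bin count, the disorder labels, and the function names are assumptions for illustration.

```python
def histogram_features(sample, bins=4, max_diff=1.0):
    """Step 1805: color difference histogram of one sample square,
    normalized so squares of different sizes are comparable.
    `sample` is a flat list of color differences in [0, max_diff]."""
    counts = [0] * bins
    for d in sample:
        idx = min(int(d / max_diff * bins), bins - 1)
        counts[idx] += 1
    total = len(sample)
    return [c / total for c in counts]

def classify(features, centroids):
    """Step 1810 with a nearest-centroid stand-in for the Bayesian or
    SVM classifiers named in the text.  `centroids` maps each disorder
    label to a reference feature vector."""
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: distance(features, centroids[label]))
```

A trained Bayesian classifier or SVM would replace `classify` in a real system; the feature extraction and per-square loop would be unchanged.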

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Medical Informatics (AREA)
  • General Health & Medical Sciences (AREA)
  • Veterinary Medicine (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • Pathology (AREA)
  • Public Health (AREA)
  • Biophysics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Dermatology (AREA)
  • Artificial Intelligence (AREA)
  • Physiology (AREA)
  • Fuzzy Systems (AREA)
  • Mathematical Physics (AREA)
  • Evolutionary Computation (AREA)
  • Psychiatry (AREA)
  • Signal Processing (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)

Abstract

A system for analysing skin disorders from images. The system includes a compartment for controlling the lighting of the skin of a body part and for providing frontal and split oblique images of the body part. A processing system then executes processes that use a reference region from the image to identify and analyze an area in a region of interest from the image to classify detected skin disorders.

Description

METHOD OF ANALYSING SKIN IMAGES USING A REFERENCE REGION TO DIAGNOSE A SKIN DISORDER
Field of the Invention
This invention relates to computer-aided dermatological diagnosis and treatment of skin disorders. More particularly, this invention relates to a system that captures a suitable image of skin of a body part and processes for identifying skin disorders in the image. Still more particularly, this invention relates to a system that captures a suitable image of facial skin and provides processes for identifying skin disorders in a region of interest in the image using a reference region from the image.
Summary of the Prior Art
Dermatologists, general practitioners, and other skin specialists often have difficulty diagnosing and monitoring treatment of various skin disorders. Oftentimes the diagnosis and treatment of disorders are based on the subjective observations of a caregiver. These observations can be affected by any number of personal or environmental factors. Furthermore, the measurement of progress of a treatment program often relies on the memory and subjective observations of a caregiver. Thus, the measurement of progress during treatment is often not accurate. Therefore, caregivers are often late to recognize that an ailment is not responding to the current treatment program. Furthermore, caregivers are often not able to show the progress of a treatment program to a patient to assure the patient that progress is being made. Therefore, those skilled in the art are constantly striving to find better systems for diagnosing and monitoring treatment of skin disorders.
One manner in which those skilled in the art have sought to make diagnosis and monitoring more objective is image processing. However, the use of images presents many problems. First, diagnosing and monitoring of skin disorders depend upon identifying the size and color of the afflicted area of skin. Any number of environmental factors may affect an image and an analysis performed using an image. These include, but are not limited to, the lighting of the room affecting the color of an image, and the angle and distance from which a camera captures the image affecting both size and shape of the imaged area. Thus, conventional images do not offer an adequate alternative. In the past, numerous systems have been proposed to try to provide uniform lighting, camera angle and distance with respect to the afflicted area to provide consistent images for use in analysis of the area. A first group of prior art systems provides a hand held device to place a cone over a small portion of skin to block ambient lighting and then illuminate the skin under the cone with a lighting system. The device may then provide viewing optics to view the illuminated skin or a camera to capture an image of the skin. Examples of these prior art systems are described in GB Patent Number 2,364,376, entitled "Skin Illumination and Examination Apparatus" in the name of Astron Clinics Limited; US Patent Number 4,911,544, entitled "Skin Condition Analyser for Cosmetologists" issued to Walsh; US Patent Number 6,251,070, entitled "Device and Method for Measuring Skin Parameters" issued to Khazaka; US Patent Number 5,825,502, entitled "Device for Close-up Imagery of Surfaces" issued to Mayer; US Patent Publication 2004/0174525, entitled "Dermoscopy Epiluminescence Employing Cross and Parallel Polarization" on behalf of Mullani; and US Patent Publication Number 2004/0257439, entitled "Skin Observing Apparatus" on behalf of Shirai et al.
One problem with these hand held devices is that the surface of skin that can be viewed and/or imaged at one time is limited. Therefore, these devices are not ideal for diagnosing and/or analysing skin disorders such as acne or the like which may be spread over a large amount of skin such as a face or back of a subject.
A second group of prior art systems provides systems with a compartment which regulates the light applied to skin on a body part such as a face. This second group of systems often provides a substantially enclosed compartment to block out ambient light and a lighting system in the compartment to provide the desired lighting for the skin, such as described in US Patent Number 6,993,167, entitled "System and Method for Examining, Recording, and Analyzing Dermatological Conditions" issued to Skladnev et al. However, this system does not provide a camera for taking pictures from varying angles or a method for holding a head or other body part in substantially the same position for different examinations.
Some of these second group systems do provide a head rest and other devices for holding the head or other body part in place while the proper lighting is provided in the compartment. However, the cameras provided in these systems are either fixed to capture images from one vantage point or connected to devices for moving the camera within the compartment. The use of devices to position the camera often makes it difficult to capture images from substantially the same view points in subsequent visits. Furthermore, the use of the devices often makes movement of the camera time consuming and bothersome for a patient enclosed in the compartment. Such second group systems are described in US Patent Publication 2004/0218810, entitled "Systems and Methods for Computer Analysis of Skin Image" on behalf of Momma; WO Publication Number 2005/099575, entitled "Face Imaging Device" on behalf of Moritex Corporation; and US Patent Publication 2006/0092315, entitled "Skin Imaging System with Probe" on behalf of Payonk et al.
A third group of prior art systems describes systems that receive images of skin and identify defects or blemishes and recommend a product or treatment. Examples of these systems are disclosed in US Patent Number 6,571,003, entitled "Skin Imaging and Analysis Systems and Methods" issued to Hillebrand et al.; and US Patent Publication 2004/0125996, entitled "Skin Diagnostic Imaging Method and Apparatus" on behalf of Eddowes et al. However, these systems are often used to identify areas of a body part such as a face with blemishes or other defects and provide little or no analysis of the condition, and they do not provide a method for monitoring and analysing progress over multiple uses on a patient.
Thus, those skilled in the art are constantly striving to provide a system that can quickly and efficiently take images of a large portion of a body part such as a face from multiple view points and further is able to identify and analyse skin disorders from the captured images.
Summary of the Invention
The above and other problems are solved and an advance in the art is made by a skin analysis system in accordance with this invention. A first advantage of a system in accordance with this invention is that a compartment is provided that provides desired lighting for imaging areas of skin of body parts such as a facial area of a patient. A second advantage of a system in accordance with this invention is that an array of mirrors and other easily moved components are provided to allow a single stationary camera to capture multiple views of the area in a quick and convenient manner. The array of mirrors can be replaced by an array of cameras to capture multiple views of the area in a quick and convenient manner. A third advantage of a system in accordance with this invention is that applications executed by a computer system can detect, classify, and analyze skin disorders from the image to provide an objective measurement of the disorders and progress made during treatment of the disorder.
In accordance with embodiments of this invention, a skin analysis system may include an imaging compartment, a light source, a camera, and a computer system. The imaging compartment includes a partially enclosed cavity for blocking ambient light. The light source is inside the cavity and provides adequate lighting for the body part. The light source may provide colored, infrared, polarized, ultraviolet, or white light depending on the ailments being detected. In some embodiments, a polarizer may be used to control the polarization state of the lighting for the captured image. The polarizer may be movable via an actuator or other means to allow adjustment of the polarization of the light.
The camera is also in the cavity and captures images of the body part for analysis. Optical filters or other devices may be used in conjunction with the camera to capture conventional, colored, infrared, polarized, or ultraviolet images depending on the types of skin disorders being detected. These filters may be movable by actuators or other means to allow selection of the type of image captured. The computer system receives an image from the camera and executes instructions to analyze the image to detect skin disorders.
In accordance with these embodiments, the analysis is provided in the following manner. The system receives an image. A reference region representing a natural skin color of the body part is determined from the image. A region of interest that is an area of the image to be analyzed for skin disorders is determined from the image. The system then determines one or more problem areas in the region of interest using the reference region as a reference for a natural skin color. Skin disorders in the problem area are then classified and results of the analysis are generated.
In accordance with some embodiments, the image compartment may allow a single camera to take single view images of a body part such as a face of a patient and a split image of a left and right oblique of the body part. In these embodiments, the image compartment may include a split image device. The split image device may be a prism device and mirrors in some embodiments. In some of these embodiments, the prism device may be a prism for refracting images from associated mirrors. In other embodiments, the prism device may include angled mirrors that reflect the image from the associated mirrors. The prism device is configured in the cavity to be between the camera and the body part. The mirrors are configured in the cavity to reflect images of the body part onto the prism device. The camera is then positioned with respect to the prism device to capture a split image showing a first side and a second side of the body part. In these embodiments, it is preferable that the prism device and mirrors be movable to allow both single view and split images of the body part to be captured. In accordance with some embodiments, the prism and mirrors may be replaced by cameras to capture images showing a first side and a second side of the body part.
In accordance with further embodiments, a rest stand may also be included in the cavity. The rest stand is configured in the cavity to position the body part in a substantially constant position in the cavity with relation to the camera. In embodiments in which the compartment is used for capturing images of the face, the rest stand may further include a chin rest defined as an indentation in the rest stand to allow a chin to rest in the indentation.
In accordance with a preferred embodiment, the image compartment includes a back side wall, a first side wall, and a second side wall. A first mirror is affixed to said first side wall and a second mirror is affixed to the second side wall. The positioning of the first and second mirrors is adjustable. Preferably, the positions of the mirrors are adjusted by moving the first and second side walls. In order to allow the side walls to move, the side walls may be connected to the back side wall by hinges. These hinges may be movable by controlling an actuator connected to each hinge.
The image compartment may also include a top cover in accordance with the preferred embodiment. The top cover at least partially encloses the cavity and may include movable panels that can be adjusted to block the ambient light. A head guide may also be affixed to the cover and extend into the compartment to aid in aligning a head or other body part of a subject to capture images of a face of the subject.
In accordance with some embodiments of this invention, the applications for analyzing an image may be configured in the following manner. The system may determine the reference region and regions of interest in an image either by receiving an input of the regions from a user or by using conventional feature identification methods to find areas of an imaged body part likely to provide the reference region or a region of interest. Once the reference region and the region of interest are determined, the process of detecting problem areas may be performed in the following manner in accordance with some embodiments of this invention. The process determines a natural pixel color from the pixels in the reference region and computes a color difference from the natural pixel color for each pixel in a region of interest. The entire image, or the reference region and/or region of interest, may be normalized prior to performing the calculations.
After the color difference of each pixel in a region of interest is determined, pixels having a color difference greater than a threshold are identified. Like identified groups of pixels, i.e. groups of the identified pixels having similar color differences, are then formed using connected component analysis. The process then identifies areas of pixels in sets of connected like identified groups.
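By way of a non-limiting illustration, the connected component grouping mentioned above may be sketched as follows. This is not the patent's code; the function name, the 4-connectivity choice, and the breadth-first traversal are assumptions about one conventional way such an analysis is implemented:

```python
from collections import deque

def connected_components(mask):
    """Label 4-connected groups of True cells in a 2D boolean mask.

    Returns a list of components, each a list of (row, col) pixels,
    found by a breadth-first flood fill.
    """
    rows, cols = len(mask), len(mask[0])
    seen = [[False] * cols for _ in range(rows)]
    components = []
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and not seen[r][c]:
                comp, queue = [], deque([(r, c)])
                seen[r][c] = True
                while queue:
                    y, x = queue.popleft()
                    comp.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < rows and 0 <= nx < cols \
                                and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                components.append(comp)
    return components

# Toy mask of "identified" pixels: two separate 4-connected groups.
mask = [
    [True,  True,  False, False],
    [False, True,  False, True],
    [False, False, False, True],
]
groups = connected_components(mask)
```

In practice a library routine (e.g. a labelling function from an image-processing package) would typically replace this hand-rolled version.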
In accordance with some embodiments of the invention, the determining of like identified groups is performed in the following manner. First, the process begins by constructing a color difference histogram for the pixels in the region of interest. A distribution model is then fitted over the histogram. High and low thresholds are then determined from the distribution model. Each pixel in the region of interest having a color difference that is greater than the high threshold is identified as a large color difference pixel. Each pixel in the region of interest having a color difference that is greater than the low threshold and less than the high threshold is identified as a medium color difference pixel and the remainder of the pixels are identified as small color difference pixels. Each like identified group includes connected pixels having the same type of color difference.
In accordance with some of these embodiments of the invention, the process for classifying a skin disorder in a region is performed by dividing the area with connected like identified groups of pixels into sample squares and classifying each sample square. Preferably, the classifying of the sample squares is performed by extracting features from the sample squares and performing a classifier process on the extracted features. The classifier may be a Bayesian classifier, a support vector machine, or another method.
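As a non-limiting sketch of the Bayesian classifier option named above, a simple Gaussian naive Bayes classifier over per-square features is shown below. The feature choices, labels, and training values are invented for illustration; the specification does not prescribe them:

```python
import math

def train_gaussian_nb(samples):
    """Fit per-class, per-feature mean and variance.

    samples: {label: [feature_vectors]}.  A small variance floor avoids
    division by zero for constant features.
    """
    model = {}
    for label, vecs in samples.items():
        n, dim = len(vecs), len(vecs[0])
        means = [sum(v[i] for v in vecs) / n for i in range(dim)]
        variances = [sum((v[i] - means[i]) ** 2 for v in vecs) / n + 1e-6
                     for i in range(dim)]
        model[label] = (means, variances)
    return model

def classify(model, features):
    """Return the label with the highest Gaussian log-likelihood."""
    best, best_ll = None, float("-inf")
    for label, (means, variances) in model.items():
        ll = sum(-0.5 * math.log(2 * math.pi * var)
                 - (x - mu) ** 2 / (2 * var)
                 for x, mu, var in zip(features, means, variances))
        if ll > best_ll:
            best, best_ll = label, ll
    return best

# Hypothetical features per sample square, e.g.
# (mean colour difference, fraction of large-difference pixels).
training = {
    "papule":   [(0.60, 0.70), (0.65, 0.75), (0.55, 0.72)],
    "comedone": [(0.20, 0.10), (0.25, 0.15), (0.18, 0.12)],
}
model = train_gaussian_nb(training)
```

A support vector machine, as the specification also allows, would simply replace `classify` while keeping the same feature extraction.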
One process for determining the sample squares to classify skin disorders is performed in the following manner. First, a primary minimum bounding box including all pixels in the group is determined. A morphological operator is then applied to dilate the boundary of the primary minimum bounding box. Each group of neighboring medium color difference pixels in the primary bounding box is formed into a group of connected components. Each group of neighboring large color difference pixels in the primary bounding box is also formed into a group of connected components. A secondary minimum bounding box is then formed for each group of connected components. Each secondary minimum bounding box determined to be entirely within the primary minimum bounding box is returned as a sample square.
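The first sample-square process above can be sketched as follows. This is an approximation and not the patent's implementation: in particular, the morphological dilation of the primary box is approximated here by growing the box by a fixed margin, and all helper names are invented:

```python
def bounding_box(pixels):
    """Minimum bounding box of (row, col) pixels: (r0, c0, r1, c1), inclusive."""
    rows = [p[0] for p in pixels]
    cols = [p[1] for p in pixels]
    return (min(rows), min(cols), max(rows), max(cols))

def dilate_box(box, margin, shape):
    """Grow a box by `margin` on every side, clipped to the image shape."""
    r0, c0, r1, c1 = box
    return (max(r0 - margin, 0), max(c0 - margin, 0),
            min(r1 + margin, shape[0] - 1), min(c1 + margin, shape[1] - 1))

def inside(inner, outer):
    """True if `inner` lies entirely within `outer`."""
    return (outer[0] <= inner[0] and outer[1] <= inner[1]
            and inner[2] <= outer[2] and inner[3] <= outer[3])

def sample_squares(group, component_groups, margin, shape):
    """Return secondary bounding boxes of connected components that lie
    entirely within the dilated primary bounding box of `group`."""
    primary = dilate_box(bounding_box(group), margin, shape)
    return [bounding_box(comp) for comp in component_groups
            if inside(bounding_box(comp), primary)]

group = [(4, 4), (4, 5), (5, 4), (5, 5)]
components = [[(3, 3), (3, 4)],   # near the group: kept as a sample square
              [(0, 0), (0, 1)]]   # far from the group: discarded
squares = sample_squares(group, components, margin=2, shape=(10, 10))
```

A true morphological dilation with a structuring element could replace `dilate_box` without changing the overall flow.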
A second process for determining the sample squares in accordance with other embodiments of this invention is performed in the following manner. First, the process determines a primary minimum bounding box including pixels from the group. The primary bounding box is then divided into a set of squares of a particular size and a ratio of pixels of interest in each square in the set is determined. The process then identifies squares with a ratio greater than a threshold and returns the identified squares as sample squares. Depending on the embodiment, the pixels of interest may be large, medium, and/or small color difference pixels. The process is repeated for different sets of equal-sized squares, with each set having squares of a different area, to provide sample squares of different sizes.
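The grid-based second process can be sketched as follows for a single square size; repeating the call with other `size` values yields the multi-size sets described above. The function name and the particular threshold are assumptions for illustration only:

```python
def grid_sample_squares(box, mask, size, ratio_threshold):
    """Divide `box` into size x size squares and return (row, col, size)
    for each square whose ratio of pixels of interest (True in `mask`)
    exceeds `ratio_threshold`."""
    r0, c0, r1, c1 = box
    squares = []
    for r in range(r0, r1 + 1, size):
        for c in range(c0, c1 + 1, size):
            cells = [(y, x) for y in range(r, min(r + size, r1 + 1))
                            for x in range(c, min(c + size, c1 + 1))]
            hits = sum(1 for y, x in cells if mask[y][x])
            if cells and hits / len(cells) > ratio_threshold:
                squares.append((r, c, size))
    return squares

# Toy 8x8 mask with a 4x4 patch of pixels of interest in one corner.
mask = [[False] * 8 for _ in range(8)]
for y in range(4):
    for x in range(4):
        mask[y][x] = True

squares = grid_sample_squares((0, 0, 7, 7), mask, size=4, ratio_threshold=0.5)
```

Only the square fully covering the patch passes the ratio test; the three empty squares are rejected.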
Brief Description of the Drawings
The above and other features and advantages of a skin analysis system in accordance with this invention are described in the following detailed description and are shown in the following drawings:
Figure 1 illustrating components of a skin analysis system in accordance with one embodiment of this invention;
Figure 2 illustrating a block diagram of a processing system included in a computer in accordance with one embodiment of this invention;
Figure 3 illustrating a perspective view of an image compartment in accordance with one embodiment of this invention;
Figure 4 illustrating a view downward into an image compartment in accordance with one embodiment of this invention;
Figure 5 illustrating a side view inside an image compartment in accordance with one embodiment of this invention;
Figure 6 illustrating the manner in which a split view of a left and a right oblique of a face is captured in one image in accordance with one embodiment of this invention;
Figure 7 illustrating a frontal image of a face captured with a camera in accordance with one embodiment of this invention;
Figure 8 illustrating a split view image of left and right oblique of a face in accordance with one embodiment of this invention;
Figure 9 illustrating a split view image of a left and a right oblique of a face with a reference region highlighted in accordance with one embodiment of this invention;
Figure 10 illustrating a split view image of a left and a right oblique of a face with regions of interest highlighted in two different manners in accordance with one embodiment of this invention;
Figure 11 illustrating a flow diagram of a process for analysing an image in accordance with one embodiment of this invention;
Figure 12 illustrating a flow diagram of a process for capturing images of a face in accordance with one embodiment of this invention;
Figure 13 illustrating a flow diagram of a process for reporting results in accordance with one embodiment of this invention;
Figure 14 illustrating a flow diagram of a process for classifying skin disorders in areas of a region of interest in accordance with one embodiment of this invention;
Figure 15 illustrating a flow diagram of a process for classifying pixels in the region of interest in accordance with one embodiment of this invention;
Figure 16 illustrating a flow diagram for a first process for determining sample squares in the region of interest in accordance with one embodiment of this invention;
Figure 17 illustrating a flow diagram for a second process for determining sample squares in the region of interest in accordance with another embodiment of this invention; and
Figure 18 illustrating a flow diagram of a process for classifying skin disorders in sample squares of a region of interest in accordance with one embodiment of this invention.
Detailed Description
This invention relates to computer-aided dermatological diagnosis and treatment of skin disorders. More particularly, this invention relates to a system that captures a suitable image of skin of a body part and processes for identifying skin disorders in the image. Still more particularly, this invention relates to a system that captures a suitable image of facial skin and provides processes for identifying skin disorders in a region of interest in the image using a reference region from the image. For clarity, components shown in more than one figure are given the same reference numerals throughout this description.
This invention includes components for capturing images of a body that are then analyzed by applications executed by a processing system such as a computer. One skilled in the art will recognize that the processes described herein are instructions stored in software, hardware or firmware that are executed by a system to perform the described processes and may be executed on any processing system connected to a network. The exact processing system executing the applications and the exact connection of that processing system to a processing system used by a user are not important to this invention and are left as a design choice to those skilled in the art.
Figure 1 illustrates skin analysis system 100 in accordance with one embodiment of this invention. Skin analysis system 100 includes computer system 105 and image compartment 110. Computer system 105 executes applications for receiving an image from a camera (Not shown in Figure 1) inside image compartment 110. Although shown as a conventional desktop computer in Figure 1, one skilled in the art will recognize that computer system 105 may be any type of processing device having a processor and a memory that meet requirements for executing the software applications in accordance with this invention. As shown in Figure 1, computer system 105 may be connected to another processing system 120 via a network 115. The applications of this invention may either be stored on or executed by the connected processing system 120 without departing from this invention. Furthermore, the exact network configuration and connections of devices are also unimportant to this invention and are left as a design choice.
Image compartment 110 at least partially encloses a cavity in which images are captured in accordance with this invention. Image compartment 110 blocks ambient light from the surrounding environment and provides proper lighting of an enclosed body part such as a head to provide images that may be analyzed by a process provided in accordance with this invention. A more complete description of image compartment 110 is provided below with reference to Figure 3.
Figure 2 illustrates an exemplary processing system 200 of computer 105 in accordance with an embodiment of this invention. Processing system 200 includes the components needed to execute the applications from instructions stored in memory in accordance with this invention. One skilled in the art will recognize that the exact configuration of each processing system may be different and the exact configuration executing processes in accordance with this invention will vary and the figure is given by way of example only.
Processing system 200 includes Central Processing Unit (CPU) 205. CPU 205 is a processor, microprocessor, or any combination of processors and microprocessors that execute instructions to perform the processes in accordance with the present invention. CPU 205 connects to memory bus 210 and Input/Output (I/O) bus 215. Memory bus 210 connects CPU 205 to memories 220 and 225 to transmit data and instructions between the memories and CPU 205. I/O bus 215 connects CPU 205 to peripheral devices to transmit data between CPU 205 and the peripheral devices. One skilled in the art will recognize that I/O bus 215 and memory bus 210 may be combined into one bus or subdivided into many other busses and the exact configuration is left to those skilled in the art.
A non-volatile memory 220, such as a Read Only Memory (ROM), is connected to memory bus 210. Non-volatile memory 220 stores instructions and data needed to operate various sub-systems of processing system 200 and to boot the system at start-up. One skilled in the art will recognize that any number of types of memory may be used to perform this function.
A volatile memory 225, such as Random Access Memory (RAM), is also connected to memory bus 210. Volatile memory 225 stores the instructions and data needed by CPU 205 to perform software instructions for processes such as the processes for providing a system in accordance with this invention. One skilled in the art will recognize that any number of types of memory may be used to provide volatile memory and the exact type used is left as a design choice to those skilled in the art.
I/O device 230, keyboard 235, display 240, memory 245, network device 250 and any number of other peripheral devices connect to I/O bus 215 to exchange data with CPU 205 for use in applications being executed by CPU 205. I/O device 230 is any device that transmits and/or receives data from CPU 205. A digital camera is an I/O device 230 connected to processing system 200 in accordance with this invention. Those skilled in the art will recognize that any number of I/O devices 230 may be connected to processing system 200 without departing from this invention. Keyboard 235 is a specific type of I/O device that receives user input and transmits the input to CPU 205. Display 240 receives display data from CPU 205 and displays images on a screen for a user to see. Memory 245 is a device that transmits and receives data to and from CPU 205 for storing data to a medium. Network device 250 connects CPU 205 to a network for transmission of data to and from other processing systems.
Figure 3 illustrates one embodiment of image compartment 110 in accordance with this invention. Image compartment 110 includes back side wall 300. First side wall 315 and second side wall 320 extend outward from the same surface of back side wall 300 and may be integral to back side wall 300. These side walls block ambient light from a body part, such as a face, that is being imaged. In this embodiment, first side wall 315 and second side wall 320 are affixed to back side wall 300 by first and second hinges (Not Shown) to allow the first and second side walls to be rotated in relation to back side wall 300. These hinges may be connected to one or more actuators (Not Shown) that may be controllable to adjust the angles of the side walls as described below. Furthermore, back side wall 300, first side wall 315 and second side wall 320 may be placed on a base or may be free standing to rest on a platform such as a table, counter, or shelf.
Cover 340 may be provided over the enclosure, termed a cavity for this discussion, formed by back side wall 300, first side wall 315 and second side wall 320. The cover further blocks ambient light and may partially or totally enclose the cavity. Furthermore, cover 340 may be made of moveable panels (Not Shown) to allow cover 340 to be moved to block undesirable ambient lighting.
Camera 305 is either affixed to or placed proximate back side wall 300. Preferably, camera 305 is placed at a height and levelled to capture both a full frontal picture of a body part, such as a face, and a split view image of the left oblique and right oblique of the body part. Camera 305 is a digital imaging device that may directly transmit images to a connected computer. However, one skilled in the art will recognize that conventional and other types of imaging devices may be used in conjunction with other I/O devices and drivers to provide images to a computer without departing from this invention. Furthermore, in some embodiments, camera 305 may also capture colored, infrared, polarized and/or ultraviolet images of the body part for detection of particular skin disorders without departing from this invention. Optical filter 390 is positioned between camera 305 and the body part being imaged. Optical filter 390 may be used to allow specific wavelengths of light to enter a lens of camera 305. Alternatively, optical filter 390 may be a polarizer that controls the polarization of light that enters the lens of camera 305. An actuator (Not shown) or some other means may be provided for adjusting the position of optical filter 390. Thus, optical filter 390 may be moved out of a line of sight of camera 305 if necessary.
In accordance with this embodiment, light sources 345 are affixed to back side wall 300. Light sources 345 are positioned and of sufficient luminescence to provide clear images of the body part for analysis. The exact positioning and luminescence of light sources 345 are left to a designer of the system. Depending on the skin disorders being detected, light sources may also or alternatively provide colored, infrared, polarized, and/or ultraviolet lighting of the body part.
Rest 330 is preferably positioned at a set position with respect to back side wall 300, first side wall 315 and second side wall 320. The set position allows the body part to be positioned in substantially the same position so that images are taken from substantially the same view point in subsequent sessions. However, the rest may also be free standing without departing from this invention. In some embodiments, an indenture in the top surface, such as chin rest 335, may be provided to aid in properly aligning the body part and to provide comfort to a patient whose body part is being imaged. Further, a secondary rest, such as head guide 505 (Shown in Figure 5), may be added for further comfort and/or alignment of the imaged body part.
To capture a split image of a left oblique and a right oblique of the body part, image compartment 110 includes a prism device 325 situated between rest 330, or the body part area of the cavity, and camera 305. Prism device 325 may be a prism that refracts light, an array of angled mirrors, or any other device to alter the image captured by camera 305. Preferably, prism device 325 is movable between a first position where prism device 325 is positioned with respect to camera 305 to provide the split image and a second position where prism device 325 is positioned out of a view line of camera 305 to allow a frontal image of the body part to be captured. In some embodiments, prism device 325 may include or be connected to an actuator (Not shown) to move between the first and second positions. Mirrors 350 and 355 are affixed to first side wall 315 and second side wall 320 respectively. The mirrors are positioned on the side walls to reflect images of first and second sides of a body part onto prism device 325 to provide the split view of the left oblique and right oblique of the body part. In the shown embodiment, the angle of mirrors 350 and 355 with respect to the body part and prism device 325 may be adjusted by movement of first side wall 315 and second side wall 320. However, one skilled in the art will recognize that the positions of the side walls may be set and mirrors 350 and 355 may be movable on pivoting mounts connecting mirrors 350 and 355 to first and second side walls 315, 320 without departing from this invention.
Figure 4 illustrates an alternative configuration of components inside compartment 110. In accordance with this embodiment, camera 405 is positioned proximate a back side wall. Mirrors 410 are positioned on opposing sides of camera 405 and are angled to reflect a view of the left oblique and right oblique of head 425 onto prism device 415. Light sources 445 are positioned on opposing sides of each mirror 410 and positioned to provide a consistent luminescence to the surface of head 425. As stated above, light sources 445 may provide white, infrared, and/or ultraviolet lighting depending on the skin disorder being analyzed.
Prism device 415 is positioned in front of camera 405 between head 425 and camera 405 and is movable between the first and second positions as described above with respect to Figure 3. Preferably, prism device 415 is a prism device as described with respect to Figure 3 above (prism device 325) and is positioned to receive reflections from mirrors 410 and refract and/or reflect the reflections onto a focusing lens 450. Focusing lens 450 is positioned between prism device 415 and camera 405 to direct the refracted images from prism device 415 onto a lens of camera 405.
Optical filter 490 is positioned between focusing lens 450 and prism device 415. Optical filter 490 allows specific wavelengths of light to enter a lens of camera 405. Alternatively, optical filter 490 may be a polarizer that controls the polarization of light that enters the lens of camera 405. An actuator (Not shown) or some other means may be provided for adjusting the position of optical filter 490 between the first and second positions as described with respect to Figure 3. Thus, optical filter 490 may be moved out of a line of sight of camera 405 if necessary. In other embodiments (Not Shown), prism device 415 and optical filter 490 are removed and mirrors 410 are replaced by cameras to capture the left oblique and right oblique of head 425 while, at the same time, camera 405 captures a frontal view of head 425.
Figure 5 illustrates a cross view of the configuration shown in Figure 4 with prism device 415 and focusing lens 450 in a second position to allow a frontal image of head 425 to be captured. As shown in Figure 5, a chin of head 425 may rest on a rest 510 and a head guide 505 may extend downward from a top cover and rest on a forehead to help position head 425 to capture an image of a single view of the body part from one view point, such as a frontal view of head 425.
Figure 6 illustrates the paths travelled by light in image compartment 110 when prism device 415 and focusing lens 450 are in a first position to provide a split image of a left oblique and a right oblique of head 425. An image of a right side of the face travels along path 605 to mirror 410. Mirror 410 reflects the right image at an angle and the right image travels along path 615 to prism device 415. Prism device 415 refracts and/or reflects the right image to cause the right image to travel along path 625 to a right side of optical filter 490. Optical filter 490 then directs the right image along path 630 onto a right side of focusing lens 450 of camera 405.
Likewise, the left image travels from a left side of the face of head 425 along path 610 to a second mirror 410 on the left side of head 425. Second mirror 410 reflects the left image along path 620 onto prism device 415. Prism device 415 refracts and/or reflects the left image along path 635 onto a left side of optical filter 490. Optical filter 490 then directs the left image along path 640 onto a left side of focusing lens 450 of camera 405. Camera 405 then captures a split image having both the left side image and right side image of the face of head 425.
Figure 7 illustrates first image 700 that may be captured by a camera in accordance with this invention. Image 700 is a frontal view of face 705 that has two problem skin regions 710 and 715. Image 700 is a conventional frontal image. It should be noted that image 700 should present face 705 at a sufficient size and magnification to optimize analysis of problem skin regions 710 and 715. The exact size and magnification are left as a design choice that may depend on the software being used, lighting conditions, and other environmental and/or system conditions. Figure 8 illustrates a split image 800 captured by a camera in accordance with embodiments of this invention. In Figure 8, a left oblique and a right oblique of face 705 are shown in image 800. The right oblique clearly shows problem region 710 and the left oblique clearly shows problem region 715.
The embodiments of the processes performed by computer system 105 to analyze images and classify skin disorders in accordance with this invention will now be described with reference to Figures 9-18. Although split image 800 is used for illustrative purposes, one skilled in the art will recognize that any type of image and/or multiple images may be analyzed using the described processes in accordance with this invention. The processes may be stored as instructions in software, hardware, or firmware of a processing system having sufficient processing and memory parameters to execute the processes. The exact parameters are left as a design choice to those skilled in the art implementing a system in accordance with this invention. Furthermore, one skilled in the art will recognize that the programming of these exact instructions may be done in any number of programming languages using any number of platforms without departing from this invention and is left as a design choice for those skilled in the art.
Figure 11 illustrates a flow diagram of process 1100 for analyzing skin disorders from images in accordance with an embodiment of this invention. Process 1100 begins in step 1105 by receiving an image. In accordance with some embodiments, receiving of the image may include capturing the image or images with a digital camera and transmitting the image to a processing system. A method for capturing the image is described in process 1200 shown in Figure 12. In other embodiments, the image may be received by transmission over the network or from a read operation performed on a memory either internal or external to the processing system.
Once the image is received, a reference region is determined in step 1110. Figure 9 illustrates a reference region 905 provided in step 1110. In some embodiments, the reference region may be input by a user. The input may be made by "dragging and dropping" a preconfigured shape over a region of the image or by use of an I/O device to draw a shape around the reference region. Alternatively, a process may select a reference region based upon either a feature finding process or a process that looks for a contiguous group of pixels having substantially the same pixel color. In either of the above embodiments, the reference region should be of sufficient size, in terms of the number of pixels in the image, to give an adequate sample of the natural skin color of a patient in the image. The exact number of pixels needed for such a sample is left as a design choice to those skilled in the art.
In step 1115, regions of interest to analyze are received. The regions of interest are particular regions of pixels in the image to be evaluated. Although it is possible in some embodiments to analyze the entire image, regions of interest are used to reduce the number of computations and steps of the processes that need to be performed during the analysis. Figure 10 shows regions 1005 and 1010 selected as regions of interest. These regions of interest include problem regions 710 and 715 shown in Figure 8. In some embodiments, the regions of interest may be input by a user. The input may be made by "dragging and dropping" a preconfigured shape (as shown by region of interest 1010, in which an ellipse is used) over a region of the image or by use of an I/O device to draw a shape around the region of interest (as shown by region 1005, which is an amorphous shape drawn around problem region 710). Alternatively, a process may select a region of interest based upon either a feature finding process or a process that looks for contiguous groups of pixels having substantially different pixel color. In either of the above embodiments, the regions of interest should be of sufficient size to contain most if not all of the problem patches of skin shown in an image. The exact number of pixels needed for such a sample is left as a design or implementation choice of those skilled in the art.
Process 1100 then analyzes the region(s) of interest in step 1120. The analysis performed detects and classifies various skin disorders, including but not limited to types of acne. In particular, the types of acne that may be classified include, but are not limited to, papules; pustules; open and closed comedones; scars; and pores. Furthermore, this analysis may also detect skin disorders that may be identified using prior art methods, including but not limited to pigmentation variations; pores; wrinkles; color tone variation; blood and melanin distribution; sun damage; and skin cancer. The process used to perform the analysis is discussed below with respect to Figures 14-18. Process 1100 then ends after generating and storing the results of the analysis in step 1125. A complete description of a method for displaying the results is given below with respect to Figure 13.
Figure 12 illustrates a flow diagram for capturing images in accordance with this invention. The captured images are then transmitted to computer system 105 from a digital imager such as a camera in step 1105 of process 1100. Process 1200 begins, in step 1205, by having a patient adjust a position of his face or other body part to cause the image presented by the camera on a display to align with a template or other alignment indicator also displayed. This may be done by making minor adjustments to a head or other body part placed in image compartment 110 in some embodiments of this invention.
After the head or other body part is properly aligned, specular reflections are removed from the skin being imaged, if needed, in step 1210. Specular reflections are removed to facilitate the analysis of skin color. Step 1210 may be performed by moving and/or adjusting optical filters or changing the illumination provided by a light source. In particular, the removal of specular reflections may be performed using two cross polarizing filters oriented perpendicular to one another. If the skin surface features are to be analysed, the two polarizing filters are oriented parallel to one another. A frontal image of the face or other body part is then captured in step 1215.
In step 1220, the components are configured to capture a split image. Step 1220 may include moving a prism device and optical filter into a first position and adjusting the position of mirrors. A split image of the left oblique and right oblique of the face or other body part is captured in step 1225. Process 1200 then ends.
Figure 13 illustrates a process 1300 for reporting results in accordance with an embodiment of this invention. Process 1300 begins in step 1305 with reporting of identified problem areas in the regions of interest. Preferably, this is done by a visual display with either indicia around identified problem areas or a color scheme of the image of the region of interest indicating various features such as problem areas and/or different types of problem areas.
In step 1310, the classifications of the skin disorders in the identified problem areas are reported. This may be through a graphical display to show a patient or as a textual display or printed display. In step 1315, the amount of each skin disorder identified is quantified. The disorders may be quantified in many different ways, including but not limited to, the number of affected skin areas; the percentage of total skin area in the image and/or region; and/or comparison to reports generated for previous images of the body part of the patient. The quantified amount of skin disorders detected on the body part of the patient is provided in step 1320. This may be done through graphical display of charts, color coded images, or any other way desired by a designer of the system. In step 1325, the current and previous images are aligned and compared. In accordance with some embodiments, the alignment may be an overlay of the current and previous images, color coded images distinguishing differences, or any other presentation that a designer may want to provide to convey the difference between images to a user and/or patient. Process 1300 then ends after step 1327, when all of the presentations and/or images analyzed are stored to a connected memory for record keeping and future use.
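One plausible, non-limiting form for the quantification of step 1315 is a per-disorder pixel count and percentage of total skin area; the function and field names below are invented for illustration:

```python
def quantify(disorder_pixels, total_skin_pixels):
    """Per-disorder pixel counts and percentage of total skin area.

    disorder_pixels: {disorder_name: number_of_affected_pixels}.
    """
    return {d: {"pixels": n, "percent": 100.0 * n / total_skin_pixels}
            for d, n in disorder_pixels.items()}

# Hypothetical counts from a single analyzed image.
report = quantify({"papules": 250, "scars": 50}, total_skin_pixels=10000)
```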
Figure 14 illustrates process 1400 for classifying skin disorders in regions of interest in an image in accordance with embodiments of this invention. Process 1400 begins in step 1402 by normalizing the image. This may be done by flattening the brightness, or value V in the Hue-Saturation-Value (HSV) color space, to a predefined value, such as, but not limited to, 0.8. After the image has been normalized, a natural pixel color is determined in step 1405 from the pixels in the reference region previously received or determined.
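The brightness flattening described for step 1402 can be sketched using Python's standard `colorsys` conversions; the function name is an assumption, and RGB components are taken in the 0..1 range:

```python
import colorsys

def normalize_value(rgb_pixels, target_v=0.8):
    """Flatten the V channel of every RGB pixel to `target_v` in HSV space.

    Hue and saturation are preserved; only brightness is equalized, so two
    pixels of the same colour under different illumination become equal.
    """
    out = []
    for r, g, b in rgb_pixels:
        h, s, _ = colorsys.rgb_to_hsv(r, g, b)
        out.append(colorsys.hsv_to_rgb(h, s, target_v))
    return out

# Same hue and saturation, different brightness levels.
pixels = [(0.9, 0.6, 0.5), (0.45, 0.3, 0.25)]
flat = normalize_value(pixels)
```

After flattening, both pixels map to essentially the same colour with a maximum channel of 0.8.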
Once the natural pixel color is determined from the reference region, the color difference of each pixel from the natural pixel color is determined in step 1410. In accordance with some embodiments of the invention, the color difference of a pixel is the absolute value of the natural pixel color subtracted from the color value of the pixel (|Pixel_Color_Value - Natural_Pixel_Color_Value|). However, other calculations can be used without departing from this invention. The color difference for each pixel is stored for future use.
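For illustration only, the absolute-difference calculation of step 1410 might be computed per channel and summed; the specification leaves the exact calculation open, and the natural colour value below is invented:

```python
def color_difference(pixel, natural):
    """Sum of absolute per-channel differences from the natural pixel color.

    One simple realization of |Pixel_Color_Value - Natural_Pixel_Color_Value|;
    other distance measures could be substituted.
    """
    return sum(abs(p - n) for p, n in zip(pixel, natural))

natural = (0.70, 0.55, 0.48)   # hypothetical natural skin colour (RGB, 0..1)
diff = color_difference((0.80, 0.40, 0.40), natural)
```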
In step 1415, the pixels having a color difference above an interest threshold are identified. The interest threshold is a predefined value that is dependent upon the skin disorders being studied, the quality of the images, and the quality of the disorder data, and is therefore left as a design choice for a designer of a system in accordance with this invention. One method for determining thresholds and identifying pixels is described with reference to Figure 15 below.
The identified pixels are then grouped into groups of like identified pixels in step 1430. The grouping into like identified groups may be performed using a conventional connected component analysis algorithm. After the identified pixels are grouped, sample squares are selected in step 1435 for analysis to determine the skin disorders shown. Methods for determining the sample squares are described with respect to Figures 16 and 17 below. The skin disorders are then identified or classified from the sample squares in step 1440. A process for classifying the skin disorders from the sample squares is described with respect to Figure 18 below. Process 1400 then ends.
Figure 15 illustrates a flow diagram of process 1500 for classifying pixels of interest in a region of interest in accordance with an embodiment of this invention. Process 1500 begins in step 1505 by constructing a histogram of the color differences of the pixels in the region(s) of interest. In step 1510, a distribution model is then fitted over the histogram. Preferably, a Gaussian distribution model is used. However, other distribution models may be used without departing from this invention. From the distribution model, a high threshold and a low threshold are determined in step 1515. For example, the high and low thresholds may be set at a predetermined number of standard deviations from the mean. However, one skilled in the art can use any number of methods for determining these thresholds without departing from the invention.
After the high and low thresholds are determined, each pixel having a pixel color difference greater than the high threshold is identified as a large color difference pixel in step 1520. In step 1525, each pixel in the region(s) of interest having a pixel color difference that is greater than the low threshold and less than the high threshold is identified as a medium color difference pixel. Each pixel having a pixel color difference less than the low threshold, i.e. each remaining pixel, is then identified as a small color difference pixel in step 1530, and process 1500 ends.
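A minimal sketch of process 1500, assuming the Gaussian fit reduces to the sample mean and standard deviation and the thresholds sit at the mean plus or minus k standard deviations (k is a hypothetical choice; the description only says the thresholds are derived from the fitted model):

```python
def gaussian_thresholds(diffs, k=1.0):
    """Fit a Gaussian to the color-difference histogram by taking the
    sample mean and (population) standard deviation, then place the low
    and high thresholds at mean - k*std and mean + k*std."""
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / n
    std = var ** 0.5
    return mean - k * std, mean + k * std

def classify_pixels(diffs, low, high):
    """Label every pixel small/medium/large by its color difference,
    mirroring steps 1520-1530 above."""
    labels = []
    for d in diffs:
        if d > high:
            labels.append("large")
        elif d > low:
            labels.append("medium")
        else:
            labels.append("small")
    return labels
```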
Figures 16 and 17 illustrate processes for selecting sample squares of pixels for analysis after process 1500 classifies the pixels in accordance with two embodiments of this invention. Figure 16 illustrates process 1600, which is a first process for determining sample squares for analysis in accordance with an embodiment of this invention. Process 1600 begins in step 1605 by defining a primary minimum bounding box (B) that encloses a group of pixels defined as a connected component as discussed in step 1430 of process 1400 (Figure 14). A morphological operator is then applied to the primary minimum bounding box in step 1610 to dilate the boundaries of the group.
In step 1615, all identified neighboring medium color difference pixels in the primary bounding box are connected into connected components using a conventional connected component analysis algorithm. All identified large color difference pixels in the primary bounding box are then connected into connected components using a conventional connected component analysis algorithm in step 1620. A secondary minimum bounding box (b) is then formed for each connected component formed in steps 1615 and 1620. In step 1630, a sample square is then generated and returned for each secondary minimum bounding box determined to be entirely within the primary bounding box B. The sample square may be a difference image within a square proportional to the secondary minimum bounding box. Process 1600 then determines if there are other groups remaining in the image in step 1635, and process 1600 is repeated for each group until no more groups remain to process. Process 1600 then ends.
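The bounding-box bookkeeping of process 1600 might look like the following sketch, with a simple box enlargement standing in for the morphological dilation (the `margin` parameter and all function names are assumptions for illustration):

```python
def bounding_box(pixels):
    """Minimum bounding box (y0, x0, y1, x1), inclusive, of a pixel set."""
    ys = [p[0] for p in pixels]
    xs = [p[1] for p in pixels]
    return min(ys), min(xs), max(ys), max(xs)

def dilate_box(box, margin):
    """Stand-in for the morphological dilation step: grow the box."""
    y0, x0, y1, x1 = box
    return y0 - margin, x0 - margin, y1 + margin, x1 + margin

def inside(inner, outer):
    """True when the inner box lies entirely within the outer box."""
    return (outer[0] <= inner[0] and outer[1] <= inner[1]
            and inner[2] <= outer[2] and inner[3] <= outer[3])

def sample_squares(group, components, margin=2):
    """Return the secondary boxes (one per connected component of medium
    or large color-difference pixels) that lie entirely within the
    dilated primary box B of the group."""
    B = dilate_box(bounding_box(group), margin)
    return [bounding_box(c) for c in components if inside(bounding_box(c), B)]
```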
Figure 17 illustrates a flow diagram of a second process 1700 for determining sample squares in accordance with another embodiment of this invention. Process 1700 begins in step 1705 by defining a minimum bounding box B that encloses a group of pixels defined as a connected component as discussed in step 1430 of process 1400 (Figure 14). In step 1710, process 1700 places a square box of a predetermined size at each location in the minimum bounding box. It is left to those skilled in the art to determine locations for placement of the boxes. However, the location of the boxes may be determined as a predetermined set of pixels within the minimum bounding box, a calculated center of a connected component, a calculated center of a group of like components, or in any other manner.
The ratio of pixels of interest compared to total number of pixels in each square box is then determined in step 1715. The pixels of interest may be pixels identified as large color difference pixels and/or pixels identified as medium color difference pixels. The exact pixels of interest are left as a design choice.
In step 1720, the square box b with the largest ratio is selected. In step 1725, the ratio of the selected square is compared with a threshold. In step 1727, if the ratio is greater than the threshold, then the difference image inside the square box b is output as a sample square, and the pixels in the square box are removed from the difference image within the minimum bounding box B. Thus, any pixels in a set of pixels shared with an overlapping square box are removed from the overlapping box. Next, process 1700 is repeated from step 1715 to determine new ratios for the square boxes. If the largest ratio is not greater than the threshold, process 1700 proceeds to step 1730 and determines whether another set of squares of a different predetermined size has yet to be processed. If so, process 1700 progresses to step 1740 to set the new predetermined size and is repeated from step 1710 with a new set of squares. If not, process 1700 proceeds to step 1735 and determines whether another connected component remains. If so, process 1700 repeats from step 1705 with that connected component. If not, process 1700 ends.
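Process 1700's greedy selection of square boxes can be sketched as follows, for a single square size (square placements at every location in the box and the ratio threshold follow the description above; all names are illustrative):

```python
def greedy_sample_squares(interest, box, size, threshold):
    """Slide a size x size square over the minimum bounding box,
    repeatedly take the square with the highest ratio of pixels of
    interest, and remove its pixels so overlapping squares are not
    double-counted, as in steps 1715-1727 above.

    interest: set of (y, x) pixels of interest.
    box: (y0, x0, y1, x1), inclusive.  Returns top-left corners.
    """
    remaining = set(interest)
    y0, x0, y1, x1 = box
    squares = []
    while True:
        best, best_ratio = None, 0.0
        for y in range(y0, y1 - size + 2):
            for x in range(x0, x1 - size + 2):
                hits = sum((py, px) in remaining
                           for py in range(y, y + size)
                           for px in range(x, x + size))
                ratio = hits / (size * size)
                if ratio > best_ratio:
                    best, best_ratio = (y, x), ratio
        if best is None or best_ratio <= threshold:
            return squares  # no square exceeds the threshold; stop
        squares.append(best)
        by, bx = best
        # Remove the selected square's pixels from all overlapping squares.
        remaining -= {(py, px) for py in range(by, by + size)
                      for px in range(bx, bx + size)}
```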
Figure 18 illustrates a flow diagram of process 1800 for classifying skin disorders in the sample squares. Process 1800 begins in step 1805 by extracting features from a sample square. Some examples of features that may be extracted include, but are not limited to, color difference histograms and color difference co-occurrence matrices. The extracted features are then used to classify a disorder from the extracted features using a classifier algorithm in step 1810. Some examples of classifier algorithms that may be used include, but are not limited to, Bayesian classifiers and Support Vector Machines. After the sample square is classified, process 1800 determines whether there is another sample square to classify. If so, process 1800 is repeated from step 1805 for a new sample square. Otherwise, process 1800 ends.
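As a toy illustration of process 1800, the sketch below extracts a color-difference histogram as the feature and substitutes a simple nearest-centroid rule for the Bayesian or SVM classifiers named above (the centroid rule is an explicit stand-in, not the disclosed classifier; the disorder labels and centroids are hypothetical):

```python
def diff_histogram(diffs, bins=4, max_diff=1.0):
    """Feature vector: normalized histogram of the color differences
    observed inside one sample square."""
    hist = [0] * bins
    for d in diffs:
        i = min(int(d / max_diff * bins), bins - 1)
        hist[i] += 1
    total = len(diffs)
    return [h / total for h in hist]

def nearest_centroid(feature, centroids):
    """Toy stand-in for the Bayesian/SVM classifier: assign the disorder
    label whose centroid histogram is closest in squared distance."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist(feature, centroids[label]))
```

A real system would train the chosen classifier on labeled sample squares; here the centroids simply encode one prototype histogram per hypothetical disorder.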
The above embodiments provide a description of features and advantages of this invention. It is envisioned that those skilled in the art can and will design alternative systems that infringe on this invention as set forth in the following claims.

CLAIMS:
1. A system for analysing images to treat skin disorders comprising: an image compartment partially enclosing a cavity; a light source in said cavity to adequately light a body part; a camera in said cavity for capturing an image of said body part; a processing system for receiving said image from said camera wherein said processing system includes a processor unit and a memory readable by said processor unit; and instructions stored by said memory that direct said processor unit to: receive said image, determine a reference region from said image representing a natural skin color of said body part, determine a region of interest of said body part in said image, detect a problem area in said region of interest using said reference region, classify a disorder of said problem area, and generate results.
2. The system of claim 1 further comprising: a prism device configured in said cavity to be between said camera and said body part; and a plurality of mirrors in said cavity configured to reflect images of said body part onto said prism device wherein said camera, said plurality of mirrors and said prism device are configured to cause said camera to capture a split image showing a first side and a second side of said body part.
3. The system of claim 2 wherein said prism device comprises: a prism that refracts images from said plurality of mirrors to said camera.
4. The system of claim 2 wherein said prism device comprises: an array of mirrors configured to reflect images from said plurality of mirrors onto said camera.
5. The system of claim 2 wherein said prism device is adjustable to allow said camera to capture one of a single image of said body part and said split image of said body part.
6. The system of claim 2 further comprising: a rest stand in said cavity configured to place said body part in a substantially constant position in said cavity with relation to said camera.
7. The system of claim 6 wherein said body part is a face of a subject and said rest stand comprises: a chin rest defined as an indenture in said rest stand configured to allow a chin to rest in said indenture.
8. The system of claim 2 wherein said compartment comprises: a back side wall; a first side wall; a second side wall; a first one of said plurality of mirrors affixed to said first side wall; and a second one of said plurality of mirrors affixed to said second side wall.
9. The system of claim 8 wherein said first and said second ones of said plurality of mirrors are adjustable.
10. The system of claim 9 further comprising: a first hinge connecting said first side wall and said back side wall to make said first one of said plurality of mirrors adjustable; and a second hinge connecting said second side wall and said back side wall to make said second one of said plurality of mirrors adjustable.
11. The system of claim 10 further comprising: a first actuator that rotates said first hinge.
12. The system of claim 11 further comprising: a second actuator that rotates said second hinge.
13. The system of claim 8 further comprising: a cover over said back, said first, and said second side walls of said compartment.
14. The system of claim 13 wherein said cover comprises: a plurality of movable panels.
15. The system of claim 13 further comprising: a head guide affixed to said cover and extending into said compartment to aid in aligning of a head of a subject to capture said image of a face of said subject.
16. The system of claim 1 further comprising: a plurality of cameras including said camera in said image compartment, each positioned to capture an image from a particular viewpoint.
17. The system of claim 16 wherein one of said plurality of cameras captures an image of a frontal view of said body part.
18. The system of claim 16 wherein one of said plurality of cameras captures an image of a left oblique view of said body part.
19. The system of claim 16 wherein one of said plurality of cameras captures an image of a right oblique view of said body part.
20. The system of claim 1 further comprising: an optical filter between said body part and said camera to allow particular wavelengths of light to pass to said camera.
21. The system of claim 1 further comprising: an optical filter between said body part and said camera that polarizes said light to control a polarization state of light reaching said camera.
22. The system of claim 1 wherein said light source provides ultraviolet light and said camera captures an ultraviolet image of said body part.
23. The system of claim 1 wherein said light source provides infrared light and said camera captures an infrared image of said body part.
24. The system of claim 1 wherein said instructions to determine said reference region comprise: instructions to receive an input identifying an area of pixels of said image for use as said reference region.
25. The system of claim 1 wherein said instructions to determine said reference region comprise: instructions to determine an area of said image having a consistent pixel color for use as said reference region.
26. The system of claim 1 wherein said instructions to determine said region of interest comprise: instructions to receive an input of a portion of said image for use as said region of interest.
27. The system of claim 1 wherein said instructions to determine said region of interest comprise: instructions to determine a region of said image between known features of said body part.
28. The system of claim 1 wherein said instructions to detect a problem area comprise: instructions to: determine a natural pixel color from said reference region, compute a color difference between each pixel in said region of interest and said natural pixel color, identify each said pixel in said region of interest with said color difference that is greater than or equal to a threshold, determine a like identified group of a plurality of pixels from said identified pixels using connected component analysis, and identify an area as a plurality of pixels in a set of connected like identified groups.
29. The system of claim 28 wherein said instructions to determine said like identified group comprise: instructions to: construct a color difference histogram for said pixels in said region of interest, fit a distribution model over said histogram, determine a high threshold from said distribution model, determine a low threshold from said distribution model, identify a pixel in said region of interest as a large color difference pixel if said color difference of said pixel is greater than said high threshold, identify a pixel in said region of interest as a medium color difference pixel if said color difference of said pixel is less than said high threshold and greater than said low threshold, identify a pixel in said region of interest as a small color difference pixel if said color difference is less than said low threshold, and wherein said like identified group includes pixels having the same identified color difference.
30. The system of claim 29 wherein said instructions to classify said area comprise: instructions to: divide said area into a plurality of sample squares, classify each of said plurality of sample squares as a disorder.
31. The system of claim 30 wherein said instructions to classify comprise: instructions to: extract features from said sample square; and classify said features using a classifier process.
32. The system of claim 31 wherein said classifier process is a Bayesian classifier.
33. The system of claim 31 wherein said classifier process is a support vector machine.
34. The system of claim 30 wherein said instructions to divide said area into a plurality of sample squares comprises: instructions to: determine a primary minimum bounding box including all pixels in said area, apply a morphological operator to dilate said boundary of said area, connect each group of neighboring medium color difference pixels in said primary minimum bounding box into a connected component, connect each group of neighboring large color difference pixels in said primary minimum bounding box into a connected component, determine a secondary minimum bounding box for each connected component, determine each secondary minimum bounding box entirely within said primary minimum bounding box, and return each said secondary minimum bounding box determined to be entirely within said primary minimum bounding box as one of said plurality of sample squares.
35. The system of claim 30 wherein said instructions to divide said area into a plurality of sample squares comprises: instructions to:
1. determine a primary minimum bounding box including all pixels in said area,
2. arrange said pixels in said primary minimum bounding box into groupings of a plurality of squares including an equal number of said pixels wherein said number is one of a plurality of predetermined sizes,
3. determine a ratio of pixels of interest in each square to a total number of pixels in said square,
4. determine said one of said plurality of squares having a greatest ratio,
5. determine whether said ratio of said one of said plurality of squares is greater than a threshold, and
6. include said one of said plurality of squares in said plurality of sample squares responsive to a determination said ratio is greater than said threshold,
7. remove said pixels in said one of said plurality of squares from other ones of said plurality of squares, and
8. repeat steps 3-7.
36. The system of claim 35 wherein said instructions to divide said area into a plurality of sample squares further comprise: instructions to:
9. determine steps 2-8 have been performed for a plurality of squares for each of a plurality of sizes, and
10. repeat steps 2-9 for another one of said plurality of sizes.
37. The system of claim 35 wherein said plurality of squares are arranged to overlap one another in said minimum bounding box and overlapping ones of said plurality of squares include an overlapping set of pixels.
38. The system of claim 35 wherein said pixels of interest are large color difference pixels.
39. The system of claim 35 wherein said pixels of interest are medium color difference pixels.
40. The system of claim 35 wherein said pixels of interest are large and medium color difference pixels.
41. A method for analyzing a skin disorder with a processing system comprising: capturing an image of a skin of a body part with a camera; determining a reference region of a first plurality of pixels from said image; determining a region of interest of a second plurality of pixels from said image; analyzing said region of interest using said reference region; and generating results.
42. The method of claim 41 further comprising: capturing a plurality of images of said skin of said body part with said camera wherein said image is one of said plurality of images.
43. The method of claim 41 further comprising: capturing a plurality of images using a plurality of cameras wherein said image is one of said plurality of images.
44. The method of claim 41 wherein said step of capturing said image comprises: aligning an initial image of said body part being displayed from said camera with a template on a display.
45. The method of claim 41 wherein said step of capturing said image comprises: removing specular reflections from said image.
46. The method of claim 41 wherein said image is an image of said body part from a single point of view.
47. The method of claim 46 wherein said single point of view is a frontal view.
48. The method of claim 41 wherein said image is a split image showing said body part from a plurality of points of view.
49. The method of claim 48 wherein said split image includes a left oblique view and a right oblique view of said body part.
50. The method of claim 48 wherein said step of capturing said image further comprising: adjusting angles of a plurality of mirrors with reference to a prism in front of said camera and said body part to provide said split image.
51. The method of claim 41 wherein said step of generating said results comprises: detecting a problem area in said image.
52. The method of claim 51 wherein said step of generating said results comprises: classifying a skin disorder in said problem area from said image.
53. The method of claim 51 wherein said step of generating said results comprises: classifying a plurality of skin disorders in said problem area from said image.
54. The method of claim 51 wherein said step of generating said results further comprises: quantifying an amount of a skin disorder in said problem area.
55. The method of claim 54 wherein said step of generating said results further comprises: comparing said amount of said skin disorder in said problem area with amounts of said skin disorder in said problem area from previous images to show changes in said problem area.
56. The method of claim 41 wherein said step of generating said results comprises: aligning said image with a previous image on a display for a visual comparison.
57. The method of claim 41 wherein said step of capturing said image comprises: capturing an ultraviolet image of said body part.
58. The method of claim 41 wherein said step of capturing said image comprises: capturing an infrared image of said body part.
59. The method of claim 41 wherein said step of determining said reference region comprises: receiving an input identifying a portion of said image for use as said reference region.
60. The method of claim 41 wherein said step of determining said reference region comprises: determining an area of said image including a plurality of pixels having a consistent pixel color for use as said reference region.
61. The method of claim 41 wherein said step of determining said region of interest comprises: receiving an input of a portion of said image to use as said region of interest.
62. The method of claim 41 wherein said step of determining said region of interest comprises: determining a region of said image between known features of said body part.
63. The method of claim 41 wherein said step of analyzing said region of interest comprises: determining a natural color from colors of said first plurality of pixels in said reference region; determining a color difference of each pixel of said second plurality of pixels from said natural color; identifying each of said second plurality of pixels having a color difference greater than a threshold; generating a group for each plurality of neighboring pixels from said identified ones of said second plurality of pixels; determining a plurality of sample squares for each said group generated; and classifying a skin disorder represented by each of said plurality of sample squares.
64. The method of claim 63 further comprising: normalizing a color of each of said first plurality of pixels in said reference region prior to determining said natural color.
65. The method of claim 63 further comprising: normalizing a color of each of said second plurality of pixels in said region of interest prior to determining said color difference for each of said second plurality of pixels.
66. The method of claim 63 wherein said step of determining said sample squares for said group comprises: constructing a color difference histogram of color differences from each of said identified ones of said second plurality of pixels in said region of interest; fitting a distribution model over said histogram; determining a high threshold from said distribution model; determining a low threshold from said distribution model; categorizing an identified one of said second plurality of pixels in said region of interest as a large color difference pixel if said color difference of said identified one pixel is greater than said high threshold; categorizing an identified one of said second plurality of pixels in said region of interest as a medium color difference pixel if said color difference of said identified one pixel is less than said high threshold and greater than said low threshold; and categorizing an identified one of said second plurality of pixels in said region of interest as a small color difference pixel if said color difference of said identified one pixel is less than said low threshold.
67. The method of claim 66 wherein said step of determining said plurality of sample squares for said group further comprises: determining a primary minimum bounding box including all pixels in said group; applying a morphological operator to dilate said boundary of said group; connecting each group of neighboring medium color difference pixels in said primary minimum bounding box into a connected component; connecting each group of neighboring large color difference pixels in said primary minimum bounding box into a connected component; determining a secondary minimum bounding box for each connected component; determining each said secondary minimum bounding box entirely within said primary minimum bounding box; and returning each said secondary minimum bounding box determined to be entirely within said primary minimum bounding box as one of said plurality of sample squares.
68. The method of claim 63 wherein said step of determining said plurality of sample squares for said group comprises:
1. determining a primary minimum bounding box including all pixels in said area;
2. arranging said pixels in said primary minimum bounding box into groupings of a plurality of squares including an equal number of said pixels wherein said number is one of a plurality of predetermined sizes;
3. determining a ratio of pixels of interest in each square to a total number of pixels in said square;
4. determining said one of said plurality of squares having a greatest ratio;
5. determining whether said ratio of said one of said plurality of squares is greater than a threshold;
6. including said one of said plurality of squares in said plurality of sample squares responsive to a determination said ratio is greater than said threshold;
7. removing said pixels in said one of said plurality of squares from other ones of said plurality of squares; and
8. repeating steps 3-7.
69. The method of claim 68 wherein said step of determining said plurality of sample squares further comprises: 9. determining steps 2-8 have been performed for a plurality of squares for each of a plurality of sizes in response to a determination said ratio of said one of said plurality of squares is not greater than said threshold; and
10. repeating steps 2-9 for another one of said plurality of sizes.
70. The method of claim 68 wherein said plurality of squares are arranged to overlap one another in said minimum bounding box and overlapping ones of said plurality of squares include an overlapping set of pixels.
71. The method of claim 68 wherein said pixels of interest are large color difference pixels.
72. The method of claim 68 wherein said pixels of interest are medium color difference pixels.
73. The method of claim 68 wherein said pixels of interest are large and medium color difference pixels.
74. The method of claim 68 wherein said step of classifying comprises: extracting features from each of said plurality of sample squares; and classifying said features of each of said plurality of sample squares using a classifier process.
75. The method of claim 74 wherein said classifier process is a Bayesian classifier.
76. The method of claim 74 wherein said classifier process is a support vector machine.
77. A product for analyzing an image to treat a skin disorder comprising: instructions for directing a processing unit to: receive an image of a skin of a body part with a camera, determine a reference region of a first plurality of pixels from said image, determine a region of interest of a second plurality of pixels from said image, analyze said region of interest using said reference region, and generate results; and a media readable by said processing unit to store said results.
78. The product of claim 77 wherein said instructions to generate said results comprise: instructions to detect a problem area in said image.
79. The product of claim 78 wherein said instructions to generate said results comprise: instructions to classify a skin disorder in said problem area from said image.
80. The product of claim 78 wherein said instructions to generate said results comprise: instructions to classify a plurality of skin disorders in said problem area from said image.
81. The product of claim 78 wherein said instructions to generate said results further comprise: instructions to quantify an amount of a skin disorder in said problem area.
82. The product of claim 81 wherein said instructions to generate said results further comprise: instructions to compare said amount of said skin disorder in said problem area with amounts of said skin disorder in said problem area from previous images to show changes in said problem area.
83. The product of claim 77 wherein said instruction to generate said results comprise: instructions to align said image with a previous image on a display for a visual comparison.
84. The product of claim 77 wherein said instructions to determine said reference region comprise: instructions to receive an input identifying an area of pixels of said image for use as said reference region.
85. The product of claim 77 wherein said instructions to determine said reference region comprise: instructions to determine an area of said image having a consistent pixel color for use as said reference region.
86. The product of claim 77 wherein said instructions to determine said region of interest comprise: instructions to receive an input of an area of pixels of said image to use as said region of interest.
87. The product of claim 77 wherein said instructions to determine said region of interest comprise: instructions to determine said region of interest in said image as pixels between known features of said body part in said image.
88. The product of claim 77 wherein said instructions to analyze said region of interest comprise: instructions to: determine a natural color from colors of said first plurality of pixels in said reference region, determine a color difference of each pixel of said second plurality of pixels from said natural color, identify each of said second plurality of pixels having a color difference greater than a threshold, generate a group for each plurality of neighboring pixels from said identified ones of said second plurality of pixels, determine a plurality of sample squares for each said group generated, and classify a disorder represented by each of said plurality of sample squares.
89. The product of claim 88 wherein said instructions further comprise: instructions to normalize a color of each of said first plurality of pixels in said reference region prior to determining said natural color.
90. The product of claim 88 wherein said instructions further comprise: instructions to normalize a color of each of said second plurality of pixels in said region of interest prior to determining said color difference for each of said second plurality of pixels.
91. The product of claim 88 wherein said instructions to determine said sample squares for one of each said group comprise: instructions to: construct a color difference histogram of color differences of each identified one of said second plurality of pixels in said region of interest, fit a distribution model over said histogram, determine a high threshold from said distribution model, determine a low threshold from said distribution model, categorize an identified one of said second plurality of pixels in said region of interest as a large color difference pixel responsive to said color difference of said identified one pixel being greater than said high threshold, categorize an identified one of said second plurality of pixels in said region of interest as a medium color difference pixel responsive to said color difference of said identified one pixel being less than said high threshold and greater than said low threshold, and categorize an identified one of said second plurality of pixels in said region of interest as a small color difference pixel responsive to said color difference of said identified one pixel being less than said low threshold.
92. The product of claim 91 wherein said instructions to determine a plurality of sample squares for one of each said group comprise: instructions to: determine a primary minimum bounding box including all pixels in said group, apply a morphological operator to dilate said boundary of said group, connect each group of neighboring medium color difference pixels in said primary minimum bounding box into a connected component, connect each group of neighboring large color difference pixels in said primary minimum bounding box into a connected component, determine a secondary minimum bounding box for each said connected component, determine each said secondary minimum bounding box entirely within said primary minimum bounding box, and return each said secondary minimum bounding box determined to be entirely within said primary minimum bounding box as one of said plurality of sample squares.
93. The product of claim 91 wherein said instructions to determine a plurality of sample squares for one of each said group comprise:
instructions to: 1. determine a primary minimum bounding box including all pixels in said group,
2. arrange said pixels in said primary minimum bounding box into groupings of a plurality of squares including an equal number of said pixels wherein said number is one of a plurality of predetermined sizes,
3. determine a ratio of pixels of interest in each square to a total number of pixels in said square,
4. determine said one of said plurality of squares having a greatest ratio,
5. determine whether said ratio of said one of said plurality of squares is greater than a threshold, and
6. include said one of said plurality of squares in said plurality of sample squares responsive to a determination said ratio is greater than said threshold,
7. remove said pixels in said one of said plurality of squares from other ones of said plurality of squares, and
8. repeat steps 3-7.
94. The product of claim 93 wherein said instructions to divide said area into a plurality of sample squares further comprise: instructions to:
9. determine steps 2-8 have been performed for a plurality of squares for each of a plurality of sizes, and
10. repeat steps 2-9 for another one of said plurality of sizes.
95. The product of claim 93 wherein said plurality of squares are arranged to overlap one another in said minimum bounding box and overlapping ones of said plurality of squares include an overlapping set of pixels.
96. The product of claim 93 wherein said pixels of interest are large color difference pixels.
97. The product of claim 93 wherein said pixels of interest are medium color difference pixels.
98. The product of claim 93 wherein said pixels of interest are large and medium color difference pixels.
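Steps 2-8 of claim 93 describe a greedy selection loop. The sketch below slides fixed-size squares at every position (overlapping, as claim 95 permits), repeatedly takes the square with the greatest ratio of pixels of interest, stops when no square exceeds the threshold, and removes the chosen pixels so later squares cannot reuse them (step 7). The stride of 1 and the stopping condition's strictness are assumptions; claim 94's outer loop over multiple square sizes is not shown.

```python
def select_sample_squares(mask, size, threshold):
    """Greedy sketch of steps 2-8 of claim 93.

    mask: 2D grid where truthy cells are pixels of interest
    (large and/or medium color difference pixels, per claims 96-98).
    Returns the top-left corners of the selected sample squares.
    """
    rows, cols = len(mask), len(mask[0])
    interest = {(r, c) for r in range(rows)
                for c in range(cols) if mask[r][c]}
    squares = []
    while True:
        best, best_ratio = None, 0.0
        # step 3: ratio of pixels of interest to total pixels per square
        for r in range(rows - size + 1):
            for c in range(cols - size + 1):
                cells = {(r + i, c + j)
                         for i in range(size) for j in range(size)}
                ratio = len(cells & interest) / (size * size)
                if ratio > best_ratio:          # step 4: greatest ratio
                    best, best_ratio = (r, c), ratio
        if best is None or best_ratio <= threshold:
            return squares                      # step 5 fails: stop
        squares.append(best)                    # step 6: keep the square
        r, c = best
        # step 7: remove its pixels from consideration by other squares
        interest -= {(r + i, c + j)
                     for i in range(size) for j in range(size)}
```

With a 2x2 square and a 0.5 threshold on a mask whose top-left 2x2 block is all pixels of interest, only that block is selected; an isolated corner pixel yields a ratio of 0.25 and is rejected.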
99. The product of claim 88 wherein said instructions to classify comprise: instructions to: extract features from each of said plurality of sample squares, and classify said features of each of said plurality of sample squares using a classifier process.
100. The product of claim 99 wherein said classifier process is a Bayesian classifier.
101. The product of claim 99 wherein said classifier process is a support vector machine.
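Claims 99-101 leave both the extracted features and the classifier open, naming a Bayesian classifier and a support vector machine as options. The following minimal Gaussian naive Bayes is one possible "Bayesian classifier" in the sense of claim 100, operating on per-square feature vectors; the feature extraction step itself is not shown, and the class labels used below are purely illustrative.

```python
import math

def fit_nb(X, y):
    """Fit per-class priors, feature means, and variances.

    X: list of feature vectors (one per sample square), y: class labels.
    A variance floor avoids division by zero on constant features.
    """
    model = {}
    for label in set(y):
        rows = [x for x, yy in zip(X, y) if yy == label]
        n = len(rows)
        means = [sum(col) / n for col in zip(*rows)]
        vars_ = [max(sum((v - m) ** 2 for v in col) / n, 1e-9)
                 for col, m in zip(zip(*rows), means)]
        model[label] = (n / len(y), means, vars_)
    return model

def predict_nb(model, x):
    """Return the class with the greatest log-posterior for features x."""
    def log_post(prior, means, vars_):
        ll = math.log(prior)
        for v, m, s2 in zip(x, means, vars_):
            ll += -0.5 * math.log(2 * math.pi * s2) - (v - m) ** 2 / (2 * s2)
        return ll
    return max(model, key=lambda label: log_post(*model[label]))
```

A support vector machine (claim 101) could be substituted for `predict_nb` without changing the surrounding pipeline, since both consume the same per-square feature vectors.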
PCT/SG2009/000190 2008-05-29 2009-05-29 Method of analysing skin images using a reference region to diagnose a skin disorder WO2009145735A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US5688408P 2008-05-29 2008-05-29
US61/056,884 2008-05-29

Publications (1)

Publication Number Publication Date
WO2009145735A1 true WO2009145735A1 (en) 2009-12-03

Family

ID=41377359

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/SG2009/000190 WO2009145735A1 (en) 2008-05-29 2009-05-29 Method of analysing skin images using a reference region to diagnose a skin disorder

Country Status (1)

Country Link
WO (1) WO2009145735A1 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020016539A1 (en) * 2000-05-03 2002-02-07 Bernd Michaelis Method and apparatus for measuring and classifying optically observable changes in skin and mucous membrane
US20040218810A1 (en) * 2003-04-29 2004-11-04 Inforward, Inc. Methods and systems for computer analysis of skin image
WO2004095372A1 (en) * 2003-04-22 2004-11-04 Provincia Italiana Della Congregazione Dei Figli Dell'immacolata Concezione - Instituto Dermopatico Dell'immacolata Automatic detection of skin lesions
EP1512372A1 (en) * 2003-09-05 2005-03-09 DERMING S.r.l. Method and device for quantifying the extension of a colour-altered skin or nail area
US20080008370A1 (en) * 2006-06-20 2008-01-10 Shiu-Shin Chio Method and apparatus for diagnosing conditions using tissue color

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2011106033A1 (en) * 2010-02-26 2011-09-01 Empire Technology Development Llc Multidirectional scan and algorithmic skin health analysis
US8591413B2 (en) 2010-02-26 2013-11-26 Empire Technology Development Llc Echogram detection of skin conditions
US8855751B2 (en) 2010-02-26 2014-10-07 Empire Technology Development Llc Multidirectional scan and algorithmic skin health analysis
WO2015199560A1 (en) * 2014-06-28 2015-12-30 Ktg Sp. Z O.O. A method for diagnosing birthmarks on the skin
ITUB20152522A1 (en) * 2015-07-27 2017-01-27 Linkverse S R L Apparatus and method for the detection, quantification and classification of epidermal lesions
WO2017017590A1 (en) * 2015-07-27 2017-02-02 Linkverse S.R.L. Apparatus and method for detection, quantification and classification of epidermal lesions
WO2018111069A1 (en) * 2016-12-15 2018-06-21 Castro Baldenebro Brayan Gamaniel Method for identifying skin lesions caused by acne by means of multispectral-image capture with prior cooling
CN111150369A (en) * 2020-01-02 2020-05-15 京东方科技集团股份有限公司 Medical assistance apparatus, medical assistance detection apparatus and method

Similar Documents

Publication Publication Date Title
JP4485837B2 (en) Method and system for computer analysis of skin images
US7564990B2 (en) Imaging system and method for physical feature analysis
US11769265B2 (en) Skin assessment using image fusion
US5836872A (en) Digital optical visualization, enhancement, quantification, and classification of surface and subsurface features of body surfaces
US6902935B2 (en) Methods of monitoring effects of chemical agents on a sample
EP1433418A1 (en) Skin diagnostic imaging method and apparatus
AU755385B2 (en) Systems and methods for the multispectral imaging and characterization of skin tissue
US20120206587A1 (en) System and method for scanning a human body
WO2009145735A1 (en) Method of analysing skin images using a reference region to diagnose a skin disorder
CN111128382B (en) Artificial intelligence multimode imaging analysis device
WO2015035229A2 (en) Apparatuses and methods for mobile imaging and analysis
Min et al. Development and evaluation of an automatic acne lesion detection program using digital image processing
Fabelo et al. A novel use of hyperspectral images for human brain cancer detection using in-vivo samples
US20190239752A1 (en) Hyperspectral imaging system and method of using the same
CN113159227A (en) Acne image recognition method, system and device based on neural network
JP2015500722A (en) Method and apparatus for detecting and quantifying skin symptoms in a skin zone
KR102413404B1 (en) skin condition analyzing and skin disease diagnosis device
US11478145B2 (en) Multispectral and hyperspectral meibography
CN111528807A (en) Face image analysis system and method based on multispectral and 3D model reconstruction
Akbari et al. Blood vessel detection and artery-vein differentiation using hyperspectral imaging
US20230363697A1 (en) Acne severity grading methods and apparatuses
JP2006081846A (en) Method and apparatus for estimating facial wrinkle
US11896345B2 (en) Device for producing an image of depth-dependent morphological structures of skin lesions
JP5408527B2 (en) Creating a melanoma diagnostic image
EP4075385A1 (en) Method and system for anonymizing facial images

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 09755166

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 09755166

Country of ref document: EP

Kind code of ref document: A1