CN118436316B - Method for a dermatological mirror and dermatological mirror system - Google Patents

Method for a dermatological mirror and dermatological mirror system Download PDF

Info

Publication number
CN118436316B
CN118436316B (application CN202410895452.4A)
Authority
CN
China
Prior art keywords
image
area
dermatoscope
head skin
positioning
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202410895452.4A
Other languages
Chinese (zh)
Other versions
CN118436316A (en)
Inventor
王艺萌
吴希
李薇薇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Peking University Third Hospital Peking University Third Clinical Medical College
Original Assignee
Peking University Third Hospital Peking University Third Clinical Medical College
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Peking University Third Hospital Peking University Third Clinical Medical College filed Critical Peking University Third Hospital Peking University Third Clinical Medical College
Priority to CN202410895452.4A priority Critical patent/CN118436316B/en
Publication of CN118436316A publication Critical patent/CN118436316A/en
Application granted granted Critical
Publication of CN118436316B publication Critical patent/CN118436316B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

The invention provides a method for a dermatoscope and a dermatoscope system. The method comprises the following steps: acquiring a first head skin image from the dermatoscope; detecting hair follicle positions from the first head skin image as first keypoints; drawing a first image using the first keypoints; selecting and marking an asymmetric region on the first image as a first positioning region to be matched; storing the first head skin image and the first positioning region; acquiring a second head skin image from the dermatoscope; detecting hair follicle positions from the second head skin image as second keypoints; drawing a second image using the second keypoints; detecting and marking, in the second image, a second positioning region identical to the first positioning region; and superimposing the second image marked with the second positioning region on the first image marked with the first positioning region. The invention performs positioning without an interventional procedure and is therefore patient-friendly.

Description

Method for a dermatological mirror and dermatological mirror system
Technical Field
The invention relates to the technical field of medical equipment, in particular to a method for a dermatoscope, a dermatoscope system and computing equipment.
Background
As a noninvasive examination means, the dermatoscope can clearly show structural changes of the scalp, follicular orifices, hair and the like in the lesion area, avoids directly pulling scalp and hair samples, and provides a basis for diagnosis. A growing number of studies confirm that the dermatoscope plays an important role in the diagnosis of alopecia.
In some situations, for example in the case of androgenetic alopecia, it is necessary to use a dermatoscope to acquire an initial image of the alopecia area at the beginning of diagnosis and treatment and another image of the alopecia area after a stage of treatment, so as to evaluate and analyze the treatment effect. To enable the subsequent image to be acquired at the same location as the initial image, medical tattoos are commonly used at present (see the red marks in fig. 1). Medical tattoos, also known as "medical marks" or "surgical marks", are a technique for marking specific locations on a patient's body, mainly to ensure accuracy and repeatability in medical procedures. The dyes used are specifically designed for medical purposes, ensuring that they are harmless to the skin while clearly indicating the mark. These dyes are generally non-toxic and rarely cause skin allergy. Medical tattoos may last for a period of time, but fade eventually as the skin renews itself. Medical tattooing is an interventional procedure because it uses a needle to penetrate the surface of the skin. Although it is a relatively superficial procedure, it still breaches the skin barrier and is therefore a minimally invasive procedure, which concerns some alopecia patients.
Therefore, there is a need for a non-invasive method of positioning the dermatoscope when acquiring head skin images with the dermatoscope, ensuring accuracy and repeatability of the positioning across two acquisitions.
Disclosure of Invention
The present invention aims to provide a method for a dermoscope, a dermoscope system and a computing device, which achieve accurate and repeatable positioning across two acquisitions in a non-invasive manner.
According to an aspect of the present invention, there is provided a method for a dermatoscope for positioning the dermatoscope when acquiring a head skin image with the dermatoscope, the method comprising:
Acquiring a first head skin image from the dermatoscope;
Detecting a hair follicle position from the first head skin image as a first keypoint;
drawing a first image by using the first key point;
Selecting and marking an asymmetric region on the first image as a first positioning region to be matched;
storing the first head skin image and the first localized area;
Acquiring a second head skin image from the dermatoscope;
Detecting hair follicle locations from the second head skin image as second keypoints;
drawing a second image by using the second key point;
detecting and marking a second positioning area identical to the first positioning area in the second image;
superimposing the second image marked with the second positioning area on the first image marked with the first positioning area, thereby guiding an operator to move the dermatoscope so that the second positioning area coincides with the first positioning area to determine a target image.
According to some embodiments, detecting hair follicle locations from the first head skin image as a first keypoint comprises:
the first head skin image is segmented for hair follicle position using a pre-trained image semantic segmentation model.
According to some embodiments, the asymmetric region is a triangular region or a polygonal region.
According to some embodiments, a center point of the first image is located within the first positioning area.
According to some embodiments, detecting and marking a second localization area in the second image that is identical to the first localization area comprises:
Detecting and marking, in the second image, a second positioning area which is the same as the first positioning area by using a target detection method based on a pre-trained neural network.
According to some embodiments, overlapping the second image marked with the second positioning area over the first image marked with the first positioning area comprises:
Superimposing a second image on which only the outline of the second positioning area is drawn on the first image on which the outline of the first positioning area is drawn.
According to some embodiments, the foregoing method further comprises:
The second head skin image when the second positioning region coincides with the first positioning region is determined as the target image.
According to another aspect of the present application, there is provided a dermatological system comprising:
A dermatoscope for acquiring a head skin image;
an image server in communication with the dermatoscope for receiving and processing the head skin image from the dermatoscope, wherein the image server comprises:
an image acquisition module for acquiring a first head skin image and a second head skin image from the dermatoscope;
a keypoint detection module for detecting a hair follicle position from the first head skin image as a first keypoint and from the second head skin image as a second keypoint;
The drawing module is used for drawing a first image by using the first key points and drawing a second image by using the second key points;
The marking module is used for selecting and marking an asymmetric area on the first image as a first positioning area to be matched;
the matching module is used for detecting and marking a second positioning area which is the same as the first positioning area in the second image;
A superposition module for superposing the second image marked with the second positioning area on the first image marked with the first positioning area, thereby guiding an operator to move the dermatoscope so that the second positioning area coincides with the first positioning area to determine a target image;
A storage module for storing the first head skin image and the first localization area and the target image.
According to another aspect of the present application, there is provided a computer program product comprising a computer program which, when executed by a processor, implements the method of any of the above.
According to another aspect of the invention there is provided a computing device comprising a processor, and a memory storing a computer program which, when executed by the processor, causes the processor to perform the method of any of the above.
According to another aspect of the invention there is provided a non-transitory computer readable storage medium having stored thereon computer readable instructions which, when executed by a processor, cause the processor to perform the method of any of the above.
According to an exemplary embodiment, in order to position the dermatoscope when the head skin image is acquired multiple times with the dermatoscope, the technical solution of the present invention extracts a positioning area from the first image and, after the subsequent image is acquired, detects the previously determined positioning area in it, so that an operator can move the dermatoscope until the second positioning area coincides with the first positioning area. At that moment the image obtained by the dermatoscope is taken at the same position as the earlier image, and can therefore be determined as the target image of the current acquisition. Thus, consistency between the two positionings can be achieved even without an interventional marking. The technical scheme of the invention is highly adaptable, patient-friendly, and can assist doctors in making more accurate diagnosis and treatment evaluations.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention as claimed.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings that are required to be used in the description of the embodiments will be briefly described below.
Fig. 1 shows an example image of a medical tattoo for the skin of the head.
Fig. 2 shows a flow chart of a method for a dermatological mirror in accordance with an example embodiment.
Fig. 3 schematically shows a first image drawn with a first keypoint.
Fig. 4 schematically shows a first positioning area on a first image.
Fig. 5 schematically shows a second image with a second positioning area.
Fig. 6 shows a schematic representation of a second image superimposed on a first image.
Fig. 7 shows a dermoscope system according to an example embodiment.
FIG. 8 illustrates a block diagram of a computing device according to an example embodiment of the invention.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. However, the exemplary embodiments can be embodied in many forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of the example embodiments to those skilled in the art. The same reference numerals in the drawings denote the same or similar parts, and thus a repetitive description thereof will be omitted.
Furthermore, the described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to give a thorough understanding of embodiments of the invention. One skilled in the relevant art will recognize, however, that the invention may be practiced without one or more of the specific details, or with other methods, components, devices, steps, etc. In other instances, well-known methods, devices, implementations, or operations are not shown or described in detail to avoid obscuring aspects of the invention.
The block diagrams depicted in the figures are merely functional entities and do not necessarily correspond to physically separate entities. That is, the functional entities may be implemented in software, or in one or more hardware modules or integrated circuits, or in different networks and/or processor devices and/or microcontroller devices.
The flow diagrams depicted in the figures are exemplary only, and do not necessarily include all of the elements and operations/steps, nor must they be performed in the order described. For example, some operations/steps may be decomposed, and some operations/steps may be combined or partially combined, so that the order of actual execution may be changed according to actual situations.
It will be understood that, although the terms first, second, third, etc. may be used herein to describe various components, these components should not be limited by these terms. These terms are used to distinguish one element from another element. Accordingly, a first component discussed below could be termed a second component without departing from the teachings of the present inventive concept. As used herein, the term "and/or" includes any one of the associated listed items and all combinations of one or more.
The user information (including but not limited to user equipment information, user personal information, etc.) and data (including but not limited to data for analysis, stored data, presented data, etc.) related to the present invention are information and data authorized by the user or fully authorized by each party, and the collection, use and processing of related data is required to comply with the relevant laws and regulations and standards of the relevant country and region, and is provided with corresponding operation entries for the user to select authorization or rejection.
Those skilled in the art will appreciate that the drawings are schematic representations of example embodiments and that the modules or flows in the drawings are not necessarily required to practice the invention and therefore should not be taken to limit the scope of the invention.
In order to acquire head skin images at the same position with a dermatoscope for evaluating and analyzing the treatment effect, medical tattoos are generally adopted at present. Although medical tattooing is a relatively superficial procedure, it is still a minimally invasive one, about which some alopecia patients are concerned.
To this end, the invention proposes a solution in which a localization area is extracted using neural-network image processing techniques, and the previously determined localization area is detected after the subsequent image is acquired, so that the operator can move the dermatoscope until the second localization area coincides with the first. At that moment the image obtained by the dermatoscope corresponds to the previous position and can therefore be determined as the target image of the current acquisition. With the technical scheme of the invention, consistency between the two positionings can be achieved without an interventional marking, which is more patient-friendly.
The technical scheme of the invention is described in detail below with reference to the accompanying drawings.
Fig. 2 shows a flow chart of a method for a dermatological mirror in accordance with an example embodiment.
The method shown in fig. 2 is used to position a dermatoscope when acquiring an image of the skin of the head with the dermatoscope.
Referring to fig. 2, at S201, a first head skin image from the dermatoscope is acquired.
The dermatoscope, also called an epiluminescence microscope, is a rapid, convenient and noninvasive examination means. Its technical principle is that a certain medium (generally mineral oil, ethanol, water or the like) reduces the reflection, refraction and diffraction of light at the stratum corneum of the skin, making the stratum corneum semitransparent; then, by means of the magnifying function of the dermatoscope, structures in the epidermis, at the dermo-epidermal junction and in the superficial dermis can be observed, revealing morphological features invisible to the naked eye. Since the dermatoscope was first described in the 1980s, important technical improvements and innovations have emerged; second-generation devices improved the illumination optics by using ordinary light-emitting diodes. Currently, dermatoscopes are available in two modes: unpolarized light, which requires immersion with a medium, and polarized light (contact and non-contact dermatoscopes). Compared with polarized light, unpolarized light in direct contact yields sharper colors and less distortion, and thus better illumination and resolution. A dermatoscope used to examine the scalp and hair is also known as a trichoscope. The dermatoscope is connected to a computer, and the acquired images can be transmitted to and stored on the computer for subsequent analysis. According to an example embodiment, a first head skin image obtained through the dermatoscope, such as an image of an androgenetic alopecia site, may be transmitted to an image server communicatively coupled to the dermatoscope for image processing.
At S203, a hair follicle position is detected from the first head skin image as a first keypoint.
According to some embodiments, the first head skin image may be segmented for hair follicle locations using a pre-trained image semantic segmentation model, with the resulting hair follicle locations as first keypoints. For example, segmentation of hair follicle locations can be performed using a pre-trained U-Net deep learning model, which can be pre-trained using a dataset containing hair follicle markers.
U-Net is a convolutional neural network (CNN) characterized by a U-shaped structure consisting of a contracting path, which captures context information, and a symmetric expanding path, which enables precise localization. In the contracting path, the network gradually reduces the spatial size of the feature maps through successive convolution and pooling layers while increasing the number of feature channels, which allows more context information to be captured. In the expanding path, the network gradually restores the spatial size of the feature maps through upsampling and convolution layers while reducing the number of feature channels, which allows more accurate localization of the target. An important feature of U-Net is the skip connections, which concatenate feature maps from the contracting path with the corresponding feature maps in the expanding path. In this way the upsampled feature maps in the expanding path gain access to higher-resolution features, improving segmentation accuracy.
It is easy to understand that the technical scheme of the application can adopt semantic segmentation neural network models other than U-Net, such as FCN, DeepLab, Mask R-CNN and the like.
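Purely as an illustration, the following minimal sketch (Python with PyTorch and OpenCV) shows how this segmentation step might be run and the follicle centroids extracted as keypoints. The model file name "unet_follicle.pt", the assumption that the model is exported as TorchScript and outputs a single-channel logit map, and the probability threshold are all assumptions made for the example, not specifics of the invention.

import cv2
import numpy as np
import torch

def detect_follicle_keypoints(image_path, model_path="unet_follicle.pt", threshold=0.5):
    # Load the head skin image and normalize it to the model's expected input range.
    bgr = cv2.imread(image_path)
    rgb = cv2.cvtColor(bgr, cv2.COLOR_BGR2RGB).astype(np.float32) / 255.0
    tensor = torch.from_numpy(rgb).permute(2, 0, 1).unsqueeze(0)  # 1 x 3 x H x W

    # Run the pre-trained segmentation network (assumed exported with TorchScript).
    model = torch.jit.load(model_path).eval()
    with torch.no_grad():
        prob = torch.sigmoid(model(tensor))[0, 0].numpy()  # per-pixel follicle probability

    # Threshold to a binary follicle mask and take each connected component's centroid as a keypoint.
    mask = (prob > threshold).astype(np.uint8)
    _, _, _, centroids = cv2.connectedComponentsWithStats(mask)
    return [(float(x), float(y)) for x, y in centroids[1:]]  # index 0 is the background component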
At S205, a first image is rendered using the first keypoint.
According to an exemplary embodiment, a keypoint image is redrawn from the segmented first keypoints as the first image (see fig. 3), which is used to position the dermatoscope when images are subsequently acquired.
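As an illustrative sketch only (Python with OpenCV; the white canvas, marker radius and color are arbitrary choices for the example, not prescribed by the method), the keypoint image could be rendered like this:

import cv2
import numpy as np

def draw_keypoint_image(keypoints, height, width, radius=3):
    # Start from a blank (white) canvas with the same size as the head skin image.
    canvas = np.full((height, width, 3), 255, dtype=np.uint8)
    # Draw each hair follicle keypoint as a small filled dot.
    for x, y in keypoints:
        cv2.circle(canvas, (int(round(x)), int(round(y))), radius, (0, 0, 0), thickness=-1)
    return canvas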
In S207, an asymmetric region is selected and marked on the first image as a first positioning region to be matched.
According to some embodiments, the asymmetric region is a triangular region or a polygonal region.
According to some embodiments, a center point of the first image is located within the first positioning area. For example, one way to select the asymmetric area is to take the center of the first image as the origin and pick one point on each of the four coordinate half-axes, at different distances, resulting in an asymmetric quadrilateral surrounding the origin (see fig. 4).
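A minimal sketch of this selection follows (Python with OpenCV and NumPy). The four offsets are arbitrary example values chosen here so that the quadrilateral has no symmetry; they are not values prescribed by the method.

import cv2
import numpy as np

def mark_asymmetric_region(first_image, offsets=(60, 35, 80, 50)):
    # offsets = distances from the image center along +x, +y, -x, -y; they are unequal,
    # so the resulting quadrilateral is asymmetric and can be matched unambiguously later.
    h, w = first_image.shape[:2]
    cx, cy = w // 2, h // 2
    right, down, left, up = offsets
    region = np.array([
        [cx + right, cy],  # point on the positive x half-axis
        [cx, cy + down],   # point on the positive y half-axis
        [cx - left, cy],   # point on the negative x half-axis
        [cx, cy - up],     # point on the negative y half-axis
    ], dtype=np.int32)
    marked = first_image.copy()
    cv2.polylines(marked, [region.reshape(-1, 1, 2)], True, (0, 0, 255), 2)
    return marked, region  # the region's coordinate points can be stored for later matching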
At S209, the first head skin image and the first localization area are stored.
According to an example embodiment, the first head skin image and the first localization area are stored for subsequent efficacy comparison analysis and for positioning when images are later acquired with the dermatoscope.
According to some embodiments, the first location area is saved as an image. According to further embodiments, the first location area is saved as a set of coordinate points.
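As a purely illustrative sketch (Python; the file names and JSON layout are assumptions for the example, not part of the method), the first head skin image and the first positioning region saved as coordinate points could be persisted as follows:

import json
import cv2

def save_first_acquisition(first_skin_image, region_points, prefix="patient001_visit1"):
    # Store the raw head skin image for later efficacy comparison.
    cv2.imwrite(f"{prefix}_skin.png", first_skin_image)
    # Store the first positioning region as a set of coordinate points.
    with open(f"{prefix}_region.json", "w", encoding="utf-8") as f:
        json.dump({"region": [[int(x), int(y)] for x, y in region_points]}, f)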
At S211, a second head skin image from the dermatoscope is acquired.
According to an example embodiment, a second head skin image from the dermatoscope may be acquired after a certain period of treatment of, for example, androgenetic alopecia.
At S213, hair follicle locations are detected from the second head skin image as second keypoints.
Referring to S203, according to an example embodiment, the second head skin image may be segmented for hair follicle positions using a pre-trained image semantic segmentation model, with the resulting hair follicle positions as second keypoints.
At S215, a second image is rendered using the second keypoint.
Referring to S205, according to an example embodiment, a keypoint image is redrawn from the segmented second keypoints as the second image.
At S217, a second location area identical to the first location area is detected and marked in the second image.
According to some embodiments, a target detection method based on a pre-trained neural network may be used to detect and mark a second localization area in the second image that is the same as the first localization area (see fig. 5). For example, a pre-trained YOLO neural network model may be used to detect the first localization area. YOLO is a target detection algorithm that has been applied to tasks such as traffic signal detection, examination proctoring, in-game target detection and various industrial automation scenarios. It is readily understood that those skilled in the art may employ other methods for detecting the first localization area, for example neural network models such as R-CNN, Fast R-CNN, Faster R-CNN and Mask R-CNN, or a KD-Tree based search.
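Purely as an illustration (Python), the sketch below uses the public ultralytics/yolov5 torch.hub interface. The weights file "region_detector.pt", the assumption that it has been fine-tuned to recognize the saved first positioning area, and the confidence threshold are all assumptions made for the example; running it also requires network access to fetch the YOLOv5 repository.

import torch

def detect_second_region(second_image_path, weights="region_detector.pt", min_conf=0.5):
    # Load a YOLOv5 model with custom weights via the public torch.hub interface.
    model = torch.hub.load("ultralytics/yolov5", "custom", path=weights)
    results = model(second_image_path)      # run detection on the second keypoint image
    detections = results.pandas().xyxy[0]   # bounding boxes as a pandas DataFrame
    detections = detections[detections["confidence"] >= min_conf]
    if detections.empty:
        return None                         # the first positioning area was not found
    best = detections.sort_values("confidence", ascending=False).iloc[0]
    # Return the bounding box of the best match as the second positioning area.
    return int(best["xmin"]), int(best["ymin"]), int(best["xmax"]), int(best["ymax"])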
At S219, the second image marked with the second positioning area is superimposed on the first image marked with the first positioning area.
According to an example embodiment, by superimposing the second image marked with the second positioning area on the first image marked with the first positioning area (see fig. 6), the operator is guided, according to the positional relationship between the two areas, to move the dermatoscope so that the second positioning area coincides with the first positioning area, thereby determining the target image.
According to some embodiments, in order to avoid interference and distraction of the operator by image details, when the second image marked with the second positioning area is superimposed on the first image marked with the first positioning area, only the second image on which the outline of the second positioning area is drawn is superimposed on the first image on which the outline of the first positioning area is drawn, and information other than the outlines is eliminated (see fig. 6).
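As an illustrative sketch only (Python with OpenCV; the colors and the assumption that each positioning region is given as a polygon of coordinate points are choices made for the example):

import cv2
import numpy as np

def overlay_region_outlines(first_image, first_region, second_region):
    # Draw only the two region outlines on a copy of the first image, so the operator
    # sees how far the current (second) region is from the stored (first) region.
    guide = first_image.copy()
    cv2.polylines(guide, [np.int32(first_region).reshape(-1, 1, 2)], True, (0, 0, 255), 2)   # first region in red
    cv2.polylines(guide, [np.int32(second_region).reshape(-1, 1, 2)], True, (0, 255, 0), 2)  # second region in green
    return guide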
For the accuracy of the evaluation, it may in fact be required that the subsequent image is acquired at the same location, i.e. that the first and second head skin images correspond to the same location on the head skin. In the technical scheme of the present application, a positioning area is extracted, and after the subsequent image is acquired, the previously determined positioning area is detected in it, so that the operator can move the dermatoscope until the second positioning area coincides with the first positioning area. At that moment the image obtained by the dermatoscope is taken at the same position as the earlier image, and can therefore be determined as the target image of the current acquisition. Thus, consistency between the two positionings can be achieved even without an interventional marking.
Fig. 7 shows a dermoscope system according to an example embodiment.
Referring to fig. 7, a dermatoscope system according to an example embodiment includes a dermatoscope 710 and an image server 720. The dermatoscope 710 is used to acquire an image of the head skin. The image server 720 is communicatively coupled to the dermatoscope 710 for receiving and processing the head skin image from the dermatoscope 710.
As shown in fig. 7, the image server 720 includes an image acquisition module 701, a keypoint detection module 703, a drawing module 705, a marking module 707, a matching module 709, a superimposition module 711, and a storage module 713.
According to an example embodiment, the image acquisition module 701 is used to acquire a first head skin image and a second head skin image from the dermatoscope. For example, the image acquisition module 701 obtains through the dermatoscope a first head skin image, such as an image of an androgenetic alopecia site, and obtains a second head skin image, for example, after a certain period of treatment of the androgenetic alopecia.
The keypoint detection module 703 is configured to detect a hair follicle position from the first head skin image as a first keypoint and from the second head skin image as a second keypoint. As previously described, according to some embodiments, the first and second head skin images may be segmented for hair follicle positions using a pre-trained image semantic segmentation model, with the resulting hair follicle positions as the first and second keypoints. For example, a pre-trained U-Net deep learning model can be used to segment hair follicle locations.
The drawing module 705 is configured to draw a first image using the first keypoint and draw a second image using the second keypoint.
The marking module 707 is configured to select and mark an asymmetric area on the first image as a first positioning area to be matched. As previously described, according to some embodiments, the asymmetric region is a triangular region or a polygonal region. According to some embodiments, a center point of the first image is located within the first positioning area.
The matching module 709 is configured to detect and mark a second positioning area identical to the first positioning area in the second image. According to some embodiments, a second localization area identical to the first localization area may be detected and marked in the second image using a pre-trained neural network based target detection method. For example, a pre-trained YOLO neural network model may be used to detect the first positioning area.
The superimposition module 711 is configured to superimpose the second image marked with the second positioning area on the first image marked with the first positioning area, so as to guide an operator to move the dermatoscope so that the second positioning area coincides with the first positioning area to determine a target image, which is not repeated here.
The storage module 713 is configured to store the first head skin image, the first positioning area, and the target image. As previously described, according to some embodiments, the first location area is saved as an image. According to further embodiments, the first location area is saved as a set of coordinate points.
FIG. 8 illustrates a block diagram of a computing device according to an example embodiment of the invention.
As shown in fig. 8, computing device 30 includes processor 12 and memory 14. Computing device 30 may also include a bus 22, a network interface 16, and an I/O interface 18. The processor 12, memory 14, network interface 16, and I/O interface 18 may communicate with each other via a bus 22.
The processor 12 may include one or more general-purpose central processing units (CPUs), microprocessors, application-specific integrated circuits, or the like, for executing associated program instructions. According to some embodiments, computing device 30 may also include a high-performance graphics processing unit (GPU) 20 to accelerate the processor 12.
Memory 14 may include computer-readable media in the form of volatile memory, such as random access memory (RAM), read-only memory (ROM), and/or cache memory. Memory 14 is used to store one or more programs, including instructions, as well as data. The processor 12 may read the instructions stored in the memory 14 to perform the methods according to the embodiments of the invention described above.
Computing device 30 may also communicate with one or more networks through network interface 16. The network interface 16 may be a wireless network interface.
Bus 22 may be a bus including an address bus, a data bus, a control bus, etc. Bus 22 provides a path for exchanging information between the components.
It should be noted that, in the implementation, the computing device 30 may further include other components necessary to achieve normal operation. Furthermore, it will be understood by those skilled in the art that the above-described apparatus may include only the components necessary to implement the embodiments of the present description, and not all the components shown in the drawings.
The present invention also provides a computer readable storage medium having stored thereon a computer program which when executed by a processor performs the steps of the above method. The computer readable storage medium may include, but is not limited to, any type of disk including floppy disks, optical disks, DVDs, CD-ROMs, micro-drives, and magneto-optical disks, ROM, RAM, EPROM, EEPROM, DRAM, VRAM, flash memory devices, magnetic or optical cards, nanosystems (including molecular memory ICs), network storage devices, cloud storage devices, or any type of media or device suitable for storing instructions and/or data.
Embodiments of the present invention also provide a computer program product comprising a non-transitory computer readable storage medium storing a computer program operable to cause a computer to perform part or all of the steps of any one of the methods described in the method embodiments above.
It will be clear to a person skilled in the art that the solution according to the invention can be implemented by means of software and/or hardware. "Unit" and "module" in this specification refer to software and/or hardware capable of performing a specific function, either alone or in combination with other components, where the hardware may be, for example, a field programmable gate array, an integrated circuit, or the like.
It should be noted that, for simplicity of description, the foregoing method embodiments are all described as a series of acts, but it should be understood by those skilled in the art that the present invention is not limited by the order of acts described, as some steps may be performed in other orders or concurrently in accordance with the present invention. Further, those skilled in the art will also appreciate that the embodiments described in the specification are all preferred embodiments, and that the acts and modules referred to are not necessarily required for the present invention.
In the foregoing embodiments, the descriptions of the embodiments are emphasized, and for parts of one embodiment that are not described in detail, reference may be made to related descriptions of other embodiments.
In the several embodiments provided by the present invention, it should be understood that the disclosed apparatus may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative: the division into units is merely a division by logical function, and there may be other divisions in actual implementation; for instance, multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the mutual coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some service interfaces, devices or units, and may be electrical or in other forms.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present invention may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable memory. Based on this understanding, the technical solution of the present invention may be embodied essentially or in a part contributing to the prior art or in whole or in part in the form of a software product stored in a memory, comprising several instructions for causing a computer device (which may be a personal computer, a server or a network device, etc.) to perform all or part of the steps of the method of the various embodiments of the present invention.
The exemplary embodiments of the present invention have been particularly shown and described above. It is to be understood that this invention is not limited to the precise arrangements and instrumentalities described herein; on the contrary, the invention is intended to cover various modifications and equivalent arrangements included within the spirit and scope of the appended claims.

Claims (10)

1. A method for a dermatoscope for positioning the dermatoscope while acquiring a head skin image with the dermatoscope, the method comprising:
Acquiring a first head skin image from the dermatoscope;
Detecting a hair follicle position from the first head skin image as a first keypoint;
drawing a first image by using the first key point;
Selecting and marking an asymmetric region on the first image as a first positioning region to be matched;
storing the first head skin image and the first localized area;
Acquiring a second head skin image from the dermatoscope;
Detecting hair follicle locations from the second head skin image as second keypoints;
drawing a second image by using the second key point;
detecting and marking a second positioning area identical to the first positioning area in the second image;
superimposing the second image marked with the second positioning area on the first image marked with the first positioning area, thereby guiding an operator to move the dermatoscope so that the second positioning area coincides with the first positioning area to determine a target image.
2. The method of claim 1, wherein detecting hair follicle locations from the first head skin image as a first keypoint comprises:
the first head skin image is segmented for hair follicle position using a pre-trained image semantic segmentation model.
3. The method of claim 1, wherein the asymmetric region is a triangular region or a polygonal region.
4. A method according to claim 3, wherein the centre point of the first image is located within the first location area.
5. The method of claim 1, wherein detecting and marking a second location area in the second image that is the same as the first location area comprises:
a second localization area identical to the first localization area is detected and marked in the second image using a target detection method based on a pre-trained neural network.
6. The method of claim 1, wherein overlapping the second image marked with the second positioning region over the first image marked with the first positioning region comprises:
Superimposing a second image on which only the outline of the second positioning area is drawn on the first image on which the outline of the first positioning area is drawn.
7. The method as recited in claim 1, further comprising:
The second head skin image when the second positioning region coincides with the first positioning region is determined as the target image.
8. A dermatoscope system, characterized by comprising:
A dermatoscope for acquiring a head skin image;
an image server in communication with the dermatoscope for receiving and processing the head skin image from the dermatoscope, wherein the image server comprises:
an image acquisition module for acquiring a first head skin image and a second head skin image from the dermatoscope;
a keypoint detection module for detecting a hair follicle position from the first head skin image as a first keypoint and from the second head skin image as a second keypoint;
The drawing module is used for drawing a first image by using the first key points and drawing a second image by using the second key points;
The marking module is used for selecting and marking an asymmetric area on the first image as a first positioning area to be matched;
the matching module is used for detecting and marking a second positioning area which is the same as the first positioning area in the second image;
A superposition module for superposing the second image marked with the second positioning area on the first image marked with the first positioning area, thereby guiding an operator to move the dermatoscope so that the second positioning area coincides with the first positioning area to determine a target image;
A storage module for storing the first head skin image and the first localization area and the target image.
9. A computer program product comprising a computer program which, when executed by a processor, implements the method of any of claims 1-7.
10. A computing device, comprising:
A processor; and
A memory storing a computer program which, when executed by the processor, causes the processor to perform the method of any one of claims 1-7.
CN202410895452.4A 2024-07-05 2024-07-05 Method for a dermatological mirror and dermatological mirror system Active CN118436316B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410895452.4A CN118436316B (en) 2024-07-05 2024-07-05 Method for a dermatological mirror and dermatological mirror system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202410895452.4A CN118436316B (en) 2024-07-05 2024-07-05 Method for a dermatological mirror and dermatological mirror system

Publications (2)

Publication Number Publication Date
CN118436316A CN118436316A (en) 2024-08-06
CN118436316B true CN118436316B (en) 2024-08-27

Family

ID=92312757

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410895452.4A Active CN118436316B (en) 2024-07-05 2024-07-05 Method for a dermatological mirror and dermatological mirror system

Country Status (1)

Country Link
CN (1) CN118436316B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105578953A (en) * 2013-07-22 2016-05-11 洛克菲勒大学 Optical detection of skin disease
CN108510502A (en) * 2018-03-08 2018-09-07 华南理工大学 Melanoma picture tissue segmentation methods based on deep neural network and system

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102009044962A1 (en) * 2009-09-24 2011-04-07 W.O.M. World Of Medicine Ag Dermatoscope and elevation measuring device
CN109363640A (en) * 2018-12-04 2019-02-22 北京贝叶科技有限公司 Recognition methods and system based on dermal pathology image
CN112308827A (en) * 2020-10-23 2021-02-02 复旦大学 Hair follicle detection method based on deep convolutional neural network
US20230009364A1 (en) * 2021-07-06 2023-01-12 Welch Allyn, Inc. Image capture systems and methods for identifying abnormalities using multispectral imaging
CN115100107B (en) * 2022-05-17 2024-08-20 重庆师范大学 Method and system for dividing skin mirror image
CN115731226A (en) * 2022-12-01 2023-03-03 中国科学院长春光学精密机械与物理研究所 Method for segmenting focus in skin mirror image
CN117893545A (en) * 2023-12-12 2024-04-16 中南大学 Skin lesion image segmentation method, system, equipment and storage medium
CN118236034A (en) * 2024-03-26 2024-06-25 深圳市利孚医疗技术有限公司 Subcutaneous hair detection technology based on infrared transdermal imaging

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105578953A (en) * 2013-07-22 2016-05-11 洛克菲勒大学 Optical detection of skin disease
CN108510502A (en) * 2018-03-08 2018-09-07 华南理工大学 Melanoma picture tissue segmentation methods based on deep neural network and system

Also Published As

Publication number Publication date
CN118436316A (en) 2024-08-06

Similar Documents

Publication Publication Date Title
CN106952347B (en) Ultrasonic surgery auxiliary navigation system based on binocular vision
Monnier et al. In vivo characterization of healthy human skin with a novel, non‐invasive imaging technique: line‐field confocal optical coherence tomography
Sereno et al. Population coding of visual space: comparison of spatial representations in dorsal and ventral pathways
CN107689045B (en) Image display method, device and system for endoscope minimally invasive surgery navigation
Lee et al. Ultrasound needle segmentation and trajectory prediction using excitation network
US20180189976A1 (en) Analysis unit and system for assessment of hair condition
Zhong et al. Three-dimensional reconstruction of peripheral nerve internal fascicular groups
CN109215104B (en) Brain structure image display method and device for transcranial stimulation treatment
CN116228787A (en) Image sketching method, device, computer equipment and storage medium
CN102764124B (en) Magnetic resonance imaging-based perforator flap blood vessel positioning and measurement method
CN114300095A (en) Image processing apparatus, image processing method, image processing device, image processing apparatus, and storage medium
CN118436316B (en) Method for a dermatological mirror and dermatological mirror system
KR20130045544A (en) Method and apparatus for analyzing magnetic resonance imaging, and recording medium for executing the method
Al-Battal et al. Multi-path decoder U-Net: A weakly trained real-time segmentation network for object detection and localization in ultrasound scans
CN112489051B (en) Liver cutting method and system based on blood vessels and lesion areas
JP6471559B2 (en) Diagnostic device, image processing method, image processing system, and program for the diagnostic device
Tian et al. Non-tumorous facial pigmentation classification based on multi-view convolutional neural network with attention mechanism
US9386908B2 (en) Navigation using a pre-acquired image
JP6944492B2 (en) Image acquisition method, related equipment and readable storage medium
US20230200930A1 (en) Intelligent Surgical Marker
CN115937096A (en) Cerebral hemorrhage analysis method and system based on image registration
Cudek et al. Automatic system for classification of melanocytic skin lesions based on images recognition
CN113744234A (en) Multi-modal brain image registration method based on GAN
El Hadramy et al. Intraoperative CT augmentation for needle-based liver interventions
Balicki et al. Interactive OCT annotation and visualization for vitreoretinal surgery

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant