CN115601811A - Facial acne detection method and device

Facial acne detection method and device

Info

Publication number
CN115601811A
Authority
CN
China
Prior art keywords
face
picture
facial
area
detection
Prior art date
Legal status
Pending
Application number
CN202211268368.7A
Other languages
Chinese (zh)
Inventor
刘潇龙
赵俊
黄思晋
Current Assignee
Beijing Jingdong Tuoxian Technology Co Ltd
Original Assignee
Beijing Jingdong Tuoxian Technology Co Ltd
Application filed by Beijing Jingdong Tuoxian Technology Co Ltd filed Critical Beijing Jingdong Tuoxian Technology Co Ltd
Priority to CN202211268368.7A priority Critical patent/CN115601811A/en
Publication of CN115601811A publication Critical patent/CN115601811A/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/0059 Measuring for diagnostic purposes; Identification of persons using light, e.g. diagnosis by transillumination, diascopy, fluorescence
    • A61B5/0077 Devices for viewing the surface of the body, e.g. camera, magnifying lens
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/44 Detecting, measuring or recording for evaluating the integumentary system, e.g. skin, hair or nails
    • A61B5/441 Skin evaluation, e.g. for skin disorder diagnosis
    • A61B5/445 Evaluating skin irritation or skin trauma, e.g. rash, eczema, wound, bed sore
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774 Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07 Target detection


Abstract

The invention discloses a facial acne detection method and device, relating to the technical fields of computer vision and intelligent medical treatment. One embodiment of the method comprises: collecting facial pictures at a plurality of angles; performing face detection and picture cutting on each facial picture to obtain a face region picture corresponding to each facial picture; performing face partition processing on each face region picture according to the angle of the corresponding facial picture to obtain a detection region for each face region picture; and performing target detection on the detection region of each face region picture with a pre-established target detection model to obtain the positions of facial acne. This embodiment detects facial acne from multi-angle facial pictures, improving the comprehensiveness and accuracy of facial acne detection; for the face region pictures corresponding to facial pictures at different angles, a prominent, clearly imaged region is selected as the detection region, which improves the accuracy of the facial acne detection result.

Description

Facial acne detection method and device
Technical Field
The invention relates to the technical field of computer vision and intelligent medical treatment, in particular to a method and a device for detecting facial acne.
Background
Acne is a common skin disease, mostly found on the faces of teenagers, whose main clinical manifestations are comedones, papules, pustules, nodules and cysts. In clinical practice, doctors with different levels of clinical experience may reach different identification and judgment results for the same patient and select different treatment schemes. These treatment schemes may be inconsistent and may even cause harm to the patient. Scientific, standardized and accurate identification and evaluation of facial acne is therefore particularly important and of great significance.
In recent years, as computer vision, an important branch of artificial intelligence, has become increasingly integrated with the medical field, detecting, identifying and evaluating facial acne based on artificial intelligence technology has become feasible. Most existing facial acne detection, identification and evaluation techniques collect a single frontal picture of the user and perform acne detection, localization, grading evaluation and the like based on that picture.
In the process of implementing the invention, the inventors found that a scheme that detects facial acne from a single frontal picture produces acne recognition results that are neither comprehensive nor accurate.
Disclosure of Invention
In view of this, embodiments of the present invention provide a facial acne detection method and device that detect facial acne from multi-angle facial pictures, improving the comprehensiveness and accuracy of detection; for the face region pictures corresponding to facial pictures at different angles, a prominent, clearly imaged region is selected as the detection region and interference from other regions is removed, which enlarges the area proportion of acne regions within the detected picture and improves the accuracy of the facial acne detection result.
To achieve the above object, according to an aspect of an embodiment of the present invention, there is provided a method of detecting facial acne, including:
collecting face pictures at a plurality of angles;
respectively carrying out face detection and picture cutting on each face picture to obtain a face area picture corresponding to each face picture;
according to the angle of each face picture, carrying out face partition processing on the face area picture corresponding to each face picture to obtain a detection area of each face area picture;
and carrying out target detection on the detection area of each face area picture by using a pre-established target detection model to obtain the position of the facial acne.
Optionally, each face picture is acquired by: carrying out face detection on a picture to be acquired to obtain the position of a face in the picture to be acquired; under the condition that the human face is in the designated area in the picture to be collected, carrying out facial skin damage area detection on the human face through a real-time target detection model; and under the condition that the definition of the detected facial skin damage area meets the set requirement, acquiring the picture according to the picture to be acquired.
Optionally, under the condition that the face is not in the designated area in the picture to be collected, the user is prompted to move the face until the face is displayed in the designated area; and under the condition that the definition of the detected facial skin damage area does not meet the set requirement, the user is prompted to refocus until the definition meets the set requirement.
Optionally, whether the definition of the detected facial skin damage area meets the set requirement is judged by: performing a convolution operation on the gray-level picture of the detected facial skin damage area with a Laplacian operator to obtain a texture boundary matrix of the area; calculating the variance of the texture boundary matrix; and comparing the variance with a set threshold value to judge whether the definition meets the set requirement.
Optionally, performing face partition processing on the face region picture corresponding to each face picture according to the angle of each face picture to obtain a detection region of each face region picture, including: performing face key point detection on the face region picture corresponding to each face picture to obtain face key point coordinates; performing face region decomposition on the face region picture corresponding to each face picture to obtain a mask matrix of each face region; and for the face area picture corresponding to each face picture, obtaining the detection area of the face area picture according to the angle of the face picture, the face key point coordinates of the face area picture corresponding to the face picture and the mask matrix of each face area.
Optionally, the facial pictures at the multiple angles include a front face picture, a left face picture and a right face picture; performing face region decomposition on the face region picture corresponding to each face picture, including: if the facial picture is a front face picture, performing facial region decomposition on a facial region picture corresponding to the facial picture based on a real-time semantic segmentation model; and if the facial picture is a left side face picture or a right side face picture, carrying out facial region decomposition on a facial region picture corresponding to the facial picture based on a boundary frame transformation model.
Optionally, the method further comprises: performing target detection on the detection area of each face area picture by using a pre-established target detection model to obtain the level of the facial acne; and carrying out face three-dimensional modeling according to the face region picture corresponding to each face picture to obtain a face three-dimensional model, and labeling the position and the level of the facial acne to the face three-dimensional model.
According to another aspect of embodiments of the present invention, there is provided an apparatus for detecting facial acne, including:
the face picture acquisition module is used for acquiring face pictures at a plurality of angles;
the image detection and cutting module is used for respectively carrying out face detection and image cutting on each facial image to obtain a facial area image corresponding to each facial image;
the face partition processing module is used for carrying out face partition processing on the face area picture corresponding to each face picture according to the angle of each face picture to obtain the detection area of each face area picture;
and the region target detection module is used for carrying out target detection on the detection region of each face region picture by using a pre-established target detection model to obtain the position of the facial acne.
According to still another aspect of an embodiment of the present invention, there is provided an electronic apparatus including: one or more processors; a storage device for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the method for detecting facial acne provided by embodiments of the present invention.
According to a further aspect of the embodiments of the present invention, there is provided a computer-readable medium on which a computer program is stored, the program, when executed by a processor, implementing the method for detecting facial acne provided by the embodiments of the present invention.
One embodiment of the above invention has the following advantages or benefits: facial pictures are collected at a plurality of angles; face detection and picture cutting are performed on each facial picture to obtain the corresponding face region picture; face partition processing is performed on each face region picture according to the angle of the facial picture to obtain its detection region; and target detection is performed on each detection region with a pre-established target detection model to obtain the positions of facial acne. This technical scheme detects facial acne from multi-angle facial pictures, improving the comprehensiveness and accuracy of detection; for the face region pictures corresponding to facial pictures at different angles, a prominent, clearly imaged region is selected as the detection region and interference from other regions is removed, which enlarges the area proportion of acne regions in the detected picture and improves the accuracy of the facial acne detection result.
Further effects of the above optional implementations will be described below in connection with specific embodiments.
Drawings
The drawings are included to provide a better understanding of the invention and are not to be construed as unduly limiting the invention. Wherein:
fig. 1 is a schematic diagram of the main steps of a method for detecting facial acne according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a facial picture capture process according to one embodiment of the present invention;
FIG. 3 is a schematic diagram of a detection region determination process according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of the results of face segmentation for one embodiment of the present invention;
FIG. 5 is a schematic view of a facial acne detection process according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of the main blocks of a facial acne detection apparatus according to an embodiment of the present invention;
FIG. 7 is an exemplary system architecture diagram in which embodiments of the present invention may be applied;
fig. 8 is a schematic structural diagram of a computer system suitable for implementing a terminal device or a server according to an embodiment of the present invention.
Detailed Description
Exemplary embodiments of the present invention are described below with reference to the accompanying drawings, in which various details of embodiments of the invention are included to assist understanding, and which are to be considered as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the invention. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
In the technical scheme of the invention, the data acquisition, storage, use, processing and the like all conform to relevant regulations of national laws and regulations.
Most existing facial acne detection technologies collect a single picture of the front of the user's face and perform acne detection, localization, grading and other evaluation on the basis of that picture. Such single-frontal-picture schemes produce acne recognition results that are neither comprehensive nor accurate, mainly in the following respects:
(1) The current picture-shooting process is inflexible: the user is only prompted to place the face within a preset outline, so it is difficult to capture the patient's facial skin damage areas clearly, which affects the accuracy of subsequent evaluation and analysis;
(2) Because the viewing angle of a frontal picture is limited, the acne condition on the sides of the face cannot be evaluated comprehensively and accurately, yet side-face acne lesions are common among severely ill acne patients;
(3) The accuracy of current whole-image acne detection needs improvement: owing to interference from lighting, viewing angle and irrelevant regions, acne regions occupy only a small proportion of the whole image, which increases the difficulty of accurate detection;
(4) Some patients have facial acne that is small but dense, and it is difficult to accurately detect and intuitively display many small lesions with a single picture.
To solve these technical problems, the invention provides a facial acne detection method that performs interactive, multi-angle, partitioned facial acne processing and display. It mainly addresses the difficulties of existing single-frontal-picture schemes in clearly photographing, accurately evaluating and clearly displaying acne on the sides of the face. The invention also provides a technique for partitioning the face in pictures taken at different angles, so that facial acne can be detected and displayed more accurately.
Fig. 1 is a schematic diagram of main steps of a facial acne detection method according to an embodiment of the present invention. As shown in fig. 1, the method for detecting facial acne according to the embodiment of the present invention mainly includes steps S101 to S104 as follows.
Step S101: facial pictures are collected at a plurality of angles. To avoid incomplete and inaccurate detection results caused by performing facial acne detection on only a single frontal picture, the method collects facial pictures at multiple angles and performs facial acne detection on each of them, which greatly improves the comprehensiveness and accuracy of the facial acne detection result.
In the embodiment of the present invention, the facial pictures are collected, for example, by the camera of a mobile phone, computer or other mobile terminal. During collection, pictures meeting the requirements can be obtained through interaction with the user. According to one embodiment of the invention, each facial picture is captured as follows: face detection is performed on the picture to be collected to obtain the position of the face within it; when the face is in the designated area of the picture to be collected, facial skin damage area detection is performed on the face with a real-time target detection model; and when the definition of the detected facial skin damage area meets the set requirement, the picture is collected from the picture to be collected. In other embodiments of the present invention, when the face is not in the designated area, the user is prompted to move the face until it is displayed in the designated area; and when the definition of the detected facial skin damage area does not meet the set requirement, the user is prompted to refocus until it does.
The picture to be collected is the real-time facial picture acquired by the camera before the shooting button is clicked. Face detection can be completed with an open-source deep-learning face detection model, whose output gives the position box bbox of the face in the picture to be collected; the position box can be represented by the two coordinate points at the upper-left and lower-right corners of the box rectangle. Based on the position of bbox within the whole picture to be collected, it is judged whether the area where the face is located lies in the designated area of the picture, for example a suitable area around the picture center. If not, the user is prompted to move up, down, left or right into the appropriate area. Specifically, whether the face position is appropriate can be judged by checking whether the upper-left corner (x1, y1) and lower-right corner (x2, y2) of the face position box, together with the width w and height h of the picture to be collected, satisfy: 0 ≤ x1/w < 0.25, 0 ≤ y1/h < 0.25, 0.75 ≤ x2/w < 1, and 0.75 ≤ y2/h < 1. If these conditions are met, the face is in the designated area of the picture to be collected; otherwise it is not. In addition, when prompting the user to move, whether the distance between the user's face and the camera is appropriate can be judged from the ratio of the area of the face position box bbox to the area of the whole picture, compared against an empirical threshold; if the distance is not appropriate, the user is prompted to move closer to or farther from the camera to obtain a face picture with a suitable proportion.
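As a minimal sketch of these two checks (the 0.25/0.75 bounds follow the conditions above; the function names, bbox tuple layout and distance thresholds are illustrative assumptions, not taken from the patent):

```python
def face_in_designated_area(bbox, w, h):
    # Check whether the face box lies in the central region of the frame,
    # using the 0.25/0.75 bounds stated above.
    x1, y1, x2, y2 = bbox
    return (0 <= x1 / w < 0.25 and 0 <= y1 / h < 0.25
            and 0.75 <= x2 / w < 1 and 0.75 <= y2 / h < 1)


def distance_ok(bbox, w, h, lo=0.15, hi=0.65):
    # Judge camera distance from the ratio of face-box area to frame area;
    # lo/hi stand in for the empirical thresholds the text mentions.
    x1, y1, x2, y2 = bbox
    return lo <= ((x2 - x1) * (y2 - y1)) / (w * h) <= hi
```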
When the face is in the designated area in the picture to be collected, facial skin damage area detection is performed on the face with a real-time target detection model, for example a model of the YOLO series such as YOLO-MobileNet or YOLOv7. The model locates the facial skin damage area and outputs its position box bbox, again represented by the coordinate points at the upper-left and lower-right corners of a rectangle. After the facial skin damage area is detected, it is judged whether the definition of the detected area meets the set requirement.
In one embodiment of the invention, whether the definition of the detected facial skin damage area meets the set requirement is judged as follows: a convolution operation is performed on the gray-level picture of the detected facial skin damage area with a Laplacian operator to obtain a texture boundary matrix of the area; the variance of the texture boundary matrix is calculated; and the variance is compared with a set threshold. Specifically, the gray-level picture of the facial skin damage area can be convolved with a 3x3 Laplacian operator to obtain the texture boundary matrix of the area image, and the variance of that matrix is then calculated; if the variance is greater than a threshold (for example, 100), the facial skin damage area is judged to be clear, otherwise its definition is judged insufficient. If the definition is insufficient, the user is prompted to finely adjust the position of the mobile phone and refocus to obtain a clear picture of the facial skin damage area. Once the area is clear, the user can be prompted to press the "take picture" button to complete the picture acquisition.
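A sketch of this sharpness test, assuming OpenCV (the patent names no library; the crop handling and defaults here are illustrative):

```python
import cv2


def lesion_region_is_sharp(image_bgr, bbox, threshold=100.0):
    # Crop the lesion box, convolve its grayscale with a Laplacian (3x3
    # aperture by default) to get the texture boundary matrix, and compare
    # the variance to the threshold; 100 follows the example value above.
    x1, y1, x2, y2 = bbox
    gray = cv2.cvtColor(image_bgr[y1:y2, x1:x2], cv2.COLOR_BGR2GRAY)
    boundary = cv2.Laplacian(gray, cv2.CV_64F)
    return boundary.var() > threshold
```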
Fig. 2 is a schematic diagram of a facial picture capture process according to an embodiment of the present invention. As shown in fig. 2, when a facial picture is collected, the camera is turned on first and the user is prompted to place the face within a preset box, which yields a real-time picture. Face detection and localization are then performed on the real-time picture to judge whether a face is present and whether it lies in the designated area. If so, the real-time picture is taken as a suitable face picture; otherwise the user is prompted to reposition the face within the preset box, and these steps repeat until a suitable face picture is obtained. Facial skin damage area detection is then performed on the suitable face picture, and it is judged whether the definition of the detected area meets the requirement; if so, the picture is collected; otherwise the user is prompted to refocus and the facial skin damage area is detected again, until its definition meets the set requirement and the picture is collected. A sketch of this loop follows.
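Putting the pieces together, a sketch of the Fig. 2 loop (the camera object and the three callables are hypothetical stand-ins for the detectors and UI prompts the text describes; the position and sharpness checks reuse the earlier sketches):

```python
def capture_face_picture(camera, detect_face, detect_lesion_region, prompt_user):
    # Interactive capture loop: keep showing preview frames until the face
    # sits in the designated area and the lesion region is sharp.
    while True:
        frame = camera.read_frame()  # real-time preview frame, HxWx3 array
        h, w = frame.shape[:2]
        bbox = detect_face(frame)
        if bbox is None or not face_in_designated_area(bbox, w, h):
            prompt_user("Place your face inside the preset box")
            continue
        lesion_bbox = detect_lesion_region(frame)  # e.g. a YOLO-series model
        if lesion_bbox is None or not lesion_region_is_sharp(frame, lesion_bbox):
            prompt_user("Adjust the phone slightly and refocus")
            continue
        prompt_user("Press the take-picture button")
        return frame
```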
In this way, a plurality of facial pictures can be collected, each at a different angle. The multi-angle facial pictures may include, for example, a front face picture, a left side face picture and a right side face picture.
According to the technical scheme of this embodiment, the interactive image-acquisition method guided by the facial skin damage area can dynamically adjust the shooting requirements (definition, resolution and the like) for different areas according to the identified type and degree of facial skin damage. The acquisition thus pays more attention to the facial skin damage area, improves the picture quality of the captured area, and prevents missed diagnosis and misdiagnosis caused by unclear images of the facial skin damage area.
Step S102: face detection and picture cutting are performed on each facial picture to obtain the face region picture corresponding to each facial picture. To further improve the accuracy of facial acne detection, face detection can be performed on each facial picture, and the picture is then cropped (for example, by matting) based on the detection result to remove non-face regions and keep only the face region, yielding the face region picture corresponding to each facial picture.
Step S103: according to the angle of each facial picture, face partition processing is performed on the corresponding face region picture to obtain the detection region of each face region picture. To detect facial acne more accurately, a partial region can be determined from each facial picture, according to its angle, and used as the detection region. For face region pictures corresponding to facial pictures at different angles, a prominent, clearly imaged region is selected as the detection region and interference from other regions is removed, which enlarges the area proportion of acne regions in the detected picture and improves the accuracy of the acne detection model.
According to an embodiment of the present invention, performing face partition processing on a face region picture corresponding to each face picture according to an angle of each face picture to obtain a detection region of each face region picture, includes: performing face key point detection on the face region picture corresponding to each face picture to obtain face key point coordinates; performing face region decomposition on the face region picture corresponding to each face picture to obtain a mask matrix of each face region; and for the face area picture corresponding to each face picture, obtaining the detection area of the face area picture according to the angle of the face picture, the face key point coordinates of the face area picture corresponding to the face picture and the mask matrix of each face area.
In the embodiment of the invention, face key point detection on the face region picture corresponding to each facial picture can be performed with an HRNet face key point detection model, yielding the face contour and the coordinates of the facial-feature key points.
In an embodiment of the present invention, the facial pictures at the plurality of angles include a front face picture, a left side face picture and a right side face picture. Face region decomposition of the face region picture corresponding to each facial picture proceeds as follows: if the facial picture is a front face picture, face region decomposition is performed on its face region picture based on a real-time semantic segmentation model; if the facial picture is a left or right side face picture, face region decomposition is performed based on a bounding-box transformation model. Specifically, for a front face picture, front face region decomposition is performed with a BiseNet face parsing model; for a side face picture, face region decomposition is performed with a Tanh-polar transformation model. This yields a Mask for the skin area and for each facial-feature region of the face. A Mask is a binary 0-1 matrix with the same width and height as the face picture. For example, the Mask of the eyes is a matrix of the same size as the face picture whose entries are 1 in the eye region and 0 elsewhere. A mask can be used to extract a particular area, such as the eye area, on its own, as sketched below.
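A small illustration of how such a Mask isolates a region, assuming the picture and mask are NumPy arrays (the names are illustrative, not from the patent):

```python
import numpy as np


def extract_region(face_img, mask):
    # Keep only the pixels where the 0-1 mask equals 1; the HxW mask is
    # broadcast over the HxWx3 image, zeroing everything outside the region.
    return face_img * mask[:, :, np.newaxis]


# e.g. eye_only = extract_region(face_img, eye_mask) blacks out everything
# except the eye area.
```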
Then, for the face region picture corresponding to each facial picture, the detection region is obtained from the angle of the facial picture, the face key point coordinates of the face region picture, and the mask matrix of each face region. Specifically, for both the front face picture and the side face pictures (left and right), the skin area of the face can be obtained as the detection region through the skin Mask.
Fig. 3 is a schematic diagram of a detection region determination process according to an embodiment of the present invention. As shown in fig. 3, when the detection region is determined, face key point extraction and face partition processing can be performed simultaneously for each face region picture. During face partition processing, it is first judged whether the face region picture comes from a front face picture; if so, face region decomposition is performed on it based on a real-time semantic segmentation model; if not, it comes from a side face picture and face region decomposition is performed on it based on a bounding-box transformation model. The extracted face key points and the decomposed face regions of the front face picture then give the detection region corresponding to the front face picture, and the extracted face key points and decomposed face regions of each side face picture give its corresponding detection region.
FIG. 4 is a schematic diagram of face partitioning results according to an embodiment of the present invention; it shows the detection regions obtained by partitioning the front face picture, the left side face picture and the right side face picture. Specifically, when the face is partitioned, the eyebrow coordinates are first determined from the face key points, and the region from the eyebrows up to the hairline is taken as the forehead region, where the hairline is given by the upper boundary of the skin Mask. The coordinates of the glabella and the eye corners are then determined from the face key points, and the region from the nasal root past the nasal wings to the nasal tip is obtained from the nose Mask. The forehead region and the nose region below the glabella in the front face picture, as shown in fig. 4, belong to the detection region of the front face picture. The eye regions can be determined from the face key points of the eye area together with the eye Mask, and the eyebrow and mouth regions likewise; deleting these regions from the front face picture yields the detection region corresponding to the front face picture. For the left and right side face pictures, the vertical line from the glabella through the philtrum is used as the boundary: only the skin area of the current side is kept, and the eyebrow, eye, mouth, forehead and nose regions are then subtracted from that skin area, giving the side-face partitions shown in fig. 4 (see the sketch below).
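A sketch of the mask arithmetic this partitioning implies, under the assumption that all masks are NumPy 0-1 arrays of the same shape (the function names and boundary-column parameter are illustrative):

```python
import numpy as np


def frontal_detection_mask(skin_mask, feature_masks):
    # Front face: start from the skin Mask and delete the eyebrow, eye and
    # mouth Masks, as described above (the forehead/nose bounds derived
    # from key points are omitted for brevity).
    region = skin_mask.copy()
    for m in feature_masks:
        region[m == 1] = 0
    return region


def side_detection_mask(skin_mask, feature_masks, boundary_x, side):
    # Side face: keep skin only on the current side of the vertical
    # glabella-philtrum line at column boundary_x, then subtract the
    # eyebrow, eye, mouth, forehead and nose Masks.
    region = skin_mask.copy()
    if side == "left":
        region[:, boundary_x:] = 0
    else:
        region[:, :boundary_x] = 0
    for m in feature_masks:
        region[m == 1] = 0
    return region
```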
Step S104: target detection is performed on the detection region of each face region picture with a pre-established target detection model to obtain the positions of facial acne. The target detection model can be trained in advance. Specifically, a doctor annotates each acne position and its grading information on 1000 facial acne pictures, where each acne position is marked by a rectangular box given by two corner coordinates, and the grading information is marked with the numbers 0-3 for the four medically defined acne severity grades. The 1000 facial acne pictures, together with the annotated positions and grading information, form the data set for training the model. The data set is divided into a training set, a validation set and a test set at a ratio of 8:1:1. It is used to train the acne target detection and grading model; models such as Faster R-CNN or YOLOX can be selected for training. The training procedure follows the classical method and is not described in detail here.
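For illustration, a minimal sketch of that 8:1:1 split (the sample format, function name and seed are assumptions; the patent does not prescribe an implementation):

```python
import random


def split_dataset(samples, seed=0):
    # Shuffle the annotated pictures (each with acne boxes and 0-3 grades)
    # and split them into train/val/test at the 8:1:1 ratio described above.
    rng = random.Random(seed)
    shuffled = list(samples)
    rng.shuffle(shuffled)
    n = len(shuffled)
    n_train, n_val = int(n * 0.8), int(n * 0.1)
    return (shuffled[:n_train],
            shuffled[n_train:n_train + n_val],
            shuffled[n_train + n_val:])
```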
According to an embodiment of the present invention, target detection on the detection region of each face region picture with the pre-established target detection model can also yield the grade of each facial acne lesion. Three-dimensional face modeling is performed from the face region pictures corresponding to the facial pictures to obtain a three-dimensional face model, and the positions and grades of facial acne are annotated onto that model. Specifically, the three-dimensional face model can be built from the face region pictures obtained in step S102, and facial acne is then displayed on it. Because the three-dimensional reconstruction is based on several face region pictures at different angles, its result is more accurate than reconstruction from a single face picture, and the acne condition in different regions can be shown more clearly by rotating the three-dimensional face. In addition, the grade of each acne lesion can be displayed in a different color to indicate severity, so that the facial acne detection result is presented more clearly.
Fig. 5 is a schematic diagram of a facial acne detection process according to an embodiment of the present invention. In the embodiment of the invention, firstly, the image acquisition method described in the previous embodiment is adopted to carry out interactive multi-angle facial image acquisition; and carrying out face detection and picture cutting on the collected face pictures to obtain a face area picture corresponding to each face picture. On one hand, facial partitioning is carried out on each face region picture to obtain a detection region of each face region picture; performing target detection on the detection area of each face area picture by using a target detection model to obtain the position and the level of facial acne; and on the other hand, carrying out face three-dimensional modeling based on the plurality of face area pictures to obtain a face three-dimensional model. And then, marking the position and the level of the facial acne on the human face three-dimensional model, and displaying the facial acne through the human face three-dimensional model.
Fig. 6 is a schematic diagram of main blocks of a facial acne detection apparatus according to an embodiment of the present invention. As shown in fig. 6, the facial acne detection apparatus 600 according to the embodiment of the present invention mainly includes:
a face picture collecting module 601, configured to collect face pictures from multiple angles;
the image detection and clipping module 602 is configured to perform face detection and image clipping on each facial image, respectively, to obtain a facial area image corresponding to each facial image;
a face partition processing module 603, configured to perform face partition processing on a face area picture corresponding to each face picture according to the angle of each face picture, so as to obtain a detection area of each face area picture;
and the area target detection module 604 is configured to perform target detection on the detection area of each face area picture by using a pre-established target detection model, so as to obtain the position of facial acne.
According to an embodiment of the present invention, the facial picture capturing module 601 may be further configured to: capturing said each facial picture by: carrying out face detection on a picture to be acquired to obtain the position of a face in the picture to be acquired; under the condition that the human face is in the designated area in the picture to be collected, carrying out facial skin damage area detection on the human face through a real-time target detection model; and under the condition that the definition of the detected facial skin damage area meets the set requirement, acquiring the picture according to the picture to be acquired.
According to another embodiment of the present invention, the facial picture capturing module 601 may be further configured to: prompting a user to move the face until the face is displayed in the designated area in the picture to be collected under the condition that the face is not in the designated area in the picture to be collected; and under the condition that the definition of the detected facial damage area does not meet the set requirement, prompting the user to refocus until the definition of the detected facial damage area meets the set requirement.
According to another embodiment of the present invention, the facial image capturing module 601 determines whether the definition of the detected facial skin damage region meets the set requirement by: performing convolution operation on the gray level picture of the detected facial skin damage area through a Laplace operator to obtain a texture boundary matrix of the facial skin damage area; calculating a variance of the texture boundary matrix; and comparing the variance with a set threshold value to judge whether the definition of the facial skin damage area meets the set requirement.
According to yet another embodiment of the invention, the face partition processing module 603 may be further configured to: perform face key point detection on the face region picture corresponding to each facial picture to obtain face key point coordinates; perform face region decomposition on the face region picture corresponding to each facial picture to obtain a mask matrix of each face region; and, for the face region picture corresponding to each facial picture, obtain the detection region of the face region picture according to the angle of the facial picture, the face key point coordinates of the corresponding face region picture, and the mask matrix of each face region.
According to still another embodiment of the present invention, the face pictures of the plurality of angles include a front face picture, a left face picture, and a right face picture; when the face region processing module 603 performs face region decomposition on the face region picture corresponding to each face picture, the face region processing module 603 may further be configured to: if the facial picture is a front face picture, performing facial region decomposition on a facial region picture corresponding to the facial picture based on a real-time semantic segmentation model; and if the facial picture is a left side face picture or a right side face picture, carrying out facial region decomposition on a facial region picture corresponding to the facial picture based on a boundary frame transformation model.
According to yet another embodiment of the invention, the regional target detection module 604 may be further configured to: performing target detection on the detection area of each face area picture by using a pre-established target detection model to obtain the level of the facial acne; and, the facial acne detection apparatus 600 may further include an acne detection result presentation module (not shown in the figure) for: and carrying out face three-dimensional modeling according to the face region picture corresponding to each face picture to obtain a face three-dimensional model, and marking the position and the level of the facial acne on the face three-dimensional model.
According to the technical scheme of the embodiment of the invention, facial pictures are collected at a plurality of angles; face detection and picture cutting are performed on each facial picture to obtain the corresponding face region picture; face partition processing is performed on each face region picture according to the angle of the facial picture to obtain its detection region; and target detection is performed on each detection region with a pre-established target detection model to obtain the positions of facial acne. This scheme detects facial acne from multi-angle facial pictures, improving the comprehensiveness and accuracy of detection; for the face region pictures corresponding to facial pictures at different angles, a prominent, clearly imaged region is selected as the detection region and interference from other regions is removed, which enlarges the area proportion of acne regions in the detected picture and improves the accuracy of the facial acne detection result.
Fig. 7 shows an exemplary system architecture 700 of a method of detecting facial acne or a device for detecting facial acne to which embodiments of the present invention may be applied.
As shown in fig. 7, the system architecture 700 may include terminal devices 701, 702, 703, a network 704, and a server 705. The network 704 serves to provide a medium for communication links between the terminal devices 701, 702, 703 and the server 705. Network 704 may include various connection types, such as wired, wireless communication links, or fiber optic cables, to name a few.
A user may use the terminal devices 701, 702, 703 to interact with a server 705 over a network 704, to receive or send messages or the like. Various communication client applications, such as an image acquisition application, a camera application, a picture processing application, etc. (for example only), may be installed on the terminal devices 701, 702, 703.
The terminal devices 701, 702, 703 may be various electronic devices having a display screen and supporting web browsing, including but not limited to smart phones, tablet computers, laptop portable computers, desktop computers, and the like.
The server 705 may be a server providing various services, such as a background management server (for example only) providing support for websites browsed by users with the terminal devices 701, 702, 703. In response to a received facial acne detection request and associated data, the background management server can collect facial pictures at a plurality of angles; perform face detection and picture cutting on each facial picture to obtain the corresponding face region picture; perform face partition processing on each face region picture according to the angle of the facial picture to obtain its detection region; perform target detection on each detection region with a pre-established target detection model to obtain the positions of facial acne and the like; and feed the processing result (such as the positions of facial acne, for example only) back to the terminal device.
It should be noted that the facial acne detection method provided by the embodiment of the present invention is generally executed by the server 705, and accordingly, the facial acne detection apparatus is generally disposed in the server 705.
It should be understood that the number of terminal devices, networks, and servers in fig. 7 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for an implementation.
Referring now to FIG. 8, shown is a block diagram of a computer system 800 suitable for use with a terminal device or server implementing embodiments of the present invention. The terminal device or the server shown in fig. 8 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present invention.
As shown in fig. 8, the computer system 800 includes a Central Processing Unit (CPU) 801 that can perform various appropriate actions and processes in accordance with a program stored in a Read Only Memory (ROM) 802 or a program loaded from a storage section 808 into a Random Access Memory (RAM) 803. In the RAM 803, various programs and data necessary for the operation of the system 800 are also stored. The CPU 801, ROM 802, and RAM 803 are connected to each other via a bus 804. An input/output (I/O) interface 805 is also connected to bus 804.
The following components are connected to the I/O interface 805: an input section 806 including a keyboard, a mouse, and the like; an output section 807 including a display such as a cathode ray tube (CRT) or a liquid crystal display (LCD), and a speaker; a storage section 808 including a hard disk and the like; and a communication section 809 including a network interface card such as a LAN card or a modem. The communication section 809 performs communication processing via a network such as the internet. A drive 810 is also connected to the I/O interface 805 as necessary. A removable medium 811 such as a magnetic disk, an optical disk, a magneto-optical disk or a semiconductor memory is mounted on the drive 810 as necessary, so that a computer program read from it is installed into the storage section 808 as needed.
In particular, according to embodiments of the present disclosure, the processes described above with reference to the flow diagrams may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In such an embodiment, the computer program can be downloaded and installed from a network through the communication section 809 and/or installed from the removable medium 811. The computer program executes the above-described functions defined in the system of the present invention when executed by the Central Processing Unit (CPU) 801.
It should be noted that the computer readable medium shown in the present invention can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of the present invention, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present invention, however, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units or modules described in the embodiments of the present invention may be implemented by software, or may be implemented by hardware. The described units or modules may also be provided in a processor, and may be described as: a processor includes a facial picture capture module, a picture detection cropping module, a facial region processing module, and a region object detection module. Where the names of such units or modules do not in some cases constitute a limitation of the unit or module itself, for example, the facial picture capture module may also be described as a "module for capturing facial pictures from multiple angles".
As another aspect, the present invention also provides a computer-readable medium that may be contained in the apparatus described in the above embodiments; or may be separate and not incorporated into the device. The computer readable medium carries one or more programs which, when executed by a device, cause the device to comprise: collecting face pictures at a plurality of angles; respectively carrying out face detection and picture cutting on each face picture to obtain a face area picture corresponding to each face picture; according to the angle of each face picture, carrying out face partition processing on the face area picture corresponding to each face picture to obtain a detection area of each face area picture; and carrying out target detection on the detection area of each face area picture by using a pre-established target detection model to obtain the position of the facial acne.
According to the technical scheme of the embodiment of the invention, facial pictures are collected at a plurality of angles; face detection and picture cutting are performed on each facial picture to obtain the corresponding face region picture; face partition processing is performed on each face region picture according to the angle of the facial picture to obtain its detection region; and target detection is performed on each detection region with a pre-established target detection model to obtain the positions of facial acne. This scheme detects facial acne from multi-angle facial pictures, improving the comprehensiveness and accuracy of detection; for the face region pictures corresponding to facial pictures at different angles, a prominent, clearly imaged region is selected as the detection region and interference from other regions is removed, which enlarges the area proportion of acne regions in the detected picture and improves the accuracy of the facial acne detection result.
The above-described embodiments should not be construed as limiting the scope of the invention. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may occur depending on design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (10)

1. A method for detecting facial acne, comprising:
acquiring facial pictures from a plurality of angles;
performing face detection and picture cropping on each facial picture, respectively, to obtain a facial region picture corresponding to each facial picture;
performing facial partition processing on the facial region picture corresponding to each facial picture according to the angle of each facial picture, to obtain a detection region of each facial region picture; and
performing target detection on the detection region of each facial region picture by using a pre-established target detection model, to obtain a position of the facial acne.
2. The method of claim 1, wherein each facial picture is captured by:
performing face detection on a picture to be captured to obtain a position of a face in the picture to be captured;
in a case where the face is within a designated region of the picture to be captured, performing facial skin lesion region detection on the face by using a real-time target detection model; and
in a case where a sharpness of the detected facial skin lesion region meets a set requirement, capturing the picture according to the picture to be captured.
3. The method of claim 2, further comprising:
in a case where the face is not within the designated region of the picture to be captured, prompting a user to move the face until the face is displayed within the designated region; and
in a case where the sharpness of the detected facial skin lesion region does not meet the set requirement, prompting the user to refocus until the sharpness of the detected facial skin lesion region meets the set requirement.
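For illustration only, the sketch below shows one way the capture gating of claims 2 and 3 could be driven. Here detect_face, lesion_detector, sharpness_ok, and the prompt callback are hypothetical placeholders for components the claims name but do not specify, and the designated-region coordinates are an arbitrary example.

```python
# Hypothetical capture loop for claims 2-3; all injected callables are
# placeholders, and DESIGNATED is an arbitrary example guide box.
DESIGNATED = (100, 60, 440, 420)  # x0, y0, x1, y1 of the designated region

def face_in_designated_region(face_box, region=DESIGNATED):
    x, y, w, h = face_box
    x0, y0, x1, y1 = region
    return x0 <= x and y0 <= y and x + w <= x1 and y + h <= y1

def capture_facial_picture(camera, detect_face, lesion_detector,
                           sharpness_ok, prompt):
    """camera: any object whose read() returns (ok, frame), e.g. cv2.VideoCapture."""
    while True:
        ok, frame = camera.read()
        if not ok:
            continue
        face_box = detect_face(frame)               # position of the face
        if face_box is None or not face_in_designated_region(face_box):
            prompt("Please move your face into the outlined area.")
            continue
        lesion_region = lesion_detector(frame, face_box)  # real-time detection
        if lesion_region is None or not sharpness_ok(lesion_region):
            prompt("Please refocus until the picture is sharp.")
            continue
        return frame                                # conditions met: capture
```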
4. The method according to claim 2 or 3, wherein whether the sharpness of the detected facial skin lesion region meets the set requirement is judged by:
performing a convolution operation on a grayscale picture of the detected facial skin lesion region with a Laplacian operator to obtain a texture boundary matrix of the facial skin lesion region;
calculating a variance of the texture boundary matrix; and
comparing the variance with a set threshold to judge whether the sharpness of the facial skin lesion region meets the set requirement.
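The variance-of-Laplacian test in claim 4 is a standard focus measure and maps directly onto OpenCV, as in the sketch below; the threshold value is an illustrative assumption rather than a number taken from this disclosure.

```python
# Sharpness check of claim 4: variance of the Laplacian response of the
# grayscale lesion-region picture, compared with a set threshold.
import cv2

def sharpness_meets_requirement(lesion_region_bgr, threshold=100.0):
    gray = cv2.cvtColor(lesion_region_bgr, cv2.COLOR_BGR2GRAY)
    # Convolving with the Laplacian operator yields the texture boundary
    # matrix; blurred pictures produce weak, low-variance edge responses.
    boundary_matrix = cv2.Laplacian(gray, cv2.CV_64F)
    return boundary_matrix.var() >= threshold  # threshold is an assumption
```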
5. The method according to claim 1, wherein performing the facial partition processing on the facial region picture corresponding to each facial picture according to the angle of each facial picture to obtain the detection region of each facial region picture comprises:
performing facial key point detection on the facial region picture corresponding to each facial picture to obtain facial key point coordinates;
performing facial region decomposition on the facial region picture corresponding to each facial picture to obtain a mask matrix of each facial region; and
for the facial region picture corresponding to each facial picture, obtaining the detection region of the facial region picture according to the angle of the facial picture, the facial key point coordinates of the facial region picture, and the mask matrix of each facial region.
6. The method of claim 5, wherein the facial pictures from the plurality of angles comprise a front-face picture, a left-side-face picture, and a right-side-face picture; and
performing the facial region decomposition on the facial region picture corresponding to each facial picture comprises:
if the facial picture is the front-face picture, performing the facial region decomposition on the facial region picture corresponding to the facial picture based on a real-time semantic segmentation model; and
if the facial picture is the left-side-face picture or the right-side-face picture, performing the facial region decomposition on the facial region picture corresponding to the facial picture based on a bounding-box transformation model.
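For illustration only, the following sketch combines per-region mask matrices with the picture angle in the spirit of claims 5 and 6. Which regions count as primary and clearly imaged at each angle is an assumption made here for concreteness, and the mask matrices are assumed to come from the segmentation or bounding-box models the claims mention.

```python
# Hypothetical detection-region selection for claims 5-6. region_masks maps
# region names to 0/1 mask matrices; the PRIMARY_REGIONS table is an
# illustrative assumption, not a mapping specified by the claims.
import numpy as np

PRIMARY_REGIONS = {
    "front": {"forehead", "nose", "chin", "left_cheek", "right_cheek"},
    "left":  {"left_cheek"},    # side views keep the cheek facing the camera
    "right": {"right_cheek"},
}

def detection_region(facial_region_picture, region_masks, angle):
    """region_masks: dict[str, np.ndarray] of 0/1 masks with shape (H, W)."""
    keep = np.zeros(facial_region_picture.shape[:2], dtype=np.uint8)
    for name in PRIMARY_REGIONS[angle]:
        if name in region_masks:
            keep |= region_masks[name].astype(np.uint8)
    # Zero out interfering regions so acne occupies a larger area proportion.
    return facial_region_picture * keep[..., None]
```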
7. The method of claim 1, further comprising:
performing target detection on the detection region of each facial region picture by using the pre-established target detection model to obtain a level of the facial acne; and
performing three-dimensional facial modeling according to the facial region picture corresponding to each facial picture to obtain a three-dimensional facial model, and labeling the position and the level of the facial acne on the three-dimensional facial model.
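For illustration only, the sketch below shows one plausible way to attach 2-D acne detections to a reconstructed face mesh as claim 7 describes. The mesh reconstruction itself and the camera intrinsics K are assumed inputs, and nearest-projected-vertex labeling is a choice made here for concreteness, not a method stated in the disclosure.

```python
# Hypothetical labeling step for claim 7: each 2-D acne detection is attached
# to the mesh vertex whose pinhole projection lies nearest to it. Vertices
# are assumed to be in camera coordinates with positive depth.
import numpy as np

def label_acne_on_mesh(vertices, K, acne_centers, acne_levels):
    """vertices: (N, 3) mesh points; K: 3x3 camera intrinsics;
    acne_centers: (M, 2) pixel coordinates; acne_levels: (M,) levels."""
    projected = (K @ vertices.T).T               # project onto the image plane
    projected = projected[:, :2] / projected[:, 2:3]
    labels = {}
    for (u, v), level in zip(acne_centers, acne_levels):
        idx = int(np.argmin(np.sum((projected - (u, v)) ** 2, axis=1)))
        labels[idx] = level                      # vertex index -> acne level
    return labels
```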
8. A facial acne detection device, comprising:
a facial picture acquisition module configured to acquire facial pictures from a plurality of angles;
a picture detection and cropping module configured to perform face detection and picture cropping on each facial picture, respectively, to obtain a facial region picture corresponding to each facial picture;
a facial partition processing module configured to perform facial partition processing on the facial region picture corresponding to each facial picture according to the angle of each facial picture, to obtain a detection region of each facial region picture; and
a region target detection module configured to perform target detection on the detection region of each facial region picture by using a pre-established target detection model, to obtain a position of the facial acne.
9. An electronic device, comprising:
one or more processors;
a storage device for storing one or more programs,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method according to any one of claims 1-7.
10. A computer-readable medium, on which a computer program is stored, wherein the computer program, when executed by a processor, implements the method according to any one of claims 1-7.

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211268368.7A CN115601811A (en) 2022-10-17 2022-10-17 Facial acne detection method and device


Publications (1)

Publication Number Publication Date
CN115601811A (en) 2023-01-13

Family

ID=84846344

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211268368.7A Pending CN115601811A (en) 2022-10-17 2022-10-17 Facial acne detection method and device

Country Status (1)

Country Link
CN (1) CN115601811A (en)


Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112037162A (en) * 2019-05-17 2020-12-04 华为技术有限公司 Facial acne detection method and equipment
CN110189340A (en) * 2019-06-03 2019-08-30 北京达佳互联信息技术有限公司 Image partition method, device, electronic equipment and storage medium
CN112669197A (en) * 2019-10-16 2021-04-16 顺丰科技有限公司 Image processing method, image processing device, mobile terminal and storage medium
CN111524080A (en) * 2020-04-22 2020-08-11 杭州夭灵夭智能科技有限公司 Face skin feature identification method, terminal and computer equipment
CN113205568A (en) * 2021-04-30 2021-08-03 北京达佳互联信息技术有限公司 Image processing method, image processing device, electronic equipment and storage medium
CN112967285A (en) * 2021-05-18 2021-06-15 中国医学科学院皮肤病医院(中国医学科学院皮肤病研究所) Chloasma image recognition method, system and device based on deep learning
CN113469049A (en) * 2021-06-30 2021-10-01 平安科技(深圳)有限公司 Disease information identification method, system, device and storage medium
CN113435400A (en) * 2021-07-14 2021-09-24 世邦通信股份有限公司 Screen-free face recognition calibration method and device, screen-free face recognition equipment and medium
CN114445427A (en) * 2022-01-28 2022-05-06 北京大甜绵白糖科技有限公司 Image processing method, image processing device, electronic equipment and storage medium

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116202890A (en) * 2023-05-05 2023-06-02 山东路达试验仪器有限公司 Intelligent measuring system and method for elongation of steel bar based on machine vision
CN116913519A (en) * 2023-07-24 2023-10-20 东莞莱姆森科技建材有限公司 Health monitoring method, device, equipment and storage medium based on intelligent mirror

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination