WO2019095118A1 - Skin defect classification method and electronic device - Google Patents

Skin defect classification method and electronic device

Info

Publication number
WO2019095118A1
WO2019095118A1 (PCT/CN2017/110952)
Authority
WO
WIPO (PCT)
Prior art keywords
image
skin
defect
model
defects
Prior art date
Application number
PCT/CN2017/110952
Other languages
English (en)
French (fr)
Inventor
林丽梅
Original Assignee
深圳和而泰智能控制股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳和而泰智能控制股份有限公司 filed Critical 深圳和而泰智能控制股份有限公司
Priority to CN201780009000.XA (granted as CN108780497B)
Priority to PCT/CN2017/110952 (WO2019095118A1)
Publication of WO2019095118A1

Links

Images

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/21 - Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/24 - Classification techniques
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/045 - Combinations of networks

Definitions

  • the present application relates to the field of skin detection technology, and in particular, to a skin defect classification method and an electronic device.
  • with age and physiological changes, human skin develops defects; the defects may be sun spots, freckles, age spots and the like caused by local skin pigmentation, or may be acne marks, acne pits and the like.
  • in the process of implementing the present application, the applicant found that the conventional technology has at least the following problem: the conventional technology can only count the number of defects, but cannot identify the category of the defects, and therefore cannot provide the user with a targeted skin care method.
  • An object of the embodiments of the present application is to provide a skin defect classification method and an electronic device, which solves the technical problem that the conventional technology fails to recognize the category of skin defects.
  • the embodiment of the present application provides the following technical solutions:
  • an embodiment of the present application provides a skin defect classification method, including: acquiring a target skin image including a defect; and classifying a defect of the target skin image according to an image classification algorithm.
  • classifying the defects of the target skin image according to an image classification algorithm includes: acquiring an image classification model; and inputting the defects of the target skin image into the image classification model, so that the image classification model classifies the defects.
  • the image classification model includes a convolutional neural network framework model; before acquiring the image classification model, the method further includes: collecting training data, where the training data includes a plurality of sample skin images containing defects; labeling the training data to classify the sample skin images containing defects; constructing and configuring the convolutional neural network framework model; and inputting the labeled training data into the configured convolutional neural network framework model to train and save the training data.
  • the convolutional neural network framework model includes a LetNet-5 model; constructing the convolutional neural network framework model includes constructing the LetNet-5 model, and constructing the LetNet-5 model specifically includes: sequentially constructing an input layer, a first convolutional layer, a first sampling layer, a second convolutional layer, a second sampling layer, a third convolutional layer, a first fully connected layer, and a second fully connected layer.
  • configuring the convolutional neural network framework model includes configuring the LetNet-5 model, and configuring the LetNet-5 model includes: configuring a loss function and an optimization function; and configuring the training amount of sample skin images per batch and the number of iterations.
  • the loss function is a cross entropy loss function.
  • after classifying the defects of the target skin image according to the image classification algorithm, the method further includes: counting the number of each type of defect; and determining the defect severity of the target skin image according to the number of each type of defect.
  • the method further includes: marking the defect name at the defect of the target skin image.
  • an embodiment of the present application provides a skin defect classification device, including: an acquisition module, configured to acquire a target skin image containing defects; and a classification module, configured to classify the defects of the target skin image according to an image classification algorithm.
  • the classification module includes: an acquiring unit, configured to acquire an image classification model; and an input unit, configured to input the defects of the target skin image into the image classification model, so that the image classification model classifies the defects.
  • the image classification model includes a convolutional neural network framework model; the device further includes: a collection module, configured to collect training data, where the training data includes a plurality of sample skin images containing defects; a first labeling module, configured to label the training data to classify the plurality of sample skin images containing defects; a construction and configuration module, configured to construct and configure the convolutional neural network framework model; and an input module, configured to input the labeled training data into the configured convolutional neural network framework model to train and save the training data.
  • the convolutional neural network framework model includes a LetNet-5 model; the construction and configuration module is specifically configured to: sequentially construct an input layer, a first convolutional layer, a first sampling layer, a second convolutional layer, a second sampling layer, a third convolutional layer, a first fully connected layer, and a second fully connected layer.
  • the construction and configuration module is specifically configured to: configure a loss function and an optimization function; and configure the training amount of sample skin images per batch and the number of iterations.
  • the loss function is a cross entropy loss function.
  • the device further includes: a statistics module, configured to count the number of each type of defect; and a determining module, configured to determine a defect severity of the target skin image according to the number of each type of defect.
  • the device further includes: a second labeling module, configured to mark the defect name at each defect of the target skin image.
  • an embodiment of the present application provides an electronic device, including: at least one processor; and a memory communicatively connected to the at least one processor; where the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to perform the skin defect classification method of any of the above.
  • an embodiment of the present application provides a non-transitory computer readable storage medium, where the non-transitory computer readable storage medium stores computer executable instructions, the computer executable instructions being used for causing an electronic device to perform the skin defect classification method of any of the above.
  • in the embodiments of the present application, a target skin image containing defects is acquired, and the defects of the target skin image are classified according to an image classification algorithm. The method can therefore identify the category of skin defects, so that the user can make an informed choice about subsequent skin care treatment.
  • FIG. 1 is a schematic diagram of classifying, based on an image classification model, an input target skin image containing defects according to an embodiment of the present application
  • FIG. 2 is a schematic diagram of an embodiment of the present application for extracting each picture including a defect from a sample skin image
  • FIG. 3 is a schematic diagram of various appearances of dark-spot defects according to an embodiment of the present application.
  • FIG. 4 is a schematic diagram of a framework of a LetNet-5 model according to an embodiment of the present application.
  • FIG. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure.
  • FIG. 6 is a schematic structural diagram of a skin defect classification device according to an embodiment of the present application.
  • FIG. 7 is a schematic structural diagram of a classification module according to an embodiment of the present application.
  • FIG. 8 is a schematic structural diagram of a skin defect classification device according to another embodiment of the present application.
  • FIG. 9 is a schematic structural diagram of a skin defect classification device according to still another embodiment of the present application.
  • FIG. 10 is a schematic structural diagram of a skin defect classification device according to still another embodiment of the present application.
  • FIG. 11 is a schematic flow chart of a method for classifying skin defects according to an embodiment of the present application.
  • FIG. 12 is a schematic flowchart of a step 72 according to an embodiment of the present application.
  • FIG. 13 is a schematic flowchart diagram of a method for classifying skin defects according to another embodiment of the present application.
  • FIG. 14 is a schematic flow chart of a method for classifying skin defects according to still another embodiment of the present application.
  • FIG. 15 is a schematic flow chart of a method for classifying skin defects according to still another embodiment of the present application.
  • the defect detection technique proposed by the conventional technique can detect the number of defects, but cannot identify the category of defects.
  • the embodiments of the present application provide an electronic device capable of recognizing the category of skin defects, so that the user can make a sensible choice for subsequent skin care treatment.
  • the electronic device acquires a target skin image containing defects.
  • the defects have different types, which may include acne, acne marks, acne pits, dark spots, freckles, chloasma, butterfly spots, age spots, moles, and other defects.
  • other defects are defects other than the nine categories mentioned above, such as scars and the like.
  • the target skin image may be a facial skin image, or may be a surface skin image of other parts of the human body, such as an elbow skin image, a leg skin image, and the like.
  • the user can operate the electronic device to cause the camera module to capture the target skin containing the defect, so that the electronic device acquires the target skin image.
  • the user can operate the electronic device, so that the electronic device connects to the cloud through the communication module, and captures the target skin image from the cloud, so that the electronic device acquires the target skin image.
  • the cloud can be a pre-configured cloud server or an Internet.
  • the cloud server may be a physical server or a logical server formed by virtualizing multiple physical servers, or may be a server group composed of multiple interconnectable servers.
  • in some embodiments, the electronic device connects, through the long-distance communication module, to communication networks selected from a plurality of communication methods including, for example, Code Division Multiple Access (CDMA), Global System for Mobile Communications (GSM), Wideband Code Division Multiple Access (WCDMA), or Wireless Broadband (Wibro), wherein the long-distance communication module includes a mobile communication module, a wireless internet module, and the like.
  • the proximity communication module can employ Bluetooth, Radio Frequency Identification (RFID), Infrared Data Association (IrDA), Ultra Wide Band (UWB) or ZigBee short-range communication technology.
  • the embodiment of the present application is not limited to the manner in which the electronic device shown in the foregoing embodiments obtains the target skin image including the defect.
  • the person skilled in the art can select the acquisition mode according to the service requirement, and details are not described herein.
  • the electronic device classifies the defect of the target skin image according to the image classification algorithm.
  • the image classification algorithm is used to instruct the electronic device to process the target skin image, thereby identifying defects in the target skin image, and classifying the defects.
  • the logic function corresponding to the image classification algorithm may be performed by a pre-built image classification model; for example, in some embodiments, in the process of classifying the defects of the target skin image, the electronic device may acquire an image classification model and input the defects of the target skin image into the image classification model, so that the image classification model classifies the defects.
  • the image classification model pre-stores a plurality of trained defect classification data; when the image classification model receives the defects of the target skin image, the image classification model calls the defect classification data to perform deep learning on the defects of the target skin image, thereby outputting the classification result of the defects.
  • as shown in FIG. 1, the image classification model 10 receives and processes a target skin image 11 containing a defect, and the classification result output by the image classification model 10 is: the defect is a dark spot.
  • the user may write image classification code logic within the electronic device to classify the defects of the target skin image.
  • the electronic device needs to train the image classification model, so that the image classification model can automatically analyze the defect type of the input target skin image through deep learning.
  • the image classification model is a convolutional neural network framework model (Convolutional Neural Network, CNN)
  • the engineer causes the electronic device to collect training data by operating the electronic device, wherein the training data includes a plurality of sample skin images including defects.
  • each sample skin image can include multiple types of defects, for example, as shown in FIG. 2, the sample skin image 20 is a face image.
  • when the electronic device acquires the sample skin image 20, the electronic device crops 9 pictures containing defects from the sample skin image 20, where the defects may include acne, acne marks, acne pits, dark spots, freckles, chloasma, butterfly spots, age spots, moles, and other defects.
  • the electronic device may first perform binarization processing on the sample skin image 20 to obtain a first binary image. Further, the electronic device filters the background noise in the first binary image to obtain a second binary image. Further, the electronic device performs an expansion process on the second binary image to obtain a third binary image. Further, the electronic device selects a black pixel block that satisfies the threshold condition in the third binary image, thereby obtaining a defect in the sample skin image 20. Still further, the electronic device intercepts the picture containing the defect from the defect location in the sample skin image 20.
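The cropping pipeline described above (binarize, filter background noise, dilate, keep pixel blocks that satisfy a threshold condition, crop around each defect) maps naturally onto standard image-processing primitives. The following is a minimal sketch of that pipeline using OpenCV; it is an illustration rather than the patent's implementation, and the Otsu thresholding, median filter, kernel size, patch size, and contour-area limits are assumptions chosen only to make the sketch runnable.

```python
# Minimal sketch (not the patent's code) of the described crop pipeline, using OpenCV.
# Thresholds, kernel sizes and area limits below are illustrative assumptions.
import cv2

def crop_defect_patches(image_path, patch_size=64, min_area=20, max_area=2000):
    bgr = cv2.imread(image_path)
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)

    # 1) Binarize the sample skin image (first binary image); dark defects become white.
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

    # 2) Filter background noise (second binary image).
    denoised = cv2.medianBlur(binary, 5)

    # 3) Dilate to consolidate defect regions (third binary image).
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    dilated = cv2.dilate(denoised, kernel, iterations=1)

    # 4) Keep dark-pixel blocks whose area satisfies the threshold condition.
    contours, _ = cv2.findContours(dilated, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    patches = []
    half = patch_size // 2
    for c in contours:
        area = cv2.contourArea(c)
        if min_area <= area <= max_area:
            x, y, w, h = cv2.boundingRect(c)
            cx, cy = x + w // 2, y + h // 2
            # 5) Crop a fixed-size picture centred on the defect location.
            patch = bgr[max(cy - half, 0):cy + half, max(cx - half, 0):cx + half]
            if patch.size:
                patches.append(patch)
    return patches
```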
  • defects of the same category have different appearances; as shown in FIG. 3, when the defect is a dark spot, dark spots have at least 18 different appearances.
  • the number of defects of the sample skin image can at least enable the image classification model to learn the rule of "how to classify defects according to the target skin image containing the defects".
  • for example: the training data corresponding to each defect category is 1000 pictures; for another example: when the defect is acne, the training data corresponding to acne is 1000 pictures containing only acne.
  • when the image classification model needs to deep-learn the ten categories of defects described above, and each defect category corresponds to 1000 training pictures, the image classification model needs 10000 training pictures so that it can learn the rule of "how to classify defects from a target skin image containing defects"; therefore, the electronic device needs to collect at least 10000 training pictures.
  • in general, the amount of training data the electronic device needs to collect is determined by the deep-learning requirements of the image classification model, and is not limited to the amount of training data shown in this embodiment.
  • after the electronic device has collected the training data, it starts to label the training data so as to classify the sample skin images containing defects. For example: the user operates the electronic device to label and classify the 10000 training pictures, where the training data corresponding to each defect type is 1000 pictures; for example, the labeled training data corresponding to acne is 1000 pictures.
  • the user needs to build and configure the convolutional neural network framework model in the electronic device to train the above training data, so as to perform deep learning on the target skin image input by the user and classify the target skin image. ready.
  • the convolutional neural network framework model may be LetNet-5, AlexNet, VGGNet, InceptionNet, ResNet, and the like. Depending on which convolutional neural network framework model is selected, the constructed model differs. For example, when the convolutional neural network framework model selected in this embodiment is the LetNet-5 model, as shown in FIG. 4, the framework of the constructed LetNet-5 model is specifically: input layer - first convolutional layer - first sampling layer - second convolutional layer - second sampling layer - third convolutional layer - first fully connected layer - second fully connected layer.
  • each convolution layer is used for convolution operation, which can enhance the original image signal characteristics and reduce noise.
  • Each sampling layer is used to subsample the image using the principle of local correlation of the image, reducing the amount of data processing while retaining useful information.
  • the LetNet-5 model in order to enable the LetNet-5 model to learn more deeply, it may be based on the above-mentioned LetNet-5 model, adding a convolution layer or a sampling layer, etc., and is not limited thereto.
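As a concrete illustration of the layer sequence described above (input layer, three convolutional layers, two sampling layers, and two fully connected layers), the sketch below builds an equivalent stack with tf.keras. It is an assumption-laden sketch, not the patent's code: the filter counts, kernel sizes, activations, and the 32x32 input size are illustrative choices, while the ten output classes correspond to the ten defect categories mentioned earlier.

```python
# Hedged tf.keras sketch of the described layer sequence; hyperparameters are assumptions.
import tensorflow as tf

def build_letnet5_like(input_shape=(32, 32, 3), num_classes=10):
    return tf.keras.Sequential([
        tf.keras.Input(shape=input_shape),                                 # input layer
        tf.keras.layers.Conv2D(6, 5, activation="relu", padding="same"),   # first convolutional layer
        tf.keras.layers.MaxPooling2D(2),                                   # first sampling layer
        tf.keras.layers.Conv2D(16, 5, activation="relu"),                  # second convolutional layer
        tf.keras.layers.MaxPooling2D(2),                                   # second sampling layer
        tf.keras.layers.Conv2D(120, 5, activation="relu"),                 # third convolutional layer
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(84, activation="relu"),                      # first fully connected layer
        tf.keras.layers.Dense(num_classes, activation="softmax"),          # second fully connected layer
    ])
```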
  • the electronic device can complete the construction of the convolutional neural network framework model using the artificial intelligence learning system on the operating system, and configure various training parameters for the convolutional neural network framework model.
  • the artificial intelligence learning system may be Tensorflow, Caffe (Convolutional Architecture for Fast Feature Embedding), MXnet, and the like.
  • the training parameters configured for the LetNet-5 model include: the loss function, the optimization function, the training amount of sample skin images per batch, and the number of iterations.
  • the loss function is the cross-entropy loss function; the optimization function is obtained by calling the tf.train.AdamOptimizer() function in Tensorflow, with the learning rate parameter set to 0.001; the training amount of sample skin images per batch is batch_size: 100; and the number of iterations is 3000.
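The stated training configuration can be expressed compactly in current TensorFlow. The patent names the TF 1.x tf.train.AdamOptimizer(); the sketch below uses the equivalent tf.keras.optimizers.Adam with the same 0.001 learning rate, a cross-entropy loss, a batch size of 100, and 3000 iterations. The build_letnet5_like function comes from the earlier sketch, and train_images / train_labels are placeholders standing in for the 10000 labeled pictures; the whole block is a hedged illustration, not the original training script.

```python
# Hedged training-configuration sketch matching the stated parameters.
import numpy as np
import tensorflow as tf

# Placeholder data with the stated shapes (10000 labeled pictures, 10 classes, values in [0, 1]);
# in practice these come from the labeled defect crops.
train_images = np.random.rand(10000, 32, 32, 3).astype("float32")
train_labels = np.random.randint(0, 10, size=(10000,))

model = build_letnet5_like()                                    # the sketch shown earlier
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),    # learning rate 0.001
    loss="sparse_categorical_crossentropy",                     # cross-entropy loss
    metrics=["accuracy"],
)

batch_size = 100
iterations = 3000
# 3000 iterations at batch_size 100 over 10000 pictures correspond to 30 epochs.
epochs = iterations // (10000 // batch_size)

model.fit(train_images, train_labels, batch_size=batch_size, epochs=epochs)
model.save("skin_defect_classifier.keras")                      # save the trained model
```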
  • the operating system supported by the electronic device may be a UNIX system, a Linux system, a Mac OS X system, a Windows system, an iOS system, an Android system, a WP system, a Chrome OS system, or the like.
  • the electronic device runs the code, and the labeled training data is input into the configured convolutional neural network framework model to train and save the training data.
  • the electronic device passes the labeled training data to the LetNet-5 model, and the LetNet-5 model trains and saves the above training data to call the model for prediction.
  • for another example: the electronic device receives a target skin image containing a defect and inputs the target skin image into the LetNet-5 model; since the LetNet-5 model has been trained, during classification the LetNet-5 model calls the trained data of each category to perform deep learning on the target skin image, thereby outputting the classification result: the defect of the target skin image is a dark spot.
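A prediction call on a trained and saved model, as described in this example, might look like the following sketch. The saved-model path, the class-name ordering, and the 32x32 input size carry over from the earlier sketches and are assumptions for illustration.

```python
# Illustrative inference sketch; class ordering and preprocessing are assumptions.
import numpy as np
import tensorflow as tf

class_names = ["acne", "acne mark", "acne pit", "dark spot", "freckle",
               "chloasma", "butterfly spot", "age spot", "mole", "other"]

model = tf.keras.models.load_model("skin_defect_classifier.keras")

def classify_defect(patch_bgr):
    # patch_bgr: a cropped defect picture as an HxWx3 uint8 array (e.g. from crop_defect_patches).
    x = tf.image.resize(patch_bgr[..., ::-1], (32, 32))   # BGR -> RGB, resize to model input
    x = tf.cast(x, tf.float32)[tf.newaxis] / 255.0        # scale to [0, 1], add batch dimension
    probs = model.predict(x, verbose=0)[0]
    return class_names[int(np.argmax(probs))]
```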
  • after the electronic device has classified each defect of the target skin image, in order to evaluate the severity of the user's skin defects, the electronic device counts the number of each type of defect and determines the defect severity of the target skin image according to the number of each type of defect.
  • the defect severity includes a general level, a relatively serious level, a very serious level, and an extremely serious level. For example, after classification, the number of acne defects in the face image of user A is 5, which is greater than the general-level threshold of 3 and less than the relatively-serious-level threshold of 8; therefore, the acne severity of user A is at the relatively serious level.
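The counting and grading step can be sketched as below. Only the acne example thresholds (greater than 3, less than 8) come from the text; the remaining threshold and the mapping of the two upper levels are illustrative assumptions.

```python
# Sketch of the counting-and-grading step; all thresholds except the acne example are assumptions.
from collections import Counter

def grade_severity(predicted_labels, thresholds=(3, 8, 15)):
    """predicted_labels: list of class names, one per classified defect in the target image."""
    counts = Counter(predicted_labels)
    general, serious, very_serious = thresholds
    levels = {}
    for name, n in counts.items():
        if n <= general:
            levels[name] = "general"
        elif n <= serious:
            levels[name] = "relatively serious"
        elif n <= very_serious:
            levels[name] = "very serious"
        else:
            levels[name] = "extremely serious"
    return counts, levels

# Example: 5 acne defects -> greater than 3 and not more than 8 -> "relatively serious".
counts, levels = grade_severity(["acne"] * 5 + ["dark spot"] * 2)
```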
  • in some embodiments, after the electronic device has classified each defect of the target skin image, in order to let the user know the type of each defect on the skin so that the user can choose a more targeted remedy, the electronic device marks the defect name at each defect in the target skin image. The user thus knows the type of each skin defect more clearly and can select a treatment more scientifically.
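Marking the defect name at the defect location is a simple drawing operation; a hedged OpenCV sketch follows. The detection tuples, font, colours, and offsets are presentation assumptions, not values from the patent.

```python
# Sketch of annotating each classified defect with its name; styling choices are assumptions.
import cv2

def annotate_defects(image_bgr, detections):
    """detections: iterable of (x, y, w, h, class_name) for each classified defect."""
    out = image_bgr.copy()
    for x, y, w, h, name in detections:
        cv2.rectangle(out, (x, y), (x + w, y + h), (0, 0, 255), 1)
        cv2.putText(out, name, (x, max(y - 5, 10)),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.4, (0, 0, 255), 1, cv2.LINE_AA)
    return out
```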
  • the various control logics for implementing skin defect classification may be assembled, in the form of instruction code, into an application installation package, and the application installation package may be published on various network application download markets, or on various network platforms or websites.
  • when the user needs the skin defect classification function, the application installation package can be downloaded locally from the network, installed locally, and the skin defect classification can then be completed locally through the application.
  • compared with the conventional technology, the user only needs to take one face photo to identify the distribution of various facial defects, which is more convenient than specialized medical instruments; the user can also test the skin care effect of skin care products by tracking changes in defect severity.
  • the electronic device may be a portable telephone, a smart phone, a tablet, a notebook, a tablet PC, a laptop computer, a digital broadcast terminal, a personal digital assistant (PDA), or the like.
  • the electronic device 50 includes: at least one processor 51 and a memory 52 communicatively coupled to the at least one processor 51; wherein, in FIG. 5, a processor 51 is taken as an example.
  • the processor 51 and the memory 52 can be connected by a bus or other means, as exemplified by a bus connection in FIG.
  • the memory 52 stores instructions executable by the at least one processor, the instructions being executed by the at least one processor to enable the at least one processor 51 to perform the control logic of the skin defect classification described above.
  • an embodiment of the present application provides a skin defect classification device, which is applied to an electronic device.
  • the skin defect classification device, as a software system, can be stored in the electronic device illustrated in FIG. 5.
  • the skin defect classification device includes a plurality of instructions stored in a memory; the processor can access the memory and invoke the instructions for execution so as to implement the skin defect classification device described above.
  • the skin defect classification device 60 includes an acquisition module 61 and a classification module 62.
  • the acquisition module 61 is configured to acquire a target skin image including a defect.
  • the classification module 62 is configured to classify the defects of the target skin image according to the image classification algorithm.
  • the classification module 62 includes an acquisition unit 621 and an input unit 622.
  • the obtaining unit 621 is configured to acquire an image classification model.
  • the input unit 622 is configured to input the defects of the target skin image into the image classification model, so that the image classification model classifies the defects.
  • the image classification model includes a convolutional neural network framework model.
  • the skin defect classification device 60 further includes an acquisition module 63, a first annotation module 64, a setup configuration module 65, and an input module 66.
  • the acquisition module 63 is configured to collect training data, and the training data includes a plurality of sample skin images including defects.
  • the first labeling module 64 is used to label the training data to categorize a plurality of sample skin images containing defects.
  • the setup configuration module 65 is used to build and configure a convolutional neural network framework model.
  • the input module 66 is configured to input the labeled training data into the configured convolutional neural network framework model to train and save the training data.
  • the convolutional neural network framework model includes a LetNet-5 model.
  • the setup configuration module 65 is specifically configured to: sequentially construct an input layer, a first convolution layer, a first sampling layer, a second convolution layer, a second sampling layer, a third convolution layer, a first fully connected layer, and a second Fully connected layer.
  • the setup configuration module 65 is specifically configured to: configure the loss function, the optimization function, the training amount of sample skin images per batch, and the number of iterations.
  • the loss function is a cross entropy loss function.
  • the skin defect classification device 60 further includes a statistics module 67 and a determination module 68.
  • the statistics module 67 is used to count the number of defects of each type.
  • the determination module 68 is configured to determine the severity of the defect of the target skin image based on the number of defects per type.
  • the skin defect classification device 60 further includes a second labeling module 69.
  • the second labeling module 69 is configured to mark the defect name at each defect of the target skin image.
  • the device embodiment and the foregoing embodiments are based on the same concept, and the content of the device embodiment may refer to the foregoing embodiments, and the details are not described herein.
  • the embodiment of the present application provides a skin defect classification method.
  • the functions of the skin defect classification method of the embodiments of the present application can be executed not only by the software system of the skin defect classification device described above with reference to FIGS. 6 to 10, but also by means of a hardware platform.
  • for example, the skin defect classification method can be executed in an electronic device with a processor of a suitable type having computing capability, such as a single-chip microcomputer, a digital signal processor (DSP), a programmable logic controller (PLC), and so on.
  • the functions corresponding to the skin defect classification method of each of the following embodiments are stored in the form of instructions in the memory of the electronic device; when the functions corresponding to the skin defect classification method of each of the following embodiments are to be executed, the processor of the electronic device accesses the memory, retrieves and executes the corresponding instructions, so as to implement the functions corresponding to the skin defect classification method of each of the following embodiments.
  • as a non-volatile computer readable storage medium, the memory can be used to store non-volatile software programs, non-volatile computer-executable programs, and modules, such as the program instructions/modules corresponding to the skin defect classification device 60 in the above embodiment (for example, the modules and units described in FIGS. 6 to 10), or the steps corresponding to the skin defect classification method of the following embodiments.
  • by running the non-volatile software programs, instructions, and modules stored in the memory, the processor executes the various functional applications and data processing of the skin defect classification device 60, that is, implements the functions of the modules and units of the skin defect classification device 60, or the steps corresponding to the skin defect classification method of the following embodiments.
  • the memory may include a high speed random access memory, and may also include a non-volatile memory such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid state storage device.
  • the memory optionally includes a memory remotely located relative to the processor, the remote memory being connectable to the processor over a network. Examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.
  • the program instructions/modules are stored in the memory, and when executed by the one or more processors, perform the skin defect classification method in any of the above method embodiments, for example, performing the steps shown in FIGS. 11 to 15 described in the following embodiments; the functions of the modules and units described in FIGS. 6 to 10 can also be implemented.
  • the skin defect classification method 70 includes:
  • Step 71: Acquire a target skin image containing defects;
  • Step 72: Classify the defects of the target skin image according to the image classification algorithm.
  • the step 72 includes:
  • Step 721 Acquire an image classification model.
  • Step 722: Input the defects of the target skin image into the image classification model, so that the image classification model classifies the defects.
  • the image classification model comprises a convolutional neural network framework model.
  • the skin defect classification method 70 further includes:
  • Step 73 Collect training data, where the training data includes a plurality of sample skin images including defects;
  • Step 74 Label training data to classify a plurality of sample skin images including defects
  • Step 75 construct and configure a convolutional neural network framework model
  • Step 76 Input the labeled training data into the configured convolutional neural network framework model to train and save the training data.
  • the convolutional neural network framework model includes the LetNet-5 model.
  • when the LetNet-5 model is constructed, an input layer, a first convolutional layer, a first sampling layer, a second convolutional layer, a second sampling layer, a third convolutional layer, a first fully connected layer, and a second fully connected layer are constructed in sequence.
  • when the LetNet-5 model is configured, the loss function, the optimization function, the training amount of sample skin images per batch, and the number of iterations are configured.
  • the loss function is a cross entropy loss function.
  • the skin defect classification method 70 further includes:
  • Step 77 Count the number of each type of defect
  • Step 78 Determine the severity of the defect of the target skin image according to the number of each type of defect.
  • the skin defect classification method 70 further includes:
  • Step 79 Mark the defect name at the defect of the target skin image.
  • the device embodiment and the method embodiment are based on the same concept, and the content of the method embodiment may refer to the device embodiment, and details are not described herein.
  • an embodiment of the present application provides a non-transitory computer readable storage medium storing computer executable instructions, the computer executable instructions being used for causing an electronic device to perform the skin defect classification method according to any one of the above embodiments.

Abstract

The present application relates to the field of skin detection technology, and in particular to a skin defect classification method and an electronic device. The skin defect classification method includes: acquiring a target skin image containing defects; and classifying the defects of the target skin image according to an image classification algorithm. The method can therefore identify the categories of skin defects, allowing the user to make an informed choice about subsequent skin care.

Description

Skin defect classification method and electronic device
Technical Field
The present application relates to the field of skin detection technology, and in particular to a skin defect classification method and an electronic device.
Background
With the passage of time and physiological changes, human skin develops defects. These defects may be sun spots, freckles, age spots and the like caused by local skin pigmentation, or may be acne marks, acne pits and the like.
The conventional technology can count the amount of melanin (defects) in the skin from a skin image, so as to know the severity of the skin defects.
In the process of implementing the present application, the applicant found that the conventional technology has at least the following problem: the conventional technology can only count the number of defects, but cannot identify the category of the defects, and therefore cannot provide the user with a targeted skin care method.
Summary
An object of the embodiments of the present application is to provide a skin defect classification method and an electronic device, which solve the technical problem that the conventional technology cannot identify the category of skin defects.
To solve the above technical problem, the embodiments of the present application provide the following technical solutions:
In a first aspect, an embodiment of the present application provides a skin defect classification method, including: acquiring a target skin image containing defects; and classifying the defects of the target skin image according to an image classification algorithm.
Optionally, classifying the defects of the target skin image according to an image classification algorithm includes: acquiring an image classification model; and inputting the defects of the target skin image into the image classification model, so that the image classification model classifies the defects.
Optionally, the image classification model includes a convolutional neural network framework model; before acquiring the image classification model, the method further includes: collecting training data, where the training data includes a plurality of sample skin images containing defects; labeling the training data to classify the plurality of sample skin images containing defects; constructing and configuring the convolutional neural network framework model; and inputting the labeled training data into the configured convolutional neural network framework model to train and save the training data.
Optionally, the convolutional neural network framework model includes a LetNet-5 model; constructing the convolutional neural network framework model includes constructing the LetNet-5 model, and constructing the LetNet-5 model specifically includes: sequentially constructing an input layer, a first convolutional layer, a first sampling layer, a second convolutional layer, a second sampling layer, a third convolutional layer, a first fully connected layer, and a second fully connected layer.
Optionally, configuring the convolutional neural network framework model includes configuring the LetNet-5 model, and configuring the LetNet-5 model includes: configuring a loss function and an optimization function; and configuring the training amount of sample skin images per batch and the number of iterations.
Optionally, the loss function is a cross-entropy loss function.
Optionally, after classifying the defects of the target skin image according to the image classification algorithm, the method further includes: counting the number of each type of defect; and determining the defect severity of the target skin image according to the number of each type of defect.
Optionally, after classifying the defects of the target skin image according to the image classification algorithm, the method further includes: marking the defect name at each defect of the target skin image.
In a second aspect, an embodiment of the present application provides a skin defect classification apparatus, including: an acquisition module, configured to acquire a target skin image containing defects; and a classification module, configured to classify the defects of the target skin image according to an image classification algorithm.
Optionally, the classification module includes: an acquisition unit, configured to acquire an image classification model; and an input unit, configured to input the defects of the target skin image into the image classification model, so that the image classification model classifies the defects.
Optionally, the image classification model includes a convolutional neural network framework model; the apparatus further includes: a collection module, configured to collect training data, where the training data includes a plurality of sample skin images containing defects; a first labeling module, configured to label the training data to classify the plurality of sample skin images containing defects; a construction and configuration module, configured to construct and configure the convolutional neural network framework model; and an input module, configured to input the labeled training data into the configured convolutional neural network framework model to train and save the training data.
Optionally, the convolutional neural network framework model includes a LetNet-5 model; the construction and configuration module is specifically configured to: sequentially construct an input layer, a first convolutional layer, a first sampling layer, a second convolutional layer, a second sampling layer, a third convolutional layer, a first fully connected layer, and a second fully connected layer.
Optionally, the construction and configuration module is specifically configured to: configure a loss function and an optimization function; and configure the training amount of sample skin images per batch and the number of iterations.
Optionally, the loss function is a cross-entropy loss function.
Optionally, the apparatus further includes: a statistics module, configured to count the number of each type of defect; and a determination module, configured to determine the defect severity of the target skin image according to the number of each type of defect.
Optionally, the apparatus further includes: a second labeling module, configured to mark the defect name at each defect of the target skin image.
In a third aspect, an embodiment of the present application provides an electronic device, including: at least one processor; and a memory communicatively connected to the at least one processor; where the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to perform the skin defect classification method of any one of the above.
In a fourth aspect, an embodiment of the present application provides a non-transitory computer readable storage medium storing computer executable instructions, the computer executable instructions being used for causing an electronic device to perform the skin defect classification method of any one of the above.
In the embodiments of the present application, a target skin image containing defects is acquired, and the defects of the target skin image are classified according to an image classification algorithm. The method can therefore identify the categories of skin defects, allowing the user to make an informed choice about subsequent skin care.
Brief Description of the Drawings
One or more embodiments are exemplarily described with reference to the corresponding figures in the accompanying drawings, and these exemplary descriptions do not constitute a limitation on the embodiments. Elements with the same reference numerals in the drawings denote similar elements. Unless otherwise stated, the figures in the drawings do not constitute a scale limitation.
FIG. 1 is a schematic diagram of classifying, based on an image classification model, an input target skin image containing defects according to an embodiment of the present application;
FIG. 2 is a schematic diagram of cropping pictures containing defects from a sample skin image according to an embodiment of the present application;
FIG. 3 is a schematic diagram of various appearances of dark-spot defects according to an embodiment of the present application;
FIG. 4 is a schematic diagram of the framework of a LetNet-5 model according to an embodiment of the present application;
FIG. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
FIG. 6 is a schematic structural diagram of a skin defect classification apparatus according to an embodiment of the present application;
FIG. 7 is a schematic structural diagram of a classification module according to an embodiment of the present application;
FIG. 8 is a schematic structural diagram of a skin defect classification apparatus according to another embodiment of the present application;
FIG. 9 is a schematic structural diagram of a skin defect classification apparatus according to yet another embodiment of the present application;
FIG. 10 is a schematic structural diagram of a skin defect classification apparatus according to yet another embodiment of the present application;
FIG. 11 is a schematic flowchart of a skin defect classification method according to an embodiment of the present application;
FIG. 12 is a schematic flowchart of step 72 according to an embodiment of the present application;
FIG. 13 is a schematic flowchart of a skin defect classification method according to another embodiment of the present application;
FIG. 14 is a schematic flowchart of a skin defect classification method according to yet another embodiment of the present application;
FIG. 15 is a schematic flowchart of a skin defect classification method according to yet another embodiment of the present application.
Detailed Description
To make the objectives, technical solutions, and advantages of the present application clearer, the present application is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are only intended to explain the present application and are not intended to limit the present application.
With the improvement of living standards, people pay increasing attention to beauty and skin care. When people need to classify the defects appearing on their skin, most of them turn to professional clinical dermatologists. Based on professional knowledge, a professional clinical dermatologist subjectively identifies the types of defects on the skin and gives professional treatment advice. However, the classification of defect types is positively correlated with the physician's professional competence; when the physician's professional competence is average, the professional treatment advice given is not accurate enough. Therefore, this kind of defect classification method carries certain risks.
To provide objective data for judging defects, as described above, the defect detection technology proposed by the conventional technology can detect the number of defects, but cannot identify the category of the defects.
Based on the above deficiencies, an embodiment of the present application provides an electronic device that can identify the categories of skin defects, allowing the user to make an informed choice about subsequent skin care.
First, the electronic device acquires a target skin image containing defects.
In this embodiment, the defects are of different types, which may include acne, acne marks, acne pits, dark spots, freckles, chloasma, butterfly spots, age spots, moles, and other defects. The other defects are defects other than the nine categories described above, such as scars and the like.
Further, the target skin image may be a facial skin image, or may be a surface skin image of other parts of the human body, such as an elbow skin image, a leg skin image, and the like.
When the electronic device is equipped with a camera module, the user can operate the electronic device so that the camera module photographs the target skin containing defects, whereby the electronic device acquires the target skin image.
When the electronic device is equipped with a long-distance communication module, the user can operate the electronic device so that the electronic device connects to the cloud through the communication module and fetches the target skin image from the cloud, whereby the electronic device acquires the target skin image. The cloud may be a cloud server pre-configured by the user, or may be the Internet. The cloud server may be a physical server, a logical server virtualized from multiple physical servers, or a server group composed of multiple interconnectable servers. In some embodiments, the electronic device connects, through the long-distance communication module, to communication networks selected from a plurality of communication methods including, for example, Code Division Multiple Access (CDMA), Global System for Mobile Communications (GSM), Wideband Code Division Multiple Access (WCDMA), or Wireless Broadband (Wibro), where the long-distance communication module includes a mobile communication module, a wireless internet module, and the like.
When the electronic device is equipped with a short-range communication module, the user can operate the electronic device so that the electronic device connects to a counterpart electronic device through the short-range communication module and receives the target skin image sent by the counterpart electronic device, whereby the electronic device acquires the target skin image. The short-range communication module may use Bluetooth, Radio Frequency Identification (RFID), Infrared Data Association (IrDA), Ultra Wide Band (UWB), or ZigBee short-range communication technology.
Here, the embodiments of the present application are not limited to the ways, shown in the above embodiments, in which the electronic device acquires the target skin image containing defects; those skilled in the art may select an acquisition way according to business requirements, which is not described in detail here.
Then, the electronic device classifies the defects of the target skin image according to an image classification algorithm. The image classification algorithm is used to instruct the electronic device to process the target skin image, so as to identify the defects in the target skin image and classify the defects. In some embodiments, the logic function corresponding to the image classification algorithm may be executed by a pre-built image classification model. For example, in some embodiments, in the process of classifying the defects of the target skin image, the electronic device may acquire an image classification model and input the defects of the target skin image into the image classification model, so that the image classification model classifies the defects. The image classification model pre-stores a plurality of trained defect classification data; when the image classification model receives the defects of the target skin image, it calls the defect classification data to perform deep learning on the defects of the target skin image, thereby outputting the classification result of the defects. As shown in FIG. 1, the image classification model 10 receives and processes a target skin image 11 containing a defect, and the classification result output by the image classification model 10 is: the defect is a dark spot.
In other embodiments, the user may write image classification code logic in the electronic device to classify the defects of the target skin image.
As described above, before acquiring the image classification model, the electronic device needs to train the image classification model, so that the image classification model can automatically analyze, through deep learning, the defect types of an input target skin image.
For example: when the image classification model is a convolutional neural network framework model (Convolutional Neural Network, CNN), first, an engineer operates the electronic device so that the electronic device collects training data, where the training data includes a plurality of sample skin images containing defects. It can be understood that each sample skin image may include multiple types of defects. For example, as shown in FIG. 2, the sample skin image 20 is a face image. When the electronic device acquires the sample skin image 20, the electronic device crops 9 pictures containing defects from the sample skin image 20, where the defects may include acne, acne marks, acne pits, dark spots, freckles, chloasma, butterfly spots, age spots, moles, and other defects. In the cropping process, the electronic device may first binarize the sample skin image 20 to obtain a first binary image. Further, the electronic device filters the background noise in the first binary image to obtain a second binary image. Further, the electronic device performs dilation on the second binary image to obtain a third binary image. Still further, the electronic device selects, in the third binary image, dark pixel blocks that satisfy a threshold condition, thereby obtaining the defects in the sample skin image 20. Still further, the electronic device crops the pictures containing the defects from the defect locations in the sample skin image 20.
It can be understood that defects of the same category have different appearances. As shown in FIG. 3, when the defect is a dark spot, dark spots have at least 18 different appearances.
It can be understood that the number of defect samples in the sample skin images should at least enable the image classification model to learn the rule of "how to classify defects from a target skin image containing defects". For example: the training data corresponding to each defect category is 1000 pictures; for another example: when the defect is acne, the training data corresponding to acne is 1000 pictures containing only acne. When the image classification model needs to deep-learn the ten categories of defects described above (acne, acne marks, and the other categories), and each defect category corresponds to 1000 training pictures, the image classification model needs 10000 training pictures so that it can learn the rule of "how to classify defects from a target skin image containing defects"; therefore, the electronic device needs to collect at least 10000 training pictures. In general, the amount of training data the electronic device needs to collect is determined by the deep-learning requirements of the image classification model, and is not limited to the amount of training data shown in this embodiment.
Second, after the electronic device has collected the training data, it starts to label the training data so as to classify the sample skin images containing defects. For example: the user operates the electronic device to label and classify the 10000 training pictures, where the training data corresponding to each defect type is 1000 pictures; for example, the labeled training data corresponding to acne is 1000 pictures.
Third, the user needs to construct and configure the convolutional neural network framework model in the electronic device in order to train the above training data, thereby preparing to later perform deep learning on a user-input target skin image and classify the defects of the target skin image.
The convolutional neural network framework model may be LetNet-5, AlexNet, VGGNet, InceptionNet, ResNet, and so on. Depending on which convolutional neural network framework model is selected, the constructed model differs. For example, when the convolutional neural network framework model selected in this embodiment is the LetNet-5 model, as shown in FIG. 4, the framework of the constructed LetNet-5 model is specifically: input layer - first convolutional layer - first sampling layer - second convolutional layer - second sampling layer - third convolutional layer - first fully connected layer - second fully connected layer. Each convolutional layer performs convolution operations, which can enhance the features of the original image signal and reduce noise. Each sampling layer uses the principle of local correlation of the image to subsample the image, reducing the amount of data to be processed while retaining useful information.
In general, in some embodiments, in order to enable the LetNet-5 model to learn more deeply, convolutional layers or sampling layers may be added on top of the above LetNet-5 model; the framework is not limited to the LetNet-5 framework shown in this embodiment.
After the user has designed the convolutional neural network framework model, the electronic device can complete the construction of the convolutional neural network framework model using an artificial intelligence learning system on its operating system, and configure various training parameters for the convolutional neural network framework model. The artificial intelligence learning system may be Tensorflow, Caffe (Convolutional Architecture for Fast Feature Embedding), MXnet, and so on. For example: as described above, when 10000 training pictures are to be trained, the LetNet-5 model is selected, and the artificial intelligence learning system is Tensorflow, the training parameters configured for the LetNet-5 model include: the loss function, the optimization function, the training amount of sample skin images per batch, and the number of iterations. The loss function is the cross-entropy loss function; the optimization function is obtained by calling the tf.train.AdamOptimizer() function in Tensorflow with the learning rate parameter set to 0.001; the training amount of sample skin images per batch is batch_size: 100; and the number of iterations is 3000.
In some embodiments, the operating system supported by the electronic device may be a UNIX system, a Linux system, a Mac OS X system, a Windows system, an iOS system, an Android system, a WP system, a Chrome OS system, and so on.
After the training parameters are set, the electronic device runs the code and inputs the labeled training data into the configured convolutional neural network framework model to train and save the training data. For example: the electronic device passes the labeled training data to the LetNet-5 model, and the LetNet-5 model trains and saves the above training data so that the model can later be called for prediction. For another example: the electronic device receives a target skin image containing a defect and then inputs the target skin image into the LetNet-5 model. Since the LetNet-5 model has been trained, during classification the LetNet-5 model calls the trained data of each category to perform deep learning on the target skin image, thereby outputting the classification result: the defect of the target skin image is a dark spot.
After the electronic device has classified each defect of the target skin image, in order to evaluate the severity of the defects of the user's skin, the electronic device counts the number of each type of defect and determines the defect severity of the target skin image according to the number of each type of defect. In some embodiments, the defect severity includes a general level, a relatively serious level, a very serious level, and an extremely serious level. For example: after classification, the number of acne defects in the face image of user A is 5, which is greater than the general-level threshold of 3 and less than the relatively-serious-level threshold of 8; therefore, the acne severity of user A belongs to the relatively serious level.
In some embodiments, after the electronic device has classified each defect of the target skin image, in order to let the user know the type of each defect on the skin so that the user can choose a more targeted remedy, the electronic device marks the defect name at each defect in the target skin image. The user thus knows the type of each skin defect more clearly and can select a treatment more scientifically.
In the above embodiments, the various control logics for implementing skin defect classification may be assembled, in the form of instruction code, into an application installation package, and the application installation package may be published on various network application download markets, or on various network platforms or websites. When the user needs the skin defect classification function, the application installation package can be downloaded locally from the network, installed locally, and the skin defect classification can then be completed locally through the application. Compared with the conventional technology, the user only needs to take one face photo to identify the distribution of various facial defects; compared with specialized medical instruments, this is more convenient, and the user can also test the skin care effect of skin care products by tracking changes in defect severity.
In the above embodiments, the electronic device may be a portable telephone, a smart phone, a tablet, a notebook, a tablet PC, a laptop computer, a digital broadcast terminal, a personal digital assistant (PDA), and so on. As shown in FIG. 5, the electronic device 50 includes: at least one processor 51 and a memory 52 communicatively connected to the at least one processor 51; in FIG. 5, one processor 51 is taken as an example. The processor 51 and the memory 52 may be connected by a bus or in other ways; connection by a bus is taken as an example in FIG. 5.
The memory 52 stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor so that the at least one processor 51 can be used to execute the control logic of the skin defect classification described above.
As another aspect of the embodiments of the present application, an embodiment of the present application provides a skin defect classification apparatus applied to an electronic device. The skin defect classification apparatus, as a software system, may be stored in the electronic device described in FIG. 5. The skin defect classification apparatus includes a number of instructions stored in the memory; the processor can access the memory and call the instructions for execution, so as to implement the skin defect classification apparatus described above.
As shown in FIG. 6, the skin defect classification apparatus 60 includes: an acquisition module 61 and a classification module 62.
The acquisition module 61 is configured to acquire a target skin image containing defects.
The classification module 62 is configured to classify the defects of the target skin image according to an image classification algorithm.
In this embodiment, the apparatus can identify the categories of skin defects, allowing the user to make an informed choice about subsequent skin care.
In some embodiments, as shown in FIG. 7, the classification module 62 includes: an acquisition unit 621 and an input unit 622.
The acquisition unit 621 is configured to acquire an image classification model.
The input unit 622 is configured to input the defects of the target skin image into the image classification model, so that the image classification model classifies the defects.
In some embodiments, as shown in FIG. 8, the image classification model includes a convolutional neural network framework model. The skin defect classification apparatus 60 further includes: a collection module 63, a first labeling module 64, a construction and configuration module 65, and an input module 66.
The collection module 63 is configured to collect training data, where the training data includes a plurality of sample skin images containing defects.
The first labeling module 64 is configured to label the training data to classify the plurality of sample skin images containing defects.
The construction and configuration module 65 is configured to construct and configure the convolutional neural network framework model.
The input module 66 is configured to input the labeled training data into the configured convolutional neural network framework model to train and save the training data.
In some embodiments, the convolutional neural network framework model includes a LetNet-5 model. The construction and configuration module 65 is specifically configured to: sequentially construct an input layer, a first convolutional layer, a first sampling layer, a second convolutional layer, a second sampling layer, a third convolutional layer, a first fully connected layer, and a second fully connected layer.
In some embodiments, the construction and configuration module 65 is specifically configured to: configure the loss function, the optimization function, the training amount of sample skin images per batch, and the number of iterations. Optionally, the loss function is a cross-entropy loss function.
In some embodiments, as shown in FIG. 9, the skin defect classification apparatus 60 further includes: a statistics module 67 and a determination module 68.
The statistics module 67 is configured to count the number of each type of defect.
The determination module 68 is configured to determine the defect severity of the target skin image according to the number of each type of defect.
In some embodiments, as shown in FIG. 10, the skin defect classification apparatus 60 further includes: a second labeling module 69. The second labeling module 69 is configured to mark the defect name at each defect of the target skin image.
Since the apparatus embodiment and the above embodiments are based on the same concept, on the premise that the contents do not conflict with each other, the content of the apparatus embodiment may refer to the above embodiments and is not described in detail here.
As yet another aspect of the embodiments of the present application, an embodiment of the present application provides a skin defect classification method. The functions of the skin defect classification method of the embodiments of the present application can be executed not only by the software system of the skin defect classification apparatus described above with reference to FIGS. 6 to 10, but also by means of a hardware platform. For example, the skin defect classification method can be executed in an electronic device with a processor of a suitable type having computing capability, such as a single-chip microcomputer, a digital signal processor (DSP), a programmable logic controller (PLC), and so on.
The functions corresponding to the skin defect classification method of each of the following embodiments are stored in the form of instructions in the memory of the electronic device. When the functions corresponding to the skin defect classification method of each of the following embodiments are to be executed, the processor of the electronic device accesses the memory, retrieves and executes the corresponding instructions, so as to implement the functions corresponding to the skin defect classification method of each of the following embodiments.
As a non-volatile computer readable storage medium, the memory can be used to store non-volatile software programs, non-volatile computer-executable programs, and modules, such as the program instructions/modules corresponding to the skin defect classification apparatus 60 in the above embodiment (for example, the modules and units described in FIGS. 6 to 10), or the steps corresponding to the skin defect classification method of the following embodiments. By running the non-volatile software programs, instructions, and modules stored in the memory, the processor executes the various functional applications and data processing of the skin defect classification apparatus 60, that is, implements the functions of the modules and units of the skin defect classification apparatus 60 of the above embodiment, or the steps corresponding to the skin defect classification method of the following embodiments.
The memory may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. In some embodiments, the memory optionally includes memories remotely located relative to the processor; these remote memories may be connected to the processor through a network. Examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The program instructions/modules are stored in the memory and, when executed by the one or more processors, perform the skin defect classification method in any of the above method embodiments, for example, performing the steps shown in FIGS. 11 to 15 described in the following embodiments; the functions of the modules and units described in FIGS. 6 to 10 can also be implemented.
As shown in FIG. 11, the skin defect classification method 70 includes:
Step 71: Acquire a target skin image containing defects;
Step 72: Classify the defects of the target skin image according to an image classification algorithm.
In this embodiment, the method can identify the categories of skin defects, allowing the user to make an informed choice about subsequent skin care.
In some embodiments, as shown in FIG. 12, step 72 includes:
Step 721: Acquire an image classification model;
Step 722: Input the defects of the target skin image into the image classification model, so that the image classification model classifies the defects.
In some embodiments, the image classification model includes a convolutional neural network framework model. Before step 721 is performed, as shown in FIG. 13, the skin defect classification method 70 further includes:
Step 73: Collect training data, where the training data includes a plurality of sample skin images containing defects;
Step 74: Label the training data to classify the plurality of sample skin images containing defects;
Step 75: Construct and configure the convolutional neural network framework model;
Step 76: Input the labeled training data into the configured convolutional neural network framework model to train and save the training data.
In some embodiments, the convolutional neural network framework model includes the LetNet-5 model. When the LetNet-5 model is constructed, an input layer, a first convolutional layer, a first sampling layer, a second convolutional layer, a second sampling layer, a third convolutional layer, a first fully connected layer, and a second fully connected layer are constructed in sequence. When the LetNet-5 model is configured, the loss function, the optimization function, the training amount of sample skin images per batch, and the number of iterations are configured respectively, where the loss function is a cross-entropy loss function.
Different from the above embodiments, as shown in FIG. 14, after step 72, the skin defect classification method 70 further includes:
Step 77: Count the number of each type of defect;
Step 78: Determine the defect severity of the target skin image according to the number of each type of defect.
Different from the above embodiments, as shown in FIG. 15, after step 72, the skin defect classification method 70 further includes:
Step 79: Mark the defect name at each defect of the target skin image.
Since the apparatus embodiment and the method embodiment are based on the same concept, on the premise that the contents do not conflict with each other, the content of the method embodiment may refer to the apparatus embodiment and is not described in detail here.
As yet another aspect of the embodiments of the present application, an embodiment of the present application provides a non-transitory computer readable storage medium storing computer executable instructions, the computer executable instructions being used for causing an electronic device to perform the skin defect classification method of any one of the above.
In this embodiment, the method can identify the categories of skin defects, allowing the user to make an informed choice about subsequent skin care.
Finally, it should be noted that the above embodiments are only used to illustrate the technical solutions of the present application and are not intended to limit them. Under the idea of the present application, the technical features in the above embodiments or in different embodiments may also be combined, the steps may be implemented in any order, and there exist many other variations of the different aspects of the present application as described above, which are not provided in detail for the sake of brevity. Although the present application has been described in detail with reference to the foregoing embodiments, a person of ordinary skill in the art should understand that modifications may still be made to the technical solutions described in the foregoing embodiments, or equivalent replacements may be made to some of the technical features therein; and these modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of the present application.

Claims (10)

  1. A skin defect classification method, comprising:
    acquiring a target skin image containing defects;
    classifying the defects of the target skin image according to an image classification algorithm.
  2. The method according to claim 1, wherein classifying the defects of the target skin image according to an image classification algorithm comprises:
    acquiring an image classification model;
    inputting the defects of the target skin image into the image classification model, so that the image classification model classifies the defects.
  3. The method according to claim 2, wherein the image classification model comprises a convolutional neural network framework model;
    before acquiring the image classification model, the method further comprises:
    collecting training data, the training data comprising a plurality of sample skin images containing defects;
    labeling the training data to classify the plurality of sample skin images containing defects;
    constructing and configuring the convolutional neural network framework model;
    inputting the labeled training data into the configured convolutional neural network framework model to train and save the training data.
  4. The method according to claim 3, wherein the convolutional neural network framework model comprises a LetNet-5 model;
    constructing the convolutional neural network framework model comprises constructing the LetNet-5 model, and constructing the LetNet-5 model specifically comprises:
    sequentially constructing an input layer, a first convolutional layer, a first sampling layer, a second convolutional layer, a second sampling layer, a third convolutional layer, a first fully connected layer, and a second fully connected layer.
  5. The method according to claim 4, wherein configuring the convolutional neural network framework model comprises configuring the LetNet-5 model, and configuring the LetNet-5 model specifically comprises:
    configuring a loss function and an optimization function;
    configuring the training amount of sample skin images per batch and the number of iterations.
  6. The method according to claim 5, wherein the loss function is a cross-entropy loss function.
  7. The method according to any one of claims 1 to 6, wherein, after classifying the defects of the target skin image according to the image classification algorithm, the method further comprises:
    counting the number of each type of defect;
    determining the defect severity of the target skin image according to the number of each type of defect.
  8. The method according to any one of claims 1 to 6, wherein, after classifying the defects of the target skin image according to the image classification algorithm, the method further comprises:
    marking the defect name at each defect of the target skin image.
  9. An electronic device, comprising:
    at least one processor; and
    a memory communicatively connected to the at least one processor; wherein the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to perform the skin defect classification method according to any one of claims 1 to 8.
  10. A non-transitory computer readable storage medium, wherein the non-transitory computer readable storage medium stores computer executable instructions, and the computer executable instructions are used for causing an electronic device to perform the skin defect classification method according to any one of claims 1 to 8.
PCT/CN2017/110952 2017-11-14 2017-11-14 一种皮肤瑕疵点分类方法及电子设备 WO2019095118A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201780009000.XA CN108780497B (zh) 2017-11-14 2017-11-14 一种皮肤瑕疵点分类方法及电子设备
PCT/CN2017/110952 WO2019095118A1 (zh) 2017-11-14 2017-11-14 一种皮肤瑕疵点分类方法及电子设备

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2017/110952 WO2019095118A1 (zh) 2017-11-14 2017-11-14 一种皮肤瑕疵点分类方法及电子设备

Publications (1)

Publication Number Publication Date
WO2019095118A1 true WO2019095118A1 (zh) 2019-05-23

Family

ID=64034057

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/110952 WO2019095118A1 (zh) 2017-11-14 2017-11-14 一种皮肤瑕疵点分类方法及电子设备

Country Status (2)

Country Link
CN (1) CN108780497B (zh)
WO (1) WO2019095118A1 (zh)

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111695463A (zh) * 2020-05-29 2020-09-22 深圳数联天下智能科技有限公司 人脸面部杂质检测模型的训练方法、人脸面部杂质检测方法
CN112446398A (zh) * 2019-09-02 2021-03-05 华为技术有限公司 图像分类方法以及装置
CN113012097A (zh) * 2021-01-19 2021-06-22 富泰华工业(深圳)有限公司 图像复检方法、计算机装置及存储介质
CN113269251A (zh) * 2021-05-26 2021-08-17 安徽唯嵩光电科技有限公司 基于机器视觉和深度学习融合的水果瑕疵分类方法、装置、存储介质及计算机设备
CN113689381A (zh) * 2021-07-21 2021-11-23 航天晨光股份有限公司 波纹管内壁瑕疵检测模型及检测方法
CN113705477A (zh) * 2021-08-31 2021-11-26 平安科技(深圳)有限公司 一种医疗图像识别方法、系统、设备及介质
CN113807434A (zh) * 2021-09-16 2021-12-17 中国联合网络通信集团有限公司 布匹的瑕疵识别方法及模型训练方法
CN115035119A (zh) * 2022-08-12 2022-09-09 山东省计算中心(国家超级计算济南中心) 一种玻璃瓶底瑕疵图像检测剔除装置、系统及方法
CN117288761A (zh) * 2023-11-27 2023-12-26 天津市海迅科技发展有限公司 一种基于测试材料的瑕疵检测分类评估方法及系统

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109524111A (zh) * 2018-12-06 2019-03-26 杭州电子科技大学 一种应用于手机的七类皮肤肿瘤检测方法
CN109919029A (zh) * 2019-01-31 2019-06-21 深圳和而泰数据资源与云技术有限公司 黑眼圈类型识别方法、装置、计算机设备和存储介质
CN110059635B (zh) * 2019-04-19 2021-03-23 厦门美图之家科技有限公司 一种皮肤瑕疵检测方法及装置
CN110956623B (zh) * 2019-11-29 2023-11-07 深圳数联天下智能科技有限公司 皱纹检测方法、装置、设备及计算机可读存储介质
CN111428552B (zh) * 2019-12-31 2022-07-15 深圳数联天下智能科技有限公司 黑眼圈识别方法、装置、计算机设备和存储介质
CN111428553B (zh) * 2019-12-31 2022-07-15 深圳数联天下智能科技有限公司 人脸色素斑识别方法、装置、计算机设备和存储介质
CN111429416B (zh) * 2020-03-19 2023-10-13 深圳数联天下智能科技有限公司 一种人脸色素斑识别方法、装置及电子设备
CN114209288A (zh) * 2022-01-14 2022-03-22 平安普惠企业管理有限公司 皮肤状态预测方法、皮肤状态预测装置、设备及存储介质

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101751559A (zh) * 2009-12-31 2010-06-23 中国科学院计算技术研究所 人脸皮肤斑痣点检测及利用皮肤斑痣识别人脸的方法
CN103745204A (zh) * 2014-01-17 2014-04-23 公安部第三研究所 一种基于斑痣点的体貌特征比对方法
CN105787929A (zh) * 2016-02-15 2016-07-20 天津大学 基于斑点检测的皮肤疹点提取方法
CN106469300A (zh) * 2016-08-31 2017-03-01 广州莱德璞检测技术有限公司 一种色斑检测识别方法
CN107122806A (zh) * 2017-05-16 2017-09-01 北京京东尚科信息技术有限公司 一种敏感图像识别方法及装置
CN107330446A (zh) * 2017-06-05 2017-11-07 浙江工业大学 一种面向图像分类的深度卷积神经网络的优化方法
CN107341518A (zh) * 2017-07-07 2017-11-10 东华理工大学 一种基于卷积神经网络的图像分类方法

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5080116B2 (ja) * 2007-03-23 2012-11-21 株式会社 資生堂 肌画像撮影装置
CN101916370B (zh) * 2010-08-31 2012-04-25 上海交通大学 人脸检测中非特征区域图像处理的方法
JP5794889B2 (ja) * 2011-10-25 2015-10-14 富士フイルム株式会社 シミ種別分類装置の作動方法、シミ種別分類装置およびシミ種別分類プログラム
CN104970797B (zh) * 2014-04-08 2019-09-13 花王株式会社 皮肤分类方法、化妆品的推荐方法以及皮肤分类卡
CN107230205A (zh) * 2017-05-27 2017-10-03 国网上海市电力公司 一种基于卷积神经网络的输电线路螺栓检测方法

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101751559A (zh) * 2009-12-31 2010-06-23 中国科学院计算技术研究所 人脸皮肤斑痣点检测及利用皮肤斑痣识别人脸的方法
CN103745204A (zh) * 2014-01-17 2014-04-23 公安部第三研究所 一种基于斑痣点的体貌特征比对方法
CN105787929A (zh) * 2016-02-15 2016-07-20 天津大学 基于斑点检测的皮肤疹点提取方法
CN106469300A (zh) * 2016-08-31 2017-03-01 广州莱德璞检测技术有限公司 一种色斑检测识别方法
CN107122806A (zh) * 2017-05-16 2017-09-01 北京京东尚科信息技术有限公司 一种敏感图像识别方法及装置
CN107330446A (zh) * 2017-06-05 2017-11-07 浙江工业大学 一种面向图像分类的深度卷积神经网络的优化方法
CN107341518A (zh) * 2017-07-07 2017-11-10 东华理工大学 一种基于卷积神经网络的图像分类方法

Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112446398A (zh) * 2019-09-02 2021-03-05 华为技术有限公司 图像分类方法以及装置
CN111695463B (zh) * 2020-05-29 2022-07-26 深圳数联天下智能科技有限公司 人脸面部杂质检测模型的训练方法、人脸面部杂质检测方法
CN111695463A (zh) * 2020-05-29 2020-09-22 深圳数联天下智能科技有限公司 人脸面部杂质检测模型的训练方法、人脸面部杂质检测方法
CN113012097B (zh) * 2021-01-19 2023-12-29 富泰华工业(深圳)有限公司 图像复检方法、计算机装置及存储介质
CN113012097A (zh) * 2021-01-19 2021-06-22 富泰华工业(深圳)有限公司 图像复检方法、计算机装置及存储介质
CN113269251A (zh) * 2021-05-26 2021-08-17 安徽唯嵩光电科技有限公司 基于机器视觉和深度学习融合的水果瑕疵分类方法、装置、存储介质及计算机设备
CN113689381A (zh) * 2021-07-21 2021-11-23 航天晨光股份有限公司 波纹管内壁瑕疵检测模型及检测方法
CN113689381B (zh) * 2021-07-21 2024-02-27 航天晨光股份有限公司 波纹管内壁瑕疵检测模型及检测方法
CN113705477A (zh) * 2021-08-31 2021-11-26 平安科技(深圳)有限公司 一种医疗图像识别方法、系统、设备及介质
CN113705477B (zh) * 2021-08-31 2023-08-29 平安科技(深圳)有限公司 一种医疗图像识别方法、系统、设备及介质
CN113807434A (zh) * 2021-09-16 2021-12-17 中国联合网络通信集团有限公司 布匹的瑕疵识别方法及模型训练方法
CN113807434B (zh) * 2021-09-16 2023-07-25 中国联合网络通信集团有限公司 布匹的瑕疵识别方法及模型训练方法
CN115035119A (zh) * 2022-08-12 2022-09-09 山东省计算中心(国家超级计算济南中心) 一种玻璃瓶底瑕疵图像检测剔除装置、系统及方法
CN115035119B (zh) * 2022-08-12 2023-03-24 山东省计算中心(国家超级计算济南中心) 一种玻璃瓶底瑕疵图像检测剔除装置、系统及方法
CN117288761A (zh) * 2023-11-27 2023-12-26 天津市海迅科技发展有限公司 一种基于测试材料的瑕疵检测分类评估方法及系统
CN117288761B (zh) * 2023-11-27 2024-02-06 天津市海迅科技发展有限公司 一种基于测试材料的瑕疵检测分类评估方法及系统

Also Published As

Publication number Publication date
CN108780497A (zh) 2018-11-09
CN108780497B (zh) 2021-10-26

Similar Documents

Publication Publication Date Title
WO2019095118A1 (zh) 一种皮肤瑕疵点分类方法及电子设备
US10936919B2 (en) Method and apparatus for detecting human face
US11907847B2 (en) Operating machine-learning models on different platforms
Chen et al. AI-Skin: Skin disease recognition based on self-learning and wide data collection through a closed-loop framework
CN108304758B (zh) 人脸特征点跟踪方法及装置
US11393205B2 (en) Method of pushing video editing materials and intelligent mobile terminal
KR102299764B1 (ko) 전자장치, 서버 및 음성출력 방법
US10803571B2 (en) Data-analysis pipeline with visual performance feedback
US20210334604A1 (en) Facial recognition method and apparatus
JP2017536635A (ja) ピクチャーのシーンの判定方法、装置及びサーバ
US10019788B1 (en) Machine-learning measurements of quantitative feature attributes
CN108198159A (zh) 一种图像处理方法、移动终端以及计算机可读存储介质
CN104346503A (zh) 一种基于人脸图像的情感健康监控方法及手机
CN110135497B (zh) 模型训练的方法、面部动作单元强度估计的方法及装置
CN104077597B (zh) 图像分类方法及装置
CN104679967B (zh) 一种判断心理测试可靠性的方法
CN110443769A (zh) 图像处理方法、图像处理装置及终端设备
Finnegan et al. Automated method for detecting and reading seven-segment digits from images of blood glucose metres and blood pressure monitors
CN111882625B (zh) 生成动态图的方法、装置、电子设备及存储介质
CN114119948A (zh) 一种茶叶识别方法、装置、电子设备及存储介质
CN106471493A (zh) 用于管理数据的方法和装置
Shao et al. Research on automatic identification system of tobacco diseases
CN113128368B (zh) 一种人物交互关系的检测方法、装置及系统
CN110059721A (zh) 户型图区域识别方法、装置、设备及计算机可读存储介质
TW202001597A (zh) 情緒特徵擷取裝置及其方法

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17932058

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 21/09/2020)

122 Ep: pct application non-entry in european phase

Ref document number: 17932058

Country of ref document: EP

Kind code of ref document: A1