WO2022267327A1 - Pigmentation prediction method and apparatus, and device and storage medium - Google Patents

Pigmentation prediction method and apparatus, and device and storage medium

Info

Publication number
WO2022267327A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
target
prediction
color
processed
Prior art date
Application number
PCT/CN2021/132553
Other languages
French (fr)
Chinese (zh)
Inventor
齐子铭
刘兴云
罗家祯
陈福兴
李志阳
Original Assignee
厦门美图宜肤科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 厦门美图宜肤科技有限公司
Priority to JP2022540760A (JP7385046B2)
Priority to KR1020227022201A (KR20230001005A)
Publication of WO2022267327A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • G06V40/171Local features and components; Facial parts ; Occluding parts, e.g. glasses; Geometrical relationships
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/0475Generative networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/094Adversarial learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161Detection; Localisation; Normalisation
    • GPHYSICS
    • G16INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16HHEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H50/00ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H50/30ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for calculating health indices; for individual health risk assessment
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20221Image fusion; Image merging
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30088Skin; Dermal
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30196Human being; Person
    • G06T2207/30201Face
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02PCLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30Computing systems specially adapted for manufacturing

Definitions

  • The present application relates to the technical field of image recognition and generation, and in particular to a pigmentation prediction method, apparatus, device, and storage medium.
  • Prediction of skin changes in the prior art usually covers the degree of skin laxity and wrinkling, and cannot predict future changes in pigmentation spots. There is therefore an urgent need for a method that can predict future changes in pigmentation spots, so that users can learn in time how their skin pigmentation will change over a coming period.
  • The present application provides a pigmentation prediction method, apparatus, device, and storage medium, which can predict changes in the pigmentation spots on a user's facial skin.
  • Some embodiments of the present application provide a pigmentation prediction method, which may include: acquiring an image to be predicted; inputting the image to be predicted into a pre-trained pigmentation prediction model for prediction processing, the pigmentation prediction model being a fully convolutional generative adversarial network model; and obtaining a pigmentation prediction result map through the pigmentation prediction model.
  • Optionally, before the image to be predicted is input into the pre-trained pigmentation prediction model for prediction processing, the method may also include: determining pigmentation spot information in the image to be predicted, where the pigmentation spot information may include the positions and categories of the spots; and preprocessing the image to be predicted according to the pigmentation spot information to obtain preprocessed multi-frame images, where the multi-frame images may respectively include an image without pigmentation spots and an image in which pigmentation spot categories are marked.
  • Inputting the image to be predicted into the pre-trained pigmentation prediction model for prediction processing may then include: inputting the preprocessed multi-frame images into the pre-trained pigmentation prediction model for prediction processing.
  • Optionally, before the image to be predicted is input into the pre-trained pigmentation prediction model for prediction processing, the method may also include: acquiring a target image to be processed, which may include pigmentation spot information; determining a plurality of target channel images respectively according to the target image to be processed, where the target channel images may include a multi-channel image with pigmentation spot information removed and a pigmentation spot category channel image; and merging the plurality of target channel images and a target noise image and inputting them together into a neural network structure to be trained to obtain the trained pigmentation prediction model, where the target noise image is a randomly generated noise image.
  • Optionally, determining a plurality of target channel images respectively according to the target image to be processed may include: determining pigmentation spot information in the target image to be processed; and performing spot removal processing on the target image to be processed according to that information to obtain a target image to be processed with the pigmentation spot information removed.
  • Optionally, determining a plurality of target channel images respectively according to the target image to be processed may also include: performing spot detection processing on the target image to be processed to obtain the pigmentation spot category channel image.
  • Optionally, performing spot detection processing on the target image to be processed to obtain the pigmentation spot category channel image may include: determining the position of each type of pigmentation spot information in the target image to be processed; and setting, at the position of the pigmentation spot information, grayscale information of the type corresponding to that information, to obtain the pigmentation spot category channel image.
  • Optionally, before the plurality of target channel images and the target noise image are merged and input together into the neural network structure to be trained, the method may include: normalizing the target channel images and the target noise image respectively to obtain a target input image; and inputting the target input image into the neural network structure to be trained to obtain the trained pigmentation prediction model.
  • Another embodiment of the present application provides a pigmentation prediction apparatus, which may include an acquisition module, a prediction module, and an output module.
  • The acquisition module may be configured to acquire an image to be predicted.
  • The prediction module may be configured to input the image to be predicted into a pre-trained pigmentation prediction model for prediction processing, the pigmentation prediction model being a fully convolutional generative adversarial network model.
  • The output module may be configured to obtain a pigmentation prediction result map through the pigmentation prediction model.
  • Optionally, the apparatus may further include a preprocessing module. The preprocessing module may be configured to determine pigmentation spot information in the image to be predicted, where the pigmentation spot information may include the positions and categories of the spots, and to preprocess the image to be predicted according to that information to obtain preprocessed multi-frame images, where the multi-frame images may respectively include an image without pigmentation spots and an image in which pigmentation spot categories are marked. The prediction module may further be configured to input the preprocessed multi-frame images into the pre-trained pigmentation prediction model for prediction processing.
  • Optionally, the preprocessing module may also be configured to acquire a target image to be processed, which may include pigmentation spot information; determine a plurality of target channel images respectively according to the target image to be processed, where the target channel images may include a multi-channel image with pigmentation spot information removed and a pigmentation spot category channel image; and merge the plurality of target channel images and a target noise image and input them together into a neural network structure to be trained to obtain the trained pigmentation prediction model, where the target noise image is a randomly generated noise image.
  • Optionally, the preprocessing module may also be configured to determine pigmentation spot information in the target image to be processed, and to perform spot removal processing on the target image to be processed according to that information to obtain a target image to be processed with the pigmentation spot information removed.
  • Optionally, the preprocessing module may also be configured to perform spot detection processing on the target image to be processed to obtain the pigmentation spot category channel image.
  • Optionally, the preprocessing module may be configured to determine the position of each type of pigmentation spot information in the target image to be processed, and to set, at the position of the pigmentation spot information, grayscale information of the type corresponding to that information, to obtain the pigmentation spot category channel image.
  • Optionally, the preprocessing module may also be configured to normalize the target channel images and the target noise image respectively to obtain a target input image, and to input the target input image into the neural network structure to be trained to obtain the trained pigmentation prediction model.
  • Some other embodiments of the present application provide a computer device, which may include a memory and a processor; the memory stores a computer program that can run on the processor, and when the processor executes the computer program, the steps of the above pigmentation prediction method are realized.
  • Still other embodiments of the present application provide a computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, the steps of the above pigmentation prediction method are realized.
  • In the pigmentation prediction method, apparatus, device, and storage medium provided by the embodiments of the present application, an image to be predicted can be acquired; the image to be predicted can be input into a pre-trained pigmentation prediction model for prediction processing, the pigmentation prediction model being a fully convolutional generative adversarial network model; and a pigmentation prediction result map can be obtained through the pigmentation prediction model.
  • By using a fully convolutional generative adversarial network structure as the pigmentation prediction model, future changes in skin pigmentation spots can be predicted, so that users can learn about the change trend of their skin pigmentation in time.
  • FIG. 1 is a first schematic flowchart of the pigmentation prediction method provided in an embodiment of the present application.
  • FIG. 2 is a second schematic flowchart of the pigmentation prediction method provided in an embodiment of the present application.
  • FIG. 3 is a third schematic flowchart of the pigmentation prediction method provided in an embodiment of the present application.
  • FIG. 4 is a fourth schematic flowchart of the pigmentation prediction method provided in an embodiment of the present application.
  • FIG. 5 is a fifth schematic flowchart of the pigmentation prediction method provided in an embodiment of the present application.
  • FIG. 6 is a sixth schematic flowchart of the pigmentation prediction method provided in an embodiment of the present application.
  • FIG. 7 is a schematic structural diagram of the pigmentation prediction apparatus provided in an embodiment of the present application.
  • FIG. 8 is a schematic structural diagram of a computer device provided by an embodiment of the present application.
  • FIG. 1 is a first schematic flowchart of the pigmentation prediction method provided in an embodiment of the present application. Referring to FIG. 1, the method may include:
  • S110: Acquire an image to be predicted.
  • Optionally, the image to be predicted may be a photo of the user, an image of a human face, or any other image of skin for which pigmentation spots need to be predicted; the image may be cropped to a preset size.
  • Optionally, the execution subject of the method may be a related program on a computer device, for example a preset program of a skin prediction instrument or a certain function of an electronic facial cleansing device; this is not specifically limited here and can be set according to actual needs.
  • Optionally, the image to be predicted may be sent to the computer device by another device, or may be captured by the computer device through a camera or other shooting device; this is likewise not specifically limited here.
  • S120: Input the image to be predicted into the pre-trained pigmentation prediction model for prediction processing.
  • The pigmentation prediction model is a fully convolutional generative adversarial network model.
  • Optionally, after the image to be predicted is determined, it can be input into the pre-trained pigmentation prediction model for prediction processing, where the pigmentation prediction model can be a fully convolutional generative adversarial network model obtained through pre-training.
  • The pigmentation prediction model may be obtained by pre-training on the computer device, or may be sent to the computer device by another electronic device; this is not limited here.
  • Optionally, the fully convolutional generative adversarial network model can be a generative adversarial network composed of multiple convolutional neural networks, where a generative adversarial network can be a network model in which a generative model and a discriminative model learn against each other in a game to produce reasonably good outputs.
  • S130: Obtain a pigmentation prediction result map through the pigmentation prediction model.
  • Optionally, after prediction processing by the pigmentation prediction model, a pigmentation prediction result map can be obtained, where the result map can show how the pigmentation spots in the image to be predicted will change over a future period of time; the specific period can be set according to actual needs and is not limited here.
  • Optionally, there may be multiple pigmentation prediction result maps, each representing how the pigmentation spots on the facial skin in the image to be predicted will have changed after a different period of time in the future.
  • In the pigmentation prediction method provided by the embodiments of the present application, an image to be predicted can be acquired; the image to be predicted can be input into a pre-trained pigmentation prediction model, which is a fully convolutional generative adversarial network model, for prediction processing; and a pigmentation prediction result map can be obtained through the model. By using a fully convolutional generative adversarial network structure as the pigmentation prediction model, future changes in skin pigmentation spots can be predicted, so that users can learn about the change trend of their skin pigmentation in time.
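  • As a rough illustration only, the following Python/PyTorch sketch shows how this inference step (S110-S130) might be wired up. The helper name predict_spots, the 512x512 input size, and the mapping of the output back to displayable pixel values are assumptions for illustration, not details specified by this application.

```python
import numpy as np
import torch
from PIL import Image

def predict_spots(image_path, generator, device="cpu"):
    """Run a pre-trained fully convolutional GAN generator on a skin image
    and return the predicted pigmentation prediction result map."""
    # Crop/resize to a preset input size (512x512 is an assumed example).
    img = Image.open(image_path).convert("RGB").resize((512, 512))
    x = torch.from_numpy(np.asarray(img, dtype=np.float32))
    x = x.permute(2, 0, 1).unsqueeze(0)          # shape (1, 3, H, W)
    x = (x * 2.0 / 255.0) - 1.0                  # normalize to (-1, 1)

    generator.eval()
    with torch.no_grad():
        y = generator(x.to(device))              # generator output in (-1, 1)

    # Map the output back to displayable 0-255 pixel values.
    result = ((y.clamp(-1, 1) + 1.0) * 127.5).byte().squeeze(0).cpu()
    return result.permute(1, 2, 0).numpy()       # H x W x 3 result map
```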
  • FIG. 2 is a second schematic flowchart of the pigmentation prediction method provided in an embodiment of the present application. Referring to FIG. 2, before the image to be predicted is input into the pre-trained pigmentation prediction model for prediction processing, the method also includes:
  • S210: Determine pigmentation spot information in the image to be predicted.
  • The pigmentation spot information may include the positions and categories of the spots.
  • Optionally, the pigmentation spot information may be the information about the spots on the skin in the image to be predicted; for example, it may include the position, category, and so on of each spot, where the position of each spot can be recorded as a range of coordinates and the category of each spot can be recorded as an identifier.
  • Optionally, a preset spot recognition algorithm may be used to determine the pigmentation spot information in the image to be predicted.
  • S220: Preprocess the image to be predicted according to the pigmentation spot information in the image to be predicted to obtain preprocessed multi-frame images.
  • The multiple frames of images may respectively include an image without pigmentation spots and an image in which pigmentation spot categories are marked.
  • Optionally, the preprocessing of the image to be predicted may include spot removal processing and spot category determination processing. The spot removal processing yields the image without pigmentation spots, that is, the image to be predicted with the pigmentation spot information removed; the spot category determination processing yields the image in which pigmentation spot categories are marked, where different grayscale values in the image can represent different spot categories.
  • Inputting the image to be predicted into the pre-trained pigmentation prediction model for prediction processing may include:
  • S230: Input the preprocessed multi-frame images into the pre-trained pigmentation prediction model for prediction processing.
  • Optionally, these images may be combined and input into the pre-trained pigmentation prediction model for prediction, so as to obtain the corresponding prediction results.
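  • A minimal sketch of this preprocessing stage is given below. The spot detector is assumed to exist upstream and to return (mask, category) pairs, and OpenCV inpainting is only one assumed way of removing the detected spots; the application does not prescribe a specific detection or removal algorithm.

```python
import cv2
import numpy as np

def preprocess_for_prediction(image_bgr, detected_spots):
    """Build the two preprocessed frames: an image without pigmentation spots
    and an image in which spot categories are marked.

    detected_spots: list of (binary_mask, category_id) tuples produced by an
    assumed spot recognition algorithm; masks are uint8 arrays of 0/255.
    """
    # 1) Spot-free image: remove all detected spots, e.g. by inpainting.
    union_mask = np.zeros(image_bgr.shape[:2], dtype=np.uint8)
    for mask, _ in detected_spots:
        union_mask = cv2.bitwise_or(union_mask, mask)
    spot_free = cv2.inpaint(image_bgr, union_mask, 5, cv2.INPAINT_TELEA)

    # 2) Category image: one gray value per spot category at each spot position.
    category_img = np.zeros(image_bgr.shape[:2], dtype=np.uint8)
    for mask, category_id in detected_spots:
        gray_value = 40 * category_id           # assumed category -> gray mapping
        category_img[mask > 0] = gray_value

    return spot_free, category_img
```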
  • FIG. 3 is a third schematic flowchart of the pigmentation prediction method provided in an embodiment of the present application. Referring to FIG. 3, before the image to be predicted is input into the pre-trained pigmentation prediction model for prediction processing, the method may also include:
  • S310: Acquire a target image to be processed, where the target image to be processed may include pigmentation spot information.
  • Optionally, the target image to be processed may be a sample image used for training the pigmentation prediction model; the sample image contains skin, and the skin includes pigmentation spot information.
  • Optionally, the target images to be processed may be a large number of pre-collected sample images, for example images of facial pigmentation spots downloaded from the network; this is not specifically limited here.
  • S320: Determine a plurality of target channel images respectively according to the target image to be processed.
  • The target channel images may include a multi-channel image with pigmentation spot information removed and a pigmentation spot category channel image.
  • Optionally, the target image to be processed can be processed separately to obtain the plurality of target channel images, where the multi-channel image with pigmentation spot information removed can be obtained by performing spot removal processing on the target image to be processed, in the same way as the image without pigmentation spots is obtained above.
  • Optionally, the pigmentation spot category channel image may be obtained by recognizing the spots in the target image to be processed, in the same way as the image in which pigmentation spot categories are marked is obtained above.
  • S330: Merge the plurality of target channel images and a target noise image, and input them together into a neural network structure to be trained to obtain the trained pigmentation prediction model.
  • The target noise image is a randomly generated noise image.
  • Optionally, these target channel images and a pre-generated target noise image can be combined and input together into the neural network structure to be trained for training.
  • After training, the above pigmentation prediction model can be obtained.
  • FIG. 4 is a fourth schematic flowchart of the pigmentation prediction method provided in an embodiment of the present application. Referring to FIG. 4, determining a plurality of target channel images according to the target image to be processed may include:
  • S410: Determine pigmentation spot information in the target image to be processed.
  • Optionally, the pigmentation spot information may be determined, specifically by means of the spot recognition described above.
  • S420: Perform spot removal processing on the target image to be processed according to the pigmentation spot information in the target image to be processed, to obtain a target image to be processed with the pigmentation spot information removed.
  • Optionally, spot removal processing may be performed according to the pigmentation spot information: all pigmentation spot information in the target image to be processed is removed, yielding a target image to be processed that no longer contains pigmentation spot information.
  • Optionally, channel processing can then be performed to separate the red, green, and blue color channels, yielding a red channel image with pigmentation spot information removed, a green channel image with pigmentation spot information removed, and a blue channel image with pigmentation spot information removed.
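  • For instance, the channel split could look like the short sketch below; the use of OpenCV (which stores images in BGR order) is an implementation assumption.

```python
import cv2

def split_rgb_channels(spot_free_bgr):
    """Split a spot-removed image into single-channel red, green and blue
    images (each H x W with values 0-255)."""
    blue, green, red = cv2.split(spot_free_bgr)   # OpenCV stores BGR order
    return red, green, blue
```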
  • Optionally, determining a plurality of target channel images respectively according to the target image to be processed may also include: performing spot detection processing on the target image to be processed to obtain the pigmentation spot category channel image.
  • Optionally, the target image to be processed can be subjected to spot detection processing to obtain the pigmentation spot category channel image; the specific process is as follows.
  • FIG. 5 is a fifth schematic flowchart of the pigmentation prediction method provided in an embodiment of the present application. Referring to FIG. 5, performing spot detection processing on the target image to be processed to obtain the pigmentation spot category channel image includes:
  • S510: Determine the position of each type of pigmentation spot information in the target image to be processed.
  • Optionally, the position of each type of spot in the target image to be processed may be determined by means of spot recognition.
  • S520: Set, at the position of the pigmentation spot information, grayscale information of the type corresponding to the pigmentation spot information, to obtain the pigmentation spot category channel image.
  • Optionally, the corresponding gray value can be set at the corresponding position.
  • Different gray values can be used to represent different types of spots.
  • The specific position and extent of a gray value can represent the position and size of the corresponding spot. By determining and setting the grayscale information corresponding to each piece of pigmentation spot information in the image, the above pigmentation spot category channel image can be obtained, in which different grayscales represent different spot types.
  • FIG. 6 is a sixth schematic flowchart of the pigmentation prediction method provided in an embodiment of the present application. Referring to FIG. 6, before the plurality of target channel images and the target noise image are merged and input together into the neural network structure to be trained to obtain the trained pigmentation prediction model, the method can include:
  • S610: Normalize the target channel images and the target noise image respectively to obtain a target input image.
  • Optionally, the above-mentioned plurality of target channel images and the target noise image are merged and input together into the neural network structure to be trained to obtain the trained pigmentation prediction model.
  • Optionally, the target channel images include the aforementioned red channel image with pigmentation spot information removed, green channel image with pigmentation spot information removed, blue channel image with pigmentation spot information removed, and pigmentation spot category channel image. These four images can be combined with the target noise image to obtain a five-channel image, which is input into the neural network structure to be trained to obtain the trained pigmentation prediction model.
  • Specifically, the red, green, and blue channel images with pigmentation spot information removed and the above-mentioned target noise image are normalized to the interval (-1, 1), and the pigmentation spot category channel image is normalized to the interval (0, 1):
  • Img_(-1,1) = (Img * 2 / 255) - 1, where Img is a single-channel image with values in the interval 0-255 and Img_(-1,1) is the corresponding image normalized to the interval (-1, 1);
  • ClsMask_(0,1) = ClsMask / 255, where ClsMask is a single-channel image with values in the interval 0-255 and ClsMask_(0,1) is the corresponding image normalized to the interval (0, 1).
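  • A minimal sketch of this normalization and five-channel stacking step is shown below; the image size, the use of NumPy, and the way the random noise image is drawn are illustrative assumptions, while the two normalization formulas follow the description above.

```python
import numpy as np

def build_target_input(red, green, blue, cls_mask, height=512, width=512):
    """Normalize the channel images plus a random noise image and stack them
    into the five-channel target input image."""
    def to_minus1_1(img):
        # Img_(-1,1) = (Img * 2 / 255) - 1
        return (img.astype(np.float32) * 2.0 / 255.0) - 1.0

    noise = np.random.randint(0, 256, (height, width), dtype=np.uint8)

    channels = [
        to_minus1_1(red),                      # spot-removed red channel
        to_minus1_1(green),                    # spot-removed green channel
        to_minus1_1(blue),                     # spot-removed blue channel
        cls_mask.astype(np.float32) / 255.0,   # ClsMask_(0,1) = ClsMask / 255
        to_minus1_1(noise),                    # random noise channel
    ]
    return np.stack(channels, axis=0)          # shape (5, H, W)
```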
  • S620: Input the target input image into the neural network structure to be trained to obtain the trained pigmentation prediction model.
  • Optionally, the target input image may be input into the above neural network structure to be trained to obtain the trained pigmentation prediction model.
  • Optionally, the model adopts an encoder-decoder structure, the upsampling in the decoding part uses a combination of nearest-neighbour upsampling and a convolution layer, and the activation function of the output layer is Tanh; the specific structural relationships can be shown in Table 1.
  • LeakyReLU is a conventional activation function in deep learning, and negative_slope is a configuration parameter of that activation function.
  • kh is the height of the convolution kernel, and kw is the width of the convolution kernel.
  • padding is the number of pixels by which the feature map is extended for the convolution operation, stride is the step size of the convolution, and group is the number of convolution kernel groups.
  • scale_factor and mode are the parameters of the upsampling layer; scale_factor = 2 means upsampling to twice the size.
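  • Table 1 itself is not reproduced here, so the PyTorch sketch below only illustrates the kind of blocks the description refers to: nearest-neighbour upsampling followed by a convolution in the decoder, LeakyReLU activations, and a Tanh output layer. The channel counts, kernel sizes, and number of layers are assumptions, not the values from Table 1.

```python
import torch.nn as nn

def upsample_block(in_ch, out_ch, negative_slope=0.2):
    """Decoder block: nearest-neighbour upsampling (scale_factor=2) + convolution."""
    return nn.Sequential(
        nn.Upsample(scale_factor=2, mode="nearest"),
        nn.Conv2d(in_ch, out_ch, kernel_size=3, stride=1, padding=1),
        nn.LeakyReLU(negative_slope),
    )

class SpotGenerator(nn.Module):
    """Illustrative encoder-decoder generator with a Tanh output layer."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(                      # five-channel input
            nn.Conv2d(5, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
        )
        self.decoder = nn.Sequential(
            upsample_block(128, 64),
            upsample_block(64, 32),
            nn.Conv2d(32, 3, kernel_size=3, padding=1),
            nn.Tanh(),                                     # output in (-1, 1)
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))
```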
  • Optionally, the model also includes a discriminator network part used to distinguish real images from generated images at different resolutions.
  • Optionally, discriminators at three scales may be used to distinguish images with resolutions of 512x512, 256x256, and 128x128 respectively; the images at the lower resolutions can be obtained by downsampling.
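  • One common way to realize such multi-scale discrimination, sketched below under the assumption of three structurally identical discriminators, is to feed the full-resolution image and its downsampled copies to separate discriminators:

```python
import torch.nn as nn
import torch.nn.functional as F

class MultiScaleDiscriminator(nn.Module):
    """Three discriminators for 512x512, 256x256 and 128x128 inputs; the
    lower-resolution inputs are obtained by downsampling the same image."""
    def __init__(self, make_discriminator):
        super().__init__()
        # make_discriminator() is assumed to return one single-scale discriminator.
        self.discriminators = nn.ModuleList(
            [make_discriminator() for _ in range(3)]
        )

    def forward(self, img_512):
        outputs = []
        x = img_512
        for d in self.discriminators:
            outputs.append(d(x))                    # real/fake score at this scale
            x = F.interpolate(x, scale_factor=0.5,
                              mode="bilinear", align_corners=False)
        return outputs                              # one output per scale
```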
  • Optionally, the training set can contain 20,000 samples, and various augmentations can be applied to each sample image, such as flipping, rotation, translation, affine transformation, exposure adjustment, contrast adjustment, and blurring, in order to improve robustness.
  • Optionally, the Adam algorithm is used as the optimization algorithm for network training; the learning rate of the generator network is 0.0002, and the learning rate of the discriminator network is 0.0001.
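  • The corresponding training configuration could be set up roughly as follows. The torchvision transforms used for augmentation are assumed stand-ins for the flips, rotations, translations, affine transforms, exposure/contrast adjustments, and blurring mentioned above, while the optimizer type and learning rates (Adam, 0.0002 for the generator and 0.0001 for the discriminator) come directly from the description.

```python
import torch
from torchvision import transforms

# Augmentations applied to each training sample to improve robustness.
augment = transforms.Compose([
    transforms.RandomHorizontalFlip(),
    transforms.RandomAffine(degrees=15, translate=(0.05, 0.05)),  # rotation/translation/affine
    transforms.ColorJitter(brightness=0.2, contrast=0.2),         # exposure/contrast adjustment
    transforms.GaussianBlur(kernel_size=3),                       # blur
])

def make_optimizers(generator, discriminator):
    """Adam optimizers with the learning rates given in the description."""
    opt_g = torch.optim.Adam(generator.parameters(), lr=0.0002)
    opt_d = torch.optim.Adam(discriminator.parameters(), lr=0.0001)
    return opt_g, opt_d
```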
  • Optionally, the loss function of the model is calculated as follows:
  • L = L1 + L2 + Lvgg + Ladv;
  • where Generate denotes the output of the network and GT is the target image to be generated.
  • L1 and L2 are the L1 and L2 loss functions, Lvgg is a perceptual loss function, and Ladv is the generative adversarial loss function.
  • The perceptual loss Lvgg refers to feeding the network output Generate and GT into another network, extracting the feature tensors of the corresponding layer, and computing the difference between those feature tensors; i denotes the i-th sample.
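  • A sketch of how the combined generator loss L = L1 + L2 + Lvgg + Ladv could be computed is given below. The helpers vgg_features (a pretrained feature extractor for the perceptual term) and adversarial_loss (the GAN loss on the discriminator outputs) are assumed to exist, and equal weighting of the four terms is an assumption.

```python
import torch.nn.functional as F

def generator_loss(generate, gt, disc_outputs, vgg_features, adversarial_loss):
    """Total generator loss L = L1 + L2 + Lvgg + Ladv.

    generate: generator output image; gt: target image to be generated;
    disc_outputs: discriminator scores for the generated image;
    vgg_features: assumed callable returning a feature tensor of some layer;
    adversarial_loss: assumed callable computing the adversarial term.
    """
    l1 = F.l1_loss(generate, gt)
    l2 = F.mse_loss(generate, gt)
    # Perceptual loss: difference between feature tensors of generate and GT.
    l_vgg = F.l1_loss(vgg_features(generate), vgg_features(gt))
    l_adv = adversarial_loss(disc_outputs)
    return l1 + l2 + l_vgg + l_adv
```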
  • FIG. 7 is a schematic structural diagram of the pigmentation prediction apparatus provided in an embodiment of the present application. Referring to FIG. 7, the apparatus includes an acquisition module 100, a prediction module 200, and an output module 300.
  • The acquisition module 100 may be configured to acquire an image to be predicted.
  • The prediction module 200 may be configured to input the image to be predicted into a pre-trained pigmentation prediction model for prediction processing, the pigmentation prediction model being a fully convolutional generative adversarial network model.
  • The output module 300 may be configured to obtain a pigmentation prediction result map through the pigmentation prediction model.
  • Optionally, the apparatus further includes a preprocessing module 400. The preprocessing module 400 can be configured to determine pigmentation spot information in the image to be predicted, where the pigmentation spot information includes the positions and categories of the spots, and to preprocess the image to be predicted according to that information to obtain preprocessed multi-frame images, where the multi-frame images respectively include an image without pigmentation spots and an image in which pigmentation spot categories are marked. The prediction module 200 can further be configured to input the preprocessed multi-frame images into the pre-trained pigmentation prediction model for prediction processing.
  • Optionally, the preprocessing module 400 may also be configured to acquire a target image to be processed that includes pigmentation spot information; determine a plurality of target channel images respectively according to the target image to be processed, where the target channel images include a multi-channel image with pigmentation spot information removed and a pigmentation spot category channel image; and merge the plurality of target channel images and a target noise image and input them together into the neural network structure to be trained to obtain the trained pigmentation prediction model, where the target noise image is a randomly generated noise image.
  • Optionally, the preprocessing module 400 may also be configured to determine pigmentation spot information in the target image to be processed; perform spot removal processing on the target image to be processed according to that information to obtain a target image to be processed with the pigmentation spot information removed; and perform channel processing on the spot-removed target image to obtain the multi-channel image with pigmentation spot information removed.
  • Optionally, the preprocessing module 400 may also be configured to perform spot detection processing on the target image to be processed to obtain the pigmentation spot category channel image.
  • Optionally, the preprocessing module 400 may be configured to determine the position of each type of pigmentation spot information in the target image to be processed, and to set, at the position of the pigmentation spot information, grayscale information of the type corresponding to that information, to obtain the pigmentation spot category channel image.
  • Optionally, the preprocessing module 400 can also be configured to normalize the target channel images and the target noise image respectively to obtain a target input image, and to input the target input image into the neural network structure to be trained to obtain the trained pigmentation prediction model.
  • Optionally, the above modules may be one or more integrated circuits configured to implement the above method, for example one or more application-specific integrated circuits (ASICs), one or more microprocessors, or one or more field-programmable gate arrays (FPGAs), among others.
  • Optionally, the processing element may be a general-purpose processor, such as a central processing unit (CPU) or another processor that can call program code.
  • Optionally, these modules can be integrated together and implemented in the form of a system-on-a-chip (SOC).
  • FIG. 8 is a schematic structural diagram of a computer device provided by an embodiment of the present application. Referring to FIG. 8, the computer device includes a memory 500 and a processor 600; the memory 500 stores a computer program that can run on the processor 600, and when the processor 600 executes the computer program, the steps of the above pigmentation prediction method are realized.
  • Some embodiments of the present application further provide a computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, the steps of the above pigmentation prediction method are realized.
  • the disclosed devices and methods may be implemented in other ways.
  • the device embodiments described above are only illustrative.
  • The division into units is only a division by logical function; in actual implementation there may be other division methods, for example multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • the mutual coupling or direct coupling or communication connection shown or discussed may be through some interfaces, and the indirect coupling or communication connection of devices or units may be in electrical, mechanical or other forms.
  • a unit described as a separate component may or may not be physically separated, and a component shown as a unit may or may not be a physical unit, that is, it may be located in one place, or may also be distributed to multiple network units. Part or all of the units can be selected according to actual needs to achieve the purpose of the solution of this embodiment.
  • each functional unit in each embodiment of the present application may be integrated into one processing unit, each unit may exist separately physically, or two or more units may be integrated into one unit.
  • the above-mentioned integrated units can be implemented in the form of hardware, or in the form of hardware plus software functional units.
  • the above-mentioned integrated units implemented in the form of software functional units may be stored in a computer-readable storage medium.
  • The above software functional units are stored in a storage medium and include several instructions to enable a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to execute some of the steps of the methods of the various embodiments of the present application.
  • The aforementioned storage media include media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
  • In summary, the present application provides a pigmentation prediction method, apparatus, device, and storage medium.
  • The method includes: obtaining an image to be predicted; inputting the image to be predicted into a pre-trained pigmentation prediction model for prediction processing, the pigmentation prediction model being a fully convolutional generative adversarial network model; and obtaining a pigmentation prediction result map through the pigmentation prediction model.
  • The present application can thus predict changes in the pigmentation spots on a user's facial skin.
  • The pigmentation prediction method, apparatus, device, and storage medium of the present application are reproducible and can be used in a variety of industrial applications, for example in fields requiring image recognition processing.

Abstract

The present application belongs to the technical field of image recognition processing. Provided are a pigmentation prediction method and apparatus, and a device and a storage medium. The method comprises: acquiring an image to be subjected to prediction; inputting said image into a pigmentation prediction model, obtained by means of pre-training, for prediction processing, wherein the pigmentation prediction model is a fully convolutional generative adversarial network model; and obtaining a pigmentation prediction result map by using the pigmentation prediction model. By means of the present application, a pigmentation change situation of facial skin of a user can be predicted.

Description

A pigmentation prediction method, apparatus, device, and storage medium
Cross-Reference to Related Applications
This application claims priority to the Chinese patent application No. 2021107071008, titled "A pigmentation prediction method, apparatus, device, and storage medium" and filed with the China Patent Office on June 24, 2021, the entire contents of which are incorporated herein by reference.
Technical Field
The present application relates to the technical field of image recognition and generation, and in particular to a pigmentation prediction method, apparatus, device, and storage medium.
Background
As people age, facial skin develops problems such as aging and disease. To prevent such problems, it is usually necessary to predict future changes in the face so that preventive intervention can be carried out early.
Prediction of skin changes in the prior art usually covers the degree of skin laxity and wrinkling, and cannot predict future changes in pigmentation spots. There is therefore an urgent need for a method that can predict future changes in pigmentation spots, so that users can learn in time how their skin pigmentation will change over a coming period.
Summary of the Invention
The present application provides a pigmentation prediction method, apparatus, device, and storage medium, which can predict changes in the pigmentation spots on a user's facial skin.
Some embodiments of the present application provide a pigmentation prediction method, which may include:
acquiring an image to be predicted;
inputting the image to be predicted into a pre-trained pigmentation prediction model for prediction processing, where the pigmentation prediction model is a fully convolutional generative adversarial network model; and
obtaining a pigmentation prediction result map through the pigmentation prediction model.
Optionally, before inputting the image to be predicted into the pre-trained pigmentation prediction model for prediction processing, the method may also include:
determining pigmentation spot information in the image to be predicted, where the pigmentation spot information may include the positions and categories of the spots; and
preprocessing the image to be predicted according to the pigmentation spot information in the image to be predicted to obtain preprocessed multi-frame images, where the multi-frame images may respectively include an image without pigmentation spots and an image in which pigmentation spot categories are marked.
Inputting the image to be predicted into the pre-trained pigmentation prediction model for prediction processing may include:
inputting the preprocessed multi-frame images into the pre-trained pigmentation prediction model for prediction processing.
Optionally, before inputting the image to be predicted into the pre-trained pigmentation prediction model for prediction processing, the method may also include:
acquiring a target image to be processed, where the target image to be processed may include pigmentation spot information;
determining a plurality of target channel images respectively according to the target image to be processed, where the target channel images may include a multi-channel image with pigmentation spot information removed and a pigmentation spot category channel image; and
merging the plurality of target channel images and a target noise image and inputting them together into a neural network structure to be trained to obtain the trained pigmentation prediction model, where the target noise image is a randomly generated noise image.
Optionally, determining a plurality of target channel images respectively according to the target image to be processed may include:
determining pigmentation spot information in the target image to be processed; and
performing spot removal processing on the target image to be processed according to the pigmentation spot information in the target image to be processed, to obtain a target image to be processed with the pigmentation spot information removed.
Optionally, determining a plurality of target channel images respectively according to the target image to be processed may include:
performing spot detection processing on the target image to be processed to obtain the pigmentation spot category channel image.
Optionally, performing spot detection processing on the target image to be processed to obtain the pigmentation spot category channel image may include:
determining the position of each type of pigmentation spot information in the target image to be processed; and
setting, at the position of the pigmentation spot information, grayscale information of the type corresponding to the pigmentation spot information, to obtain the pigmentation spot category channel image.
Optionally, before merging the plurality of target channel images and the target noise image and inputting them together into the neural network structure to be trained to obtain the trained pigmentation prediction model, the method may include:
normalizing the target channel images and the target noise image respectively to obtain a target input image.
Merging the plurality of target channel images and the target noise image and inputting them together into the neural network structure to be trained to obtain the trained pigmentation prediction model may then include:
inputting the target input image into the neural network structure to be trained to obtain the trained pigmentation prediction model.
Another embodiment of the present application provides a pigmentation prediction apparatus, which may include an acquisition module, a prediction module, and an output module.
The acquisition module may be configured to acquire an image to be predicted.
The prediction module may be configured to input the image to be predicted into a pre-trained pigmentation prediction model for prediction processing, where the pigmentation prediction model is a fully convolutional generative adversarial network model.
The output module may be configured to obtain a pigmentation prediction result map through the pigmentation prediction model.
Optionally, the apparatus may further include a preprocessing module. The preprocessing module may be configured to determine pigmentation spot information in the image to be predicted, where the pigmentation spot information may include the positions and categories of the spots, and to preprocess the image to be predicted according to the pigmentation spot information to obtain preprocessed multi-frame images, where the multi-frame images may respectively include an image without pigmentation spots and an image in which pigmentation spot categories are marked. The prediction module may further be configured to input the preprocessed multi-frame images into the pre-trained pigmentation prediction model for prediction processing.
Optionally, the preprocessing module may also be configured to acquire a target image to be processed, which may include pigmentation spot information; determine a plurality of target channel images respectively according to the target image to be processed, where the target channel images may include a multi-channel image with pigmentation spot information removed and a pigmentation spot category channel image; and merge the plurality of target channel images and a target noise image and input them together into a neural network structure to be trained to obtain the trained pigmentation prediction model, where the target noise image is a randomly generated noise image.
Optionally, the preprocessing module may also be configured to determine pigmentation spot information in the target image to be processed, and to perform spot removal processing on the target image to be processed according to that information, to obtain a target image to be processed with the pigmentation spot information removed.
Optionally, the preprocessing module may also be configured to perform spot detection processing on the target image to be processed to obtain the pigmentation spot category channel image.
Optionally, the preprocessing module may be configured to determine the position of each type of pigmentation spot information in the target image to be processed, and to set, at the position of the pigmentation spot information, grayscale information of the type corresponding to that information, to obtain the pigmentation spot category channel image.
Optionally, the preprocessing module may also be configured to normalize the target channel images and the target noise image respectively to obtain a target input image, and to input the target input image into the neural network structure to be trained to obtain the trained pigmentation prediction model.
Some other embodiments of the present application provide a computer device, which may include a memory and a processor; the memory stores a computer program that can run on the processor, and when the processor executes the computer program, the steps of the above pigmentation prediction method are realized.
Still other embodiments of the present application provide a computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, the steps of the above pigmentation prediction method are realized.
The beneficial effects of the embodiments of the present application include at least the following:
In the pigmentation prediction method, apparatus, device, and storage medium provided by the embodiments of the present application, an image to be predicted can be acquired; the image to be predicted can be input into a pre-trained pigmentation prediction model for prediction processing, the pigmentation prediction model being a fully convolutional generative adversarial network model; and a pigmentation prediction result map can be obtained through the pigmentation prediction model. By using a fully convolutional generative adversarial network structure as the pigmentation prediction model, future changes in skin pigmentation spots can be predicted, so that users can learn about the change trend of their skin pigmentation in time.
Brief Description of the Drawings
In order to explain the technical solutions of the embodiments of the present application more clearly, the drawings needed in the embodiments are briefly introduced below. It should be understood that the following drawings only show some embodiments of the present application and should therefore not be regarded as limiting the scope; those of ordinary skill in the art can obtain other related drawings from these drawings without creative effort.
FIG. 1 is a first schematic flowchart of the pigmentation prediction method provided in an embodiment of the present application;
FIG. 2 is a second schematic flowchart of the pigmentation prediction method provided in an embodiment of the present application;
FIG. 3 is a third schematic flowchart of the pigmentation prediction method provided in an embodiment of the present application;
FIG. 4 is a fourth schematic flowchart of the pigmentation prediction method provided in an embodiment of the present application;
FIG. 5 is a fifth schematic flowchart of the pigmentation prediction method provided in an embodiment of the present application;
FIG. 6 is a sixth schematic flowchart of the pigmentation prediction method provided in an embodiment of the present application;
FIG. 7 is a schematic structural diagram of the pigmentation prediction apparatus provided in an embodiment of the present application;
FIG. 8 is a schematic structural diagram of a computer device provided by an embodiment of the present application.
Detailed Description
In order to make the purposes, technical solutions, and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application are described clearly and completely below in conjunction with the drawings of the embodiments. Obviously, the described embodiments are only some of the embodiments of the present application, not all of them. The components of the embodiments of the present application generally described and illustrated in the figures herein may be arranged and designed in a variety of different configurations.
Therefore, the following detailed description of the embodiments of the present application provided in the accompanying drawings is not intended to limit the scope of the claimed application, but merely represents selected embodiments of the application. Based on the embodiments in the present application, all other embodiments obtained by persons of ordinary skill in the art without creative effort fall within the protection scope of the present application.
It should be noted that similar reference numerals and letters denote similar items in the following figures; therefore, once an item is defined in one figure, it does not require further definition and explanation in subsequent figures.
In the description of the present application, it should be noted that the terms "first", "second", "third", and so on are only used to distinguish between descriptions and should not be understood as indicating or implying relative importance.
It should be noted that the prior art does not provide a method for predicting changes in the pigmentation of human facial skin. The embodiments of the present application can predict how the pigmentation of human facial skin will change over a future period of time, so that users can learn in time about the pigmentation changes of their own faces.
The specific implementation process of the pigmentation prediction method provided in the embodiments of the present application is explained in detail below.
图1为本申请实施例提供的色斑预测方法的流程示意图一,请参照图1,该方法可以包括:Fig. 1 is a schematic flow chart of the stain prediction method provided in the embodiment of the present application. Please refer to Fig. 1, the method may include:
S110:获取待预测图像。S110: Acquire an image to be predicted.
可选地,待预测图像可以是用户的照片、人脸的图像等任意一种需要进行色斑预测的皮肤对应的图片,该图片可以是进行剪裁之后,大小为预设尺寸的图像。Optionally, the image to be predicted may be a photo of the user, an image of a human face, or any image corresponding to the skin that needs to be predicted for color spots, and the image may be a cropped image with a preset size.
可选地,该方法的执行主体可以是计算机设备上的相关程序,例如:皮肤预测仪的预设程序、电子洗脸仪的某一功能等,在此不作具体限制,可以根据实际需求进行设置。Optionally, the execution subject of this method may be a related program on the computer device, for example: a preset program of the skin predictor, a certain function of the electronic facial cleanser, etc., which are not specifically limited here and can be set according to actual needs.
可选地,待预测图像可以是其他设备发送给计算机设备的,或者也可以是计算机设备通过拍摄装置等拍摄获取的,在此也不作具体限制。Optionally, the image to be predicted may be sent to the computer device by other devices, or may also be captured by the computer device through a shooting device, etc., and no specific limitation is set here.
S120: Input the image to be predicted into a pre-trained color spot prediction model for prediction processing.
The color spot prediction model is a fully convolutional generative adversarial network model.
Optionally, after the image to be predicted is determined, it can be input into the pre-trained color spot prediction model for prediction processing, where the color spot prediction model may be a fully convolutional generative adversarial network model obtained through pre-training.
The color spot prediction model may be pre-trained by the computer device itself, or may be sent to the computer device by another electronic device, which is not limited here.
Optionally, the fully convolutional generative adversarial network model may be a generative adversarial network composed of multiple convolutional neural networks, where a generative adversarial network is a network model in which a generative model and a discriminative model learn through mutual competition to produce sufficiently good outputs.
S130: Obtain a color spot prediction result map through the color spot prediction model.
Optionally, after prediction processing is performed by the color spot prediction model, a color spot prediction result map can be obtained, where the color spot prediction result map can show how the color spots on the facial skin in the image to be predicted will have changed after a certain period of time in the future; the specific time can be set according to actual needs and is not limited here.
Optionally, there may be multiple color spot prediction result maps, each representing the change of the color spots on the facial skin in the image to be predicted after a different period of time.
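A minimal inference sketch is given below. The passage does not specify how result maps for different future times are produced, so the sketch simply assumes one trained generator per time horizon; the TorchScript loading and tensor layout are likewise illustrative assumptions rather than the disclosed implementation.

import torch

def predict_spot_maps(model_paths, input_tensor):
    """Run one pre-trained spot-prediction generator per future time horizon.

    model_paths: dict mapping a horizon label (e.g. "6_months") to a TorchScript file path.
    input_tensor: preprocessed input of shape (1, C, H, W).
    Returns a dict of horizon label -> predicted color spot result map tensor.
    """
    results = {}
    with torch.no_grad():
        for horizon, path in model_paths.items():
            generator = torch.jit.load(path).eval()  # assumed TorchScript export of the model
            results[horizon] = generator(input_tensor)
    return results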
In the color spot prediction method provided in the embodiments of the present application, an image to be predicted can be acquired; the image to be predicted is input into a pre-trained color spot prediction model for prediction processing, where the color spot prediction model is a fully convolutional generative adversarial network model; and a color spot prediction result map is obtained through the color spot prediction model. By using a fully convolutional generative adversarial network structure as the color spot prediction module, future changes of skin color spots can be predicted, so that users can learn about the change trend of their skin color spots in a timely manner.
Another specific implementation process of the color spot prediction method provided in the embodiments of the present application is explained in detail below.
Fig. 2 is a second schematic flowchart of the color spot prediction method provided in an embodiment of the present application. Referring to Fig. 2, before the image to be predicted is input into the pre-trained color spot prediction model for prediction processing, the method further includes:
S210: Determine color spot information in the image to be predicted.
The color spot information may include the positions and categories of the color spots.
Optionally, the color spot information may be the information of the color spots on the skin in the image to be predicted; for example, it may include the position and category of each color spot on the skin in the image to be predicted, where the position of each color spot can be recorded in the form of a coordinate range, and the category of each color spot can be recorded in the form of an identifier.
Optionally, a preset color spot recognition algorithm may be used to acquire and determine the color spot information in the image to be predicted.
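Since each spot is recorded as a coordinate range plus a category identifier, one possible in-memory representation is sketched below; the field names and the detector stub are assumptions for illustration, not the preset recognition algorithm itself.

from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class SpotInfo:
    """One detected color spot: a coordinate range plus a category identifier."""
    bbox: Tuple[int, int, int, int]   # (x_min, y_min, x_max, y_max) coordinate range
    category: int                     # category identifier; the label scheme is illustrative

def detect_spots(image) -> List[SpotInfo]:
    """Placeholder for the preset color spot recognition algorithm mentioned above.

    The disclosure does not specify the algorithm, so any detector that outputs
    per-spot coordinate ranges and category identifiers could be plugged in here.
    """
    raise NotImplementedError("use the preset color spot recognition algorithm")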
S220: Preprocess the image to be predicted according to the color spot information in the image to be predicted, to obtain multiple preprocessed frames of images.
The multiple frames of images may respectively include an image without color spots and an image in which the color spot categories are marked.
Optionally, the preprocessing of the image to be predicted may include color spot removal processing and color spot determination processing. The image without color spots, that is, the image to be predicted with the color spot information removed, can be obtained through the color spot removal processing; the image in which the color spot categories are marked can be obtained through the color spot determination processing, where different grayscale values in that image can represent different color spot categories.
Inputting the image to be predicted into the pre-trained color spot prediction model for prediction processing may include:
S230: Input the multiple preprocessed frames of images into the pre-trained color spot prediction model for prediction processing.
Optionally, after the multiple preprocessed frames of images are respectively determined, these images can be merged and input together into the pre-trained color spot prediction model for prediction, so as to obtain the corresponding prediction result.
Yet another specific implementation process of the color spot prediction method provided in the embodiments of the present application is explained in detail below.
Fig. 3 is a third schematic flowchart of the color spot prediction method provided in an embodiment of the present application. Referring to Fig. 3, before the image to be predicted is input into the pre-trained color spot prediction model for prediction processing, the method may further include:
S310: Acquire a target image to be processed.
The target image to be processed may include color spot information.
Optionally, the target image to be processed may be a sample image used for training the color spot prediction model; the sample image contains skin, and the skin includes color spot information.
Optionally, the target images to be processed may be a large number of sample images collected in advance, for example, images of facial color spots downloaded from the network, which are not specifically limited here.
S320: Determine a plurality of target channel images respectively according to the target image to be processed.
The target channel images may include a multi-channel image with the color spot information removed and a color spot category channel image.
Optionally, the target image to be processed can be processed separately to obtain a plurality of target channel images, where the multi-channel image with the color spot information removed can be obtained by performing color spot removal processing on the target image to be processed, in the same way as the aforementioned acquisition of the image without color spots; the color spot category channel image can be obtained by performing color spot recognition on the target image to be processed, in the same way as the aforementioned acquisition of the image in which the color spot categories are marked.
S330: Merge the plurality of target channel images and a target noise image, and input them together into a neural network structure to be trained to obtain a trained color spot prediction model.
The target noise image is a randomly generated noise image.
Optionally, after the above plurality of target channel images are acquired, these target channel images and a pre-generated target noise image can be merged and input together into the neural network structure to be trained for training; after the training is completed, the above color spot prediction model can be obtained.
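As a rough illustration (not the disclosed training code), the channel images and a randomly generated noise image might be stacked into a single multi-channel array before being fed to the network; the uniform noise distribution and the NumPy-based stacking are assumptions.

import numpy as np

def build_training_input(spot_free_rgb, spot_class_mask, rng=None):
    """Stack the spot-removed R/G/B channels, the spot category channel and a randomly
    generated noise image into one multi-channel training input of shape (H, W, 5)."""
    rng = rng or np.random.default_rng()
    h, w = spot_class_mask.shape
    noise = rng.uniform(0, 255, size=(h, w)).astype(np.float32)  # randomly generated noise image
    channels = [spot_free_rgb[..., 0], spot_free_rgb[..., 1], spot_free_rgb[..., 2],
                spot_class_mask, noise]
    return np.stack(channels, axis=-1)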
Still another specific implementation process of the color spot prediction method provided in the embodiments of the present application is explained in detail below.
Fig. 4 is a fourth schematic flowchart of the color spot prediction method provided in an embodiment of the present application. Referring to Fig. 4, determining a plurality of target channel images respectively according to the target image to be processed may include:
S410: Determine color spot information in the target image to be processed.
Optionally, after the target image to be processed is determined, the above color spot information can be determined; specifically, the color spot information can be determined by means of the aforementioned color spot recognition.
S420: Perform color spot removal processing on the target image to be processed according to the color spot information in the target image to be processed, to obtain a target image to be processed with the color spot information removed.
Optionally, after the above color spot information is obtained, color spot removal processing can be performed according to the color spot information: all color spot information in the target image to be processed is removed, yielding a target image to be processed that no longer contains color spot information.
Optionally, after the target image to be processed with the color spot information removed is obtained, channel processing can be performed on it to obtain the three color channels of red, green, and blue, that is, a red channel image with the color spot information removed, a green channel image with the color spot information removed, and a blue channel image with the color spot information removed.
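A minimal sketch of this channel processing step follows; the OpenCV-based splitting is an assumption, and any equivalent channel separation would do.

import cv2

def split_spot_free_channels(spot_free_bgr):
    """Split a spot-removed image into its red, green and blue channel images."""
    b, g, r = cv2.split(spot_free_bgr)  # OpenCV stores images in B, G, R channel order
    return r, g, b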
Optionally, determining a plurality of target channel images respectively according to the target image to be processed may include:
performing color spot detection processing on the target image to be processed to obtain the color spot category channel image.
Optionally, after the above target image to be processed is obtained, color spot detection processing can be performed on the image to further obtain the color spot category channel image. The specific process is as follows:
Fig. 5 is a fifth schematic flowchart of the color spot prediction method provided in an embodiment of the present application. Referring to Fig. 5, performing color spot detection processing on the target image to be processed to obtain the color spot category channel image includes:
S510: Determine the position of each type of color spot information in the target image to be processed respectively.
Optionally, the position of each type of color spot in the target image to be processed can be determined by means of color spot recognition.
S520: Set grayscale information of the type corresponding to the color spot information at the position of the color spot information, to obtain the color spot category channel image.
Optionally, after the position of each type of color spot is determined, a corresponding grayscale value can be set at the corresponding position. Different grayscale values can be used to represent different types of color spots, and the specific position and extent covered by a grayscale value represent the position and size of the corresponding color spot. After the grayscale information corresponding to each piece of color spot information in the image has been determined and set, the above color spot category channel image can be obtained, namely a channel image in which different grayscale values represent different color spot types.
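The following sketch shows one way such a category channel image could be built; the specific gray values assigned to each category are assumptions, since the embodiment does not fix them.

import numpy as np

# Illustrative mapping from spot category identifier to the gray value that marks it.
CATEGORY_TO_GRAY = {1: 85, 2: 170, 3: 255}

def build_spot_class_channel(image_shape, spots):
    """Build the color spot category channel image: each spot's coordinate range is
    filled with the gray value assigned to its category."""
    mask = np.zeros(image_shape[:2], dtype=np.uint8)
    for (x0, y0, x1, y1), category in spots:  # spots: iterable of (bbox, category) pairs
        mask[y0:y1, x0:x1] = CATEGORY_TO_GRAY[category]
    return mask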
A further specific implementation process of the color spot prediction method provided in the embodiments of the present application is explained in detail below.
Fig. 6 is a sixth schematic flowchart of the color spot prediction method provided in an embodiment of the present application. Referring to Fig. 6, before the plurality of target channel images and the target noise image are merged and input together into the neural network structure to be trained to obtain the trained color spot prediction model, the method may include:
S610: Normalize the target channel images and the target noise image respectively to obtain a target input image.
Optionally, when the plurality of target channel images and the target noise image are merged and input together into the neural network structure to be trained to obtain the trained color spot prediction model, the target channel images include the aforementioned red channel image with the color spot information removed, green channel image with the color spot information removed, blue channel image with the color spot information removed, and color spot category channel image; these four types of images can be merged with the target noise image to obtain a five-channel image, which is input as a whole into the neural network structure to be trained to obtain the trained color spot prediction model.
Optionally, during normalization, the red channel image with the color spot information removed, the green channel image with the color spot information removed, the blue channel image with the color spot information removed, and the above target noise image can be normalized to the interval (-1, 1), and the color spot category channel image can be normalized to the interval (0, 1). The specific calculation formulas are as follows:
Img_(-1,1) = (Img × 2 / 255) - 1;
where Img is a single-channel image with values in the range 0-255, and Img_(-1,1) is the normalized image in the interval (-1, 1).
ClsMask_(0,1) = ClsMask / 255;
where ClsMask is a single-channel image with values in the range 0-255, and ClsMask_(0,1) is the normalized image in the interval (0, 1).
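A minimal sketch implementing these two formulas is given below; the NumPy representation is an assumption.

import numpy as np

def normalize_channels(r, g, b, noise, cls_mask):
    """Normalize the 0-255 single-channel images as described above:
    R/G/B and the noise image to (-1, 1), the spot category channel to (0, 1)."""
    to_signed = lambda img: (img.astype(np.float32) * 2 / 255) - 1   # Img_(-1,1) = (Img*2/255) - 1
    cls_norm = cls_mask.astype(np.float32) / 255                     # ClsMask_(0,1) = ClsMask/255
    return [to_signed(c) for c in (r, g, b, noise)] + [cls_norm]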
Merging the plurality of target channel images and the target noise image and inputting them together into the neural network structure to be trained to obtain the trained color spot prediction model may include:
S620: Input the target input image into the neural network structure to be trained to obtain the trained color spot prediction model.
Optionally, after the above target input image is obtained, the target input image can be input into the above neural network structure to obtain the trained color spot prediction model.
The specific structure of the color spot prediction model adopted in the embodiments of the present application is explained in detail below:
The model adopts an encoder-decoder structure. Upsampling in the decoder part uses a combination of nearest-neighbor upsampling and convolutional layers, and the activation function of the output layer is Tanh. The specific structural relationship may be as shown in Table 1.
Table 1
[Table 1 is provided as images in the original publication (Figure PCTCN2021132553-appb-000001 and Figure PCTCN2021132553-appb-000002); it lists the layer-by-layer configuration of the encoder-decoder generator network.]
Here, Leakyrelu is one of the conventional activation functions in deep learning, and negativeslope is a configuration parameter of that activation function; kh is the height of the convolution kernel, kw is the width of the convolution kernel, padding is the number of pixels by which the feature map is extended for the convolution operation, stride is the stride of the convolution, and group is the number of convolution kernel groups; scale_factor and Mode are parameters of the upsampling layer, where scale_factor indicates upsampling to twice the size, and Mode=nearest indicates nearest-neighbor upsampling.
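The following sketch shows a decoder block in the style described (nearest-neighbor upsampling followed by a convolution and a LeakyReLU activation) together with a Tanh output layer. The channel counts and kernel sizes are assumptions, since the exact layer table (Table 1) is only reproduced as an image.

import torch.nn as nn

class UpsampleBlock(nn.Module):
    """Decoder block: nearest-neighbour upsampling (scale_factor=2, mode='nearest')
    followed by a convolution and a LeakyReLU activation."""
    def __init__(self, in_ch, out_ch, negative_slope=0.2):
        super().__init__()
        self.block = nn.Sequential(
            nn.Upsample(scale_factor=2, mode="nearest"),
            nn.Conv2d(in_ch, out_ch, kernel_size=3, stride=1, padding=1),
            nn.LeakyReLU(negative_slope),
        )

    def forward(self, x):
        return self.block(x)

# The generator's output layer uses a Tanh activation, for example:
output_layer = nn.Sequential(nn.Conv2d(32, 3, kernel_size=3, padding=1), nn.Tanh())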
Optionally, the model also includes a discriminator network part, which discriminates real and fake images at different resolutions respectively. In the embodiments of the present application, discriminators at three scales may be used, discriminating images at resolutions of 512x512, 256x256, and 128x128, respectively. The images at the different resolutions can be obtained by downsampling.
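A minimal sketch of producing the three discriminator resolutions by downsampling is shown below; the bilinear interpolation mode is an assumption (average pooling would be another common choice).

import torch.nn.functional as F

def discriminator_inputs(image_512):
    """Produce the three resolutions judged by the three-scale discriminators
    (512x512, 256x256 and 128x128) by successively downsampling the full-resolution image."""
    image_256 = F.interpolate(image_512, scale_factor=0.5, mode="bilinear", align_corners=False)
    image_128 = F.interpolate(image_256, scale_factor=0.5, mode="bilinear", align_corners=False)
    return image_512, image_256, image_128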
Optionally, 20,000 samples may be used in the training process of the model, and a variety of augmentations can be applied to the image of each sample, such as flipping, rotation, translation, affine transformation, exposure adjustment, contrast adjustment, and blurring, so as to improve robustness.
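One possible augmentation pipeline covering these operations is sketched below using torchvision; the specific parameter values are assumptions, as the embodiment does not state them.

from torchvision import transforms

# Flip, rotation, translation and scaling via an affine transform, exposure/contrast
# adjustment, and blurring; parameter ranges are illustrative only.
augment = transforms.Compose([
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.RandomAffine(degrees=15, translate=(0.05, 0.05), scale=(0.9, 1.1)),
    transforms.ColorJitter(brightness=0.2, contrast=0.2),
    transforms.GaussianBlur(kernel_size=5, sigma=(0.1, 1.5)),
])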
Optionally, the optimization algorithm used for network training is the Adam algorithm; the learning rate of the generator network is 0.0002, and the learning rate of the discriminator network is 0.0001.
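A sketch of the corresponding optimizer setup follows; the beta values are an assumption, since only the learning rates are stated.

import torch

def make_optimizers(generator: torch.nn.Module, discriminator: torch.nn.Module):
    """Adam optimizers with the stated learning rates: 0.0002 for the generator
    and 0.0001 for the discriminator."""
    g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4, betas=(0.5, 0.999))
    d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-4, betas=(0.5, 0.999))
    return g_opt, d_opt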
Optionally, the loss function of the model is specifically calculated as follows:
L = L1 + L2 + Lvgg + Ladv;
L1 = |generate - GT|;
L2 = ||generate - GT||;
Lvgg = Σ_i ||Lperceptual(generate) - Lperceptual(GT)||;
Here, generate represents the output of the network, and GT is the target image to be generated (the ground truth). L1 and L2 are both loss functions, Lvgg is the perceptual loss function, and Ladv is the generative adversarial loss function. Lperceptual denotes the perceptual loss, which refers to feeding both the network output (the generated image) and GT into another network, extracting the feature tensors of the corresponding layers, and computing the difference between those feature tensors; i denotes the i-th sample.
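A sketch of computing this combined loss for the generator is given below. The equal weighting of the terms, the use of mean squared error for the L2 term, and the LSGAN-style form of the adversarial term are assumptions; feature_net stands for the "other network" used for the perceptual loss and is not specified in the disclosure.

import torch
import torch.nn.functional as F

def generator_loss(generate, gt, feature_net, d_fake_scores):
    """Combined loss L = L1 + L2 + Lvgg + Ladv as described above.

    feature_net maps an image to a list of feature tensors from selected layers;
    d_fake_scores are the discriminator outputs for the generated image at each scale.
    """
    l1 = F.l1_loss(generate, gt)                     # L1 = |generate - GT|
    l2 = F.mse_loss(generate, gt)                    # L2 term, here taken as mean squared error
    lvgg = sum(F.l1_loss(f_gen, f_gt)                # perceptual loss over matched feature layers
               for f_gen, f_gt in zip(feature_net(generate), feature_net(gt)))
    ladv = sum((score - 1).pow(2).mean()             # adversarial term, LSGAN-style (assumed form)
               for score in d_fake_scores)
    return l1 + l2 + lvgg + ladv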
The apparatus, device, and storage medium corresponding to the color spot prediction method provided by the present application are described below; their specific implementation processes and technical effects are the same as those described above and will not be repeated below.
Fig. 7 is a schematic structural diagram of the color spot prediction apparatus provided in an embodiment of the present application. Referring to Fig. 7, the apparatus includes an acquisition module 100, a prediction module 200, and an output module 300.
The acquisition module 100 may be configured to acquire an image to be predicted.
The prediction module 200 may be configured to input the image to be predicted into a pre-trained color spot prediction model for prediction processing, where the color spot prediction model is a fully convolutional generative adversarial network model.
The output module 300 may be configured to obtain a color spot prediction result map through the color spot prediction model.
Optionally, the apparatus further includes a preprocessing module 400. The preprocessing module 400 may be configured to determine color spot information in the image to be predicted, where the color spot information includes the positions and categories of the color spots, and to preprocess the image to be predicted according to the color spot information in the image to be predicted, to obtain multiple preprocessed frames of images, where the multiple frames of images respectively include an image without color spots and an image in which the color spot categories are marked. A further prediction module 200 may be configured to input the multiple preprocessed frames of images into the pre-trained color spot prediction model for prediction processing.
Optionally, the preprocessing module 400 may also be configured to acquire a target image to be processed, where the target image to be processed includes color spot information; determine a plurality of target channel images respectively according to the target image to be processed, where the target channel images include a multi-channel image with the color spot information removed and a color spot category channel image; and merge the plurality of target channel images and a target noise image and input them together into a neural network structure to be trained to obtain a trained color spot prediction model, where the target noise image is a randomly generated noise image.
Optionally, the preprocessing module 400 may also be configured to determine the color spot information in the target image to be processed; perform color spot removal processing on the target image to be processed according to the color spot information in the target image to be processed, to obtain a target image to be processed with the color spot information removed; and perform channel processing on the target image to be processed with the color spot information removed, to obtain the multi-channel image with the color spot information removed.
Optionally, the preprocessing module 400 may also be configured to perform color spot detection processing on the target image to be processed, to obtain the color spot category channel image.
Optionally, the preprocessing module 400 may be configured to determine the position of each type of color spot information in the target image to be processed respectively, and to set grayscale information of the type corresponding to the color spot information at the position of the color spot information, to obtain the color spot category channel image.
Optionally, the preprocessing module 400 may also be configured to normalize the target channel images and the target noise image respectively to obtain a target input image, and to input the target input image into the neural network structure to be trained to obtain the trained color spot prediction model.
The above apparatus is used to perform the methods provided in the foregoing embodiments; its implementation principles and technical effects are similar and will not be repeated here.
The above modules may be one or more integrated circuits configured to implement the above methods, for example, one or more application-specific integrated circuits (ASICs), one or more microprocessors, or one or more field-programmable gate arrays (FPGAs), and the like. For another example, when one of the above modules is implemented in the form of a processing element scheduling program code, the processing element may be a general-purpose processor, such as a central processing unit (CPU) or another processor capable of calling program code. For another example, these modules may be integrated together and implemented in the form of a system-on-a-chip (SOC).
Fig. 8 is a schematic structural diagram of the computer device provided in an embodiment of the present application. Referring to Fig. 8, the computer device includes a memory 500 and a processor 600; the memory 500 stores a computer program that can run on the processor 600, and when the processor 600 executes the computer program, the steps of the above color spot prediction method are implemented.
Some embodiments of the present application also provide a computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, the steps of the above color spot prediction method are implemented.
In the several embodiments provided in this application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative: the division of units is only a division by logical function, and there may be other ways of division in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the mutual coupling, direct coupling, or communication connection shown or discussed may be indirect coupling or communication connection through some interfaces, apparatuses, or units, and may be electrical, mechanical, or in other forms.
Units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solutions of the embodiments.
In addition, the functional units in the embodiments of the present application may be integrated into one processing unit, each unit may exist physically on its own, or two or more units may be integrated into one unit. The above integrated unit may be implemented in the form of hardware or in the form of hardware plus software functional units.
The above integrated unit implemented in the form of a software functional unit may be stored in a computer-readable storage medium. The above software functional unit is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to perform part of the steps of the methods of the various embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
The above are only specific implementations of the present application, but the protection scope of the present application is not limited thereto. Any changes or substitutions that can readily be conceived by a person skilled in the art within the technical scope disclosed in the present application shall be covered by the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.
The above descriptions are only preferred embodiments of the present application and are not intended to limit the present application; for those skilled in the art, the present application may have various modifications and changes. Any modification, equivalent replacement, improvement, and the like made within the spirit and principles of the present application shall be included within the protection scope of the present application.
Industrial Applicability
The present application provides a color spot prediction method, apparatus, device, and storage medium. The method includes: acquiring an image to be predicted; inputting the image to be predicted into a pre-trained color spot prediction model for prediction processing, where the color spot prediction model is a fully convolutional generative adversarial network model; and obtaining a color spot prediction result map through the color spot prediction model. The present application can predict changes of the color spots on a user's facial skin.
In addition, it can be understood that the color spot prediction method, apparatus, device, and storage medium of the present application are reproducible and can be used in a variety of industrial applications. For example, the color spot prediction method, apparatus, device, and storage medium of the present application can be used in fields requiring image recognition processing.

Claims (16)

  1. A color spot prediction method, characterized in that the method comprises:
    acquiring an image to be predicted;
    inputting the image to be predicted into a pre-trained color spot prediction model for prediction processing, wherein the color spot prediction model is a fully convolutional generative adversarial network model; and
    obtaining a color spot prediction result map through the color spot prediction model.
  2. The method according to claim 1, characterized in that, before the inputting the image to be predicted into the pre-trained color spot prediction model for prediction processing, the method further comprises:
    determining color spot information in the image to be predicted, wherein the color spot information comprises positions and categories of color spots; and
    preprocessing the image to be predicted according to the color spot information in the image to be predicted, to obtain multiple preprocessed frames of images, wherein the multiple frames of images respectively comprise an image without color spots and an image in which color spot categories are marked;
    wherein the inputting the image to be predicted into the pre-trained color spot prediction model for prediction processing comprises:
    inputting the multiple preprocessed frames of images into the pre-trained color spot prediction model for prediction processing.
  3. The method according to claim 1 or 2, characterized in that, before the inputting the image to be predicted into the pre-trained color spot prediction model for prediction processing, the method further comprises:
    acquiring a target image to be processed, wherein the target image to be processed comprises color spot information;
    determining a plurality of target channel images respectively according to the target image to be processed, wherein the target channel images comprise a multi-channel image with the color spot information removed and a color spot category channel image; and
    merging the plurality of target channel images and a target noise image and inputting them together into a neural network structure to be trained to obtain a trained color spot prediction model, wherein the target noise image is a randomly generated noise image.
  4. The method according to claim 3, characterized in that the determining a plurality of target channel images respectively according to the target image to be processed comprises:
    determining the color spot information in the target image to be processed; and
    performing color spot removal processing on the target image to be processed according to the color spot information in the target image to be processed, to obtain a target image to be processed with the color spot information removed.
  5. The method according to claim 3 or 4, characterized in that the determining a plurality of target channel images respectively according to the target image to be processed comprises:
    performing color spot detection processing on the target image to be processed to obtain the color spot category channel image.
  6. The method according to claim 5, characterized in that the performing color spot detection processing on the target image to be processed to obtain the color spot category channel image comprises:
    determining the position of each type of color spot information in the target image to be processed respectively; and
    setting grayscale information of a type corresponding to the color spot information at the position of the color spot information, to obtain the color spot category channel image.
  7. The method according to any one of claims 3 to 6, characterized in that, before the merging the plurality of target channel images and the target noise image and inputting them together into the neural network structure to be trained to obtain the trained color spot prediction model, the method comprises:
    normalizing the target channel images and the target noise image respectively to obtain a target input image;
    wherein the inputting the plurality of target channel images and the target noise image respectively into the neural network structure to be trained to obtain the trained color spot prediction model comprises:
    inputting the target input image into the neural network structure to be trained to obtain the trained color spot prediction model.
  8. A color spot prediction apparatus, characterized in that the apparatus comprises an acquisition module, a prediction module, and an output module;
    the acquisition module is configured to acquire an image to be predicted;
    the prediction module is configured to input the image to be predicted into a pre-trained color spot prediction model for prediction processing, wherein the color spot prediction model is a fully convolutional generative adversarial network model; and
    the output module is configured to obtain a color spot prediction result map through the color spot prediction model.
  9. The color spot prediction apparatus according to claim 8, characterized in that the apparatus further comprises a preprocessing module and a further prediction module,
    wherein the preprocessing module is configured to:
    determine color spot information in the image to be predicted, wherein the color spot information comprises positions and categories of color spots; and
    preprocess the image to be predicted according to the color spot information in the image to be predicted, to obtain multiple preprocessed frames of images, wherein the multiple frames of images respectively comprise an image without color spots and an image in which color spot categories are marked; and
    wherein the further prediction module is configured to:
    input the multiple preprocessed frames of images into the pre-trained color spot prediction model for prediction processing.
  10. The color spot prediction apparatus according to claim 9, characterized in that the preprocessing module is further configured to:
    acquire a target image to be processed, wherein the target image to be processed comprises the color spot information;
    determine a plurality of target channel images respectively according to the target image to be processed, wherein the target channel images comprise a multi-channel image with the color spot information removed and a color spot category channel image; and
    merge the plurality of target channel images and a target noise image and input them together into a neural network structure to be trained to obtain a trained color spot prediction model, wherein the target noise image is a randomly generated noise image.
  11. The color spot prediction apparatus according to claim 10, characterized in that the preprocessing module is further configured to:
    determine the color spot information in the target image to be processed; and
    perform color spot removal processing on the target image to be processed according to the color spot information in the target image to be processed, to obtain a target image to be processed with the color spot information removed.
  12. The color spot prediction apparatus according to claim 10 or 11, characterized in that the preprocessing module is further configured to:
    perform color spot detection processing on the target image to be processed to obtain a color spot category channel image.
  13. The color spot prediction apparatus according to any one of claims 10 to 12, characterized in that the preprocessing module is further configured to:
    determine the position of each type of color spot information in the target image to be processed respectively; and
    set grayscale information of a type corresponding to the color spot information at the position of the color spot information, to obtain the color spot category channel image.
  14. The color spot prediction apparatus according to any one of claims 10 to 13, characterized in that the preprocessing module is further configured to:
    normalize the target channel images and the target noise image respectively to obtain a target input image; and
    input the target input image into the neural network structure to be trained to obtain the trained color spot prediction model.
  15. A computer device, characterized by comprising a memory and a processor, wherein the memory stores a computer program that can run on the processor, and when the processor executes the computer program, the steps of the method according to any one of claims 1 to 7 are implemented.
  16. A computer-readable storage medium, characterized in that a computer program is stored on the storage medium, and when the computer program is executed by a processor, the steps of the method according to any one of claims 1 to 7 are implemented.
PCT/CN2021/132553 2021-06-24 2021-11-23 Pigmentation prediction method and apparatus, and device and storage medium WO2022267327A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
JP2022540760A JP7385046B2 (en) 2021-06-24 2021-11-23 Color spot prediction method, device, equipment and storage medium
KR1020227022201A KR20230001005A (en) 2021-06-24 2021-11-23 Spot prediction methods, devices, equipment and storage media

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110707100.8A CN113379716B (en) 2021-06-24 2021-06-24 Method, device, equipment and storage medium for predicting color spots
CN202110707100.8 2021-06-24

Publications (1)

Publication Number Publication Date
WO2022267327A1 true WO2022267327A1 (en) 2022-12-29

Family

ID=77578969

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/132553 WO2022267327A1 (en) 2021-06-24 2021-11-23 Pigmentation prediction method and apparatus, and device and storage medium

Country Status (4)

Country Link
JP (1) JP7385046B2 (en)
KR (1) KR20230001005A (en)
CN (1) CN113379716B (en)
WO (1) WO2022267327A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113379716B (en) * 2021-06-24 2023-12-29 厦门美图宜肤科技有限公司 Method, device, equipment and storage medium for predicting color spots

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2006074482A (en) * 2004-09-02 2006-03-16 Fuji Xerox Co Ltd Color processing method, color processor, color processing program, and storage medium
CN110163813A (en) * 2019-04-16 2019-08-23 中国科学院深圳先进技术研究院 A kind of image rain removing method, device, readable storage medium storing program for executing and terminal device
CN111429416A (en) * 2020-03-19 2020-07-17 深圳数联天下智能科技有限公司 Face pigment spot identification method and device and electronic equipment
CN112464885A (en) * 2020-12-14 2021-03-09 上海交通大学 Image processing system for future change of facial color spots based on machine learning
CN112614140A (en) * 2020-12-17 2021-04-06 深圳数联天下智能科技有限公司 Method and related device for training color spot detection model
CN113379716A (en) * 2021-06-24 2021-09-10 厦门美图之家科技有限公司 Color spot prediction method, device, equipment and storage medium

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2001060237A2 (en) * 2000-02-18 2001-08-23 Robert Kenet Method and device for skin cancer screening
CN101916334B (en) * 2010-08-16 2015-08-12 清华大学 A kind of skin Forecasting Methodology and prognoses system thereof
JP2012053813A (en) 2010-09-03 2012-03-15 Dainippon Printing Co Ltd Person attribute estimation device, person attribute estimation method and program
JP5950486B1 (en) 2015-04-01 2016-07-13 みずほ情報総研株式会社 Aging prediction system, aging prediction method, and aging prediction program
US10621771B2 (en) 2017-03-21 2020-04-14 The Procter & Gamble Company Methods for age appearance simulation
CN110473177B (en) * 2019-07-30 2022-12-09 上海媚测信息科技有限公司 Skin pigment distribution prediction method, image processing system, and storage medium
CN112883756B (en) 2019-11-29 2023-09-15 哈尔滨工业大学(深圳) Age-converted face image generation method and countermeasure network model generation method
CN112508812A (en) * 2020-12-01 2021-03-16 厦门美图之家科技有限公司 Image color cast correction method, model training method, device and equipment
CN112950569B (en) * 2021-02-25 2023-07-25 平安科技(深圳)有限公司 Melanoma image recognition method, device, computer equipment and storage medium

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2006074482A (en) * 2004-09-02 2006-03-16 Fuji Xerox Co Ltd Color processing method, color processor, color processing program, and storage medium
CN110163813A (en) * 2019-04-16 2019-08-23 中国科学院深圳先进技术研究院 A kind of image rain removing method, device, readable storage medium storing program for executing and terminal device
CN111429416A (en) * 2020-03-19 2020-07-17 深圳数联天下智能科技有限公司 Face pigment spot identification method and device and electronic equipment
CN112464885A (en) * 2020-12-14 2021-03-09 上海交通大学 Image processing system for future change of facial color spots based on machine learning
CN112614140A (en) * 2020-12-17 2021-04-06 深圳数联天下智能科技有限公司 Method and related device for training color spot detection model
CN113379716A (en) * 2021-06-24 2021-09-10 厦门美图之家科技有限公司 Color spot prediction method, device, equipment and storage medium

Also Published As

Publication number Publication date
CN113379716A (en) 2021-09-10
JP2023534328A (en) 2023-08-09
KR20230001005A (en) 2023-01-03
CN113379716B (en) 2023-12-29
JP7385046B2 (en) 2023-11-21

Similar Documents

Publication Publication Date Title
US11222222B2 (en) Methods and apparatuses for liveness detection, electronic devices, and computer readable storage media
CN111738243B (en) Method, device and equipment for selecting face image and storage medium
US20060088209A1 (en) Video image quality
CN111160313B (en) Face representation attack detection method based on LBP-VAE anomaly detection model
CN112580521B (en) Multi-feature true and false video detection method based on MAML (maximum likelihood markup language) element learning algorithm
EP4085369A1 (en) Forgery detection of face image
CN111783629B (en) Human face in-vivo detection method and device for resisting sample attack
CN111275784A (en) Method and device for generating image
CN112990016B (en) Expression feature extraction method and device, computer equipment and storage medium
WO2022267327A1 (en) Pigmentation prediction method and apparatus, and device and storage medium
Feng et al. Iris R-CNN: Accurate iris segmentation and localization in non-cooperative environment with visible illumination
Wang et al. Multi-exposure decomposition-fusion model for high dynamic range image saliency detection
CN116912604B (en) Model training method, image recognition device and computer storage medium
CN113221842A (en) Model training method, image recognition method, device, equipment and medium
CN112818774A (en) Living body detection method and device
CN112651333A (en) Silence living body detection method and device, terminal equipment and storage medium
CN115862119B (en) Attention mechanism-based face age estimation method and device
TWI803243B (en) Method for expanding images, computer device and storage medium
RU2768797C1 (en) Method and system for determining synthetically modified face images on video
WO2022226744A1 (en) Texture completion
CN111899239A (en) Image processing method and device
US10762607B2 (en) Method and device for sensitive data masking based on image recognition
CN116935477B (en) Multi-branch cascade face detection method and device based on joint attention
Ramkissoon et al. Scene and Texture Based Feature Set for DeepFake Video Detection
CN112329606B (en) Living body detection method, living body detection device, electronic equipment and readable storage medium

Legal Events

Date Code Title Description
ENP Entry into the national phase

Ref document number: 2022540760

Country of ref document: JP

Kind code of ref document: A

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21946805

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE