WO2022267327A1 - Pigmentation prediction method and apparatus, device, and storage medium
- Publication number: WO2022267327A1 (application PCT/CN2021/132553)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- target
- prediction
- color
- processed
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
- G06V40/171—Local features and components; Facial parts ; Occluding parts, e.g. glasses; Geometrical relationships
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0012—Biomedical image inspection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/0475—Generative networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/094—Adversarial learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H50/00—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
- G16H50/30—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for calculating health indices; for individual health risk assessment
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20212—Image combination
- G06T2207/20221—Image fusion; Image merging
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30088—Skin; Dermal
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30196—Human being; Person
- G06T2207/30201—Face
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02P—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
- Y02P90/00—Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
- Y02P90/30—Computing systems specially adapted for manufacturing
Definitions
- the present application relates to the technical field of image recognition and generation, and in particular to a color spot prediction method, apparatus, device, and storage medium.
- prediction of skin changes in the prior art usually covers the degree of skin laxity and wrinkling, and cannot predict future changes in pigmentation. There is therefore an urgent need for a method that can predict future changes in pigmentation, so that users can learn about coming changes in their skin pigmentation in time.
- the present application provides a color spot prediction method, apparatus, device, and storage medium, which can predict changes of the color spots on the user's facial skin.
- Some embodiments of the present application provide a method for predicting color spots, which may include:
- the color spot prediction result map is obtained through the color spot prediction model.
- In some embodiments, before inputting the image to be predicted into the pre-trained color spot prediction model for prediction processing, the method may also include:
- the color spot information may include: the position and category of the color spot;
- the image to be predicted is preprocessed to obtain preprocessed multi-frame images;
- the multi-frame images may respectively include: an image without color spots and an image marked with a color spot category;
- Inputting the image to be predicted into the pre-trained color spot prediction model for prediction processing may include:
- the preprocessed multi-frame images are input into the pre-trained color spot prediction model for prediction processing.
- In some embodiments, before inputting the image to be predicted into the pre-trained color spot prediction model for prediction processing, the method may also include:
- Obtain the target image to be processed, which may include color spot information;
- a plurality of target channel images are respectively determined according to the target image to be processed, and the target channel images may include: a multi-channel image with the color spot information removed and a color spot category channel image;
- the target noise image is a randomly generated noise image.
- determining a plurality of target channel images respectively according to the target image to be processed may include:
- the color spot removal process is performed on the target image to be processed to obtain the target image to be processed with the color spot information removed.
- determining a plurality of target channel images respectively according to the target image to be processed may include:
- the color spot detection process is performed on the target image to be processed to obtain the color spot category channel image.
- performing color spot detection processing on the target image to be processed to obtain a color spot category channel image may include:
- the method may include:
- the target input image is input into the neural network structure to be trained to obtain the trained color spot prediction model.
- a color spot prediction apparatus, which may include: an acquisition module, a prediction module, and an output module;
- an acquisition module, which may be configured to acquire an image to be predicted;
- the prediction module may be configured to input the image to be predicted into a pre-trained color spot prediction model for prediction processing, where the color spot prediction model is a fully convolutional generative adversarial network model;
- the output module may be configured to obtain a color spot prediction result map through the color spot prediction model.
- the apparatus may further include: a preprocessing module; the preprocessing module may be configured to determine color spot information in the image to be predicted, and the color spot information may include: the position and category of the color spots;
- the color spot information in the image to be predicted is preprocessed to obtain multi-frame images after preprocessing.
- the multi-frame images may include: images without color spots and images marked with color spot categories; the prediction module may further be configured to input the preprocessed multi-frame images into the pre-trained color spot prediction model for prediction processing.
- the preprocessing module may also be configured to acquire the target image to be processed, which may include color spot information; respectively determine a plurality of target channel images according to the target image to be processed, where the target channel images may include: a multi-channel image with the color spot information removed and a color spot category channel image; and combine the multiple target channel images with the target noise image, inputting them together into the neural network structure to be trained to obtain the trained color spot prediction model; the target noise image is a randomly generated noise image.
- the preprocessing module may also be configured to determine color spot information in the target image to be processed, and perform color spot removal processing on the target image to be processed according to that color spot information, to obtain the target image to be processed with the color spot information removed.
- the preprocessing module may also be configured to perform color spot detection processing on the target image to be processed to obtain a color spot category channel image.
- the preprocessing module may be configured to determine the position of each type of color spot information in the target image to be processed, and set the grayscale value corresponding to the color spot category at the position of the color spot information, to obtain the color spot category channel image.
- the preprocessing module may also be configured to perform normalization processing on the target channel images and the target noise image respectively to obtain the target input image, and input the target input image into the neural network structure to be trained to obtain the trained color spot prediction model.
- the computer device may include: a memory and a processor, the memory storing a computer program that can run on the processor; when the processor executes the computer program, the steps of the above color spot prediction method are realized.
- Still other embodiments of the present application provide a computer-readable storage medium, on which a computer program is stored, and when the computer program is executed by a processor, the steps of the above method for predicting color spots are realized.
- the image to be predicted can be obtained; the image to be predicted can be input into the pre-trained color spot prediction model for prediction processing, where the color spot prediction model is a fully convolutional generative adversarial network model; the color spot prediction result map is obtained through the color spot prediction model.
- the fully convolutional generative adversarial network structure is used as the color spot prediction model, which makes it possible to predict future changes of the skin color spots, so that the user can learn the change trend of the skin color spots in time.
- FIG. 1 is a first schematic flowchart of the color spot prediction method provided in an embodiment of the present application;
- FIG. 2 is a second schematic flowchart of the color spot prediction method provided in an embodiment of the present application;
- FIG. 3 is a third schematic flowchart of the color spot prediction method provided in an embodiment of the present application;
- FIG. 4 is a fourth schematic flowchart of the color spot prediction method provided in an embodiment of the present application;
- FIG. 5 is a fifth schematic flowchart of the color spot prediction method provided in an embodiment of the present application;
- FIG. 6 is a sixth schematic flowchart of the color spot prediction method provided in an embodiment of the present application;
- FIG. 7 is a schematic structural diagram of a color spot prediction apparatus provided in an embodiment of the present application;
- FIG. 8 is a schematic structural diagram of a computer device provided in an embodiment of the present application.
- Fig. 1 is a first schematic flowchart of the color spot prediction method provided in an embodiment of the present application. Referring to Fig. 1, the method may include:
- S110 Acquire an image to be predicted.
- the image to be predicted may be a photo of the user, an image of a human face, or any image of skin for which color spot prediction is needed, and may be a cropped image of a preset size.
- the execution subject of this method may be a related program on the computer device, for example, a preset program of a skin analyzer, a certain function of an electronic facial cleansing device, etc.; this is not specifically limited here and can be set according to actual needs.
- the image to be predicted may be sent to the computer device by another device, or may be captured by the computer device through a shooting apparatus, etc.; no specific limitation is set here.
- S120 Input the image to be predicted into the pre-trained color spot prediction model to perform prediction processing.
- the color spot prediction model is a fully convolutional generative adversarial network model.
- the image to be predicted can be input into a pre-trained color spot prediction model for prediction processing, where the color spot prediction model can be a fully convolutional generative adversarial network model obtained through pre-training.
- the color spot prediction model may be obtained by pre-training on the computer device, or may be sent to the computer device by another electronic device, which is not limited herein.
- the fully convolutional generative adversarial network model can be a generative adversarial network composed of multiple convolutional neural networks, where a generative adversarial network is a network model in which a generator and a discriminator learn through a mutual game to produce reasonably good outputs.
- a color spot prediction result map can be obtained, where the color spot prediction result map can show how the color spots in the image to be predicted will change over a future period of time; the specific time span can be set according to actual needs and is not limited here.
- the color spot prediction result map may include multiple maps, respectively representing the changes of the color spots on the facial skin in the image to be predicted after different periods of time in the future.
- the image to be predicted can be obtained; the image to be predicted can be input into the pre-trained color spot prediction model for prediction processing, where the color spot prediction model is a fully convolutional generative adversarial network model; the color spot prediction result map is obtained through the color spot prediction model.
- the fully convolutional generative adversarial network structure is used as the color spot prediction model, which makes it possible to predict future changes of the skin color spots, so that the user can learn the change trend of the skin color spots in time.
- Fig. 2 is a second schematic flowchart of the color spot prediction method provided in an embodiment of the present application. Referring to Fig. 2, before the image to be predicted is input into the pre-trained color spot prediction model for prediction processing, the method also includes:
- S210 Determine color spot information in the image to be predicted.
- the color spot information may include: the position and category of the color spots.
- the color spot information may be the information about the color spots on the skin in the image to be predicted, for example: the position, category, etc. of each color spot on the skin in the image to be predicted; the position of each color spot can be recorded as a range of coordinates, and the category of the color spot can be recorded as an identifier.
- a preset color spot recognition algorithm may be used to determine the color spot information in the image to be predicted.
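The patent does not fix a concrete data layout for the recorded color spot information; the following is a minimal Python sketch (all names and category labels are illustrative assumptions) of recording each spot's position as a coordinate range and its category as an identifier:

```python
from dataclasses import dataclass

# Hypothetical container for one detected color spot: the description records
# each spot's position as a range of coordinates and its category as an id.
@dataclass
class SpotInfo:
    x0: int        # left edge of the spot's bounding box
    y0: int        # top edge
    x1: int        # right edge (exclusive)
    y1: int        # bottom edge (exclusive)
    category: int  # illustrative label, e.g. 1 = freckle, 2 = chloasma

spots = [SpotInfo(10, 20, 34, 40, category=1),
         SpotInfo(100, 80, 130, 110, category=2)]
# Derived quantity, e.g. for sorting or filtering spots by size.
areas = [(s.x1 - s.x0) * (s.y1 - s.y0) for s in spots]
```

A real implementation would fill such records from the preset color spot recognition algorithm mentioned above.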
- S220 Preprocess the image to be predicted according to the color spot information in the image to be predicted to obtain multi-frame images after preprocessing.
- the multiple frames of images may respectively include: an image without color spots and an image marked with a color spot category.
- the preprocessing of the image to be predicted may include color spot removal processing and color spot determination processing, where the above-mentioned image without color spots, that is, the image to be predicted with the color spot information removed, can be obtained through the color spot removal process;
- through the color spot determination process, an image marked with a color spot category can be obtained, in which different grayscale values represent different color spot categories.
- Inputting the image to be predicted into the pre-trained color spot prediction model for prediction processing may include:
- S230 Input the preprocessed multi-frame images into the pre-trained color spot prediction model to perform prediction processing.
- these images may be combined and input into the pre-trained color spot prediction model for prediction, so as to obtain corresponding prediction results.
- Fig. 3 is a third schematic flowchart of the color spot prediction method provided in an embodiment of the present application. Referring to Fig. 3, before the image to be predicted is input into the pre-trained color spot prediction model for prediction processing, the method may also include:
- S310 Acquire the target image to be processed; the target image to be processed may include color spot information.
- the target image to be processed may be a sample image used for training the color spot prediction model; the sample image contains skin, and the skin includes color spot information.
- the target images to be processed may be a large number of pre-collected sample images, for example, images of facial color spots downloaded from the network, etc., which are not specifically limited here.
- S320 Determine a plurality of target channel images respectively according to the target image to be processed.
- the target channel images may include: a multi-channel image with the color spot information removed and a color spot category channel image.
- the target image to be processed can be processed separately to obtain a plurality of target channel images, where the multi-channel image with the color spot information removed can be obtained by performing color spot removal processing on the target image to be processed, in the same way as the aforementioned method of obtaining an image without color spots.
- the color spot category channel image may be obtained by performing color spot recognition on the target image to be processed, in the same way as the aforementioned method of obtaining an image marked with a color spot category.
- S330 Combine the multiple target channel images with the target noise image, and input them together into the neural network structure to be trained to obtain the trained color spot prediction model.
- the target noise image is a randomly generated noise image.
- these target channel images and the pre-generated target noise image can be combined and input together into the neural network structure to be trained for training.
- after training, the above-mentioned color spot prediction model can be obtained.
- Fig. 4 is a fourth schematic flowchart of the color spot prediction method provided in an embodiment of the present application. Referring to Fig. 4, determining a plurality of target channel images respectively according to the target image to be processed may include:
- S410 Determine color spot information in the target image to be processed.
- the color spot information may be determined; specifically, it may be determined by means of the aforementioned color spot recognition.
- S420 Perform color spot removal processing on the target image to be processed according to the color spot information in the target image to be processed, to obtain a target image to be processed with the color spot information removed.
- color spot removal processing may be performed according to the color spot information: all color spot information in the target image to be processed is removed, giving a target image to be processed that no longer contains color spot information.
- channel processing can then be performed to separate the red, green, and blue color channels, giving a red channel image with the color spot information removed, a green channel image with the color spot information removed, and a blue channel image with the color spot information removed.
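As a rough illustration of the channel processing step, the following numpy sketch (array names and sizes are illustrative, not from the patent) splits a spot-removed RGB image into its three single-channel images:

```python
import numpy as np

# Illustrative spot-removed RGB image (H x W x 3), values in 0-255.
despotted = np.zeros((4, 4, 3), dtype=np.uint8)
despotted[..., 0] = 200  # red channel values
despotted[..., 1] = 150  # green channel values
despotted[..., 2] = 100  # blue channel values

# Channel processing: one single-channel (H x W) image per color channel.
red_ch, green_ch, blue_ch = (despotted[..., c] for c in range(3))
```

Each of the three resulting single-channel images then becomes one of the target channel images.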
- determining a plurality of target channel images respectively according to the target image to be processed may include:
- the color spot detection process is performed on the target image to be processed to obtain the color spot category channel image.
- the image can be subjected to color spot detection processing to obtain the color spot category channel image.
- the specific process is as follows:
- Fig. 5 is a fifth schematic flowchart of the color spot prediction method provided in an embodiment of the present application. Referring to Fig. 5, performing color spot detection processing on the target image to be processed to obtain the color spot category channel image includes:
- S510 Determine the position of each type of color spot information in the target image to be processed respectively.
- the position of each type of color spot in the target image to be processed may be determined by means of color spot recognition.
- S520 Set grayscale information of the type corresponding to the color spot information at the position of the color spot information, to obtain a color spot category channel image.
- the corresponding gray value can be set at the corresponding position.
- different gray values can be used to represent different categories of color spots.
- the specific position and range of the gray value can represent the position and size of the color spot; by determining the grayscale information corresponding to each piece of color spot information in the image and setting it at the corresponding position, the above color spot category channel image can be obtained, that is, a channel image in which different grayscales represent different color spot categories.
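A minimal numpy sketch of building the color spot category channel image as described above; the gray value assigned to each category is an assumption, since the patent does not specify concrete values:

```python
import numpy as np

# Hypothetical mapping from category id to gray value; background stays 0.
CATEGORY_GRAY = {1: 85, 2: 170}

def build_category_channel(shape, spots):
    """Build the category channel image.

    spots: iterable of (x0, y0, x1, y1, category) bounding boxes; the gray
    value written at each spot's position encodes its category, and the
    extent of the written region encodes the spot's position and size.
    """
    mask = np.zeros(shape, dtype=np.uint8)
    for x0, y0, x1, y1, cat in spots:
        mask[y0:y1, x0:x1] = CATEGORY_GRAY[cat]
    return mask

mask = build_category_channel((8, 8), [(1, 1, 3, 3, 1), (5, 5, 8, 8, 2)])
```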
- Fig. 6 is a sixth schematic flowchart of the color spot prediction method provided in an embodiment of the present application. Referring to Fig. 6, before the multiple target channel images and the target noise image are combined and input together into the neural network structure to be trained to obtain the trained color spot prediction model, the method can include:
- S610 Perform normalization processing on the target channel image and the target noise image respectively to obtain a target input image.
- the above-mentioned multiple target channel images and the target noise image are merged and input together into the neural network structure to be trained to obtain the trained color spot prediction model.
- the target channel images include the aforementioned red channel image with the color spot information removed, the green channel image with the color spot information removed, the blue channel image with the color spot information removed, and the color spot category channel image; these four images can be combined with the target noise image to obtain a five-channel image, which is input into the neural network structure to be trained to obtain the trained color spot prediction model.
- specifically, the red channel image with the color spot information removed, the green channel image with the color spot information removed, the blue channel image with the color spot information removed, and the above-mentioned target noise image are normalized to the (-1, 1) interval, and the color spot category channel image is normalized to the (0, 1) interval:
- Img_(-1,1) = (Img × 2 / 255) − 1;
- where Img is a single-channel image with values in the interval 0 to 255, and Img_(-1,1) is the normalized image in the interval (-1, 1).
- ClsMask_(0,1) = ClsMask / 255;
- where ClsMask is a single-channel image with values in the interval 0 to 255, and ClsMask_(0,1) is the normalized image in the interval (0, 1).
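The two normalization formulas, together with the five-channel merge described above, can be sketched directly in numpy (function names and image sizes are illustrative):

```python
import numpy as np

def normalize_signed(img):
    """Img_(-1,1) = (Img * 2 / 255) - 1, for the RGB and noise channels."""
    return (img.astype(np.float32) * 2.0 / 255.0) - 1.0

def normalize_unsigned(cls_mask):
    """ClsMask_(0,1) = ClsMask / 255, for the spot category channel."""
    return cls_mask.astype(np.float32) / 255.0

# Merge the three normalized color channels, the category channel, and the
# noise channel into the five-channel target input image.
h, w = 4, 4
rgb = [normalize_signed(np.full((h, w), v, dtype=np.uint8)) for v in (0, 128, 255)]
cls = normalize_unsigned(np.full((h, w), 255, dtype=np.uint8))
noise = normalize_signed(np.random.randint(0, 256, (h, w)).astype(np.uint8))
target_input = np.stack(rgb + [cls, noise], axis=0)  # shape (5, h, w)
```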
- S620 Input the target input image into the neural network structure to be trained to obtain the trained color spot prediction model.
- the target input image may be input into the above neural network structure to obtain a trained color spot prediction model.
- the model adopts an encoding-decoding structure; the upsampling in the decoding part uses a combination of nearest-neighbor upsampling and a convolution layer, and the activation function of the output layer is Tanh. The specific structural relationship can be shown in Table 1.
- LeakyReLU is a conventional activation function in deep learning;
- negativeslope is a configuration parameter of that activation function;
- kh is the height of the convolution kernel;
- kw is the width of the convolution kernel;
- padding is the number of pixels by which the feature map is extended for the convolution operation;
- stride is the step size of the convolution;
- group is the number of convolution kernel groups;
- scale_factor and mode are the parameters of the upsampling layer;
- scale_factor = 2 means upsampling to twice the size.
- the model also includes a discriminator network part, used to distinguish real and fake images at different resolutions.
- discriminators at three scales may be used to distinguish images with resolutions of 512x512, 256x256, and 128x128, respectively; the images at the lower resolutions can be obtained by downsampling.
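The lower-resolution discriminator inputs can be produced from the full-resolution image by repeated 2x downsampling; the patent does not name the downsampling method, so the following sketch uses simple average pooling as one plausible choice:

```python
import numpy as np

def downsample_2x(img):
    """Average-pool a (H, W) image by a factor of 2 in each dimension."""
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

full = np.ones((512, 512), dtype=np.float32)   # input to the finest discriminator
half = downsample_2x(full)                     # 256 x 256, mid-scale discriminator
quarter = downsample_2x(half)                  # 128 x 128, coarsest discriminator
```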
- the model can be trained with a set of 20,000 samples, and various augmentations can be applied to each sample image, such as flipping, rotation, translation, affine transformation, exposure and contrast adjustment, blurring, etc., to improve robustness.
- the optimization algorithm for network training is the Adam algorithm; the learning rate of the generator network is 0.0002, and the learning rate of the discriminator network is 0.0001.
- the specific calculation of the loss function of the model is as follows:
- L = L_1 + L_2 + L_vgg + L_adv;
- where Generate denotes the output of the generator network and GT is the target (ground-truth) image;
- L_1 and L_2 are the L1 and L2 loss functions, L_vgg is a perceptual loss function, and L_adv is a generative adversarial loss function.
- L_vgg denotes the perceptual loss: the network output (generated image) Generate and GT are input into another network, the feature tensors of corresponding layers are extracted, and the difference between these feature tensors is calculated; i denotes the i-th sample.
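A schematic numpy version of the total loss L = L_1 + L_2 + L_vgg + L_adv; the perceptual and adversarial terms require the feature-extraction network and the discriminator, so they are passed in here as hypothetical placeholder values:

```python
import numpy as np

def l1_loss(generate, gt):
    """Mean absolute error between the generated image and the target."""
    return float(np.mean(np.abs(generate - gt)))

def l2_loss(generate, gt):
    """Mean squared error between the generated image and the target."""
    return float(np.mean((generate - gt) ** 2))

def total_loss(generate, gt, l_vgg=0.0, l_adv=0.0):
    """L = L_1 + L_2 + L_vgg + L_adv; l_vgg and l_adv are placeholder
    values standing in for the perceptual loss (feature-tensor differences
    from another network) and the adversarial loss from the discriminator."""
    return l1_loss(generate, gt) + l2_loss(generate, gt) + l_vgg + l_adv

gen = np.full((2, 2), 0.5)
gt = np.zeros((2, 2))
loss = total_loss(gen, gt)  # L1 = 0.5, L2 = 0.25 with the stubs at 0
```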
- Fig. 7 is a schematic structural diagram of a color spot prediction apparatus provided in an embodiment of the present application. Referring to Fig. 7, the apparatus includes: an acquisition module 100, a prediction module 200, and an output module 300;
- the acquisition module 100 may be configured to acquire an image to be predicted;
- the prediction module 200 may be configured to input the image to be predicted into a pre-trained color spot prediction model for prediction processing, the color spot prediction model being a fully convolutional generative adversarial network model;
- the output module 300 may be configured to obtain a color spot prediction result map through the color spot prediction model.
- the device may further include a preprocessing module 400; the preprocessing module 400 may be configured to determine color spot information in the image to be predicted, the color spot information including the position and category of each color spot, and to preprocess the image to be predicted according to its color spot information to obtain preprocessed multi-frame images, the multi-frame images respectively including an image with the color spots removed and an image marked with color spot categories;
- the prediction module 200 may further be configured to input the preprocessed multi-frame images into the pre-trained color spot prediction model for prediction processing.
- the preprocessing module 400 may also be configured to acquire a target image to be processed, the target image to be processed containing color spot information; to determine a plurality of target channel images from the target image to be processed, the target channel images including a multi-channel image with the color spot information removed and a color spot category channel image; and to combine the multiple target channel images with a target noise image and input them together into the neural network structure to be trained, so as to obtain the trained color spot prediction model; the target noise image is a randomly generated noise image.
- the preprocessing module 400 may also be configured to determine the color spot information in the target image to be processed; perform color spot removal processing on the target image to be processed according to that color spot information, to obtain a target image with the color spot information removed; and perform channel processing on the color-spot-removed target image, to obtain a multi-channel image with the color spot information removed.
- the preprocessing module 400 may also be configured to perform color spot detection processing on the target image to be processed to obtain the color spot category channel image.
- the preprocessing module 400 may be configured to determine the position of each category of color spot in the target image to be processed, and to set, at the position of each color spot, the grayscale value corresponding to that color spot's category, thereby obtaining the color spot category channel image.
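The construction described above, writing a category-specific grayscale value at each color spot position, can be sketched as follows (the coordinate format and the category-to-grayscale mapping are assumptions for illustration):

```python
def build_category_channel(height, width, spots, category_gray):
    # spots: list of (row, col, category) tuples from color spot detection.
    # category_gray: assumed mapping from category name to grayscale value.
    channel = [[0] * width for _ in range(height)]
    for row, col, category in spots:
        channel[row][col] = category_gray[category]
    return channel
```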
- the preprocessing module 400 may also be configured to perform normalization processing on the target channel images and the target noise image respectively to obtain a target input image, and to input the target input image into the neural network structure to be trained, so as to obtain the trained color spot prediction model.
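The normalization step can be sketched as a simple rescaling of 8-bit pixel values into a symmetric range (the [-1, 1] target range is an assumption, chosen because it is common for GAN inputs; the patent does not state the exact range):

```python
def normalize(img, max_val=255.0):
    # Rescale pixel values from [0, max_val] to [-1, 1].
    return [[2.0 * p / max_val - 1.0 for p in row] for row in img]
```

After normalizing each target channel image and the target noise image this way, they can be stacked along the channel axis to form the target input image.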
- the above modules may be one or more integrated circuits configured to implement the above method, for example: one or more Application Specific Integrated Circuits (ASIC), one or more microprocessors, or one or more Field Programmable Gate Arrays (FPGA), etc.
- the processing element may be a general-purpose processor, such as a Central Processing Unit (CPU), or another processor capable of calling program code.
- these modules can be integrated together and implemented in the form of a system-on-a-chip (SOC for short).
- FIG. 8 is a schematic structural diagram of a computer device provided by an embodiment of the present application. Please refer to FIG. 8 .
- the computer device includes: a memory 500 and a processor 600.
- the memory 500 stores a computer program that can run on the processor 600.
- when the processor 600 executes the computer program, the steps of the above color spot prediction method are realized.
- Some embodiments of the present application further provide a computer-readable storage medium, on which a computer program is stored, and when the computer program is executed by a processor, the steps of the above method for predicting color spots are realized.
- the disclosed devices and methods may be implemented in other ways.
- the device embodiments described above are only illustrative.
- the division into units is only a division by logical function; in actual implementation there may be other ways of division: multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
- the mutual coupling or direct coupling or communication connection shown or discussed may be through some interfaces, and the indirect coupling or communication connection of devices or units may be in electrical, mechanical or other forms.
- a unit described as a separate component may or may not be physically separated, and a component shown as a unit may or may not be a physical unit, that is, it may be located in one place, or may also be distributed to multiple network units. Part or all of the units can be selected according to actual needs to achieve the purpose of the solution of this embodiment.
- each functional unit in each embodiment of the present application may be integrated into one processing unit, each unit may exist separately physically, or two or more units may be integrated into one unit.
- the above-mentioned integrated units can be implemented in the form of hardware, or in the form of hardware plus software functional units.
- the above-mentioned integrated units implemented in the form of software functional units may be stored in a computer-readable storage medium.
- the above-mentioned software functional units are stored in a storage medium and include several instructions to enable a computer device (which may be a personal computer, a server, a network device, etc.) or a processor to execute part of the steps of the methods of the various embodiments of the present application.
- the aforementioned storage media include: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc, etc.
- the present application provides a color spot prediction method, apparatus, device and storage medium.
- the method includes: obtaining an image to be predicted; inputting the image to be predicted into a pre-trained color spot prediction model for prediction processing, the color spot prediction model being a fully convolutional generative adversarial network model; and obtaining a color spot prediction result map through the color spot prediction model.
- the present application can predict the change of color spots on a user's facial skin.
- the color spot prediction method, apparatus, device and storage medium of the present application are reproducible and can be used in a variety of industrial applications.
- the color spot prediction method, apparatus, device and storage medium of the present application can be used in any field requiring image recognition processing.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- General Physics & Mathematics (AREA)
- Evolutionary Computation (AREA)
- Software Systems (AREA)
- Computing Systems (AREA)
- Artificial Intelligence (AREA)
- Data Mining & Analysis (AREA)
- Biomedical Technology (AREA)
- Multimedia (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Medical Informatics (AREA)
- Biophysics (AREA)
- Life Sciences & Earth Sciences (AREA)
- Computational Linguistics (AREA)
- Molecular Biology (AREA)
- General Engineering & Computer Science (AREA)
- Mathematical Physics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Human Computer Interaction (AREA)
- Databases & Information Systems (AREA)
- Public Health (AREA)
- Quality & Reliability (AREA)
- Epidemiology (AREA)
- Primary Health Care (AREA)
- Pathology (AREA)
- Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
- Radiology & Medical Imaging (AREA)
- Image Analysis (AREA)
- Image Processing (AREA)
- Measuring And Recording Apparatus For Diagnosis (AREA)
Abstract
The present application belongs to the technical field of image recognition processing. Provided are a pigmentation prediction method and apparatus, as well as a device and a storage medium. The method comprises: acquiring an image to be predicted; inputting said image into a pigmentation prediction model, obtained by pre-training, for prediction processing, the pigmentation prediction model being a fully convolutional generative adversarial network model; and obtaining a pigmentation prediction result map by means of the pigmentation prediction model. By means of the present application, a change in the pigmentation of the facial skin of a user can be predicted.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020227022201A KR20230001005A (ko) | 2021-06-24 | 2021-11-23 | 반점 예측 방법, 장치, 장비 및 저장 매체 |
JP2022540760A JP7385046B2 (ja) | 2021-06-24 | 2021-11-23 | 色斑予測方法、装置、設備及び記憶媒体 |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110707100.8A CN113379716B (zh) | 2021-06-24 | 2021-06-24 | 一种色斑预测方法、装置、设备及存储介质 |
CN202110707100.8 | 2021-06-24 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2022267327A1 true WO2022267327A1 (fr) | 2022-12-29 |
Family
ID=77578969
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2021/132553 WO2022267327A1 (fr) | 2021-06-24 | 2021-11-23 | Procédé et appareil de prédiction de pigmentation, ainsi que dispositif et support de stockage |
Country Status (4)
Country | Link |
---|---|
JP (1) | JP7385046B2 (fr) |
KR (1) | KR20230001005A (fr) |
CN (1) | CN113379716B (fr) |
WO (1) | WO2022267327A1 (fr) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113379716B (zh) * | 2021-06-24 | 2023-12-29 | 厦门美图宜肤科技有限公司 | 一种色斑预测方法、装置、设备及存储介质 |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2006074482A (ja) * | 2004-09-02 | 2006-03-16 | Fuji Xerox Co Ltd | 色処理方法および色処理装置、色処理プログラム、記憶媒体 |
CN110163813A (zh) * | 2019-04-16 | 2019-08-23 | 中国科学院深圳先进技术研究院 | 一种图像去雨方法、装置、可读存储介质及终端设备 |
CN111429416A (zh) * | 2020-03-19 | 2020-07-17 | 深圳数联天下智能科技有限公司 | 一种人脸色素斑识别方法、装置及电子设备 |
CN112464885A (zh) * | 2020-12-14 | 2021-03-09 | 上海交通大学 | 基于机器学习的面部色斑未来变化的图像处理系统 |
CN112614140A (zh) * | 2020-12-17 | 2021-04-06 | 深圳数联天下智能科技有限公司 | 一种训练色斑检测模型的方法及相关装置 |
CN113379716A (zh) * | 2021-06-24 | 2021-09-10 | 厦门美图之家科技有限公司 | 一种色斑预测方法、装置、设备及存储介质 |
Family Cites Families (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6792137B2 (en) * | 2000-02-18 | 2004-09-14 | Robert Kenet | Method and device for skin cancer screening |
CN101916334B (zh) * | 2010-08-16 | 2015-08-12 | 清华大学 | 一种皮肤状况预测方法及其预测系统 |
JP2012053813A (ja) | 2010-09-03 | 2012-03-15 | Dainippon Printing Co Ltd | 人物属性推定装置、人物属性推定方法、及びプログラム |
JP5950486B1 (ja) | 2015-04-01 | 2016-07-13 | みずほ情報総研株式会社 | 加齢化予測システム、加齢化予測方法及び加齢化予測プログラム |
US10621771B2 (en) | 2017-03-21 | 2020-04-14 | The Procter & Gamble Company | Methods for age appearance simulation |
CN110473177B (zh) * | 2019-07-30 | 2022-12-09 | 上海媚测信息科技有限公司 | 皮肤色素分布预测方法、图像处理系统及存储介质 |
CN112883756B (zh) | 2019-11-29 | 2023-09-15 | 哈尔滨工业大学(深圳) | 年龄变换人脸图像的生成方法及生成对抗网络模型 |
CN112508812A (zh) * | 2020-12-01 | 2021-03-16 | 厦门美图之家科技有限公司 | 图像色偏校正方法、模型训练方法、装置及设备 |
CN112950569B (zh) * | 2021-02-25 | 2023-07-25 | 平安科技(深圳)有限公司 | 黑色素瘤图像识别方法、装置、计算机设备及存储介质 |
-
2021
- 2021-06-24 CN CN202110707100.8A patent/CN113379716B/zh active Active
- 2021-11-23 KR KR1020227022201A patent/KR20230001005A/ko unknown
- 2021-11-23 WO PCT/CN2021/132553 patent/WO2022267327A1/fr unknown
- 2021-11-23 JP JP2022540760A patent/JP7385046B2/ja active Active
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2006074482A (ja) * | 2004-09-02 | 2006-03-16 | Fuji Xerox Co Ltd | 色処理方法および色処理装置、色処理プログラム、記憶媒体 |
CN110163813A (zh) * | 2019-04-16 | 2019-08-23 | 中国科学院深圳先进技术研究院 | 一种图像去雨方法、装置、可读存储介质及终端设备 |
CN111429416A (zh) * | 2020-03-19 | 2020-07-17 | 深圳数联天下智能科技有限公司 | 一种人脸色素斑识别方法、装置及电子设备 |
CN112464885A (zh) * | 2020-12-14 | 2021-03-09 | 上海交通大学 | 基于机器学习的面部色斑未来变化的图像处理系统 |
CN112614140A (zh) * | 2020-12-17 | 2021-04-06 | 深圳数联天下智能科技有限公司 | 一种训练色斑检测模型的方法及相关装置 |
CN113379716A (zh) * | 2021-06-24 | 2021-09-10 | 厦门美图之家科技有限公司 | 一种色斑预测方法、装置、设备及存储介质 |
Also Published As
Publication number | Publication date |
---|---|
JP7385046B2 (ja) | 2023-11-21 |
KR20230001005A (ko) | 2023-01-03 |
CN113379716A (zh) | 2021-09-10 |
JP2023534328A (ja) | 2023-08-09 |
CN113379716B (zh) | 2023-12-29 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11222222B2 (en) | Methods and apparatuses for liveness detection, electronic devices, and computer readable storage media | |
CN111738243B (zh) | 人脸图像的选择方法、装置、设备及存储介质 | |
US20230021661A1 (en) | Forgery detection of face image | |
CN111160313B (zh) | 一种基于lbp-vae异常检测模型的人脸表示攻击检测方法 | |
CN111814620A (zh) | 人脸图像质量评价模型建立方法、优选方法、介质及装置 | |
CN112580521B (zh) | 一种基于maml元学习算法的多特征真假视频检测方法 | |
CN111783629B (zh) | 一种面向对抗样本攻击的人脸活体检测方法及装置 | |
CN111275784A (zh) | 生成图像的方法和装置 | |
CN112990016B (zh) | 表情特征提取方法、装置、计算机设备及存储介质 | |
Wang et al. | Multi-exposure decomposition-fusion model for high dynamic range image saliency detection | |
WO2022267327A1 (fr) | Procédé et appareil de prédiction de pigmentation, ainsi que dispositif et support de stockage | |
Feng et al. | Iris R-CNN: Accurate iris segmentation and localization in non-cooperative environment with visible illumination | |
CN116912604B (zh) | 模型训练方法、图像识别方法、装置以及计算机存储介质 | |
CN113221842A (zh) | 模型训练方法、图像识别方法、装置、设备及介质 | |
CN112818774A (zh) | 一种活体检测方法及装置 | |
CN112651333A (zh) | 静默活体检测方法、装置、终端设备和存储介质 | |
CN115862119B (zh) | 基于注意力机制的人脸年龄估计方法及装置 | |
TWI803243B (zh) | 圖像擴增方法、電腦設備及儲存介質 | |
RU2768797C1 (ru) | Способ и система для определения синтетически измененных изображений лиц на видео | |
WO2022226744A1 (fr) | Complétion de texture | |
CN111899239A (zh) | 图像处理方法和装置 | |
US10762607B2 (en) | Method and device for sensitive data masking based on image recognition | |
CN116935477B (zh) | 一种基于联合注意力的多分支级联的人脸检测方法及装置 | |
Ramkissoon et al. | Scene and Texture Based Feature Set for DeepFake Video Detection | |
Wang et al. | Bio-driven visual saliency detection with color factor |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
ENP | Entry into the national phase |
Ref document number: 2022540760 Country of ref document: JP Kind code of ref document: A |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 21946805 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |