WO2021244352A1 - Method and apparatus for determining local area that affects degree of facial aging - Google Patents
Method and apparatus for determining local area that affects degree of facial aging
- Publication number: WO2021244352A1
- Application: PCT/CN2021/095753
- Authority: WIPO (PCT)
- Prior art keywords: facial image, facial, area, degree, apparent age
Classifications
- G—PHYSICS; G06—COMPUTING, CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING; G06F18/00—Pattern recognition; G06F18/20—Analysing; G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS; G06N3/00—Computing arrangements based on biological models; G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
- G06N3/08—Learning methods
- G06N3/084—Backpropagation, e.g. using gradient descent
Definitions
- This application relates to facial aging prediction technology, and in particular to techniques for determining the local areas that affect the degree of facial aging.
- Facial aging is a complex biological process in which facial morphology and structure change over time. As quality of life improves, people are increasingly concerned about facial aging. Facial aging is not only a criterion for judging health in biomedicine but also a matter of general social concern. Accurately predicting which facial areas are aging, so as to help individuals delay or mitigate facial aging, therefore has great research and application value.
- The purpose of this application is to provide a method and apparatus for determining the local areas that affect the degree of facial aging, so that the degree of influence of local facial areas on facial aging can be assessed accurately.
- This application discloses a method for determining the local area that affects the degree of facial aging, including:
- determining, according to the first apparent age, the apparent age prediction model, and the second facial image, the degree of influence of the changed pixels and/or regions in the second facial image on the first apparent age.
- the image processing adopts a method selected from the following group:
- a pixel derivation method, an area masking method, or a combination thereof.
- the predetermined number of pixels are all pixels of the first facial image.
- the predetermined area is selected from one or more of the following group:
- eye area, cheek area, mouth area, forehead area.
- the image processing adopts a pixel derivation method
- the performing image processing on the first facial image and changing a predetermined number of pixels and/or predetermined areas in the first facial image to obtain a second facial image further includes:
- the determining, according to the first apparent age, the apparent age prediction model, and the second facial image, of the degree of influence of the changed pixels and/or regions in the second facial image on the first apparent age further includes:
- the image processing adopts a region masking method
- the performing image processing on the first facial image and changing a predetermined number of pixels and/or a predetermined area in the first facial image to obtain a second facial image further includes:
- the degree of influence further includes:
- the second apparent age is compared with the first apparent age, and the degree of influence of the predetermined area on the apparent age of the human face is calculated based on the comparison result.
- the image processing of the first facial image by using a region masking method, and covering the predetermined region with the average pixel value of the first facial image to obtain the second facial image further includes:
- the first facial image is divided into a plurality of local regions, image processing is performed on the first facial image by the region masking method, and each local region is covered in turn with the average pixel value of the first facial image to obtain a corresponding second facial image covering each local region; this further includes:
- the first facial image is divided into four local areas (eye area, cheek area, mouth area, and forehead area), image processing is performed on the first facial image by the area masking method, and the pixel average of the first facial image is used to cover each local area in turn to obtain a corresponding second facial image covering each local area.
- the apparent age prediction model is obtained by a method including the following steps:
- the convolutional neural network model is trained with the training sample set to obtain the apparent age prediction model.
- the convolutional neural network model is a ResNet18 model.
- the application also discloses a device for determining the local area that affects the degree of facial aging, including:
- An image acquisition module for acquiring a first facial image of an object
- An image processing module configured to perform image processing on the first facial image, and change a predetermined number of pixels and/or a predetermined area in the first facial image to obtain a second facial image;
- An age prediction module configured to input the first facial image into an apparent age prediction model of a human face to obtain the first apparent age
- an influence degree determination module configured to determine, according to the first apparent age, the apparent age prediction model, and the second facial image, the degree of influence of the changed pixels and/or regions in the second facial image on the first apparent age.
- the application also discloses a device for determining the local area that affects the degree of facial aging, including:
- Memory for storing computer executable instructions
- the processor is used to implement the steps in the method described above when executing the computer-executable instructions.
- the present application also discloses a computer-readable storage medium, wherein the computer-readable storage medium stores computer-executable instructions, and when the computer-executable instructions are executed by a processor, the steps in the method described above are implemented.
- a facial perception experiment is selected to quantify the facial aging phenotype as a whole and, combined with deep learning and visualization methods, to locate the main areas affecting overall facial aging, so that the degree of impact of local facial areas on facial aging can be evaluated objectively and effectively.
- visualization methods such as the pixel derivation method and/or the area masking method can locate the facial areas that affect facial aging more accurately and objectively, providing a more scientific basis for decision-making in the fields of medical treatment and cosmetology.
- for example, where one embodiment discloses the feature combination A+B+C and another discloses A+B+D+E, features C and D are equivalent technical means playing the same role, and feature E can technically be combined with feature C: the solution A+B+C+D should not be regarded as recorded because it is technically infeasible, whereas the solution A+B+C+E should be deemed to have been recorded.
- Fig. 1 is a schematic flowchart of a method for determining a local area that affects the degree of facial aging according to a first embodiment of the present application.
- FIG. 2 shows a schematic diagram of the preprocessing flow of facial image data in an embodiment of the present application.
- Figure A is a schematic diagram of the identification of 106 facial key points in the facial area
- Figure B is a schematic diagram of calculating the position of the central axis of the face using a regression model
- Figure C is a schematic diagram of rotating the facial image according to the tilt angle to align the face vertically
- Figure D and Figure E are schematic diagrams of cropping the picture according to the mandibular point, the left and right cheek points, and the upper forehead point.
- Fig. 3 shows a schematic diagram of facial area division according to an embodiment of the present application.
- FIG. 4 shows a schematic diagram of the variation curve of the variance of 1000 samples with the number of evaluators in an embodiment of the present application.
- Fig. 5 shows a schematic diagram of the deep learning, visualization and verification process in an embodiment of the present application.
- Figure 6 shows a schematic diagram of the ResNet18 network structure in an embodiment of the present application.
- Figure a shows the basic module of the residual network, which establishes a shortcut link from input to output;
- Figure b shows the network structure of ResNet18, in which the dotted lines indicate that the number of feature channels is doubled.
- FIG. 7 shows a schematic diagram of a comparison of training effects using three different deep learning models and three different training tags in an embodiment of the present application.
- FIG. 8 shows a schematic diagram of the division and visualization of the facial area in an embodiment of the present application.
- Figure a is a schematic diagram of the division of the facial area, showing the four regions of the face
- Figure b is a schematic diagram of the second facial image obtained based on the pixel derivation method
- Figure d is a heat map of the aging degree of the four parts, obtained by aggregating the pixel derivation results of Figure b over the four regions;
- Figure c is a heat map of the aging degree of the four parts obtained from the results of the area masking method.
- FIG. 9 shows a schematic flow chart of using the pixel derivation method in an embodiment of the present application.
- FIG. 10 shows a schematic diagram of the process of adopting the area covering method in an embodiment of the present application.
- Figure 11 shows a schematic diagram of the consistency check in an embodiment of the present application.
- Figure A is an example of deep learning ranking results, and the numbers represent the order of importance of different regions
- Figures B-D show the ranking results of the eye movement experiment (or manual evaluation); the bold dashed box marks the main contrast area. Figure B represents the case in which the order of the four regions is exactly the same;
- Figure C shows the case in which only the most important region is consistent, and Figure D the case in which both the most important and the second most important regions are consistent.
- FIG. 12 shows a schematic diagram of the structure of the device for determining the local area that affects the degree of facial aging according to the second embodiment of the present application.
- Visualization: displaying the basis of the neural network's decisions in the form of images or pictures.
- the first embodiment of the present application relates to a method for determining the local area that affects the degree of facial aging.
- the process is shown in Figure 1.
- the method includes the following steps:
- step 101 a first facial image of an object is acquired
- step 102 the first facial image is input into the apparent age prediction model of the human face to obtain the first apparent age
- step 103 image processing is performed on the first facial image, and a predetermined number of pixels and/or predetermined regions in the first facial image are changed to obtain a second facial image;
- step 104 according to the first apparent age, the apparent age prediction model, and the second facial image, determine the degree of influence of the changed pixels and/or regions in the second facial image on the first apparent age .
- the first facial image in step 101 may be a whole facial image or a partial facial image.
- the apparent age prediction model is obtained in advance through the following steps 1 and 2:
- in step 1, the perception experiment is used to quantify the age distribution, average age, or median age of the facial sample images as the deep learning training label, so as to obtain the training sample set; then in step 2, the training sample set is used to train a convolutional neural network model to obtain the apparent age prediction model.
- the age distribution is used as the training label.
- the convolutional neural network model may be, but is not limited to, a VGG 16 model, a ResNet 18 model, or a ResNet 50.
- the convolutional neural network model is a ResNet18 model.
- the image processing may adopt a pixel derivation method, an area masking method, or a combination thereof.
- the predetermined number of pixels are all pixels of the first facial image.
- the predetermined area is a sub-area of m1 pixels × m2 pixels, where m1 and m2 are each independently a positive integer of 1-1000, preferably 2-500, more preferably 3-250, and most preferably 5-100.
- the predetermined area is 0.01%-25% of the entire facial area, preferably 0.1%-10%, more preferably 1%-5%.
- the predetermined area is selected from one or more of the following group: eye area, cheek area, mouth area, and forehead area.
- step 103 can be further implemented as the following step: image processing is performed on the first facial image by using a pixel derivation method, and Gaussian noise is added to the predetermined number of pixels to obtain a second facial image. For example, random Gaussian noise can be added to all pixels of the first facial image, although the method is not limited to this.
- step 104 can be further implemented as the following steps: the apparent age prediction model is used to take derivatives with respect to the second facial image, obtaining a derivative value corresponding to each pixel of the second facial image, and the degree of influence of the changed pixels on the first apparent age is calculated based on the derivative value of each pixel.
- in step 1, the first facial image is divided into multiple local areas, and the sum of the derivative values of all pixels in each local area is computed separately as the weight coefficient of that local area's influence on overall facial aging; in step 2, based on the influence weight coefficient of each local area, each local area is annotated on the first facial image to obtain a third facial image.
- step 103 can be further implemented as the following steps: image processing the first facial image by using an area masking method, and covering the predetermined area with the average pixel value of the first facial image to obtain the first facial image Two facial images.
- this step 104 can be further implemented as the following step: input the second facial image into the apparent age prediction model of the face to obtain the second apparent age, and compare the second apparent age with the first apparent age. The apparent age, and the degree of influence of the preset area on the apparent age of the human face is calculated based on the comparison result.
- the above-mentioned "performing image processing on the first facial image by an area masking method, and covering the predetermined area with the average pixel value of the first facial image to obtain a second facial image" is further implemented as: the first facial image is divided into a plurality of local areas, the first facial image is processed by the area masking method, and each local area is covered in turn with the average pixel value of the first facial image to obtain a corresponding second facial image covering each local area.
- the first facial image can be divided into four local areas of eye area, cheek area, mouth area, and forehead area, and image processing is performed on the first facial image by using the area masking method.
- the pixel average of the first facial image covers each local area to obtain a corresponding second facial image that covers each local area.
- after the above step of dividing the first facial image into multiple local regions and sequentially covering each region with the average pixel value of the first facial image to obtain the corresponding second facial images, the method also includes the following steps 1 and 2: in step 1, the difference between the second apparent age and the first apparent age corresponding to each local area is computed separately as the weight coefficient of that local area's influence on overall facial aging; in step 2, based on the influence weight coefficient of each local area, each local area is annotated on the first facial image to obtain a third facial image.
- a deep learning network such as ResNet 18 is used to build a facial aging evaluation system, and a deep learning visualization method such as pixel derivation or area masking is used to locate the main areas of facial aging.
- the specific plan is as follows:
- the Face++ software was selected to identify 106 facial key points in the face area (Figure 2A, https://www.faceplusplus.com/). Then, according to the position coordinates of the eyes, nose, and mouth among the key points, a regression model is used to calculate the position of the central axis of the face (Figure 2B, red solid line), and the inclination angle between the central axis and the vertical direction (Figure 2B, red dashed line) is calculated at the same time. The face picture is then rotated by this tilt angle so as to align the face vertically (Figure 2C). Finally, we cropped the pictures according to the mandibular point, the left and right cheek points, and the upper forehead point; the final cropping results are shown in Figures 2D and 2E. The cropped pictures were used for subsequent experiments and analysis.
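The alignment step above can be sketched in minimal form. This is an illustrative approximation, not the patent's implementation: the central axis is approximated here by the line from the eye midpoint to the mouth midpoint, and `tilt_angle` / `rotate_point` are hypothetical helper names.

```python
import math

def tilt_angle(eye_mid, mouth_mid):
    """Angle (degrees) between the face's central axis (eye midpoint to
    mouth midpoint, in image coordinates with y pointing down) and vertical."""
    dx = mouth_mid[0] - eye_mid[0]
    dy = mouth_mid[1] - eye_mid[1]
    # atan2 of horizontal offset over vertical extent = deviation from vertical
    return math.degrees(math.atan2(dx, dy))

def rotate_point(p, center, angle_deg):
    """Rotate p about center by angle_deg with the standard 2-D rotation
    matrix; rotating every pixel by the tilt angle makes the axis vertical."""
    a = math.radians(angle_deg)
    x, y = p[0] - center[0], p[1] - center[1]
    return (center[0] + x * math.cos(a) - y * math.sin(a),
            center[1] + x * math.sin(a) + y * math.cos(a))

# A face whose central axis is tilted 45 degrees from vertical:
eye_mid, mouth_mid = (100.0, 100.0), (150.0, 150.0)
angle = tilt_angle(eye_mid, mouth_mid)            # 45.0
aligned = rotate_point(mouth_mid, eye_mid, angle) # lands directly below the eyes
```

After rotation the mouth midpoint sits at the same x-coordinate as the eye midpoint, i.e. the face axis is vertical.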
- the evaluators used a unified display device to evaluate perceived age. Before the experiment, the evaluators did not know the ages of the samples or the age distribution of the data set. During the evaluation, each evaluator observed all 5,768 sample pictures, then predicted and recorded the age of each sample. In addition, we selected 1,014 sample photos (500 men and 514 women); for these, the evaluator marked in the evaluation form, according to the regions of Figure 3, which facial areas they focused on when evaluating the sample's age. An example evaluation form is shown in Table 1; multiple areas could be selected.
- simulation data: 10,000 perceived-age data points were simulated for 1,000 evaluators.
- for the i-th sampling, the 1,000 simulated perceived-age values were repeatedly resampled so as to calculate the variance of the perceived age for each sampling, where X_i represents the i-th sampling and n represents the selected number of panelists.
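The resampling described above can be sketched as follows. This is a hedged toy version: the rating pool, the panel sizes, and the `mean_rating_variance` helper are illustrative assumptions rather than the patent's actual simulation; it only demonstrates that the variance of the panel-mean perceived age shrinks as the number of evaluators n grows (the behavior plotted in Figure 4).

```python
import random
import statistics

random.seed(0)
# Hypothetical pool of simulated perceived-age ratings for one sample
pool = [random.gauss(40, 5) for _ in range(10000)]

def mean_rating_variance(pool, n, repeats=1000):
    """Variance of the mean perceived age across `repeats` resamples of
    n evaluators, i.e. how stable the rating is for a panel of size n."""
    means = [statistics.mean(random.sample(pool, n)) for _ in range(repeats)]
    return statistics.pvariance(means)

# Variance should shrink roughly as 1/n as the panel grows
v5 = mean_rating_variance(pool, 5)
v50 = mean_rating_variance(pool, 50)
```

A curve of `mean_rating_variance` against n can be used, as in the source, to pick a panel size beyond which adding evaluators yields little extra stability.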
- deep learning: 5,768 sample photos are used as training data, with the perceived age evaluated in the perception experiment quantifying facial aging as the deep learning training label; deep learning visualization is then used to locate the main areas affecting facial aging.
- since the training data set has only 5,768 samples, such limited training data not only hampers the learning of network parameters but is also prone to overfitting. Therefore, before actual training, we followed conventional deep learning data augmentation and enhanced the training data set in two ways: mirroring and cropping.
- Mirror augmentation reflects a picture about the Y axis, so each original picture yields two pictures (the original and its mirror); crop augmentation then cuts five sub-pictures (the four corners and the center) from each. Combined, the training data set can thus be expanded tenfold.
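The tenfold mirror-plus-crop expansion above can be sketched on a toy image represented as a list of rows. The helper names (`mirror`, `five_crops`, `augment`) are illustrative, not from the source:

```python
def mirror(img):
    """Reflect an image (list of rows) about the vertical (Y) axis."""
    return [row[::-1] for row in img]

def five_crops(img, ch, cw):
    """Top-left, top-right, bottom-left, bottom-right, and center crops
    of size ch x cw."""
    h, w = len(img), len(img[0])
    anchors = [(0, 0), (0, w - cw), (h - ch, 0), (h - ch, w - cw),
               ((h - ch) // 2, (w - cw) // 2)]
    return [[row[c:c + cw] for row in img[r:r + ch]] for r, c in anchors]

def augment(img, ch, cw):
    """One image -> 10 training images (5 crops of original + 5 of mirror)."""
    return five_crops(img, ch, cw) + five_crops(mirror(img), ch, cw)

img = [[r * 4 + c for c in range(4)] for r in range(4)]  # toy 4x4 "image"
crops = augment(img, 3, 3)                               # 10 augmented images
```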
- the convolutional neural network models VGG 16, ResNet 18 and ResNet 50 are selected for age prediction.
- ResNet18 is one of the five main models of the ResNet network structure, which mainly includes three parts: Input, Output and Intermediate Convolution (Stages).
- One of the main difficulties in training a deep neural network is that, as the number of layers increases, the network parameters become difficult to optimize and network degradation occurs: the trained model's ability to fit the data can be even lower than that of a model with fewer layers.
- ResNet establishes shortcut links from shallow features to deep features by introducing residual modules, and uses the network to model the residual between deep and shallow features rather than the deep features themselves.
- In this way, gradients propagate back more effectively when the backpropagation algorithm optimizes the network parameters, so the degradation problem of deep networks can be better solved.
- this project also trains VGG16 and ResNet50 on the samples to compare training effect.
- the network parameters are initialized using the model pre-trained on ImageNet, the batch size is set to 64, the learning rate is set to 0.001, and a total of 100 epochs are trained.
- Figure 6(a) is the basic module of the residual network, which establishes a shortcut link from input to output, where weight layer refers to the weight layer, and relu is an activation function.
- Figure 6(b) shows the network structure of ResNet18, where the dotted lines indicate that the number of feature channels is doubled.
- A solid-line connection indicates that the channel count is unchanged; for example, "3×3 conv, 64" means convolution with 64 kernels of size 3×3.
- A dashed-line connection indicates a change of channels with the features doubled; for example, "3×3 conv, 128, /2" means convolution with 128 kernels of size 3×3, where "/2" indicates stride-2 downsampling as the channel count doubles from the previous layer's 64 to 128.
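The shortcut link of Figure 6(a) can be illustrated with a one-dimensional toy residual unit. This is a sketch under strong simplification (a scalar affine map stands in for the two convolution layers; `residual_block` is a hypothetical name), meant only to show why the shortcut eases optimization:

```python
def residual_block(x, weight, bias):
    """Toy 1-D residual unit: output = F(x) + shortcut x.
    d(output)/dx = weight + 1, so the gradient through the block never
    vanishes even when the learned part F contributes almost nothing."""
    fx = weight * x + bias  # stand-in for the block's two conv layers
    return fx + x           # shortcut link from input to output

# A stack of near-zero-weight blocks still passes the signal through
# unchanged, i.e. the deep network degenerates gracefully to the identity.
x = 1.0
for _ in range(4):
    x = residual_block(x, weight=0.0, bias=0.0)
```

This identity-by-default behavior is what lets an 18-layer (or deeper) stack train without the degradation problem described above.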
- the evaluation result distribution, evaluation median, and evaluation mean of 22 age evaluators were used as labels for deep learning model training.
- the error variance between the predicted result and the real result is used as the effect evaluation criterion to select the model.
- the project uses a 10-fold cross-validation method to obtain predictions for the whole training data set: the sample data is divided into 10 parts; each time, 9 parts are used for training and the resulting model predicts the ages of the remaining part; the cycle repeats 10 times.
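The 10-fold split can be sketched as follows (`ten_fold_splits` is an illustrative helper; real pipelines typically shuffle indices first, which is omitted here for clarity):

```python
def ten_fold_splits(n_samples, k=10):
    """Yield (train_indices, test_indices) for k-fold cross-validation;
    each sample appears in exactly one test fold."""
    idx = list(range(n_samples))
    fold = (n_samples + k - 1) // k  # ceiling division for uneven splits
    for i in range(k):
        test = idx[i * fold:(i + 1) * fold]
        train = idx[:i * fold] + idx[(i + 1) * fold:]
        yield train, test

# Cycling through the folds predicts every sample exactly once
tested = []
for train, test in ten_fold_splits(100):
    tested += test
```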
- the deep learning model can be used to obtain the perceptual age data predicted by all samples.
- the model evaluation uses three training labels of 22 evaluators' age distribution, age mean and age median, and three deep learning models of VGG 16, ResNet 18, and ResNet 50.
- the correlation coefficient is calculated by Pearson's correlation coefficient, and the P values in the table are all less than 0.001.
- the results show that using the ResNet18 model, the age distribution is used as the training label to present the best results (the average difference is 2.27 years, and the correlation coefficient is 0.96, as shown in Figure 7).
- the ResNet 18 model trained with the age distribution as the training label is used for the subsequent process.
- the pixel derivation method and the area masking method are selected to realize visualization, and the face is divided into four regions: forehead, eyes, mouth and cheeks based on the facial anatomy and 106 facial feature points automatically calibrated by face++ (Figure 8a).
- the core idea of the pixel derivation method is to take the derivative of the predicted perceived age with respect to each pixel value, and to use the magnitude of the derivative to measure the importance of each pixel to the perceived age.
- because the neural network is a highly nonlinear mapping, there are usually a few pixels with very large derivatives, which makes visualization difficult; therefore random noise is added to the picture and the results of multiple derivations are averaged to obtain a smoother visualization result.
- to this end, a smoothed sensitivity mask is computed:
- M̂_c(x) = (1/n) · Σ_{i=1..n} M_c(x + N(0, σ²))
- where M_c denotes the sensitivity mask (the derivative of the predicted perceived age with respect to the input), n refers to the number of derivative calculations, x refers to the original pixel value, and N(0, σ²) is random Gaussian noise added to each pixel.
- in step 901, the first facial image is input to the trained ResNet18 model to output the corresponding first apparent age; in step 902, the first facial image is copied n times and random Gaussian noise is added to each copy to obtain n corresponding second facial images; in step 903, for each second facial image, the derivative of the trained ResNet18 model's output with respect to each pixel in the image is calculated; in step 904, the derivation results of the n second facial images are averaged and visualized (as shown in Figure 8b; the brighter the color, the more important the point); in step 905, the sum of the derivatives is computed over each facial region (mouth, forehead, eyes, cheeks) and the regions are ranked by this sum, a larger sum indicating greater importance and aging degree, with each region marked by a color shade indicating its aging degree (as shown in Figure 8d; the redder the color, the more aged).
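Steps 901-905 can be sketched end to end. This is a toy sketch under stated assumptions: `predicted_age` is a stand-in linear model (not the trained ResNet18), the gradient is computed by finite differences instead of backpropagation, and the "regions" are just pairs of pixels:

```python
import random

random.seed(1)

def predicted_age(pixels):
    """Stand-in for the trained model: a toy score where the first two
    pixels (a hypothetical 'eye region') matter far more than the rest."""
    weights = [0.5, 0.5, 0.05, 0.05]
    return sum(w * p for w, p in zip(weights, pixels))

def gradient(f, pixels, eps=1e-4):
    """Numerical derivative of f w.r.t. each pixel (autodiff stand-in)."""
    g = []
    for i in range(len(pixels)):
        hi, lo = pixels[:], pixels[:]
        hi[i] += eps
        lo[i] -= eps
        g.append((f(hi) - f(lo)) / (2 * eps))
    return g

def smoothed_saliency(pixels, n=20, sigma=0.1):
    """Average the per-pixel gradient over n Gaussian-noised copies."""
    acc = [0.0] * len(pixels)
    for _ in range(n):
        noisy = [p + random.gauss(0, sigma) for p in pixels]
        for i, g in enumerate(gradient(predicted_age, noisy)):
            acc[i] += g / n
    return acc

sal = smoothed_saliency([0.2, 0.4, 0.6, 0.8])
# Region score = sum of |derivative| over that region's pixels (step 905)
eye_score = abs(sal[0]) + abs(sal[1])
cheek_score = abs(sal[2]) + abs(sal[3])
```

Ranking the regions by these summed scores reproduces, in miniature, the heat-map ordering of Figure 8d.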
- the region occlusion rule is to cover each region with the mean value of all pixels in the data set and to evaluate the importance of each region by the prediction difference before and after occlusion.
- the average pixel value is used to cover each of the four regions in turn; the network then predicts the age of the covered picture, and the age difference between after and before occlusion is obtained. If the difference is negative, the occluded area increases the overall apparent aging of the face; if positive, the occluded area reduces it. The larger the absolute difference, the stronger the effect.
- in step 1001, the first (unoccluded) facial image is input to the trained ResNet18 model to output the corresponding first apparent age; in step 1002, the four local areas of the image (mouth, forehead, eyes, and cheeks) are occluded in turn, each filled with the image's average pixel value, to obtain the corresponding second facial images; in step 1003, these four occluded images are input into the trained ResNet18 model to obtain the corresponding second apparent ages; in step 1004, the difference between the apparent age predictions after and before occlusion is used as the aging degree of each area, and the areas are ranked accordingly (as shown in Figure 8c, where the color encodes the predicted age difference after minus before occlusion: the stronger the blue, the more aged; the stronger the red, the younger).
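Steps 1001-1004 can be sketched as follows. Again a hedged toy: `predicted_age` is a hypothetical stand-in model with each "region" summarised by one value, and `occlusion_scores` is an illustrative helper, not the patent's implementation:

```python
def predicted_age(pixels):
    """Toy stand-in for the trained model (hypothetical weights);
    region 0 dominates the apparent age."""
    return 20 + 40 * pixels[0] + 4 * pixels[1]

def occlusion_scores(pixels, mean_value):
    """Age difference (occluded - original) per region; a negative
    difference means the covered region was making the face look older."""
    base = predicted_age(pixels)          # step 1001: unoccluded prediction
    diffs = []
    for i in range(len(pixels)):
        covered = pixels[:]
        covered[i] = mean_value           # step 1002: cover with pixel mean
        diffs.append(predicted_age(covered) - base)  # steps 1003-1004
    return diffs

# Two 'regions' summarised by one value each; dataset mean pixel = 0.5
diffs = occlusion_scores([0.9, 0.9], 0.5)
```

Sorting the regions by `abs(diff)` gives the aging-degree ranking of Figure 8c.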
- Eyelink 1000 eye tracker was used to observe and record the ratio of the staying time of the evaluator's gaze point in each facial area ( Figure 11) to the total gaze time, to objectively quantify the degree of influence of local facial areas on the overall facial aging evaluation.
- Based on the numerical characteristics of the different methods, this project designed three consistency test indicators. These indicators, combined with the results of the manual evaluation and the eye movement experiments, verify the reliability of the deep learning visualization of this application.
- The indicators are introduced as follows:
- Top1 matching rate: the ratio of the number of samples in which the most important area (Figure 11C) matches between the two methods to the total number of samples. This ratio reflects how often the two methods agree completely in identifying the local area that has the most significant impact on the overall aging of the face.
- Top2 matching rate: the ratio of the number of samples in which the most important area (Figure 11C) or the second most important area (Figure 11D) matches between the two methods to the total number of samples. The higher the ratio, the stronger the consistency between the two methods.
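Under one plausible reading of these definitions (the source wording for the Top2 rule leaves room for interpretation), the two rates could be computed as follows; the per-sample rankings are assumed inputs produced by any two of the methods being compared.

```python
def matching_rates(rankings_a, rankings_b):
    """Top1/Top2 matching rates between two methods' per-sample area rankings.

    Each argument is a list of per-sample rankings, most important area first.
    Top2 here counts a sample as matched when method A's most important area
    appears among method B's two most important areas (one plausible reading
    of the definition above).
    """
    n = len(rankings_a)
    top1 = sum(a[0] == b[0] for a, b in zip(rankings_a, rankings_b)) / n
    top2 = sum(a[0] in b[:2] for a, b in zip(rankings_a, rankings_b)) / n
    return top1, top2
```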
- The deep learning visualization uses two methods, pixel derivation and area masking, and the visualization effect is evaluated according to three criteria: the perfect agreement rate, the Top1 matching rate, and the Top2 matching rate.
- The results show that, compared with the area masking method, pixel derivation obtains better results.
- The deep learning pixel derivation visualization result has the highest coincidence rate with the manual evaluation (0.18 vs. 0.13), about 38% higher than the area masking method.
- The Top1 matching rate with the manual evaluation is 0.52, and the Top2 matching rate reaches 0.89; the Top1 matching rate with the eye movement experiment is 0.61, and the Top2 matching rate is 0.85.
- The second embodiment of the present application relates to a device for determining a local area that affects the degree of facial aging. Its structure is shown in FIG. 12.
- The device for determining the local area that affects the degree of facial aging includes:
- an image acquisition module, configured to acquire a first facial image of a subject;
- an image processing module, configured to perform image processing on the first facial image, changing a predetermined number of pixels and/or a predetermined area in the first facial image, to obtain a second facial image;
- an age prediction module, configured to input the first facial image into an apparent age prediction model for human faces to obtain a first apparent age;
- an influence degree determination module, configured to determine, according to the first apparent age, the apparent age prediction model, and the second facial image, the degree of influence of the changed pixels and/or areas in the second facial image on the first apparent age.
- the first embodiment is a method embodiment corresponding to this embodiment.
- the technical details in the first embodiment can be applied to this embodiment, and the technical details in this embodiment can also be applied to the first embodiment.
- Each module shown in the above implementation of the device for determining the local area that affects the degree of facial aging can be realized by a program (executable instructions) running on a processor, or by a specific logic circuit. If the device for determining the local area that affects the degree of facial aging in the embodiments of the present application is implemented in the form of a software function module and sold or used as an independent product, it can also be stored in a computer-readable storage medium.
- In essence, or in the part that contributes to the prior art, the technical solutions of the embodiments of the present application can be embodied in the form of a software product.
- The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to execute all or part of the method in each embodiment of the present application.
- The aforementioned storage media include: USB flash drives, removable hard disks, read-only memory (ROM), magnetic disks, optical disks, and other media that can store program code. In this way, the embodiments of the present application are not limited to any specific combination of hardware and software.
- The embodiments of the present application also provide a computer-readable storage medium in which computer-executable instructions are stored; when the computer-executable instructions are executed by a processor, the method embodiments of the present application are implemented.
- Computer-readable storage media include permanent and non-permanent, removable and non-removable media, and information storage can be realized by any method or technology.
- Information can be computer-readable instructions, data structures, program modules, or other data.
- Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, CD-ROM, digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by computing devices. As defined herein, computer-readable storage media do not include transitory media such as modulated data signals and carrier waves.
- The embodiments of the present application also provide a device for determining a local area that affects the degree of facial aging, which includes a memory for storing computer-executable instructions, and a processor; the processor is configured to implement the steps in the foregoing method embodiments when executing the computer-executable instructions in the memory.
- The processor can be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), etc.
- The aforementioned memory may be a read-only memory (ROM), a random access memory (RAM), a flash memory (Flash), a hard disk, a solid-state drive, etc.
- The steps of the method disclosed in the various embodiments of the present invention may be executed and completed directly by a hardware processor, or by a combination of hardware and software modules in the processor.
- When an act is performed based on a certain element, it means that the act is performed at least based on that element; this includes two situations: performing the act based only on that element, and performing the act based on that element together with other elements. Expressions such as "multiple" include two and more than two.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- Evolutionary Computation (AREA)
- Life Sciences & Earth Sciences (AREA)
- Artificial Intelligence (AREA)
- General Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- Computing Systems (AREA)
- Software Systems (AREA)
- Molecular Biology (AREA)
- Computational Linguistics (AREA)
- Biophysics (AREA)
- Biomedical Technology (AREA)
- Mathematical Physics (AREA)
- General Health & Medical Sciences (AREA)
- Health & Medical Sciences (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Evolutionary Biology (AREA)
- Image Analysis (AREA)
- Image Processing (AREA)
Abstract
Description
Picture ID | Perceived age | Eyes | Mouth | Forehead | Cheeks | All regions |
---|---|---|---|---|---|---|
15HanTZ0005TB1_F | To | To | To | To | To | To |
15HanTZ0010TB1_F | To | To | To | To | To | To |
15HanTZ0014TB1_F | To | To | To | To | To | To |
15HanTZ0022TB1_F | To | To | To | To | To | To |
15HanTZ0023TB1_F | To | To | To | To | To | To |
Claims (14)
- A method for determining a local area that affects the degree of facial aging, characterized in that it comprises: acquiring a first facial image of a subject; inputting the first facial image into an apparent age prediction model for human faces to obtain a first apparent age; performing image processing on the first facial image, changing a predetermined number of pixels and/or a predetermined area in the first facial image, to obtain a second facial image; and determining, according to the first apparent age, the apparent age prediction model, and the second facial image, the degree of influence of the changed pixels and/or areas in the second facial image on the first apparent age.
- The method for determining a local area that affects the degree of facial aging according to claim 1, characterized in that the image processing adopts a method selected from the following group: the pixel derivation method, the area masking method, or a combination thereof.
- The method for determining a local area that affects the degree of facial aging according to claim 1, characterized in that the predetermined number of pixels comprises all the pixels of the first facial image.
- The method for determining a local area that affects the degree of facial aging according to claim 1, characterized in that the predetermined area is selected from one or more of the following group: the eye area, the cheek area, the mouth area, and the forehead area.
- The method for determining a local area that affects the degree of facial aging according to claim 2, characterized in that the image processing adopts the pixel derivation method; the performing image processing on the first facial image, changing a predetermined number of pixels and/or a predetermined area in the first facial image, to obtain a second facial image further comprises: performing image processing on the first facial image by the pixel derivation method, adding Gaussian noise to the predetermined number of pixels, to obtain the second facial image; and the determining, according to the first apparent age, the apparent age prediction model, and the second facial image, the degree of influence of the changed pixels and/or areas in the second facial image on the first apparent age further comprises: using the apparent age prediction model to differentiate the second facial image to obtain a derivative value corresponding to each pixel of the second facial image, and calculating, based on the derivative value of each pixel, the degree of influence of the changed pixels on the first apparent age.
- The method for determining a local area that affects the degree of facial aging according to claim 5, characterized in that, after the using the apparent age prediction model to differentiate the second facial image to obtain a derivative value corresponding to each pixel of the second facial image and the calculating, based on the derivative value of each pixel, the degree of influence of the changed pixels on the first apparent age, the method further comprises: dividing the first facial image into a plurality of local areas, and separately counting the sum of the derivative values of all pixels in each local area as the weight coefficient of that local area's influence on the overall degree of facial aging; and marking each local area on the first facial image based on its influence weight coefficient, to obtain a third facial image.
- The method for determining a local area that affects the degree of facial aging according to claim 2, characterized in that the image processing adopts the area masking method; the performing image processing on the first facial image, changing a predetermined number of pixels and/or a predetermined area in the first facial image, to obtain a second facial image further comprises: performing image processing on the first facial image by the area masking method, covering the predetermined area with the pixel mean of the first facial image, to obtain the second facial image; and the determining, according to the first apparent age, the apparent age prediction model, and the second facial image, the degree of influence of the changed pixels and/or areas in the second facial image on the first apparent age further comprises: inputting the second facial image into the apparent age prediction model to obtain a second apparent age; and comparing the second apparent age with the first apparent age, and calculating, based on the comparison result, the degree of influence of the predetermined area on the apparent age of the human face.
- The method for determining a local area that affects the degree of facial aging according to claim 7, characterized in that the performing image processing on the first facial image by the area masking method, covering the predetermined area with the pixel mean of the first facial image, to obtain the second facial image further comprises: dividing the first facial image into a plurality of local areas, performing image processing on the first facial image by the area masking method, and covering each local area in turn with the pixel mean of the first facial image, to obtain a corresponding second facial image in which that local area is covered; and after the obtaining of the corresponding second facial image covering each local area, the method further comprises: separately counting the difference between the second apparent age corresponding to each local area and the first apparent age as the weight coefficient of that local area's influence on the overall degree of facial aging; and marking each local area on the first facial image based on its influence weight coefficient, to obtain a third facial image.
- The method for determining a local area that affects the degree of facial aging according to claim 8, characterized in that the dividing the first facial image into a plurality of local areas and covering each local area in turn with the pixel mean of the first facial image to obtain a corresponding second facial image covering each local area further comprises: dividing the first facial image into four local areas, namely the eye area, the cheek area, the mouth area, and the forehead area, performing image processing on the first facial image by the area masking method, and covering each local area in turn with the pixel mean of the first facial image, to obtain a corresponding second facial image covering each local area.
- The method for determining a local area that affects the degree of facial aging according to any one of claims 1-9, characterized in that the apparent age prediction model is obtained by a method comprising the following steps: using perception experiments to quantify the age distribution, age mean, or age median of facial sample images as deep learning training labels, to obtain a training sample set; and training a convolutional neural network model with the training sample set to obtain the apparent age prediction model.
- The method for determining a local area that affects the degree of facial aging according to claim 10, characterized in that the convolutional neural network model is a ResNet18 model.
- A device for determining a local area that affects the degree of facial aging, characterized in that it comprises: an image acquisition module, configured to acquire a first facial image of a subject; an image processing module, configured to perform image processing on the first facial image, changing a predetermined number of pixels and/or a predetermined area in the first facial image, to obtain a second facial image; an age prediction module, configured to input the first facial image into an apparent age prediction model for human faces to obtain a first apparent age; and an influence degree determination module, configured to determine, according to the first apparent age, the apparent age prediction model, and the second facial image, the degree of influence of the changed pixels and/or areas in the second facial image on the first apparent age.
- A device for determining a local area that affects the degree of facial aging, characterized in that it comprises: a memory for storing computer-executable instructions; and a processor, configured to implement the steps in the method according to any one of claims 1 to 11 when executing the computer-executable instructions.
- A computer-readable storage medium, characterized in that the computer-readable storage medium stores computer-executable instructions, and when the computer-executable instructions are executed by a processor, the steps in the method according to any one of claims 1 to 11 are implemented.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010505109.6A CN113761985A (en) | 2020-06-05 | 2020-06-05 | Method and apparatus for determining local regions affecting the degree of facial aging |
CN202010505109.6 | 2020-06-05 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2021244352A1 true WO2021244352A1 (en) | 2021-12-09 |
Family
ID=78784947
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2021/095753 WO2021244352A1 (en) | 2020-06-05 | 2021-05-25 | Method and apparatus for determining local area that affects degree of facial aging |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN113761985A (en) |
WO (1) | WO2021244352A1 (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1870047A (en) * | 2006-06-15 | 2006-11-29 | 西安交通大学 | Human face image age changing method based on average face and senile proportional image |
CN107315987A (en) * | 2016-04-27 | 2017-11-03 | 伽蓝(集团)股份有限公司 | Assess facial apparent age, the method and its application of facial aging degree |
US20170351905A1 (en) * | 2016-06-06 | 2017-12-07 | Samsung Electronics Co., Ltd. | Learning model for salient facial region detection |
CN108140110A (en) * | 2015-09-22 | 2018-06-08 | 韩国科学技术研究院 | Age conversion method based on face's each position age and environmental factor, for performing the storage medium of this method and device |
CN110709856A (en) * | 2017-05-31 | 2020-01-17 | 宝洁公司 | System and method for determining apparent skin age |
-
2020
- 2020-06-05 CN CN202010505109.6A patent/CN113761985A/en active Pending
-
2021
- 2021-05-25 WO PCT/CN2021/095753 patent/WO2021244352A1/en active Application Filing
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1870047A (en) * | 2006-06-15 | 2006-11-29 | 西安交通大学 | Human face image age changing method based on average face and senile proportional image |
CN108140110A (en) * | 2015-09-22 | 2018-06-08 | 韩国科学技术研究院 | Age conversion method based on face's each position age and environmental factor, for performing the storage medium of this method and device |
CN107315987A (en) * | 2016-04-27 | 2017-11-03 | 伽蓝(集团)股份有限公司 | Assess facial apparent age, the method and its application of facial aging degree |
US20170351905A1 (en) * | 2016-06-06 | 2017-12-07 | Samsung Electronics Co., Ltd. | Learning model for salient facial region detection |
CN110709856A (en) * | 2017-05-31 | 2020-01-17 | 宝洁公司 | System and method for determining apparent skin age |
Also Published As
Publication number | Publication date |
---|---|
CN113761985A (en) | 2021-12-07 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Wang et al. | Human visual system-based fundus image quality assessment of portable fundus camera photographs | |
Elangovan et al. | Glaucoma assessment from color fundus images using convolutional neural network | |
US20190191988A1 (en) | Screening method for automated detection of vision-degenerative diseases from color fundus images | |
RU2648836C2 (en) | Systems, methods and computer-readable media for identifying when a subject is likely to be affected by medical condition | |
CN102567734B (en) | Specific value based retina thin blood vessel segmentation method | |
WO2021068781A1 (en) | Fatigue state identification method, apparatus and device | |
WO2021190656A1 (en) | Method and apparatus for localizing center of macula in fundus image, server, and storage medium | |
Hatamizadeh et al. | Deep dilated convolutional nets for the automatic segmentation of retinal vessels | |
CN113782184A (en) | Cerebral apoplexy auxiliary evaluation system based on facial key point and feature pre-learning | |
CN114694236A (en) | Eyeball motion segmentation positioning method based on cyclic residual convolution neural network | |
Li et al. | BrainK for structural image processing: creating electrical models of the human head | |
Feng et al. | Using eye aspect ratio to enhance fast and objective assessment of facial paralysis | |
Jiang et al. | Improving the generalizability of infantile cataracts detection via deep learning-based lens partition strategy and multicenter datasets | |
Maillard et al. | A deep residual learning implementation of metamorphosis | |
Muramatsu | Diagnosis of glaucoma on retinal fundus images using deep learning: detection of nerve fiber layer defect and optic disc analysis | |
Tsietso et al. | Multi-Input deep learning approach for breast cancer screening using thermal infrared imaging and clinical data | |
Wan et al. | A novel system for measuring pterygium's progress using deep learning | |
Vamsi et al. | Early Detection of Hemorrhagic Stroke Using a Lightweight Deep Learning Neural Network Model. | |
Joshi et al. | Graph deep network for optic disc and optic cup segmentation for glaucoma disease using retinal imaging | |
CN106446805A (en) | Segmentation method and system for optic cup in eye ground photo | |
Zhang et al. | Critical element prediction of tracheal intubation difficulty: Automatic Mallampati classification by jointly using handcrafted and attention-based deep features | |
Trotta et al. | A neural network-based software to recognise blepharospasm symptoms and to measure eye closure time | |
WO2021244352A1 (en) | Method and apparatus for determining local area that affects degree of facial aging | |
Goceri et al. | Automated Detection of Facial Disorders (ADFD): a novel approach based-on digital photographs | |
Carrasco Limeros et al. | Assessing GAN-Based Generative Modeling on Skin Lesions Images |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 21818666 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 21818666 Country of ref document: EP Kind code of ref document: A1 |
|
32PN | Ep: public notification in the ep bulletin as address of the adressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 15.06.2023) |
|