CN114612994A - Method and device for training wrinkle detection model and method and device for detecting wrinkles - Google Patents

Method and device for training wrinkle detection model and method and device for detecting wrinkles

Info

Publication number
CN114612994A
Authority
CN
China
Prior art keywords
wrinkle
face
detection model
region
pictures
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210290606.8A
Other languages
Chinese (zh)
Inventor
Wang Bo (王博)
Zeng Jinlong (曾金龙)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Bode Ruijie Health Technology Co ltd
Original Assignee
Shenzhen Bode Ruijie Health Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Bode Ruijie Health Technology Co ltd filed Critical Shenzhen Bode Ruijie Health Technology Co ltd
Priority to CN202210290606.8A priority Critical patent/CN114612994A/en
Publication of CN114612994A publication Critical patent/CN114612994A/en
Pending legal-status Critical Current

Classifications

    • G06F 18/214 — G PHYSICS; G06 COMPUTING, CALCULATING OR COUNTING; G06F ELECTRIC DIGITAL DATA PROCESSING; Pattern recognition; Analysing; Design or setup of recognition systems or techniques; Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06F 18/2431 — G PHYSICS; G06 COMPUTING, CALCULATING OR COUNTING; G06F ELECTRIC DIGITAL DATA PROCESSING; Pattern recognition; Analysing; Classification techniques relating to the number of classes; Multiple classes

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a method and device for training a wrinkle detection model, and a method and device for detecting wrinkles. The training method comprises: dividing each sample picture in a data set used for training the wrinkle detection model into a plurality of regions based on different wrinkle types, each region corresponding to a different wrinkle type, to obtain a plurality of segmented pictures corresponding to those regions; and training the wrinkle detection model for a given region using the segmented pictures corresponding to that region. The invention solves the technical problem that wrinkle detection models trained in the related art cannot detect wrinkles accurately.

Description

Method and device for training wrinkle detection model and method and device for detecting wrinkles
Technical Field
The invention relates to the field of image processing, in particular to a method and a device for training a wrinkle detection model and a method and a device for detecting wrinkles.
Background
With the development of big data, deep learning has gained increasing acceptance across fields, and image processing has shifted from conventional techniques to deep-learning-based methods.
At present, two schemes are generally adopted for wrinkle detection, one is a traditional image processing method, and the other is to train a wrinkle detection model directly through a target detection algorithm in deep learning.
The traditional approach applies classical image-processing techniques to detect and statistically analyze wrinkles. Its advantages are fast computation and high accuracy under ideal conditions. Its shortcomings, however, are equally clear: it adapts poorly to complex environments and is easily disturbed. When illumination changes, the pose changes, or glasses or hair are present, traditional image-processing methods cannot handle these situations and produce detection errors.
To address the problems of conventional image processing, wrinkle detection methods based on deep learning have been proposed in the related art. However, these existing methods train wrinkle detection models directly with general-purpose target detection algorithms. Such algorithms are better suited to detecting independent, discrete objects in everyday scenes; for small objects such as wrinkles, their accuracy is insufficient and their detection speed is slow.
In view of the above problems, no effective solution has been proposed.
Disclosure of Invention
The embodiments of the invention provide a method and device for training a wrinkle detection model, and a method and device for detecting wrinkles, to at least solve the technical problem that wrinkle detection models trained in the related art cannot detect wrinkles accurately.
According to one aspect of an embodiment of the present invention, there is provided a method for training a wrinkle detection model, including: dividing each sample picture in a data set used for training the wrinkle detection model into a plurality of regions based on different wrinkle types, each region corresponding to a different wrinkle type, to obtain a plurality of segmented pictures corresponding to those regions; and training the wrinkle detection model for a given region using the segmented pictures corresponding to that region.
According to another aspect of the embodiments of the present invention, there is also provided a method of detecting wrinkles, including: performing face detection on a picture to be detected to obtain the face region in the picture; locating face key points in the face region to obtain a plurality of face key points; segmenting the face region into a plurality of regions based on the face key points, where each region corresponds to a wrinkle type; and detecting the wrinkles of each region with the wrinkle detection model corresponding to that region.
According to still another aspect of an embodiment of the present invention, there is provided a training apparatus for a wrinkle detection model, including: a segmentation module configured to segment each sample picture in the data set used for training the wrinkle detection model into a plurality of regions based on different wrinkle types, obtaining a plurality of segmented pictures corresponding to the plurality of regions; and a training module configured to train the wrinkle detection model corresponding to a given region using the segmented pictures corresponding to that region.
According to still another aspect of an embodiment of the present invention, there is provided an apparatus for detecting wrinkles, including: a face detection module configured to perform face detection on a picture to be detected and obtain the face region in the picture; a key point positioning module configured to locate face key points in the face region and obtain a plurality of face key points; a region segmentation module configured to segment the face region into a plurality of regions based on the face key points, where each region corresponds to a wrinkle type; and a wrinkle detection module configured to detect the wrinkles of each region with the wrinkle detection model corresponding to that region.
In the embodiments of the invention, each sample picture in the data set used for training the wrinkle detection model is divided into a plurality of regions based on different wrinkle types, the regions respectively corresponding to different wrinkle types, and the wrinkle detection model for a given region is trained using the segmented pictures corresponding to that region. These technical means solve the technical problem that wrinkle detection models trained in the related art cannot detect wrinkles accurately, achieving the technical effect of improved wrinkle detection accuracy.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the invention without limiting the invention. In the drawings:
Fig. 1 is a flowchart of a training method of a wrinkle detection model according to a first embodiment of the present invention;
Fig. 2 is a flowchart of a training method of a wrinkle detection model according to a second embodiment of the present invention;
Fig. 3 is a flowchart of a training method of a wrinkle detection model according to a third embodiment of the present invention;
Fig. 3A is a diagram illustrating an example of grading wrinkle severity according to an embodiment of the present invention;
Fig. 4 is a flowchart of a method of detecting wrinkles according to a fourth embodiment of the present invention;
Fig. 5 is a schematic diagram of segmentation of a face region according to a fifth embodiment of the present invention;
Fig. 6 is a flowchart of a method of detecting wrinkles according to a sixth embodiment of the present invention;
Fig. 7 is a flowchart of a sample picture processing method according to a seventh embodiment of the present invention;
Fig. 8 is a schematic structural diagram of a training apparatus for a wrinkle detection model according to an eighth embodiment of the present invention;
Fig. 9 is a schematic structural diagram of an apparatus for detecting wrinkles according to a ninth embodiment of the present invention;
Fig. 10 is a schematic diagram of a user interface of an electronic device according to a tenth embodiment of the invention.
Detailed Description
In order to make those skilled in the art better understand the technical solutions of the present invention, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
SUMMARY
The wrinkle detection model training method and wrinkle detection method provided by the embodiments of the application replace a conventional target detection model with per-region classification models. The face is partitioned according to where each wrinkle type usually appears: forehead lines, crow's feet, glabellar (frown) lines, under-eye fine lines, nasolabial folds, and so on. A classification model is then trained for the wrinkles of each region, the core of the training being the inclusion of anti-interference samples and severity grading, yielding a dedicated wrinkle detection model for each region's wrinkle type.
After each wrinkle detection model is trained, an image of a face to be detected, such as a human face image, is input, and the wrinkles in each region are classified and detected directly with the wrinkle detection model corresponding to that region, completing wrinkle detection across every region of the face.
Compared with traditional image-processing schemes, the scheme provided by the embodiments of the application has better anti-interference capability: it adapts to illumination changes and face pose changes, and achieves good detection results even under occlusion and interference from hair, glasses, and the like.
Compared with existing deep-learning schemes, it offers more accurate recognition and faster processing.
Example 1
According to an embodiment of the present invention, there is provided a method for training a wrinkle detection model, as shown in fig. 1, the method including:
step S102, based on different wrinkle types, dividing each sample picture in the data set for training the wrinkle detection model into a plurality of regions, and obtaining a plurality of divided pictures corresponding to the plurality of regions, where the plurality of regions respectively correspond to different wrinkle types.
In an exemplary embodiment, the data set includes sample pictures classified based on wrinkle severity and/or sample pictures with interference factors. For example, the data set includes at least one of the following types of sample pictures with interference factors: sample pictures of different brightness; sample pictures with different postures and angle turning directions; masked sample pictures.
In an exemplary embodiment, a picture that already carries an interference factor may be selected directly as a sample picture. In other embodiments, an interference factor may be added to a sample picture to generate one. For example, the brightness of sample pictures in the data set can be adjusted by different proportions to obtain sample pictures of different brightness; sample pictures can be scaled and rotated by different factors and angles to obtain sample pictures with different poses and orientations; or occluded pictures, in which the face is partly covered by hair or glasses, can be selected as sample pictures.
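A minimal numpy sketch of the brightness and pose augmentations described above; the function names are illustrative, not from the patent, and a real pipeline would typically use OpenCV or PIL for arbitrary-angle rotation and scaling.

```python
import numpy as np

def adjust_brightness(img, factor):
    """Scale pixel brightness by `factor`, clipping to the valid 8-bit range."""
    out = img.astype(np.float32) * factor
    return np.clip(out, 0, 255).astype(np.uint8)

def simulate_pose(img, flip=True, crop=2):
    """Crude pose proxy: horizontal flip plus a small off-center crop."""
    out = img[:, ::-1] if flip else img
    h, w = out.shape[:2]
    return out[crop:h - crop, crop:w - crop]

rng = np.random.default_rng(0)
face = rng.integers(0, 256, size=(32, 32), dtype=np.uint8)
bright = adjust_brightness(face, 1.3)   # simulated stronger lighting
dark = adjust_brightness(face, 0.6)     # simulated dim lighting
posed = simulate_pose(face)
```

Each original sample thus yields several interference variants, which are added to the data set alongside the original.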
Then, in an exemplary embodiment, the sample picture may be subjected to face detection to obtain a face region where a face is located in the sample picture; performing face key point detection on the face region to obtain face key points in the face region; segmenting the facial region in the sample picture into the plurality of regions based on the facial keypoints and the different wrinkle types.
The plurality of regions may include at least one of: the forehead-line region, the glabellar-line region, the left crow's-feet region, the right crow's-feet region, the left under-eye fine-line region, the right under-eye fine-line region, the left nasolabial-fold region, and the right nasolabial-fold region.
Step S104, using a plurality of the segmented pictures corresponding to a corresponding area of the plurality of areas to train the wrinkle detection model corresponding to the corresponding area.
In one exemplary embodiment, the segmented pictures corresponding to each region are graded by wrinkle severity according to the wrinkle type of that region, and the wrinkle detection model for a region is then trained using the graded segmented pictures of that region.
In this embodiment, each sample picture is segmented by wrinkle type into a plurality of segmented pictures, and the wrinkle detection model for each region is trained on the segmented pictures of that region. This solves the technical problem that wrinkle detection models trained in the related art cannot detect wrinkles accurately, and provides a wrinkle detection model that achieves the technical effect of accurate wrinkle detection.
Example 2
According to an embodiment of the present invention, there is provided a flowchart of a training method of a wrinkle detection model, as shown in fig. 2, the method including:
step S202, a sample picture is obtained, and the sample picture is processed.
The sample pictures used for training the wrinkle detection model may contain the entire face region, or only a partial face region, for example only the forehead-line region or only the nasolabial-fold region.
If a sample picture contains only a partial face region, for example only the forehead-line region, it may be used directly to train the wrinkle detection model for that region.
If a sample picture contains the entire face region, it must first be segmented. For example: acquire the sample pictures used for training the wrinkle detection model; perform face detection on each sample picture to obtain its face region; perform face key-point detection on the face region to obtain the face key points within it; and segment the face region into the plurality of regions based on the face key points and the different wrinkle types. The pictures of each segmented region are then used to train the wrinkle detection model of the corresponding region.
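A sketch of the key-point-driven region cropping just described. The landmark coordinates, region-to-anchor mapping, and patch size are all invented for illustration; a real system would obtain landmarks from a face key-point detector (e.g. a 68-point landmark model).

```python
import numpy as np

# Hypothetical landmark positions (x, y); a real system would obtain these
# from a face key-point detector.
keypoints = {
    "left_eye_outer": (10, 14), "right_eye_outer": (22, 14),
    "brow_top": (16, 8), "nose_tip": (16, 20), "mouth_left": (12, 25),
}

def crop_region(img, center, half=4):
    """Crop a square patch around `center`, clamped to the image bounds."""
    x, y = center
    h, w = img.shape[:2]
    y0, y1 = max(0, y - half), min(h, y + half)
    x0, x1 = max(0, x - half), min(w, x + half)
    return img[y0:y1, x0:x1]

# Map each wrinkle region to an anchoring key point (illustrative only).
region_anchor = {
    "forehead_lines": "brow_top",
    "left_crows_feet": "left_eye_outer",
    "right_crows_feet": "right_eye_outer",
    "nasolabial_fold_left": "mouth_left",
}

face = np.zeros((32, 32), dtype=np.uint8)
patches = {name: crop_region(face, keypoints[kp])
           for name, kp in region_anchor.items()}
```

Each cropped patch then becomes a training sample for the model of its region.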
In an exemplary embodiment, the plurality of sample pictures includes sample pictures classified based on wrinkle severity and/or sample pictures with interference factors.
In an exemplary embodiment, the following process may also be performed on the sample picture.
For example, each sample picture is graded by wrinkle severity based on the severity of the wrinkles it contains.
For example, pictures carrying interference factors are selected as sample pictures, or interference factors are added to existing sample pictures to produce sample pictures with interference. The interference factors include at least one of: brightness, pose, orientation, and occlusion.
Step S204, training a plurality of wrinkle detection models respectively using the plurality of processed sample pictures.
A wrinkle detection model corresponding to each region is trained using the processed sample pictures of that region.
This embodiment segments the sample pictures by wrinkle type, grades them, and/or adds interference factors, then trains the wrinkle detection model of each region on the processed samples. This solves the technical problem that wrinkle detection models trained in the related art cannot detect wrinkles accurately, and provides a wrinkle detection model with the technical effect of accurate wrinkle detection.
Example 3
According to an embodiment of the present invention, there is provided a flowchart of a method for training a wrinkle detection model, as shown in fig. 3, the method including:
step S302, constructing different wrinkle detection models in different areas.
In existing deep-learning wrinkle detection, all wrinkle types are trained together as a single "wrinkle" class, which leaves the trained model with poor detection accuracy.
The embodiments of the application construct wrinkle detection models separately for different types of wrinkles. In this embodiment, the wrinkle types include forehead lines, glabellar lines, left crow's feet, right crow's feet, left under-eye fine lines, right under-eye fine lines, the left nasolabial fold, and the right nasolabial fold.
Based on these different wrinkle types, a target region containing them, such as the face region of a human face, is divided into a plurality of regions: one region for each of the wrinkle types listed above.
A wrinkle detection model is then constructed for each of these regions, giving one dedicated model per wrinkle type: a forehead-line model, a glabellar-line model, left and right crow's-feet models, left and right under-eye fine-line models, and left and right nasolabial-fold models.
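A minimal sketch of this per-region model registry. `DummyModel` and its thresholds are stand-ins for the trained classifiers described above; only the one-model-per-region routing is the point.

```python
class DummyModel:
    """Stand-in for a trained per-region wrinkle classifier."""
    def __init__(self, region):
        self.region = region
    def predict(self, patch_mean):
        # Pretend severity rises with the mean intensity of the patch.
        return 0 if patch_mean < 64 else (1 if patch_mean < 128 else 2)

REGIONS = [
    "forehead_lines", "glabellar_lines",
    "left_crows_feet", "right_crows_feet",
    "left_under_eye", "right_under_eye",
    "left_nasolabial_fold", "right_nasolabial_fold",
]
models = {r: DummyModel(r) for r in REGIONS}

# Route each region's patch statistic to its dedicated model.
result = {r: models[r].predict(100) for r in REGIONS}
```

Because each model only ever sees one wrinkle type, its decision problem is narrower than a general detector's, which is the source of the accuracy and sample-efficiency advantages claimed below.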
In this embodiment, the method for constructing different wrinkle detection models by using a partition method has the following two advantages:
1) More accurate: because the face is divided into regions, the wrinkle type to be detected in each region is known, and each single model is trained only with samples of its own wrinkle type. A dedicated model for a given wrinkle type is therefore more accurate than a general-purpose wrinkle detection model;
2) Fewer training samples required: since each wrinkle detection model targets a specific wrinkle type, the number of samples it needs can be far smaller than for a general model, which reduces the training effort.
Step S304, training the wrinkle detection models with severity grading.
Most existing wrinkle detection methods only detect wrinkles; a minority compute severity after detection by comparing wrinkle length or area. In practice, severity measured by length or area alone does not necessarily match the severity a person perceives visually.
The present embodiment employs a mechanism for grading the severity of a sample, and then trains the wrinkle detection model.
In one exemplary embodiment, wrinkle severity may be divided into four levels, as shown in Fig. 3A: no wrinkles, mild wrinkles, moderately severe wrinkles, and severe wrinkles. No wrinkles means tight skin without wrinkles; mild wrinkles are faintly visible; moderately severe wrinkles are clearly visible; severe wrinkles have appreciable depth.
In another embodiment, wrinkle severity can be classified into six grades according to wrinkle depth: grade 0, fine, tight skin with no wrinkles; grade 1, invisible lines, with no readily distinguishable wrinkles formed; grade 2, just-discernible wrinkles, like faint creases; grade 3, distinguishable light wrinkles, similar to light creases; grade 4, clearly visible wrinkles with some depth and sharp boundaries; grade 5, deep wrinkles whose depth exceeds a depth threshold.
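The six-grade scale can be expressed as an enumeration. The depth thresholds in `grade_from_depth` are invented for illustration; the patent only states that grade 5 corresponds to exceeding a depth threshold.

```python
from enum import IntEnum

class WrinkleGrade(IntEnum):
    """Six-level severity scale paraphrased from the embodiment above."""
    NONE = 0          # fine, tight skin, no wrinkles
    INVISIBLE = 1     # no readily distinguishable wrinkles formed
    DISCERNIBLE = 2   # just-discernible faint creases
    LIGHT = 3         # distinguishable light wrinkles
    VISIBLE = 4       # clearly visible, some depth, sharp boundaries
    DEEP = 5          # depth exceeds a threshold

def grade_from_depth(depth_mm, thresholds=(0.01, 0.05, 0.1, 0.3, 0.6)):
    """Map a (hypothetical) wrinkle depth in mm to a grade.
    The threshold values are assumptions, not from the patent."""
    g = 0
    for t in thresholds:
        if depth_mm > t:
            g += 1
    return WrinkleGrade(g)
```

Using an integer-valued enum keeps the grades directly usable as classification labels during training.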
In one exemplary embodiment, the severity ranking process of the sample includes the steps of:
step S3040, computing a preliminary grade from the wrinkle area and the number of furrows;
Wrinkles are detected as in the preceding steps and instance segmentation is performed; the wrinkle area is computed, edge detection is applied to each wrinkle instance with a filtering algorithm, and the number of wrinkle furrows is counted. A preliminary severity grade is established from the area and the furrow count; user feedback on the grade is then incorporated to further train and optimize the severity grading.
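A sketch of the area-plus-furrow grading and the feedback refinement just described. The weighting, cut-offs, and averaging rule are all illustrative assumptions; the patent does not specify them.

```python
def preliminary_grade(wrinkle_area_ratio, furrow_count):
    """Combine wrinkle-area ratio and furrow count into a preliminary grade
    on the four-level scale. Weights and cut-offs are invented."""
    score = 10.0 * wrinkle_area_ratio + 0.5 * furrow_count
    if score < 0.5:
        return 0   # no wrinkles
    if score < 1.5:
        return 1   # mild
    if score < 3.0:
        return 2   # moderately severe
    return 3       # severe

def refine_with_feedback(grade, user_grade):
    """Nudge the computed grade toward the user's feedback (cf. step S3044).
    Simple averaging is an assumption; the patent trains a model instead."""
    return round((grade + user_grade) / 2)
```

In the patent's loop, the computed grades and user-fed-back grades together form the supervision for the end-to-end severity classifier trained in step S3046.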
For example, the edge detection may be performed on the wrinkle example by the following method.
First, the picture containing the wrinkle instance is filtered: mean filtering is applied first; the mean-filtered picture is then passed through a recursive filter; nonlinear filtering with a sigma (standard-deviation) filter follows; the picture is then filtered with a discrete Gaussian function; and finally median filtering is applied.
Next, edge detection is performed. For example, the Frei-Chen, Kirsch, Prewitt, or Sobel operator can be used to detect edges, after which non-maximum points along the edges are suppressed.
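A self-contained numpy sketch of two stages of this pipeline, mean filtering followed by Sobel gradient magnitude; the recursive, sigma, Gaussian, and median stages and the non-maximum suppression are omitted for brevity.

```python
import numpy as np

def conv2d(img, k):
    """Naive valid-mode 2-D convolution for small kernels."""
    kh, kw = k.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * k)
    return out

def sobel_edges(img):
    """Mean filter, then Sobel gradient magnitude."""
    img = img.astype(np.float64)
    smooth = conv2d(img, np.full((3, 3), 1.0 / 9.0))   # mean filtering
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    gx, gy = conv2d(smooth, kx), conv2d(smooth, ky)
    return np.hypot(gx, gy)

img = np.zeros((10, 10))
img[:, 5:] = 255.0          # synthetic vertical step edge
mag = sobel_edges(img)      # strong response near the step, zero far from it
```

Production code would normally use optimized library routines rather than the explicit loops shown here.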
Finally, image enhancement is performed. For example, a contrast-stretching operator brightens the dark portions of the image considerably and slightly darkens the bright portions. Low-pass filtering is then applied with a low-pass filter operator.
For example, image enhancement can be performed by the following formula:
H=round((val-H’)*α+H”)
where H is the gray value of the enhanced picture, val is a bitmap gray value, H′ is the gray value of the filtered picture, α is the contrast coefficient, which may be 0.8, and H″ is the gray value of the original image.
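The enhancement formula can be applied per pixel as below. Treating `val` as the maximum gray value (255 for an 8-bit image) is an assumption, since the text only calls it a bitmap value; the clipping step is also added here for safety.

```python
import numpy as np

def enhance(filtered, original, val=255, alpha=0.8):
    """H = round((val - H') * alpha + H'') from the formula above.
    `val` is taken as the maximum gray value (an assumption)."""
    h = np.round((val - filtered.astype(np.float64)) * alpha
                 + original.astype(np.float64))
    return np.clip(h, 0, 255).astype(np.uint8)

filtered = np.array([[200, 100], [50, 0]], dtype=np.uint8)   # H'
original = np.array([[10, 10], [10, 10]], dtype=np.uint8)    # H''
out = enhance(filtered, original)
```

With this reading, pixels that the filter left dark receive a large positive boost, matching the stated goal of making dark portions bright.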
By the method, the wrinkle edge is detected, the image is clearer, the quality of the edge can be improved, and the accuracy of the detected wrinkle is higher.
Step S3042, pushing the result of the calculation ranking to the user;
step S3044, the user feeds back the severity level in a grading way;
step S3046, carrying out classification training of severity classification on the calculation classification and the classification input fed back by the user to the end-to-end deep learning model;
s3048, gradually adopting the severity classification model trained in the fourth step to replace calculation classification, recommending the calculation classification to a user, and continuously circulating the steps S3046 and S3048;
the specific steps of calculating the grade through the wrinkle area and the number of ravines mentioned in the above steps are as follows:
in another exemplary embodiment, the severity rating of the sample may be automatically judged by a wrinkle severity rating method.
And step S306, importing the anti-interference sample into a training wrinkle detection model.
Unlike wrinkle detection methods based on traditional image-processing algorithms, this embodiment introduces the corresponding interference factors when constructing the sample pictures of the data set, so that the trained wrinkle detection model adapts better.
The interference training specifically includes:
1) Illumination changes: train with sample pictures of different brightness to adapt to different lighting environments. Such interference samples may be collected manually or generated by adjusting the brightness of existing samples. This embodiment specifically simulates the lighting conditions of typical living scenes, as follows:
A. Simulating illumination changes over time: photograph a typical scene once every hour, compute the change in average brightness across the period, and apply that change to other pictures of the same scene;
B. Simulating different light sources: in a home environment, illumination is affected not only by changes in natural light but also by artificial light sources. Select common household lamps, produce the lighting effects of different lamps, measure properties such as their color temperature and brightness, and transfer those properties to other pictures;
2) Pose changes: different face poses affect detection, so face pictures with different poses and orientations can be collected, and interference samples can also be generated by scaling and rotating existing samples;
3) Occlusion interference: typical occluders such as glasses are real-world interference factors, so this embodiment includes face samples wearing glasses among the interference samples. Since hair can affect both forehead lines and glabellar lines, face samples with bangs are also included.
In general, the typical interference samples cover three factors: bangs, glasses, and head pose. By preparing comprehensive samples of these factors in advance, the wrinkle detection model learns the interference factors, handles them well when they appear, and maintains its classification accuracy.
Example 4
According to an embodiment of the present invention, there is provided a flowchart of a method of detecting wrinkles, as shown in fig. 4, the method including:
in step S402, the face region is divided into a plurality of regions according to the face key points.
Since wrinkles are distributed regularly on a face, with the wrinkles in different regions having their own names and relatively fixed locations, different regions corresponding to different wrinkle types can be cut out of the face. This association between region segmentation and wrinkle type is the key reason the embodiment of the present application is faster than conventional deep-learning-based object detection algorithms.
Step S404, training anti-interference wrinkle detection models with per-region severity grading.
For the sample pictures corresponding to each segmented region, a wrinkle detection model for the corresponding wrinkle type is trained. The sample pictures are further classified by wrinkle severity, interference factors are added to them, and training is then performed, yielding a series of anti-interference wrinkle detection models with per-region severity grading.
The method for training each wrinkle detection model has been described in detail in the embodiments of training wrinkle detection models of the present application, and is not described here again.
And step S406, rapidly detecting the wrinkles of the human face by using the wrinkle detection model.
Carrying out face detection on a picture to be detected to obtain a face area in the picture to be detected; carrying out face key point positioning on the face area to obtain a plurality of face key points; and performing region segmentation on the face region based on the plurality of face key points to obtain a plurality of regions, wherein each region in the plurality of regions corresponds to one wrinkle type. Here, the method of segmenting the region of the picture to be detected is similar to the method of segmenting the sample picture when the wrinkle detection model is trained, and is based on the region segmentation of the face key point.
Then each region is detected directly with its corresponding wrinkle detection model, and the wrinkle severity of each region is output.
The specific region segmentation method will be described in detail below, and will not be described herein again.
Example 5
According to an embodiment of the present invention, a region segmentation method is provided.
Wrinkles are a manifestation of skin aging. On the face they generally appear on the forehead, around the eyes, around the nose, around the mouth, and so on. Wrinkles in different areas usually carry special names; for example, the wrinkles on the forehead are called raised lines, and the wrinkles between the eyebrows are the '川'-shaped (glabellar) lines.
In the present application, the face is segmented into regions corresponding to the different wrinkle types. Before segmentation, the face region is first located by face detection, and the face key points are then obtained with a face key point detection algorithm. This embodiment adopts a 106-key-point detection method; the 106-point model is commonly used in current face key point detection.
After the key points are detected, the region segmentation is realized by referring to the key points, and the specific segmentation method is shown in table 1:
[Table 1 appears only as images in the original publication (Figures BDA0003561685150000131 through BDA0003561685150000171); for each wrinkle region it lists the polygon vertices expressed in terms of the key-point coordinate array p described below.]
where p denotes the array of 106 key-point coordinates; for example, p[1] denotes the first key point and p[105] the last, p[1][0] denotes the x-coordinate of the first key point, and p[1][1] its y-coordinate.
In a specific implementation, polygon segmentation is used: the vertex coordinates of each polygon are provided, and each region is finally represented by the sequence of its polygon's vertex coordinates.
The segmentation described in this embodiment is an experimental implementation that has already achieved very good results; the implementation parameters may be slightly adjusted on the basis of the segmentation rules. The rules are merely an example, and any other variation based on the concept of the present application falls within its scope.
An example of the visualized segmentation obtained by the above method is shown in fig. 5. The face region can be segmented into the following regions: a raised line region 51 where the raised lines are located, a glabellar ('川'-shaped) line region 52 where the glabellar lines are located, a left fishtail line region 54 where the left eye's fishtail lines (crow's feet) are located, a right fishtail line region 53 where the right eye's fishtail lines are located, a left under-eye fine line region 56 where the left under-eye fine lines are located, a right under-eye fine line region 55 where the right under-eye fine lines are located, a left stature line region 58 where the left-face stature line (nasolabial fold) is located, and a right stature line region 57 where the right-face stature line is located.
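As a minimal sketch of cutting one region out of the face image, the bounding box of a region's polygon (whose vertices would come from the key-point array p) can be cropped as follows. A full implementation would additionally mask out the pixels outside the polygon itself (e.g. with OpenCV's fillPoly); the vertex values here are illustrative only:

```python
import numpy as np

def crop_region_bbox(image, vertices):
    """Crop the axis-aligned bounding box of a wrinkle region's polygon.
    `vertices` is a sequence of (x, y) points taken from the key points."""
    pts = np.asarray(vertices, dtype=np.int64)
    x0, y0 = pts.min(axis=0)    # top-left corner of the bounding box
    x1, y1 = pts.max(axis=0)    # bottom-right corner
    return image[y0:y1 + 1, x0:x1 + 1]

# toy 10x10 "image" and a triangular region
img = np.arange(100, dtype=np.uint8).reshape(10, 10)
region = crop_region_bbox(img, [(2, 3), (7, 3), (4, 8)])
```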
Example 6
According to an embodiment of the present invention, another method of detecting wrinkles is provided. In this embodiment, detecting wrinkles on a human face is taken as an example; in other embodiments, wrinkles on other parts of the human body, such as neck lines, or wrinkles on an animal's face may likewise be detected.
As shown in fig. 6, the method for detecting wrinkles in the present embodiment includes:
step S602, acquiring a picture to be detected.
And acquiring a picture to be detected, such as a human face image. The face image may be an image obtained by photographing with a mobile phone or the like.
Step S604, face detection.
A face detection algorithm is used to locate the region of the face in the picture to be detected; the region is then appropriately expanded and the face image is cropped out.
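The "appropriately expand, then crop" step can be sketched as below; the 20% margin is an assumed value, not one specified by the patent:

```python
def expand_box(x, y, w, h, img_w, img_h, margin=0.2):
    """Enlarge a detected face box by a relative margin and clamp it to
    the image bounds, returning the crop rectangle (x, y, w, h)."""
    dx, dy = int(w * margin), int(h * margin)
    x0, y0 = max(0, x - dx), max(0, y - dy)                # clamp top-left
    x1, y1 = min(img_w, x + w + dx), min(img_h, y + h + dy)  # clamp bottom-right
    return x0, y0, x1 - x0, y1 - y0

# a 100x100 face box near the bottom edge of a 200x120 image
box = expand_box(50, 50, 100, 100, img_w=200, img_h=120)
```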
Step S606, positioning the key points of the face.
Face keypoint localization is performed using a face keypoint localization algorithm, such as a 2D106 model.
Step S608, area division.
The region segmentation method described in the above embodiments of the present application is used to perform region segmentation on a picture to be detected, and each obtained region corresponds to a certain wrinkle type.
Step S610, inference prediction.
And calling the wrinkle detection models corresponding to the corresponding regions provided by the embodiments of the present application to perform inference prediction, so as to detect wrinkles in the corresponding regions.
And step S612, outputting the detection result.
And integrating the output of all the wrinkle detection models to obtain the final wrinkle detection result.
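Steps S602 to S612 can be summarized as the following skeleton; every callable is injected, and the stand-in lambdas exist only to make the sketch runnable, so none of the names reflect the patent's actual models:

```python
def detect_wrinkles(image, face_detector, landmark_model, segment, models):
    """Skeleton of the detection flow: face detection (S604), key-point
    localization (S606), region segmentation (S608), per-region model
    inference (S610), and merging the outputs (S612)."""
    face = face_detector(image)
    keypoints = landmark_model(face)
    regions = segment(face, keypoints)           # {wrinkle_type: crop}
    report = {}
    for wrinkle_type, crop in regions.items():   # one model per region
        report[wrinkle_type] = models[wrinkle_type](crop)
    return report

# toy stand-ins so the skeleton runs end to end
report = detect_wrinkles(
    image="raw picture",
    face_detector=lambda img: "face crop",
    landmark_model=lambda face: list(range(106)),
    segment=lambda face, kp: {"raised_line": "crop_a", "fishtail": "crop_b"},
    models={"raised_line": lambda c: "severe", "fishtail": lambda c: "none"},
)
```

Dispatching each crop to a small per-region classifier, rather than running one detector over the whole face, is the speed advantage the description attributes to the region/wrinkle-type association.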
The embodiment of the application not only detects whether wrinkles exist, but also can provide the severity of the wrinkles and the area coordinates.
Example 7
According to an embodiment of the invention, a method for processing a sample picture is provided. As shown in fig. 7, the method includes:
step S702, the sample picture is divided into areas.
Please refer to the method in embodiment 5 for the method of partitioning areas, which is not described herein again.
Step S704, a sample picture is graded.
A picture corresponding to a certain region of the region-segmented sample picture is obtained. A sliding window takes a square patch of N-by-N pixels from the region, and the gray values of the pixels in the patch are computed to obtain the patch's gray value matrix. A wrinkle-point reliability value is determined from the gray values in this matrix, and points whose reliability value exceeds a reliability threshold are taken as wrinkle points.
The reliability value can be calculated by a formula of the following form (the formula itself appears only as an image in the original publication and is therefore not reproduced here), where M denotes the reliability value, N the side length of the sliding window in pixels, i the row and j the column of a pixel in the matrix, P_ij the gray value of the pixel at (i, j), P̄ the average gray value of the window, and μ a correction value.
The sliding-window parameter N may be 5 or, more generally, a natural number less than 10. The window is slid according to a preset step, traversing the corresponding region of the sample picture, so that reliability values are obtained for all pixels of the region and the number of wrinkle points in the region is determined.
According to preset thresholds on the number of wrinkle points, the region is graded as no wrinkle, slight wrinkle, moderate wrinkle, or severe wrinkle.
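Because the patent's reliability formula survives only as an image, the sketch below substitutes one plausible reading (mean absolute deviation of the window's gray values plus a correction μ); the window size, threshold, and grading cut-offs are likewise illustrative assumptions:

```python
import numpy as np

def reliability(window, mu=0.0):
    """Assumed stand-in for the patent's formula: mean absolute deviation
    of the window's gray values from the window mean, plus correction mu."""
    w = window.astype(np.float64)
    return float(np.abs(w - w.mean()).mean()) + mu

def count_wrinkle_points(gray, n=5, threshold=10.0):
    """Slide an n-by-n window over the region and count positions whose
    reliability exceeds the threshold (treated as wrinkle points)."""
    rows, cols = gray.shape
    return sum(
        1
        for i in range(rows - n + 1)
        for j in range(cols - n + 1)
        if reliability(gray[i:i + n, j:j + n]) > threshold
    )

def grade(points, thresholds=(1, 10, 30)):
    """Map a wrinkle-point count to one of the four severity levels."""
    for level, t in zip(("none", "slight", "moderate"), thresholds):
        if points < t:
            return level
    return "severe"

# a flat patch has no wrinkle points; a high-contrast one has many
flat = np.full((8, 8), 100, dtype=np.uint8)
checker = ((np.indices((8, 8)).sum(axis=0) % 2) * 255).astype(np.uint8)
flat_points = count_wrinkle_points(flat)
checker_level = grade(count_wrinkle_points(checker))
```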
By the wrinkle severity grading method, wrinkles in each area of the sample picture are graded according to the wrinkle severity grade, and then the graded sample picture is used for training a wrinkle detection model, so that the detection accuracy of the wrinkle detection model is improved.
Step S706, add interference factors to the sample picture.
In this embodiment, the sample picture may be rotated or scaled.
In an exemplary embodiment, the sample picture may be rotated according to some or all of the preset rotation angles in a preset rotation angle set, which may contain a plurality of different angles. It should be understood that a preset rotation angle may apply to the whole sample picture, or to one of the regions into which the sample picture is divided. The angle may lie within [-180, 180] degrees, for example 10°; it may also lie outside that interval, for example 200°.
The preset angles in the preset angle set may be set according to the types of wrinkles, for example, since the distribution direction of the periocular wrinkles is mostly an oblique direction, the preset angles in the preset angle set for the periocular region may include 10 °, 30 °, 60 °, and 90 °. The forehead wrinkle distribution direction is mostly a horizontal direction, and the preset angles in the preset angle set for the forehead area may include 0 °, 90 °, and 180 °.
In an exemplary embodiment, the sample picture obtained after rotation may be respectively scaled down according to a part of or all of the preset scales in the preset scale set. For example, after the sample picture or the picture corresponding to the region in the divided sample picture is rotated according to the preset angle in the preset angle set, the rotated sample picture is scaled according to the preset scaling in the preset scaling set.
The preset scaling ratios in the set may be set according to the wrinkle type. In a gray image, the gray value of a wrinkle area is lower than that of a non-wrinkle area, especially for fine periocular wrinkles. After magnification by a ratio greater than 1, however, the light-dark contrast becomes less pronounced than before, and the gray value differences between pixels in the wrinkle area shrink, making wrinkle points harder to detect accurately in the enlarged image. Therefore, to improve the accuracy of the trained wrinkle detection model, the preset scaling ratios for, e.g., the periocular wrinkle region of a sample picture may include 0.6, 0.8, and 0.9.
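The per-wrinkle-type rotation sets mentioned above can be organized as plain data. This sketch restricts itself to lossless multiples of 90° (arbitrary angles such as 10° or 30° would need an interpolating warp, e.g. OpenCV's warpAffine), and the dictionary contents merely echo the examples in the text:

```python
import numpy as np

# rotation angles per region type, echoing the examples in the text
ROTATIONS = {
    "periocular": [10, 30, 60, 90],
    "forehead": [0, 90, 180],
}

def rotate_multiple_of_90(image, angle):
    """Lossless rotation by a multiple of 90 degrees."""
    assert angle % 90 == 0
    return np.rot90(image, k=(angle // 90) % 4)

def augment_rotations(region, wrinkle_type):
    """Generate rotated variants of one region crop, skipping the angles
    this dependency-free sketch cannot rotate losslessly."""
    return [
        rotate_multiple_of_90(region, a)
        for a in ROTATIONS[wrinkle_type]
        if a % 90 == 0
    ]

variants = augment_rotations(np.ones((4, 6), dtype=np.uint8), "forehead")
```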
The present embodiment may also convert the sample picture into a picture with different brightness.
For example, the sample picture is sharpened by Gaussian filtering in the manner of unsharp masking: Gaussian blur is applied to the sample picture, the blurred picture's matrix is weighted by a preset coefficient and subtracted from the original picture weighted by a prior coefficient, and the values of the resulting matrix are clipped back into the RGB pixel range of 0-255. A sample picture processed this way has some fine interfering details and noise removed, and is more realistic and credible than the result of sharpening directly with a convolution sharpening operator.
Alternatively, the sample picture is converted into images of different brightness as follows: compute the average gray level of the sample picture; divide it into N-by-M blocks of a certain size and compute the average of each block, obtaining a block brightness matrix D; multiply D by different preset brightness ratios to obtain a brightness difference matrix E of the blocks; and interpolate E into a brightness distribution matrix R of the same size as the sample picture, thereby obtaining a brightness-adjusted sample picture. Applying this difference to the background brightness distribution makes the sample picture exhibit illumination unevenness similar to that produced during photographing.
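The block-luminance adjustment just described can be sketched as follows; nearest-neighbour expansion of the per-block ratio matrix stands in for the interpolation into the full-size matrix R, and the ratio values are illustrative:

```python
import numpy as np

def uneven_brightness(image, block_ratios):
    """Scale each block of the image by its own brightness ratio and clip,
    simulating uneven illumination. `block_ratios` plays the role of the
    per-block matrix (D multiplied by the preset brightness proportions)."""
    ratios = np.asarray(block_ratios, dtype=np.float64)
    bh = image.shape[0] // ratios.shape[0]   # block height
    bw = image.shape[1] // ratios.shape[1]   # block width
    full = np.kron(ratios, np.ones((bh, bw)))  # expand to full image size
    out = image.astype(np.float64) * full
    return np.clip(out, 0, 255).astype(np.uint8)

img = np.full((4, 4), 100, dtype=np.uint8)
lit = uneven_brightness(img, [[1.0, 0.5], [2.0, 3.0]])
```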
The processed sample picture can be used for training a wrinkle detection model.
In this embodiment, since the sample pictures are region-segmented, severity-graded, and augmented with interference factors, a wrinkle detection model can be trained more accurately.
It should be noted that for simplicity of description, the above-mentioned method embodiments are shown as a series of combinations of acts, but those skilled in the art will recognize that the present invention is not limited by the order of acts, as some steps may occur in other orders or concurrently in accordance with the invention. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that the acts and modules referred to are not necessarily required by the invention.
Through the above description of the embodiments, those skilled in the art can clearly understand that the method according to the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but the former is a better implementation mode in many cases. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal device (e.g., a mobile phone, a computer, a server, or a network device) to execute the method according to the embodiments of the present invention.
Example 8
The embodiment of the application provides a trainer of wrinkle detection model, includes: a segmentation module 82 and a training module 84.
The segmentation module 82 is configured to segment each sample picture in the data set used for training the wrinkle detection model into a plurality of regions based on different wrinkle types, resulting in a plurality of segmented pictures corresponding to the plurality of regions.
The training module 84 is configured to train the wrinkle detection model corresponding to a corresponding region of the plurality of regions using the plurality of segmented pictures corresponding to the corresponding region.
In an exemplary embodiment, the segmentation module 82 is further configured to perform face detection on each of the sample pictures, so as to obtain a face region where a face is located in each of the sample pictures; performing face key point detection on the face region to obtain face key points in the face region; segmenting the face region into the plurality of regions based on the facial keypoints and the different wrinkle types.
In an exemplary embodiment, the training apparatus for wrinkle detection model further comprises a sample processing module configured to grade the severity of wrinkles for a plurality of the segmented pictures corresponding to the respective region based on the wrinkle type corresponding to the respective region. The training module 84 is configured to train the wrinkle detection model corresponding to the respective region using the ranked plurality of segmented pictures corresponding to the respective region.
In an exemplary embodiment, the sample processing module is further configured to at least one of: adjusting the brightness of a plurality of sample pictures in the data set according to different brightness ratios to obtain a plurality of sample pictures with different brightness; scaling and rotating a plurality of the sample pictures in the data set at different scaling and/or rotation angles, respectively, to obtain a plurality of the sample pictures with different poses and/or angular turns; selecting a plurality of masked pictures as the plurality of sample pictures in the data set, wherein the masked pictures comprise hair masks or glasses masks. The training module 84 is configured to train the wrinkle detection model corresponding to the respective region by using the processed plurality of segmented pictures corresponding to the respective region.
Example 9
The embodiment of the application provides a device for detecting wrinkles, includes: a face detection module 92, a keypoint localization module 94, a region segmentation module 96, and a wrinkle detection module 98.
The face detection module 92 is configured to perform face detection on a picture to be detected, and acquire a face region in the picture to be detected; the key point positioning module 94 is configured to perform face key point positioning on the face region, obtaining a plurality of face key points; the region segmentation module 96 is configured to perform region segmentation on the face region based on a plurality of the face key points, so as to obtain a plurality of regions, wherein each region in the plurality of regions corresponds to a wrinkle type; the wrinkle detection module 98 is configured to detect wrinkles for each of the plurality of regions based on the wrinkle detection model corresponding to the region.
The wrinkle detection model in this embodiment is trained by using the wrinkle detection model training method in other embodiments in this application, and details are not described here.
Example 10
According to the embodiment of the invention, the electronic equipment is further provided.
The electronic device may include a processor, an external memory interface, an internal memory, a Universal Serial Bus (USB) interface, a charge management module, a power management module, a battery, an antenna, a wireless communication module, an audio module, a speaker, a receiver, a microphone, an earphone interface, a sensor module, a key, an indicator, a camera, a display screen, and the like. Wherein the sensor module comprises an ambient light sensor. Further, the sensor module may further include a pressure sensor, a gyroscope sensor, an air pressure sensor, a magnetic sensor, an acceleration sensor, a distance sensor, a proximity light sensor, a fingerprint sensor, a temperature sensor, a touch sensor, a bone conduction sensor, and the like. In other embodiments, the electronic device in the embodiments of the present application may further include a mobile communication module, a Subscriber Identity Module (SIM) card interface, and the like. The functions of the above modules or devices are prior art and will not be described herein.
The application programs supported by the electronic device in the embodiment of the present application may include a photographing application, such as a camera.
Applications supported by the electronic device in embodiments of the present application may also include applications for skin detection. Among them, an application for skin detection is to detect features of the user's facial skin, such as wrinkles, from a captured image of a human face, and may provide an analysis report to the user.
The application for skin detection in this embodiment may use the wrinkle detection method provided in other embodiments of the present application to detect a wrinkle condition on the skin.
In this embodiment, an electronic device is taken as an example of a mobile phone, and a specific operation is shown in fig. 10.
As shown in a of fig. 10, the electronic device detects a click operation on the skin detection icon, and the electronic device displays a user interface of the skin detection application on the display screen in response to the operation on the icon, as shown in B of fig. 10. In the interface, a camera icon is included.
The electronic equipment detects the operation on the camera icon, responds to the operation on the camera icon, and calls a camera application on the electronic equipment to acquire the picture to be detected. Of course, the user may also select the picture containing the face stored in the internal memory as the picture to be detected.
After receiving the input picture to be detected, the application for skin detection can detect the wrinkle condition on the skin by adopting the wrinkle detection method provided by other embodiments of the application. In other embodiments, in addition to detecting wrinkles, skin features such as pores, blackheads, blotches, redness, and shine of the facial skin may be detected and all features combined to provide a skin analysis report to the user as shown in fig. 10C.
The skin analysis report may be presented to the user through a user interface of the electronic device, for example, the skin analysis report may provide scores including composite score, age of skin, and pores, blackheads, fine lines, color spots, and red silks, and provide relevant suggestions for the user to refer to.
It is clear to those skilled in the art that the embodiments of the present application can be implemented in hardware, or firmware, or a combination thereof. When implemented in software, the functions described above may be stored on or transmitted over as one or more instructions or code on a computer-readable medium. Computer-readable media includes both computer storage media and communication media including any medium that facilitates transfer of a computer program from one place to another. A storage media may be any available media that can be accessed by a computer. Taking this as an example but not limiting: the computer-readable medium may include RAM, ROM, an Electrically Erasable Programmable Read Only Memory (EEPROM), a compact disc read-Only memory (CD-ROM) or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer.
Further, any connection is properly termed a computer-readable medium. For example, if software is transmitted from a website, a server, or other remote source using a coaxial cable, a fiber optic cable, a twisted pair, a Digital Subscriber Line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. Disk and disc, as used in the embodiments of the present application, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray disc, where disks usually reproduce data magnetically while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media.
The foregoing is only a preferred embodiment of the present invention. It should be noted that, for those skilled in the art, various modifications and improvements can be made without departing from the principle of the present invention, and these modifications and improvements should also be regarded as falling within the protection scope of the present invention.

Claims (10)

1. A method for training a wrinkle detection model, comprising:
dividing each sample picture in a data set for training the wrinkle detection model into a plurality of areas based on different wrinkle types, and obtaining a plurality of divided pictures corresponding to the plurality of areas, wherein the plurality of areas respectively correspond to different wrinkle types;
using a plurality of the segmentation pictures corresponding to a corresponding area in the plurality of areas to train the wrinkle detection model corresponding to the corresponding area.
2. The method of claim 1, wherein segmenting each sample picture in the data set used to train the wrinkle detection model into a plurality of regions based on different wrinkle types comprises:
carrying out face detection on each sample picture to obtain a face area where the face is located in each sample picture;
performing face key point detection on the face region to obtain face key points in the face region;
segmenting the face region into the plurality of regions based on the facial keypoints and the different wrinkle types.
3. The method of claim 2, wherein the plurality of regions comprises at least one of: the raised line region where the raised lines are located, the '川'-shaped (glabellar) line region where the '川'-shaped lines are located, the left fishtail line region where the left eye's fishtail lines are located, the right fishtail line region where the right eye's fishtail lines are located, the left under-eye fine line region where the left under-eye fine lines are located, the right under-eye fine line region where the right under-eye fine lines are located, the left stature line region where the left-face stature line is located, and the right stature line region where the right-face stature line is located.
4. The method according to claim 1, characterized in that the data set comprises sample pictures classified based on wrinkle severity and/or sample pictures with disturbing factors.
5. The method of claim 1, wherein training the wrinkle detection model corresponding to a respective region of the plurality of regions using a plurality of the segmented pictures corresponding to the respective region comprises:
classifying a wrinkle severity degree of a plurality of the divided pictures corresponding to the respective region based on a wrinkle type corresponding to the respective region;
training the wrinkle detection model corresponding to the respective region using the ranked plurality of the segmented pictures corresponding to the respective region.
6. The method according to claim 1, wherein prior to segmenting each sample picture in the data set used for training the wrinkle detection model into a plurality of regions, the method further comprises at least one of:
adjusting the brightness of a plurality of sample pictures in the data set according to different brightness ratios to obtain a plurality of sample pictures with different brightness;
scaling and rotating a plurality of the sample pictures in the data set at different scaling and/or rotation angles, respectively, to obtain a plurality of the sample pictures with different poses and/or angular turns;
selecting a plurality of masked pictures as the plurality of sample pictures in the data set, wherein the masked pictures comprise hair masks or glasses masks.
7. A method of detecting wrinkles, comprising:
carrying out face detection on a picture to be detected to obtain a face area in the picture to be detected;
carrying out face key point positioning on the face area to obtain a plurality of face key points;
performing region segmentation on the face region based on a plurality of face key points to obtain a plurality of regions, wherein each region in the plurality of regions corresponds to a wrinkle type;
and detecting the wrinkles of each area based on the wrinkle detection model corresponding to the area.
8. The method according to claim 7, wherein the wrinkle detection model is trained based on the method according to any one of claims 1 to 6.
9. A training apparatus for a wrinkle detection model, comprising:
a segmentation module configured to segment each sample picture in the data set used for training the wrinkle detection model into a plurality of regions based on different wrinkle types, resulting in a plurality of segmented pictures corresponding to the plurality of regions;
a training module configured to train the wrinkle detection model corresponding to a respective region of the plurality of regions using a plurality of the segmented pictures corresponding to the respective region.
10. An apparatus for detecting wrinkles, comprising:
the face detection module is configured to detect a face of a picture to be detected and acquire a face region in the picture to be detected;
a key point positioning module configured to perform face key point positioning on the face region to obtain a plurality of face key points;
a region segmentation module configured to perform region segmentation on the face region based on a plurality of face key points to obtain a plurality of regions, wherein each region in the plurality of regions corresponds to a wrinkle type;
and a wrinkle detection module configured to detect wrinkles for each of the plurality of regions based on the wrinkle detection model corresponding to that region.
CN202210290606.8A 2022-03-23 2022-03-23 Method and device for training wrinkle detection model and method and device for detecting wrinkles Pending CN114612994A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210290606.8A CN114612994A (en) 2022-03-23 2022-03-23 Method and device for training wrinkle detection model and method and device for detecting wrinkles


Publications (1)

Publication Number Publication Date
CN114612994A true CN114612994A (en) 2022-06-10

Family

ID=81865458

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210290606.8A Pending CN114612994A (en) 2022-03-23 2022-03-23 Method and device for training wrinkle detection model and method and device for detecting wrinkles

Country Status (1)

Country Link
CN (1) CN114612994A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116993714A (en) * 2023-08-30 2023-11-03 深圳伯德睿捷健康科技有限公司 Skin detection method, system and computer readable storage medium

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2007128171A (en) * 2005-11-01 2007-05-24 Advanced Telecommunication Research Institute International Face image synthesizer, face image synthesizing method and face image synthesizing program
CN110110637A (en) * 2019-04-25 2019-08-09 深圳市华嘉生物智能科技有限公司 A kind of method of face wrinkle of skin automatic identification and wrinkle severity automatic classification
CN110443765A (en) * 2019-08-02 2019-11-12 厦门美图之家科技有限公司 Image processing method, device and electronic equipment
CN112101185A (en) * 2020-09-11 2020-12-18 深圳数联天下智能科技有限公司 Method for training wrinkle detection model, electronic device and storage medium
CN112347843A (en) * 2020-09-18 2021-02-09 深圳数联天下智能科技有限公司 Method and related device for training wrinkle detection model
CN112418195A (en) * 2021-01-22 2021-02-26 电子科技大学中山学院 Face key point detection method and device, electronic equipment and storage medium
US20210209344A1 (en) * 2020-06-29 2021-07-08 Beijing Baidu Netcom Science And Technology Co., Ltd. Image recognition method and apparatus, device, and computer storage medium
CN113673470A (en) * 2021-08-30 2021-11-19 广州虎牙科技有限公司 Face detection model training method, electronic device and computer-readable storage medium
CN113989290A (en) * 2021-10-19 2022-01-28 杭州颜云科技有限公司 Wrinkle segmentation method based on U-Net

Similar Documents

Publication Publication Date Title
CN110705583B (en) Cell detection model training method, device, computer equipment and storage medium
CN108234882B (en) Image blurring method and mobile terminal
CN109064390A (en) Image processing method, image processing apparatus and mobile terminal
CN111444744A (en) Living body detection method, living body detection device, and storage medium
CN110062157B (en) Method and device for rendering image, electronic equipment and computer readable storage medium
CN108470178B (en) Depth map significance detection method combined with depth credibility evaluation factor
JP2021531571A (en) Certificate image extraction method and terminal equipment
CN106297755A (en) Electronic device for musical score image recognition and recognition method
CN115330640B (en) Illumination mapping noise reduction method, device, equipment and medium
CN112052730B (en) 3D dynamic portrait identification monitoring equipment and method
CN113822136A (en) Video material image selection method, device, equipment and storage medium
CN111784658B (en) Quality analysis method and system for face image
CN108764139A (en) Face detection method, mobile terminal and computer-readable storage medium
CN111080665B (en) Image frame recognition method, device, equipment and computer storage medium
CN112686820A (en) Virtual makeup method and device and electronic equipment
CN114612994A (en) Method and device for training wrinkle detection model and method and device for detecting wrinkles
US11823433B1 (en) Shadow removal for local feature detector and descriptor learning using a camera sensor sensitivity model
CN113378790A (en) Viewpoint positioning method, apparatus, electronic device and computer-readable storage medium
CN111080754B (en) Character animation production method and device for connecting characteristic points of head and limbs
CN116309494B (en) Method, device, equipment and medium for determining interest point information in electronic map
CN110210401B (en) Intelligent target detection method under weak light
CN113538304A (en) Training method and device of image enhancement model, and image enhancement method and device
CN109919164B (en) User interface object identification method and device
CN110162949B (en) Method and device for controlling image display
CN114255193A (en) Board card image enhancement method, device, equipment and readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination