CN115410240A - Intelligent face pockmark and color spot analysis method and device and storage medium - Google Patents


Info

Publication number
CN115410240A
CN115410240A CN202110510735.9A
Authority
CN
China
Prior art keywords
face
image
severity
pox
face image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110510735.9A
Other languages
Chinese (zh)
Inventor
李博
芦迪
徐佳
孟广浩
钟昊翔
闫茜宇
白杨
胡茂伟
夏树涛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Juyue Technology Culture Co ltd
Original Assignee
Shenzhen Juyue Technology Culture Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Juyue Technology Culture Co ltd filed Critical Shenzhen Juyue Technology Culture Co ltd
Priority to CN202110510735.9A priority Critical patent/CN115410240A/en
Publication of CN115410240A publication Critical patent/CN115410240A/en
Pending legal-status Critical Current

Landscapes

  • Image Analysis (AREA)

Abstract

The application provides a method, a device and a storage medium for intelligent analysis of face pockmarks and color spots, wherein the method comprises the following steps: the electronic equipment acquires a face image of a target object; the electronic equipment preprocesses the face image and segments a face region image after performing alignment; the face region image is input into a preset face color spot and pox severity classification model, classification calculation is performed to obtain a classification result, and the severity grades of the color spots and the pox are determined according to the classification result. The technical scheme provided by the application has the advantage of improving user experience.

Description

Intelligent face pockmark and color spot analysis method and device and storage medium
Technical Field
The application relates to the technical field of images and artificial intelligence, in particular to a method and a device for analyzing pockmarks and color spots of an intelligent face and a storage medium.
Background
With the development of the economy, people pay more and more attention to their skin condition, and face skin analysis plays an increasingly important role in daily life. Different skin conditions call for different skin-care routines and different types of cosmetics; for example, an oil-control facial cleanser should be selected for oily skin. However, many people in real life cannot accurately judge their own skin condition, and paying insufficient attention to how well a product matches their skin easily leads to the situation of "the more care, the worse the skin". It is therefore important to understand the condition of the skin before selecting cosmetics or skin-care products.
Existing methods for analyzing pox and spots are generally based on manual analysis by a professional (e.g., doctor or cosmetologist), depend on the level of expertise of the person, and are costly.
Disclosure of Invention
The embodiment of the application provides an intelligent face pockmark and color spot analysis method, device and storage medium, which can realize automatic analysis of pockmarks and color spots, reduce cost and improve user experience.
In a first aspect, an embodiment of the present application provides a method for analyzing pockmarks and color spots of an intelligent face, where the method is applied to an electronic device, and the method includes the following steps:
the electronic equipment acquires a face image of a target object;
the electronic equipment preprocesses the face image, and segments a face region image after executing alignment;
and inputting the face region picture into a preset face color spot and pox severity classification model, performing classification calculation to obtain a classification result, and determining the severity grade of the color spots and the pox according to the classification result.
In a second aspect, a method for training a classification model of facial stain and acne severity is provided, the method comprising:
collecting a face image, wherein the face image is marked with label information of pockmarks and color spot severity, and constructing a data set;
preprocessing the face image, segmenting a face region, and aligning to obtain a face region picture;
inputting the face region picture into a pre-established initial neural network model; and training the initial neural network model by taking the face region picture as training data to obtain a face color spot and pox severity classification model.
In a third aspect, an intelligent face pockmark and color spot analysis device is provided, which is applied to an electronic device, and the device includes:
the acquisition unit is used for acquiring a face image of a target object;
the processing unit is used for preprocessing the face image, and segmenting a face region image after executing alignment; and inputting the face region picture into a preset face color spot and pox severity classification model, performing classification calculation to obtain a classification result, and determining the severity grade of the color spots and the pox according to the classification result.
In a fourth aspect, a training device for a classification model of facial stain and acne severity is provided, the device comprising:
the collecting unit is used for collecting a face image, wherein the face image is marked with label information of pockmarks and color spot severity, and a data set is constructed;
the processing unit is used for preprocessing the face image, segmenting a face region and aligning the face region to obtain a face region picture;
the training unit is used for inputting the face region picture into a pre-established initial neural network model; and training the initial neural network model by taking the face region picture as training data to obtain a face color spot and pox severity classification model.
In a fifth aspect, a computer-readable storage medium is provided, which stores a program for electronic data exchange, wherein the program causes a terminal to execute the method provided in the first or second aspect.
In a sixth aspect, a computer program product is provided, wherein the computer program product causes the terminal to execute the method provided in the first or second aspect.
The embodiment of the application has the following beneficial effects:
according to the technical scheme, the required classification model of the severity of the face color spots and the acnes can be obtained by training according to the face data set with the color spot and the acnes labeling information by adopting the training method of the classification model of the severity of the face color spots and the acnes. By adopting the method for detecting the severity of the face color spots and the acnes, the severity information of the color spots and the acnes of the face image to be detected can be obtained. The method and the device automatically realize the classified treatment of the severity of the color spots and the acnes, reduce the cost and improve the user experience.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below. It is obvious that the drawings in the following description show only some embodiments of the present application, and that those skilled in the art can obtain other drawings based on these drawings without creative effort.
Fig. 1 is a schematic structural diagram of an electronic device provided in the present application.
Fig. 2 is a schematic flow chart of the intelligent face pockmark and color spot analysis method provided by the application.
Fig. 3 is a schematic flow chart of a training method of a classification model for severity of facial stains and pox provided by the present application.
Fig. 4 is a schematic structural diagram of an intelligent face pockmark and color spot analysis device provided by the present application.
Fig. 5 is a schematic structural diagram of a training device of a classification model for facial mottle and acne severity provided by the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms "first," "second," "third," and "fourth," etc. in the description and claims of this application and in the accompanying drawings are used for distinguishing between different objects and not for describing a particular order. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by those skilled in the art that the embodiments described herein may be combined with other embodiments.
Referring to fig. 1, fig. 1 provides an electronic device, which may specifically include: a processor, a memory, a camera and a display screen. These components may be connected through a bus or in other ways; the application does not limit the specific manner of connection. In practical applications, the electronic device may be a smart phone, a personal computer, a server, a tablet computer, a smart television, a smart mirror, and the like.
Referring to fig. 2, fig. 2 provides a method for analyzing pockmarks and color spots of an intelligent face, which may be performed by the electronic device shown in fig. 1. The face color spot and pox severity classification model used by the method may be obtained by training in the manner shown in fig. 3, which is not described herein again. The intelligent face pockmark and color spot analysis method shown in fig. 2 includes the following steps:
step S201, the electronic equipment collects a face image of the target object.
Illustratively, the face image in the above step is an image shot and uploaded by the user during use. The target object may be any user who needs to determine the severity of pox and color spots, for example Zhang San or Li Si (placeholder names).
Step S202, the electronic equipment preprocesses the face image, and segments a face region image after executing alignment;
the face images in the face data set may contain environmental information other than the faces, and the positions of the faces in the images may be different from each other, and the poses of the faces may be different. In order to facilitate subsequent training of the neural network, images in the face image data set need to be preprocessed, and the face region is cut out and aligned. The method comprises the following specific steps:
and inputting the face image into a face key point detection model to obtain face key points.
The face key point detection model is typically based on deep learning, such as face-alignment or MTCNN, and outputs a plurality of (e.g., 68) key points located on the mandible line, eyes, eyebrows, nose, mouth and other parts of the face.
And aligning the face image according to the acquired key points. Specifically:
firstly, determining the inclination angle of the face according to the angle of the connecting line between the left eye and the right eye, and then correcting the face by using an image rotation method. Then, determining the positions of the left side and the right side of the face according to the key points of the left side face and the right side face; determining the position of the lower part of the face according to the chin key point information; and estimating the forehead length according to the eye key point coordinates and the chin coordinates, and determining the position of the upper part of the face. And finally, cutting out the face part according to the acquired face position information to obtain a face region image.
Step S203, inputting the face region picture into a preset face color spot and pox severity classification model, executing classification calculation to obtain a classification result, and determining the severity grade of the color spot and the pox according to the classification result.
According to the technical scheme, the required classification model of the severity of the face color spots and the acnes can be obtained by training according to the face data set with the color spot and the acnes labeling information. By adopting the method for detecting the severity of the face color spots and the acnes, the severity information of the color spots and the acnes of the face image to be detected can be obtained. The method and the device automatically realize the classified treatment of the severity of the color spots and the acnes, reduce the cost and improve the user experience.
Referring to fig. 3, fig. 3 provides a training method for a facial stain and pox severity classification model, which may be executed in the electronic device shown in fig. 1, but may also be executed in a server or a computing center to improve the training efficiency, and the present application does not limit the execution subject of the method, and the method shown in fig. 3 includes the following steps:
and S301, collecting a face image, wherein the face image is marked with label information of pockmarks and color spot severity, and constructing a data set.
The method comprises the following specific steps:
the face image is an image containing a face, and is usually captured by a mobile phone, a camera, or other capturing devices under normal lighting, and one image has only one face, and the face should be approximately upright. The area of the face cannot be too small.
After the face image is acquired, it is labeled manually. According to the severity of color spots in the face region, the face image is labeled as no color spots, mild color spots, moderate color spots or severe color spots. According to the number of pox in the face region, the face image is labeled as no pox, mild pox, moderate pox or severe pox. Each face image is given both a color spot label and a pox label.
The face images and the label information form a face image data set, and the face data set is divided into three parts, namely a training set, a verification set and a test set, in a ratio of approximately 6:2:2.
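A minimal sketch of such a split, assuming the conventional 6:2:2 train/validation/test ratio; the function name and the fixed seed are illustrative choices, not part of the application:

```python
import random

def split_dataset(samples, ratios=(0.6, 0.2, 0.2), seed=0):
    """Shuffle labelled samples and split them into train/val/test subsets."""
    samples = list(samples)
    rng = random.Random(seed)  # fixed seed so the split is reproducible
    rng.shuffle(samples)
    n = len(samples)
    n_train = int(n * ratios[0])
    n_val = int(n * ratios[1])
    train = samples[:n_train]
    val = samples[n_train:n_train + n_val]
    test = samples[n_train + n_val:]
    return train, val, test
```

Each sample here would be an (image path, color-spot label, pox label) record; splitting by shuffled index keeps the three subsets disjoint.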
Step S302, the face image is preprocessed, and a face area is segmented and aligned.
The face images in the face data set may contain environmental information other than the face, and the position and pose of the face may vary from image to image. To facilitate subsequent training of the neural network, the images in the face image data set need to be preprocessed, and the face regions are cut out and aligned. The specific steps are as follows:
and inputting the face image into a face key point detection model to obtain face key points. The face key point detection model is usually a deep learning-based method, such as face-alignment, MTCNN, etc., and includes 68 key points of the mandible line, eyes, eyebrows, nose, mouth, etc. of the face.
And correcting the face according to the acquired key points. Firstly, determining the inclination angle of the face according to the angle of the connecting line between the left eye and the right eye, and then correcting the face by using an image rotation method. Then, determining the positions of the left side and the right side of the face according to the key points of the left side face and the right side face; determining the position of the lower part of the face according to the chin key point information; and estimating the forehead length according to the eye key point coordinates and the chin coordinates, and determining the position of the upper part of the face. And finally, cutting out the face part according to the acquired face position information.
Step S303, inputting the face image into a pre-established initial neural network model.
After preprocessing, the face image can be input into the pre-established initial neural network model so as to train the model with it. Before being input into the neural network model, the face image needs to be scaled to a preset size, so that the neural network learns from face images of the same size, which improves the training efficiency of the model.
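The scaling step can be illustrated with a simple nearest-neighbour resize in numpy. The 224 x 224 default is assumed here only because it is a common ResNet input size; the application does not state a specific preset size:

```python
import numpy as np

def resize_nearest(img, size=(224, 224)):
    """Nearest-neighbour rescale of an H x W x C image to `size` (h, w).
    Each output pixel copies the source pixel nearest to its position."""
    h, w = img.shape[:2]
    rows = (np.arange(size[0]) * h / size[0]).astype(int)
    cols = (np.arange(size[1]) * w / size[1]).astype(int)
    return img[rows][:, cols]
```

A production pipeline would normally use a library resize with bilinear interpolation instead, but the effect on tensor shape is the same.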
In the field of image processing, convolutional neural network models are often used for pattern recognition. ResNet, which performs well in classification applications, is used as the initial neural network model; other deep convolutional neural network models can also be used. The model is initialized with parameters pre-trained on a large face data set.
When a convolutional neural network is used for a classification task, the neural network model can be roughly divided into a feature extractor and a classifier. The feature extraction module usually comprises a plurality of convolution layers, pooling layers, batch normalization layers, nonlinear activation layers and the like; through these neural network layers, a face image is mapped into a multidimensional vector, forming a face feature vector. The classification module is usually a fully connected layer followed by a Softmax layer; the face feature vector is input into the classifier to obtain a vector whose length equals the number of classes, namely 4, corresponding to the four severity classes: none, mild, moderate and severe.
For the two tasks of classifying the severity of pox and classifying the severity of color spots, two models can be trained separately. Alternatively, a multi-task training method can be adopted to train a single model in which the two tasks share the parameters of the feature extraction module but use separate classification layers.
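The shared-backbone, two-head arrangement can be sketched in numpy as follows. The weight shapes, function names and label strings are illustrative assumptions; in the real model the feature vector comes from a ResNet backbone and the heads are trained fully connected layers:

```python
import numpy as np

LABELS = ["none", "mild", "moderate", "severe"]

def softmax(z):
    """Softmax over a logit vector, shifted for numerical stability."""
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def classify_multitask(features, w_spot, b_spot, w_acne, b_acne):
    """One shared feature vector feeds two 4-way heads: color-spot
    severity and pox severity. w_* has shape (4, d), b_* shape (4,)."""
    p_spot = softmax(w_spot @ features + b_spot)
    p_acne = softmax(w_acne @ features + b_acne)
    return LABELS[int(p_spot.argmax())], LABELS[int(p_acne.argmax())]
```

Because only the final linear layers differ, the expensive feature extraction runs once per image regardless of how many skin attributes are graded.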
Step S304, training the initial neural network model by using the aligned face images to obtain a face color spot and pox severity classification model.
Because pox and color spot data usually exhibit class imbalance, that is, most samples in the data set belong to the none or mild categories while the moderate and severe categories contain far fewer samples, the neural network tends to classify faces as none or mild, which degrades performance. Data resampling or downsampling methods may be used to reduce the impact of this problem.
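One common resampling-style mitigation, sketched here as an assumption rather than the application's specified method, is to weight each sample by the inverse frequency of its class so that a weighted sampler draws the rare moderate and severe samples more often:

```python
from collections import Counter

def resample_weights(labels):
    """Inverse-frequency sampling weight per sample; every class then
    contributes the same total weight, so rare classes are drawn as
    often as common ones by a weighted random sampler."""
    counts = Counter(labels)
    return [1.0 / counts[y] for y in labels]
```

In a PyTorch-style pipeline these weights would be handed to a weighted random sampler for the training DataLoader; downsampling the majority classes is the simpler alternative mentioned in the text.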
The neural network model is trained by a back propagation method. The loss function may be the cross-entropy loss or the focal loss, and the parameters of the neural network model are optimized with a stochastic gradient descent (SGD) optimizer. Specifically, the feature extraction module of the network learns the features of color spots and pox in the face image, and the fully connected layer maps the learned features to a classification result for the severity of color spots and pox. By comparing the classification result produced by the neural network with the pre-labeled severity of color spots and pox of the face image, the parameters of the neural network can be optimized. After iterative training on sufficient training samples, the network yields the face color spot and pox severity classification model.
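The two candidate losses are closely related: focal loss multiplies the cross-entropy term by a factor (1 - p_t)^gamma that shrinks the contribution of easy, confidently classified samples, which also helps with the class imbalance noted above. A per-sample numpy sketch (gamma = 2 is a commonly used default; the application does not fix a value):

```python
import numpy as np

def focal_loss(probs, target, gamma=2.0):
    """Focal loss for one sample: -(1 - p_t)**gamma * log(p_t), where
    p_t is the softmax probability of the true class `target`.
    With gamma = 0 this reduces to ordinary cross-entropy."""
    p_t = probs[target]
    return -((1.0 - p_t) ** gamma) * np.log(p_t)
```

During training this would be averaged over a mini-batch and differentiated by the framework's autograd rather than computed by hand.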
The method performs skin quality analysis by global classification of the image with a neural network, specifically analysis of the severity of pox and color spots. ResNet, which has strong image feature extraction capability, is selected as the neural network; after features are extracted, the feature vector is fed into different linear layers so that different skin problems are classified separately. The technical scheme of the application has the following characteristics:
according to the technical scheme, the whole skin detection process is automatic, extra manpower and material resource consumption is not needed, and the result output of each picture only needs millisecond-level time; high speed and low cost.
The model of the technical scheme of the application can be deployed on a server, and an intelligent mobile terminal such as a mobile phone can conveniently perform detection anytime and anywhere through an application program that communicates with the server;
according to the technical scheme, the image features are automatically extracted by means of a strong neural network to carry out end-to-end detection, the features do not need to be artificially defined, and the image features can be better modeled. Compared with the local detection of the human face, the human face global image classification task can better control the global information of the human face, is not limited to the extraction of the local information, and can better judge the skin of the human face.
Referring to fig. 4, fig. 4 provides an intelligent face pockmark and color spot analysis device, which is applied to an electronic device, and the device includes:
an acquisition unit 401 configured to acquire a face image of a target object;
a processing unit 402, configured to pre-process the face image, and segment a face region image after performing rectification and alignment; and inputting the face region picture into a preset face color spot and pox severity classification model, performing classification calculation to obtain a classification result, and determining the severity grade of the color spots and the pox according to the classification result.
Illustratively, the processing unit is specifically configured to input a face image into the face key point detection model, and obtain a plurality of face key points; and aligning the face image according to the acquired key points.
In an example, the processing unit is specifically configured to determine an angle at which a human face is inclined according to an angle of a connecting line between left and right eyes, and correct the human face by using an image rotation method; determining the positions of the left side and the right side of the face according to the key points of the left side face and the right side face; determining the position of the lower part of the face according to the chin key point information; and estimating the forehead length according to the eye key point coordinates and the chin coordinates, and determining the position of the upper part of the face to finish alignment.
Referring to fig. 5, fig. 5 provides a training apparatus for a classification model of facial stain and pox severity, the apparatus comprising:
the collecting unit 501 is used for collecting a face image, wherein the face image is marked with label information of pockmarks and color spot severity, and a data set is constructed;
a processing unit 502, configured to pre-process the face image, segment a face region, and align the face region to obtain a face region picture;
a training unit 503, configured to input the face region picture into a pre-established initial neural network model; and training the initial neural network model by taking the face region picture as training data to obtain a face color spot and pox severity classification model.
Embodiments of the present application also provide a computer storage medium, wherein the computer storage medium stores a computer program for electronic data exchange, and the computer program enables a computer to execute part or all of the steps of any one of the methods as described in the above method embodiments.
Embodiments of the present application also provide a computer program product comprising a non-transitory computer readable storage medium storing a computer program operable to cause a computer to perform some or all of the steps of any one of the methods as recited in the above method embodiments.
It should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present application is not limited by the order of acts described, as some steps may occur in other orders or concurrently depending on the application. Further, those skilled in the art should also appreciate that the embodiments described in the specification are exemplary embodiments and that the acts and modules referred to are not necessarily required in this application.
In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
The foregoing detailed description of the embodiments of the present application has been presented to illustrate the principles and implementations of the present application, and the above description of the embodiments is only provided to help understand the method and the core concept of the present application; meanwhile, for a person skilled in the art, according to the idea of the present application, the specific implementation manner and the application scope may be changed, and in summary, the content of the present specification should not be construed as a limitation to the present application.

Claims (10)

1. The method for analyzing the pox and the color spots of the intelligent face is applied to an electronic device and comprises the following steps:
the electronic equipment acquires a face image of a target object;
the electronic equipment preprocesses the face image, and segments a face region image after executing alignment;
and inputting the face region picture into a preset face color spot and pox severity classification model, executing classification calculation to obtain a classification result, and determining the severity grade of the color spot and the pox according to the classification result.
2. The method according to claim 1, wherein the electronic device preprocesses the face image, and segmenting the face region image after performing the alignment specifically comprises:
inputting a face image into a face key point detection model to obtain a plurality of face key points;
and aligning the face image according to the acquired key points.
3. The method according to claim 2, wherein the aligning of the face image according to the obtained key points specifically comprises:
determining the inclination angle of the human face according to the angle of the connecting line between the left eye and the right eye, and righting the human face by using an image rotation method; determining the positions of the left side and the right side of the face according to the key points of the left side face and the right side face; determining the position of the lower part of the face according to the chin key point information; and estimating the forehead length according to the eye key point coordinates and the chin coordinates, and determining the position of the upper part of the face to finish alignment.
4. A training method of a classification model of facial color spots and acne severity is characterized by comprising the following steps:
collecting a face image, wherein the face image is marked with label information of pockmarks and color spot severity, and constructing a data set;
preprocessing the face image, segmenting a face region, and aligning to obtain a face region picture;
inputting the face region picture into a pre-established initial neural network model; and training the initial neural network model by taking the face region picture as training data to obtain a face color spot and pox severity classification model.
5. An intelligent face pockmark and color spot analysis device, applied to an electronic device, wherein the device comprises:
the acquisition unit is used for acquiring a face image of a target object;
the processing unit is used for preprocessing the face image, and segmenting a face region image after executing alignment; and inputting the face region picture into a preset face color spot and pox severity classification model, executing classification calculation to obtain a classification result, and determining the severity grade of the color spot and the pox according to the classification result.
6. The apparatus of claim 5,
the processing unit is specifically used for inputting the face image into the face key point detection model and acquiring a plurality of face key points; and aligning the face image according to the acquired key points.
7. The apparatus of claim 5,
the processing unit is specifically used for determining the inclination angle of the human face according to the angle of the connecting line between the left eye and the right eye and righting the human face by using an image rotation method; determining the positions of the left side and the right side of the face according to the key points of the left side face and the right side face; determining the position of the lower part of the face according to the chin key point information; and estimating the forehead length according to the eye key point coordinates and the chin coordinates, and determining the position of the upper part of the face to finish alignment.
8. A training device for a face color spot and pox severity classification model, the device comprising:
a collecting unit, configured to collect face images annotated with pox and color spot severity labels, so as to construct a data set;
a processing unit, configured to preprocess each face image, segment the face region, and align it to obtain a face region picture;
a training unit, configured to input the face region pictures into a pre-established initial neural network model, and to train the initial neural network model with the face region pictures as training data to obtain the face color spot and pox severity classification model.
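The label-supervised fit/predict cycle of claim 8 can be illustrated with a toy stand-in. The patent trains a neural network on face region pictures; here a nearest-centroid classifier on small feature vectors is used purely to show the shape of the training loop, not the actual model:

```python
def train_centroids(features, labels, n_classes):
    """Fit one centroid per severity class (toy stand-in for CNN training).

    `features` is a list of equal-length feature vectors and `labels`
    the integer severity class of each sample.
    """
    dim = len(features[0])
    sums = [[0.0] * dim for _ in range(n_classes)]
    counts = [0] * n_classes
    for x, y in zip(features, labels):
        counts[y] += 1
        for j, v in enumerate(x):
            sums[y][j] += v
    return [[s / c for s in row] for row, c in zip(sums, counts)]

def predict(centroids, x):
    """Predict the severity class whose centroid is nearest to `x`."""
    def sq_dist(c):
        return sum((a - b) ** 2 for a, b in zip(c, x))
    return min(range(len(centroids)), key=lambda i: sq_dist(centroids[i]))

# Two well-separated toy classes; a nearby point falls in class 0:
cents = train_centroids([[0.0, 0.0], [0.0, 1.0], [10.0, 10.0], [10.0, 11.0]],
                        [0, 0, 1, 1], n_classes=2)
print(predict(cents, [1.0, 1.0]))  # 0
```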
9. A computer-readable storage medium storing a program for electronic data exchange, wherein the program causes a terminal to execute the method according to any one of claims 1 to 3.
10. A computer-readable storage medium storing a program for electronic data exchange, wherein the program causes a terminal to execute the method according to claim 4.
CN202110510735.9A 2021-05-11 2021-05-11 Intelligent face pockmark and color spot analysis method and device and storage medium Pending CN115410240A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110510735.9A CN115410240A (en) 2021-05-11 2021-05-11 Intelligent face pockmark and color spot analysis method and device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110510735.9A CN115410240A (en) 2021-05-11 2021-05-11 Intelligent face pockmark and color spot analysis method and device and storage medium

Publications (1)

Publication Number Publication Date
CN115410240A true CN115410240A (en) 2022-11-29

Family

ID=84154987

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110510735.9A Pending CN115410240A (en) 2021-05-11 2021-05-11 Intelligent face pockmark and color spot analysis method and device and storage medium

Country Status (1)

Country Link
CN (1) CN115410240A (en)


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117274266A (en) * 2023-11-22 2023-12-22 深圳市宗匠科技有限公司 Method, device, equipment and storage medium for grading acne severity
CN117274266B (en) * 2023-11-22 2024-03-12 深圳市宗匠科技有限公司 Method, device, equipment and storage medium for grading acne severity
CN117333487A (en) * 2023-12-01 2024-01-02 深圳市宗匠科技有限公司 Acne classification method, device, equipment and storage medium
CN117333487B (en) * 2023-12-01 2024-03-29 深圳市宗匠科技有限公司 Acne classification method, device, equipment and storage medium

Similar Documents

Publication Publication Date Title
EP3916627A1 (en) Living body detection method based on facial recognition, and electronic device and storage medium
CN112131978B (en) Video classification method and device, electronic equipment and storage medium
WO2019128507A1 (en) Image processing method and apparatus, storage medium and electronic device
WO2020182121A1 (en) Expression recognition method and related device
EP1693782B1 (en) Method for facial features detection
CN110555481A (en) Portrait style identification method and device and computer readable storage medium
CN103617432A (en) Method and device for recognizing scenes
CN110555420B (en) Fusion model network and method based on pedestrian regional feature extraction and re-identification
WO2022021029A1 (en) Detection model training method and device, detection model using method and storage medium
CN112446322B (en) Eyeball characteristic detection method, device, equipment and computer readable storage medium
WO2023284182A1 (en) Training method for recognizing moving target, method and device for recognizing moving target
CN113627402B (en) Image identification method and related device
CN113298158B (en) Data detection method, device, equipment and storage medium
CN111444826A (en) Video detection method and device, storage medium and computer equipment
CN115410240A (en) Intelligent face pockmark and color spot analysis method and device and storage medium
CN112052730B (en) 3D dynamic portrait identification monitoring equipment and method
CN109670517A (en) Object detection method, device, electronic equipment and target detection model
WO2023279799A1 (en) Object identification method and apparatus, and electronic system
CN115205247A (en) Method, device and equipment for detecting defects of battery pole piece and storage medium
US20210012503A1 (en) Apparatus and method for generating image
Diyasa et al. Multi-face Recognition for the Detection of Prisoners in Jail using a Modified Cascade Classifier and CNN
CN112036284A (en) Image processing method, device, equipment and storage medium
CN111783674A (en) Face recognition method and system based on AR glasses
CN113591562A (en) Image processing method, image processing device, electronic equipment and computer readable storage medium
CN113706550A (en) Image scene recognition and model training method and device and computer equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination