WO2020155984A1 - Facial expression image processing method, apparatus and electronic device - Google Patents
Facial expression image processing method, apparatus and electronic device
- Publication number: WO2020155984A1 (international application PCT/CN2019/129140)
- Authority: WO - WIPO (PCT)
Classifications
- G06T11/00 - 2D [Two Dimensional] image generation
- G06T3/40 - Geometric image transformations in the plane of the image: scaling of whole images or parts thereof, e.g. expanding or contracting
- G06T5/00 - Image enhancement or restoration
- G06T5/77 - Retouching; Inpainting; Scratch removal
- G06V10/70 - Image or video recognition or understanding using pattern recognition or machine learning
- G06V40/161 - Human faces: Detection; Localisation; Normalisation
- G06V40/168 - Human faces: Feature extraction; Face representation
- G06V40/174 - Facial expression recognition
Definitions
- the present disclosure relates to the field of image processing, and in particular to a method, device, electronic device, and computer-readable storage medium for processing facial expression images.
- smart terminals can be used to listen to music, play games, chat online, and take photos.
- with the development of smart-terminal camera technology, camera resolution now exceeds 10 million pixels, offering higher definition and an imaging effect comparable to professional cameras.
- embodiments of the present disclosure provide a method for processing facial expression images, including:
- the first face image is overlaid on the position of the face image to obtain a first image effect.
- the acquiring the first image, where the first image includes a face image includes:
- the recognizing the facial expression of the facial image includes:
- performing first processing on the facial image to obtain the first facial image includes:
- first processing is performed on the face image to obtain the first face image.
- acquiring a processing configuration file corresponding to the first facial expression includes:
- acquiring a processing configuration file corresponding to the first facial expression includes:
- the processing parameters in the processing configuration file are set according to the level of the first facial expression.
- the performing first processing on the face image according to the processing configuration file to obtain the first face image includes:
- the segmented face image is enlarged to obtain an enlarged face image.
- the covering the first face image on the position of the face image to obtain the first image effect includes:
- the first face image is overlaid on the face image, and the first positioning feature point is overlapped with the second positioning feature point to obtain a first image effect.
- the acquiring the first image, where the first image includes a face image includes:
- the first image includes at least two face images.
- the recognizing the facial expression of the facial image includes:
- performing first processing on the facial image to obtain the first facial image includes:
- first processing is performed on the facial image corresponding to the first facial expression to obtain the first facial image.
- first processing is performed on the face image corresponding to the first facial expression to obtain the first face image.
- the covering the first face image on the position of the face image to obtain the first image effect includes:
- the at least one first face image is overlaid on the position of the face image corresponding to the first face image to obtain a first image effect.
- a facial expression image processing device including:
- the first image acquisition module is configured to acquire a first image, and the first image includes a face image
- the facial expression recognition module is used to recognize the facial expressions of the facial image
- the first processing module is configured to perform first processing on the face image in response to recognizing that the facial expression is the first facial expression to obtain the first facial image;
- the facial expression image processing module is used to overlay the first facial image on the position of the facial image to obtain the first image effect.
- the first image acquisition module further includes:
- the first video acquisition module is configured to acquire a first video, and at least one video frame in the first video includes a face image.
- the facial expression recognition module further includes:
- a face recognition module for recognizing a face image in the first image
- An expression feature extraction module for extracting facial expression features from the face image
- the facial expression recognition sub-module is used to recognize facial expressions according to the facial expression features.
- the first processing module further includes:
- a processing configuration file obtaining module configured to obtain a processing configuration file corresponding to the first facial expression in response to recognizing that the facial expression is a first facial expression
- the first face image processing module is configured to perform first processing on the face image according to the processing configuration file to obtain a first face image.
- processing configuration file obtaining module further includes:
- the first facial expression recognition module is used to recognize the facial expression as the first facial expression
- the first processing configuration file obtaining module is configured to obtain a processing configuration file corresponding to the first facial expression when the level of the first facial expression reaches a preset level.
- processing configuration file obtaining module further includes:
- the second facial expression recognition module is used to recognize the facial expression as the first facial expression
- a second processing configuration file obtaining module configured to obtain a processing configuration file corresponding to the first facial expression
- an expression level judgment module configured to determine the level of the first facial expression
- the processing parameter setting module is configured to set the processing parameters in the processing configuration file according to the level of the first facial expression.
- the first face image processing module further includes:
- a face segmentation module configured to segment the face image from the first image
- the magnification module is configured to perform magnification processing on the segmented face image according to the processing configuration file to obtain the magnified face image.
- the facial expression image processing module further includes:
- a positioning feature point acquisition module configured to acquire a first positioning feature point on the first face image and a second positioning feature point on the face image
- the covering module is used for covering the first face image on the face image, and making the first positioning feature point coincide with the second positioning feature point to obtain a first image effect.
- a facial expression image processing device including:
- the second image acquisition module is configured to acquire a first image, and the first image includes at least two face images;
- the third facial expression recognition module is used to recognize the facial expression of each of the at least two facial images
- the second processing module is configured to, in response to recognizing that at least one of the facial expressions is a first facial expression, perform first processing on the face image corresponding to the first facial expression to obtain the first face image
- the first facial expression image processing module is configured to overlay the at least one first facial image on the position of the facial image corresponding to the first facial image to obtain a first image effect.
- the second processing module further includes:
- a corresponding processing configuration file obtaining module configured to obtain a first processing configuration file corresponding to the first facial expression of the face image in response to recognizing that at least one of the facial expressions is a first facial expression
- the second processing submodule is configured to perform first processing on the face image corresponding to the first facial expression according to the first processing configuration file to obtain a first face image.
- an embodiment of the present disclosure provides an electronic device, including: at least one processor; and a memory communicatively connected with the at least one processor, the memory storing instructions executable by the at least one processor, so that the device can execute any of the facial expression image processing methods described in the foregoing first aspect.
- embodiments of the present disclosure provide a non-transitory computer-readable storage medium that stores computer instructions, where the computer instructions are used to make a computer execute any of the facial expression image processing methods described in the foregoing first aspect.
- the present disclosure discloses a method, device, electronic equipment and computer-readable storage medium for processing facial expression images.
- the method for processing a facial expression image includes: acquiring a first image, the first image including a face image; recognizing the facial expression of the face image; in response to recognizing that the facial expression is a first facial expression, performing first processing on the face image to obtain a first face image; and overlaying the first face image on the position of the face image to obtain a first image effect.
- the embodiment of the present disclosure controls the generation result of the face image effect through the expression of the face, which solves the technical problems of complex image effect production, fixed processing effect, and inability to flexibly configure the processing effect in the prior art.
- FIG. 1 is a flowchart of Embodiment 1 of a facial expression image processing method provided by an embodiment of the disclosure
- FIGS. 2a-2e are schematic diagrams of specific examples of facial expression image processing methods provided by embodiments of the disclosure.
- FIG. 3 is a flowchart of Embodiment 2 of a method for processing facial expression images provided by an embodiment of the disclosure
- FIG. 4 is a schematic structural diagram of Embodiment 1 of a facial expression image processing apparatus provided by an embodiment of the disclosure;
- FIG. 5 is a schematic structural diagram of Embodiment 2 of a facial expression image processing apparatus provided by an embodiment of the disclosure;
- Fig. 6 is a schematic structural diagram of an electronic device provided according to an embodiment of the present disclosure.
- FIG. 1 is a flowchart of the first implementation of a facial expression image processing method provided by an embodiment of the present disclosure.
- the facial expression image processing method provided in this embodiment may be executed by a facial expression image processing device.
- the device can be implemented as software, or as a combination of software and hardware.
- the facial expression image processing device can be integrated in a certain device in the facial expression image processing system, such as a facial expression image processing server or a facial expression image processing terminal device. As shown in Figure 1, the method includes the following steps:
- Step S101 Obtain a first image, where the first image includes a face image
- the obtaining the first image includes obtaining the first image from a local storage space or from a network storage space. Regardless of where the first image comes from, the storage address of the first image is first acquired, and the first image is then retrieved from that storage address.
- the first image may be a video image or a picture, or a picture with dynamic effects, which will not be repeated here.
- the acquiring the first image includes acquiring the first video, and at least one video frame in the first video includes a face image.
- the first video can be obtained through an image sensor, which refers to any device that can collect images; typical image sensors are video cameras, still cameras, webcams, and so on.
- the image sensor may be a camera on a mobile terminal, such as the front or rear camera on a smart phone, and the video image collected by the camera may be directly displayed on the display screen of the phone. In this step, the image or video taken by the image sensor is obtained for further image recognition in the next step.
- the first image includes a face image, which is the basis for recognizing facial expressions.
- when the first image is a picture, the picture includes at least one face image; when the first image is a video, at least one of the video frames in the first image includes at least one face image.
- Step S102 Recognizing the facial expression of the facial image
- recognizing the facial expression of the face image includes: recognizing the face image in the first image; extracting facial expression features from the face image; and recognizing the facial expression according to the facial expression features.
- Face detection is the process of searching an arbitrarily given image or image sequence with a certain strategy to determine the positions and areas of all faces in it.
- face detection methods can be divided into four categories: (1) methods based on prior knowledge, which form a rule base of typical faces to encode faces and locate faces through the relationships between facial features; (2) feature invariance methods, which find features that remain stable when the pose, viewing angle, or lighting conditions change, and then use these features to determine the face; (3) template matching methods, which store several standard face patterns describing the entire face and the facial features separately, and then compute the correlation between the input image and the stored patterns for detection; (4) appearance-based methods, which, in contrast to template matching, learn models from a training image set and use these models for detection.
- An implementation of method (4) can be used here to illustrate the face detection process. First, features need to be extracted to complete the modeling. This embodiment uses Haar features as the key features for judging a face; Haar features are simple rectangular features that can be extracted quickly.
- the feature template used in computing a Haar feature is a simple combination of two or more congruent rectangles, containing both black and white rectangles;
- the AdaBoost algorithm is then used to select a subset of key features from the large number of Haar features and to build an effective classifier from them.
- the constructed classifier can detect the face in the image.
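The Haar feature computation described above can be sketched as follows. This is an illustrative NumPy example (function names are ours; a real detector in the Viola-Jones style evaluates thousands of such features through a boosted cascade), showing why the integral image makes rectangular sums, and hence feature extraction, fast:

```python
import numpy as np

def integral_image(img):
    """Summed-area table: ii[y, x] = sum of img[:y+1, :x+1]."""
    return np.cumsum(np.cumsum(img, axis=0), axis=1)

def rect_sum(ii, x, y, w, h):
    """Sum of the w*h rectangle with top-left corner (x, y), in O(1)
    time using four lookups into the integral image."""
    total = ii[y + h - 1, x + w - 1]
    if x > 0:
        total -= ii[y + h - 1, x - 1]
    if y > 0:
        total -= ii[y - 1, x + w - 1]
    if x > 0 and y > 0:
        total += ii[y - 1, x - 1]
    return total

def haar_two_rect(img, x, y, w, h):
    """Two-rectangle Haar feature: sum of the left (white) half minus
    the sum of the right (black) half of a w*h window."""
    ii = integral_image(img.astype(np.int64))
    half = w // 2
    return rect_sum(ii, x, y, half, h) - rect_sum(ii, x + half, y, half, h)
```

A strong left/right brightness edge yields a large feature value, while a uniform region yields zero, which is what a boosted classifier exploits.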
- multiple face feature points can be detected, and 106 feature points can typically be used to identify a face.
- Face image preprocessing mainly includes denoising, normalization of scale and gray level, etc.
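A minimal sketch of such preprocessing, assuming a grayscale face crop as a NumPy array; the nearest-neighbour resize and min-max gray-level normalization here are illustrative choices, not the patent's:

```python
import numpy as np

def normalize_face(img, size=(64, 64)):
    """Preprocessing sketch: nearest-neighbour resize to a fixed scale,
    then gray-level normalization to the [0, 1] range."""
    h, w = img.shape
    th, tw = size
    # Nearest-neighbour sampling grid for the target size.
    ys = np.arange(th) * h // th
    xs = np.arange(tw) * w // tw
    resized = img[np.ix_(ys, xs)].astype(np.float64)
    lo, hi = resized.min(), resized.max()
    if hi > lo:
        resized = (resized - lo) / (hi - lo)
    else:
        resized = np.zeros_like(resized)
    return resized
```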
- the input image usually has a more complex scene.
- the face images obtained by face detection usually differ in size, aspect ratio, lighting conditions, partial occlusion, and head pose.
- the facial expression features are extracted.
- Motion-based feature extraction methods mainly describe expression changes through changes in the relative positions and distances of facial feature points across sequential images, and include optical flow, motion models, and feature-point tracking; these methods are robust. Deformation-based feature extraction methods are mainly used to extract features from static images.
- the model features are obtained by comparison against the appearance or texture of a natural-expression model. Typical algorithms are based on the active appearance model (AAM) and the point distribution model (PDM), and, for texture features, on the Gabor transform and local binary patterns (LBP).
- facial expression classification sends the expression features extracted in the previous stage to a trained classifier or regressor, which outputs a predicted value used to judge the expression category corresponding to the expression features.
- common expression classification algorithms include linear classifiers, neural network classifiers, support vector machines (SVM), hidden Markov models, and other classification and recognition methods.
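As an illustration of the classification stage, the sketch below scores a feature vector with a plain linear classifier; the label set and the weights are hypothetical placeholders standing in for a trained SVM or neural network:

```python
import numpy as np

# Hypothetical expression categories; in practice the weights and bias
# would come from training on labelled expression features.
LABELS = ["neutral", "smile", "sad", "angry"]

def classify_expression(features, weights, bias):
    """Linear classifier sketch: compute one score per expression
    category and return the label with the highest predicted value."""
    scores = weights @ features + bias
    return LABELS[int(np.argmax(scores))]
```

For example, identity weights simply pick the strongest feature component, so a feature vector dominated by its second component is classified as "smile".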
- Step S103 In response to recognizing that the facial expression is the first facial expression, perform first processing on the facial image to obtain the first facial image;
- performing first processing on the face image to obtain the first face image includes: in response to recognizing that the facial expression is a first facial expression, acquiring a processing configuration file corresponding to the first facial expression; and performing first processing on the face image according to the processing configuration file to obtain the first face image.
- facial expressions such as smile, sadness, anger, etc.
- different processing profiles can be set for each facial expression, so that each expression can be processed differently.
- optionally, when the facial expression is recognized as a smile, the face is enlarged to obtain an enlarged face; optionally, when the facial expression is recognized as sad, a teardrop sticker or a sticker of dark clouds and lightning is added to the face to obtain a face with a sticker; optionally, when the facial expression is recognized as angry, the face is rendered red and the nostrils are enlarged.
- acquiring a processing configuration file corresponding to the first facial expression includes: recognizing that the facial expression is the first facial expression; and, when the level of the first facial expression reaches a preset level, obtaining a processing configuration file corresponding to the first facial expression.
- the level represents the degree of a facial expression. Taking a smile as an example, a slight smile is a low-level smile and a big laugh is a high-level smile; the same applies to other expressions.
- judging the level of the facial expression includes: comparing the facial expression with preset template expressions; and taking the level of the template expression with the highest matching degree as the level of the facial expression.
- the expression is a smile.
- the smile can be divided into multiple levels, such as 100 levels, and each level has a standard template facial expression image corresponding to it.
- the facial expression recognized in this step is compared with the 100 levels of template facial expression images, and the level corresponding to the template facial expression image with the highest matching degree is used as the level of the facial expression.
- judging the level of the facial expression includes: comparing the facial expression with a preset template expression; and using the similarity between the facial expression and the preset template expression as the level of the facial expression.
- there may be only one template facial expression image; the recognized facial expression is compared with the template facial expression image, and the result of the comparison is a similarity percentage. For example, if the comparison shows that the similarity between the facial expression and the template facial expression image is 90%, the level of the facial expression is 90.
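One way to turn a template comparison into such a percentage level is cosine similarity between feature vectors. The patent does not specify the similarity measure, so the choice below is only an assumed example:

```python
import numpy as np

def expression_level(expr_feat, template_feat):
    """Similarity sketch: cosine similarity between the recognized
    expression's feature vector and the single template's feature
    vector, scaled to a 0-100 level (negative similarity clamps to 0)."""
    num = float(np.dot(expr_feat, template_feat))
    den = float(np.linalg.norm(expr_feat) * np.linalg.norm(template_feat))
    if den == 0.0:
        return 0
    return round(100 * max(0.0, num / den))
```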
- an expression level is preset, which is a condition for triggering the first processing.
- for example, smile level 50 is set as the preset expression level; then, when the first expression is recognized as a smile at level 50 or above, a processing configuration file corresponding to the smile is obtained.
- acquiring a processing configuration file corresponding to the first facial expression includes: recognizing that the facial expression is the first facial expression; obtaining a processing configuration file corresponding to the first facial expression; determining the level of the first facial expression; and setting the processing parameters in the processing configuration file according to the level of the first facial expression.
- the manner of determining the level of the first facial expression may be the same as the manner in the foregoing embodiment, and will not be repeated here.
- the level of the first facial expression is used as a reference for the setting of the processing parameter in the processing configuration file, so that the expression can be used to control the effect of the processing.
- for example, when the first facial expression is a smile, a processing configuration file corresponding to the smile is acquired; the processing configuration file is configured to cut out and enlarge the face, and a magnification factor also needs to be set to control the degree of magnification.
- the level of the smile can be used to control the magnification.
- the magnification can be controlled by using the level directly as the magnification factor, or through a correspondence between the level and the magnification.
- for example, the face is magnified 1 time for smile levels 1-10, 1.1 times for levels 11-20, and so on. In this way, as the degree of the smile becomes higher, the face is magnified more.
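That level-to-magnification correspondence can be written as a small lookup. The bracket width (10 levels) and step (0.1x per bracket) follow the example above, while the cap at 1.9x for level 100 is our assumption:

```python
def magnification_for_level(level):
    """Map a smile level (1-100) to a magnification factor: levels 1-10
    give 1.0x, 11-20 give 1.1x, and so on, adding 0.1x per 10 levels."""
    if level < 1:
        return 1.0
    bracket = min((level - 1) // 10, 9)  # 0 for 1-10, 1 for 11-20, ...
    return round(1.0 + 0.1 * bracket, 1)
```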
- the aforementioned expressions, levels, and processing parameters are all examples, and are not enough to limit the present disclosure. In fact, the expression level can be used to control any processing parameter to form a variety of control effects, which will not be repeated here.
- performing first processing on the face image according to the processing configuration file to obtain the first face image includes: segmenting the face image from the first image; and, according to the processing configuration file, enlarging the segmented face image to obtain an enlarged face image.
- the face can be segmented from the first image according to the face contour recognized in step S102 to form a matting effect.
- preprocessing can also be performed on the segmented face image. The preprocessing may blur the edges of the face image; any blurring method can be used, an optional one being Gaussian blur, which will not be detailed here.
- the position of a pixel in the enlarged image can be calculated from the position of the corresponding pixel in the original image, and the color value of the enlarged image's pixel can then be interpolated. Specifically, assuming that the position of a pixel in the original image is (x, y) and the position of the corresponding pixel in the enlarged image is (u, v), the position corresponding to (u, v) can be calculated by the following Formula 1: x = u/α_1, y = v/α_2.
- α_1 is the magnification of the pixel in the X-axis direction, and α_2 is the magnification of the pixel in the Y-axis direction.
- α_1 may equal α_2, for example when a 100*100 image is enlarged to 200*200; but α_1 and α_2 may also be unequal, for example when a 100*100 image is enlarged to 200*300.
- for example, with a magnification of 2 in both directions, the pixel point (10, 20) in the enlarged image corresponds to the pixel point (5, 10) in the original image, so the color value of the pixel point (5, 10) in the original image is assigned to the pixel point (10, 20) in the enlarged image.
- optionally, the color value of the pixel point (x, y) in the original image can be smoothed before being assigned to the pixel of the enlarged image; for example, the average color of the 2*2 pixels surrounding the point (x, y) is used as the color value of the pixel corresponding to (x, y) in the enlarged image.
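Putting Formula 1 and the assignment rule together, a nearest-neighbour enlargement might look like the naive sketch below, where α_1 scales the X (width) direction and α_2 the Y (height) direction, matching the (10, 20) to (5, 10) example:

```python
import numpy as np

def enlarge(img, a1, a2):
    """Enlarge a grayscale image by factor a1 along X (width) and a2
    along Y (height). Each output pixel (u, v) samples the source at
    (x, y) = (u/a1, v/a2) per Formula 1, using nearest-neighbour
    assignment (no smoothing)."""
    h, w = img.shape
    out_h, out_w = int(h * a2), int(w * a1)
    out = np.empty((out_h, out_w), dtype=img.dtype)
    for v in range(out_h):
        for u in range(out_w):
            x = min(int(u / a1), w - 1)
            y = min(int(v / a2), h - 1)
            out[v, u] = img[y, x]
    return out
```

With a1 = a2 = 2, output pixel (u=10, v=20) reads source pixel (x=5, y=10), exactly as in the worked example; the optional 2*2 averaging from the text would replace the direct `img[y, x]` read with a local mean.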
- Step S104 overlay the first face image on the position of the face image to obtain a first image effect.
- the first face image obtained through the first processing in step S103 is overlaid to the position where the face image is located to obtain the first image effect.
- covering the first face image on the position of the face image to obtain the first image effect includes: acquiring a first positioning feature point on the first face image and a second positioning feature point on the face image; and overlaying the first face image on the face image so that the first positioning feature point coincides with the second positioning feature point, obtaining the first image effect.
- the first positioning feature point and the second positioning feature point may be central feature points of the face images, for example, the feature point at the nose tip on the first face image and the feature point at the nose tip on the face image. In this way, the first face image can completely cover and fit its corresponding face image.
- the first locating feature point and the second locating feature point can also be feature points set according to specific needs to achieve other coverage effects, which are not limited here.
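A minimal sketch of this covering step: paste the processed face onto the base image so that the face's positioning feature point (for instance the nose tip) lands exactly on the corresponding point of the original face. The coordinates and function names are illustrative:

```python
import numpy as np

def overlay_face(base, face, base_anchor, face_anchor):
    """Overlay sketch: paste `face` onto a copy of `base` so that
    face_anchor coincides with base_anchor. Anchors are (row, col)
    positioning feature points; the paste region is clipped to the
    bounds of the base image."""
    out = base.copy()
    top = base_anchor[0] - face_anchor[0]
    left = base_anchor[1] - face_anchor[1]
    fh, fw = face.shape
    r0, c0 = max(top, 0), max(left, 0)
    r1 = min(top + fh, base.shape[0])
    c1 = min(left + fw, base.shape[1])
    if r0 < r1 and c0 < c1:
        out[r0:r1, c0:c1] = face[r0 - top:r1 - top, c0 - left:c1 - left]
    return out
```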
- a first image is acquired, and the first image includes a face image.
- optionally, the first image is a video image frame collected by an image sensor, and the video image frame includes a face image; the facial expression of the face image is recognized; in response to recognizing that the facial expression is the first facial expression, first processing is performed on the face image to obtain a first face image; and the first face image is overlaid on the position of the face image to obtain a first image effect.
- in this example, the facial expression is a smile; when the smile on the face is recognized, the effect of magnifying the face is generated.
- the human face did not smile at first, and the image did not change.
- the human face smiled, but the smile was not enough to trigger the generation of the image effect.
- the smile level of the face then increases further, triggering the zooming effect: the enlarged face is superimposed onto the position of the original face, strengthening and highlighting the smile, as shown in Figures 2d-2e.
- when the smile disappears, the big-head effect gradually fades and the image returns to its original state.
- FIG. 3 is a flowchart of the second embodiment of a facial expression image processing method provided by an embodiment of the disclosure.
- the facial expression image processing method provided in this embodiment may be executed by a facial expression image processing device.
- the processing device may be implemented as software, or as a combination of software and hardware.
- the facial expression image processing device may be integrated in a device in an image processing system, such as an image processing server or an image processing terminal device. As shown in Figure 3, the method includes the following steps:
- Step S301: Acquire a first image, where the first image includes at least two face images;
- Step S302: Recognize the facial expression of each of the at least two face images;
- Step S303: In response to recognizing that at least one of the facial expressions is a first facial expression, perform first processing on the face image corresponding to the first facial expression to obtain a first face image;
- Step S304: Overlay the at least one first face image on the position of the face image corresponding to the first face image to obtain a first image effect.
- this embodiment involves the recognition of multiple faces, that is, the first image includes multiple face images. In this case, each face image is processed as described in the first embodiment, and separate image effects can be achieved in the first image for different faces and different expressions.
- first processing is performed on the face image corresponding to the first facial expression to obtain the first face image.
- a processing configuration file is separately set for each different expression of each face, so that each different expression of each face is processed independently without interfering with each other.
- an independent processing configuration file is generated for each expression of each face.
- the configuration file is independent, and the expression of each face can be independently configured to produce different image effects for multiple expressions of multiple faces.
- the present disclosure discloses a method, device, electronic equipment and computer-readable storage medium for processing facial expression images.
- the method for processing a facial expression image includes: acquiring a first image, the first image including a face image; recognizing the facial expression of the face image; in response to recognizing that the facial expression is a first facial expression,
- performing first processing on the face image to obtain a first face image; and overlaying the first face image on the position of the face image to obtain a first image effect.
- the embodiment of the present disclosure controls the generation result of the face image effect through the expression of the face, which solves the technical problems of complex image effect production, fixed processing effect, and inability to flexibly configure the processing effect in the prior art.
- the apparatus 400 includes: a first image acquisition module 401, a facial expression recognition module 402, a first processing module 403, and a facial expression image processing module 404. Specifically:
- the first image acquisition module 401 is configured to acquire a first image, and the first image includes a face image;
- the facial expression recognition module 402 is used to recognize the facial expression of the facial image
- the first processing module 403 is configured to, in response to recognizing that the facial expression is the first facial expression, perform first processing on the facial image to obtain the first facial image;
- the facial expression image processing module 404 is configured to overlay the first facial image on the position of the facial image to obtain a first image effect.
- the first image acquisition module 401 further includes:
- the first video acquisition module is configured to acquire a first video, and at least one video frame in the first video includes a face image.
- the facial expression recognition module 402 further includes:
- a face recognition module for recognizing a face image in the first image
- An expression feature extraction module for extracting facial expression features from the face image
- the facial expression recognition sub-module is used to recognize facial expressions according to the facial expression features.
- the first processing module 403 further includes:
- a processing configuration file obtaining module configured to obtain a processing configuration file corresponding to the first facial expression in response to recognizing that the facial expression is a first facial expression
- the first face image processing module is configured to perform first processing on the face image according to the processing configuration file to obtain a first face image.
- processing configuration file obtaining module further includes:
- the first facial expression recognition module is used to recognize the facial expression as the first facial expression
- the first processing configuration file obtaining module is configured to obtain a processing configuration file corresponding to the first facial expression when the level of the first facial expression reaches a preset level.
- processing configuration file obtaining module further includes:
- the second facial expression recognition module is used to recognize the facial expression as the first facial expression
- a second processing configuration file obtaining module configured to obtain a processing configuration file corresponding to the first facial expression
- an expression level judgment module configured to judge the level of the first facial expression;
- the processing parameter setting module is configured to set the processing parameters in the processing configuration file according to the level of the first facial expression.
- the first face image processing module further includes:
- a face segmentation module configured to segment the face image from the first image
- the enlargement module is configured to perform enlargement processing on the segmented face image according to the processing configuration file to obtain an enlarged face image.
- the facial expression image processing module 404 further includes:
- a positioning feature point acquisition module configured to acquire a first positioning feature point on the first face image and a second positioning feature point on the face image
- the covering module is used for covering the first face image on the face image, and making the first positioning feature point coincide with the second positioning feature point to obtain a first image effect.
- the device shown in FIG. 4 can execute the method of the embodiment shown in FIG. 1; for parts not described in detail in this embodiment, refer to the description of the embodiment shown in FIG. 1. For the execution process and technical effects of this technical solution, refer to the description of the embodiment shown in FIG. 1, which is not repeated here.
- FIG. 5 is a schematic structural diagram of Embodiment 2 of a facial expression image processing apparatus provided by an embodiment of the disclosure.
- the apparatus 500 includes: a second image acquisition module 501, a third facial expression recognition module 502, a second processing module 503, and a first facial expression image processing module 504. Specifically:
- the second image acquisition module 501 is configured to acquire a first image, and the first image includes at least two face images;
- the third facial expression recognition module 502 is configured to recognize the facial expression of each of the at least two facial images
- the second processing module 503 is configured to, in response to recognizing that at least one of the facial expressions is a first facial expression, perform first processing on the face image corresponding to the first facial expression to obtain a first face image.
- the first facial expression image processing module 504 is configured to overlay the at least one first facial image on the position of the facial image corresponding to the first facial image to obtain a first image effect.
- the second processing module 503 further includes:
- a corresponding processing configuration file obtaining module configured to obtain a first processing configuration file corresponding to the first facial expression of the facial image in response to recognizing that at least one of the facial expressions is a first facial facial expression
- the second processing submodule is configured to perform first processing on the face image corresponding to the first facial expression according to the first processing configuration file to obtain a first face image.
- the device shown in FIG. 5 can execute the method of the embodiment shown in FIG. 3.
- FIG. 6 shows a schematic structural diagram of an electronic device 600 suitable for implementing embodiments of the present disclosure.
- the electronic devices in the embodiments of the present disclosure may include, but are not limited to, mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablets), PMPs (portable multimedia players), vehicle-mounted terminals (such as Mobile terminals such as car navigation terminals) and fixed terminals such as digital TVs, desktop computers, etc.
- the electronic device shown in FIG. 6 is only an example, and should not bring any limitation to the function and scope of use of the embodiments of the present disclosure.
- the electronic device 600 may include a processing device (such as a central processing unit, a graphics processor, etc.) 601, which can perform various appropriate actions and processing according to a program stored in a read-only memory (ROM) 602 or a program loaded from a storage device 608 into a random access memory (RAM) 603.
- the RAM 603 also stores various programs and data required for the operation of the electronic device 600.
- the processing device 601, the ROM 602, and the RAM 603 are connected to each other through a bus 604.
- An input/output (I/O) interface 605 is also connected to the bus 604.
- the following devices can be connected to the I/O interface 605: input devices 606 such as a touch screen, touch panel, keyboard, mouse, image sensor, microphone, accelerometer, gyroscope, etc.; output devices 607 such as a liquid crystal display (LCD), speakers, vibrators, etc.; storage devices 608 such as a magnetic tape, hard disk, etc.; and a communication device 609.
- the communication device 609 may allow the electronic device 600 to perform wireless or wired communication with other devices to exchange data.
- although FIG. 6 shows an electronic device 600 having various devices, it should be understood that it is not required to implement or have all the devices shown; more or fewer devices may alternatively be implemented or provided.
- the process described above with reference to the flowchart can be implemented as a computer software program.
- the embodiments of the present disclosure include a computer program product, which includes a computer program carried on a computer-readable medium, and the computer program contains program code for executing the method shown in the flowchart.
- the computer program may be downloaded and installed from the network through the communication device 609, or installed from the storage device 608, or installed from the ROM 602.
- when the computer program is executed by the processing device 601, the above-mentioned functions defined in the methods of the embodiments of the present disclosure are executed.
- the aforementioned computer-readable medium in the present disclosure may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the two.
- the computer-readable storage medium may be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples of computer-readable storage media may include, but are not limited to: an electrical connection with one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
- a computer-readable storage medium may be any tangible medium that contains or stores a program, and the program may be used by or in combination with an instruction execution system, apparatus, or device.
- a computer-readable signal medium may include a data signal propagated in a baseband or as a part of a carrier wave, and a computer-readable program code is carried therein. This propagated data signal can take many forms, including but not limited to electromagnetic signals, optical signals, or any suitable combination of the foregoing.
- the computer-readable signal medium may also be any computer-readable medium other than the computer-readable storage medium.
- the computer-readable signal medium may send, propagate, or transmit the program for use by or in combination with the instruction execution system, apparatus, or device.
- the program code contained on the computer-readable medium can be transmitted by any suitable medium, including but not limited to: wire, optical cable, RF (Radio Frequency), etc., or any suitable combination of the above.
- the above-mentioned computer-readable medium may be included in the above-mentioned electronic device; or it may exist alone without being assembled into the electronic device.
- the above-mentioned computer-readable medium carries one or more programs, and when the one or more programs are executed by the electronic device, the electronic device is caused to: acquire a first image, the first image including a face image; recognize the facial expression of the face image; in response to recognizing that the facial expression is a first facial expression, perform first processing on the face image to obtain a first face image; and overlay the first face image on the position of the face image to obtain a first image effect.
- the computer program code used to perform the operations of the present disclosure may be written in one or more programming languages or a combination thereof.
- the above-mentioned programming languages include object-oriented programming languages such as Java, Smalltalk and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages.
- the program code can be executed entirely on the user's computer, partly on the user's computer, executed as an independent software package, partly on the user's computer and partly executed on a remote computer, or entirely executed on the remote computer or server.
- the remote computer can be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or it can be connected to an external computer (for example, through the Internet using an Internet service provider).
- each block in the flowchart or block diagram can represent a module, program segment, or part of code, and the module, program segment, or part of code contains one or more executable instructions for realizing the specified logical function.
- in some alternative implementations, the functions marked in the blocks may occur in a different order from the order marked in the drawings; for example, two blocks shown in succession can actually be executed substantially in parallel, or they can sometimes be executed in the reverse order, depending on the functions involved.
- each block in the block diagram and/or flowchart, and the combination of blocks in the block diagram and/or flowchart, can be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
- the units involved in the embodiments described in the present disclosure can be implemented in software or hardware. The name of a unit does not, in some cases, constitute a limitation on the unit itself.
Abstract
A facial expression image processing method, apparatus, electronic device, and computer-readable storage medium. The facial expression image processing method includes: acquiring a first image, the first image including a face image (S101); recognizing the facial expression of the face image (S102); in response to recognizing that the facial expression is a first facial expression, performing first processing on the face image to obtain a first face image (S103); and overlaying the first face image on the position of the face image to obtain a first image effect (S104). By controlling the generation of face image effects through facial expressions, the method solves the technical problems in the prior art that image effects are complex to produce, the processing effects are fixed, and the processing effects cannot be flexibly configured.
Description
Cross-Reference to Related Applications
This application claims priority to Chinese Patent Application No. 201910101335.5, filed on January 31, 2019 and entitled "Facial Expression Image Processing Method, Apparatus and Electronic Device", the entire contents of which are incorporated herein by reference.
The present disclosure relates to the field of image processing, and in particular to a facial expression image processing method and apparatus, an electronic device, and a computer-readable storage medium.
With the development of computer technology, the range of applications of smart terminals has expanded greatly; for example, they can be used to listen to music, play games, chat online, and take photographs. The photographing technology of smart terminals has reached resolutions of more than ten million pixels, with high definition and photographic results comparable to professional cameras.
At present, when taking photographs with a smart terminal, not only can the built-in photographing software achieve traditional photographic effects, but applications (APPs) downloaded from the network can also provide photographic effects with additional functions, for example APPs that implement dark-light detection, beauty cameras, super pixels, and so on. By combining various basic facial expression image processing operations, various special effects can be formed, such as beautification, filters, big eyes and slim faces, and so on.
Existing image effects are generally obtained by post-processing images with effect resources, for example performing some processing on faces in a video in post-production; however, this production method takes a great deal of time, and the production process is complicated. In current technology, some fixed processing can also be applied to video images in real time, such as adding filters or performing beautification on faces, but such processing is relatively fixed and the processing effects cannot be flexibly configured.
Summary
In a first aspect, an embodiment of the present disclosure provides a facial expression image processing method, including:
acquiring a first image, the first image including a face image;
recognizing the facial expression of the face image;
in response to recognizing that the facial expression is a first facial expression, performing first processing on the face image to obtain a first face image; and
overlaying the first face image on the position of the face image to obtain a first image effect.
Further, the acquiring a first image, the first image including a face image, includes:
acquiring a first video, at least one video frame of the first video including a face image.
Further, the recognizing the facial expression of the face image includes:
recognizing the face image in the first image;
extracting facial expression features from the face image; and
recognizing the facial expression according to the facial expression features.
Further, the performing first processing on the face image in response to recognizing that the facial expression is a first facial expression, to obtain a first face image, includes:
in response to recognizing that the facial expression is the first facial expression, obtaining a processing configuration file corresponding to the first facial expression; and
performing first processing on the face image according to the processing configuration file to obtain the first face image.
Further, the obtaining a processing configuration file corresponding to the first facial expression in response to recognizing that the facial expression is the first facial expression includes:
recognizing that the facial expression is the first facial expression; and
when the level of the first facial expression reaches a preset level, obtaining the processing configuration file corresponding to the first facial expression.
Further, the obtaining a processing configuration file corresponding to the first facial expression in response to recognizing that the facial expression is the first facial expression includes:
recognizing that the facial expression is the first facial expression;
obtaining the processing configuration file corresponding to the first facial expression;
judging the level of the first facial expression; and
setting processing parameters in the processing configuration file according to the level of the first facial expression.
Further, the performing first processing on the face image according to the processing configuration file to obtain a first face image includes:
segmenting the face image from the first image; and
enlarging the segmented face image according to the processing configuration file to obtain an enlarged face image.
Further, the overlaying the first face image on the position of the face image to obtain a first image effect includes:
obtaining a first locating feature point on the first face image and a second locating feature point on the face image; and
overlaying the first face image on the face image such that the first locating feature point coincides with the second locating feature point, to obtain the first image effect.
Further, the acquiring a first image, the first image including a face image, includes:
acquiring a first image, the first image including at least two face images.
Further, the recognizing the facial expression of the face image includes:
recognizing the facial expression of each of the at least two face images.
Further, the performing first processing on the face image in response to recognizing that the facial expression is a first facial expression, to obtain a first face image, includes:
in response to recognizing that at least one of the facial expressions is the first facial expression, performing first processing on the face image corresponding to the first facial expression to obtain the first face image.
Further, the performing first processing on the face image corresponding to the first facial expression in response to recognizing that at least one of the facial expressions is the first facial expression, to obtain the first face image, includes:
in response to recognizing that at least one of the facial expressions is the first facial expression,
obtaining a first processing configuration file corresponding to the first facial expression of the face image; and
performing first processing on the face image corresponding to the first facial expression according to the first processing configuration file, to obtain the first face image.
Further, the overlaying the first face image on the position of the face image to obtain a first image effect includes:
overlaying the at least one first face image on the position of the face image corresponding to the first face image, to obtain the first image effect.
In a second aspect, an embodiment of the present disclosure provides a facial expression image processing apparatus, including:
a first image acquisition module configured to acquire a first image, the first image including a face image;
a facial expression recognition module configured to recognize the facial expression of the face image;
a first processing module configured to perform first processing on the face image in response to recognizing that the facial expression is a first facial expression, to obtain a first face image; and
a facial expression image processing module configured to overlay the first face image on the position of the face image to obtain a first image effect.
Further, the first image acquisition module further includes:
a first video acquisition module configured to acquire a first video, at least one video frame of the first video including a face image.
Further, the facial expression recognition module further includes:
a face recognition module configured to recognize the face image in the first image;
an expression feature extraction module configured to extract facial expression features from the face image; and
an expression recognition sub-module configured to recognize the facial expression according to the facial expression features.
Further, the first processing module further includes:
a processing configuration file obtaining module configured to obtain a processing configuration file corresponding to the first facial expression in response to recognizing that the facial expression is the first facial expression; and
a first face image processing module configured to perform first processing on the face image according to the processing configuration file to obtain the first face image.
Further, the processing configuration file obtaining module further includes:
a first facial expression recognition module configured to recognize that the facial expression is the first facial expression; and
a first processing configuration file obtaining module configured to obtain the processing configuration file corresponding to the first facial expression when the level of the first facial expression reaches a preset level.
Further, the processing configuration file obtaining module further includes:
a second facial expression recognition module configured to recognize that the facial expression is the first facial expression;
a second processing configuration file obtaining module configured to obtain the processing configuration file corresponding to the first facial expression;
an expression level judgment module configured to judge the level of the first facial expression; and
a processing parameter setting module configured to set processing parameters in the processing configuration file according to the level of the first facial expression.
Further, the first face image processing module further includes:
a face segmentation module configured to segment the face image from the first image; and
an enlargement module configured to enlarge the segmented face image according to the processing configuration file to obtain an enlarged face image.
Further, the facial expression image processing module further includes:
a locating feature point acquisition module configured to obtain a first locating feature point on the first face image and a second locating feature point on the face image; and
an overlay module configured to overlay the first face image on the face image such that the first locating feature point coincides with the second locating feature point, to obtain the first image effect.
In a third aspect, an embodiment of the present disclosure provides a facial expression image processing apparatus, including:
a second image acquisition module configured to acquire a first image, the first image including at least two face images;
a third facial expression recognition module configured to recognize the facial expression of each of the at least two face images;
a second processing module configured to perform first processing on the face image corresponding to a first facial expression in response to recognizing that at least one of the facial expressions is the first facial expression, to obtain a first face image; and
a first facial expression image processing module configured to overlay the at least one first face image on the position of the face image corresponding to the first face image, to obtain a first image effect.
Further, the second processing module further includes:
a corresponding processing configuration file obtaining module configured to obtain a first processing configuration file corresponding to the first facial expression of the face image in response to recognizing that at least one of the facial expressions is the first facial expression; and
a second processing sub-module configured to perform first processing on the face image corresponding to the first facial expression according to the first processing configuration file, to obtain the first face image.
In a fourth aspect, an embodiment of the present disclosure provides an electronic device, including: at least one processor; and
a memory communicatively connected to the at least one processor, wherein the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to perform any of the facial expression image processing methods of the first aspect.
In a fifth aspect, an embodiment of the present disclosure provides a non-transitory computer-readable storage medium storing computer instructions, the computer instructions being used to cause a computer to perform any of the facial expression image processing methods of the first aspect.
The present disclosure discloses a facial expression image processing method and apparatus, an electronic device, and a computer-readable storage medium. The facial expression image processing method includes: acquiring a first image, the first image including a face image; recognizing the facial expression of the face image; in response to recognizing that the facial expression is a first facial expression, performing first processing on the face image to obtain a first face image; and overlaying the first face image on the position of the face image to obtain a first image effect. The embodiments of the present disclosure control the generation of face image effects through facial expressions, which solves the technical problems in the prior art that image effects are complex to produce, the processing effects are fixed, and the processing effects cannot be flexibly configured.
The above description is only an overview of the technical solutions of the present disclosure. In order that the technical means of the present disclosure may be understood more clearly and implemented in accordance with the contents of the specification, and in order to make the above and other objects, features and advantages of the present disclosure more apparent and understandable, preferred embodiments are described in detail below with reference to the accompanying drawings.
In order to explain the technical solutions of the embodiments of the present disclosure or the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present disclosure; for those of ordinary skill in the art, other drawings can be obtained from these drawings without creative effort.
FIG. 1 is a flowchart of Embodiment 1 of a facial expression image processing method provided by an embodiment of the present disclosure;
FIGS. 2a-2e are schematic diagrams of a specific example of the facial expression image processing method provided by an embodiment of the present disclosure;
FIG. 3 is a flowchart of Embodiment 2 of the facial expression image processing method provided by an embodiment of the present disclosure;
FIG. 4 is a schematic structural diagram of Embodiment 1 of a facial expression image processing apparatus provided by an embodiment of the present disclosure;
FIG. 5 is a schematic structural diagram of Embodiment 2 of the facial expression image processing apparatus provided by an embodiment of the present disclosure;
FIG. 6 is a schematic structural diagram of an electronic device provided according to an embodiment of the present disclosure.
The embodiments of the present disclosure are described below through specific examples, and those skilled in the art can easily understand other advantages and effects of the present disclosure from the contents disclosed in this specification. Obviously, the described embodiments are only some of the embodiments of the present disclosure, rather than all of them. The present disclosure can also be implemented or applied through other different specific embodiments, and the details in this specification can also be modified or changed in various ways based on different viewpoints and applications without departing from the spirit of the present disclosure. It should be noted that, in the absence of conflict, the following embodiments and the features in the embodiments can be combined with each other. Based on the embodiments in the present disclosure, all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the protection scope of the present disclosure.
It should be noted that various aspects of the embodiments within the scope of the appended claims are described below. It should be apparent that the aspects described herein can be embodied in a wide variety of forms, and any specific structure and/or function described herein is merely illustrative. Based on the present disclosure, those skilled in the art should understand that one aspect described herein can be implemented independently of any other aspect, and that two or more of these aspects can be combined in various ways. For example, a device can be implemented and/or a method can be practiced using any number of the aspects set forth herein. In addition, such a device can be implemented and/or such a method can be practiced using other structures and/or functionality in addition to one or more of the aspects set forth herein.
It should also be noted that the illustrations provided in the following embodiments merely illustrate the basic concept of the present disclosure in a schematic manner. The drawings show only the components related to the present disclosure, rather than being drawn according to the number, shape and size of the components in actual implementation; in actual implementation, the type, quantity and proportion of each component can be changed arbitrarily, and the component layout may also be more complicated.
In addition, in the following description, specific details are provided to facilitate a thorough understanding of the examples. However, those skilled in the art will understand that the aspects can be practiced without these specific details.
FIG. 1 is a flowchart of Embodiment 1 of a facial expression image processing method provided by an embodiment of the present disclosure. The facial expression image processing method provided in this embodiment may be executed by a facial expression image processing apparatus, which may be implemented as software, or as a combination of software and hardware, and which may be integrated in a device in a facial expression image processing system, such as a facial expression image processing server or a facial expression image processing terminal device. As shown in FIG. 1, the method includes the following steps:
Step S101: Acquire a first image, the first image including a face image.
In one embodiment, the acquiring the first image includes acquiring the first image from a local storage space or from a network storage space. Wherever the first image is acquired from, the storage address of the first image is obtained first, and the first image is then acquired from that storage address. The first image may be a video image or a picture, or a picture with a dynamic effect, which is not repeated here.
In one embodiment, the acquiring the first image includes acquiring a first video, at least one video frame of the first video including a face image. In this embodiment, the first video may be acquired through an image sensor, which refers to any of various devices that can collect images; typical image sensors are video cameras, webcams, cameras, and so on. In this embodiment, the image sensor may be a camera on a mobile terminal, such as a front or rear camera on a smartphone, and the video image collected by the camera may be directly displayed on the display screen of the phone. In this step, the image video captured by the image sensor is acquired for further image recognition in the next step.
In this step, the first image includes a face image, and the face image is the basis of the facial expression. In this embodiment, if the first image is a picture, the picture includes at least one face image; if the first image is a video, at least one of the video frames of the first image includes at least one face image.
Step S102: Recognize the facial expression of the face image.
In one embodiment, recognizing the facial expression of the face image includes: recognizing the face image in the first image; extracting facial expression features from the face image; and recognizing the facial expression according to the facial expression features.
First, the face in the image needs to be detected. Face detection is a process in which any given image or group of image sequences is searched with a certain strategy to determine the positions and regions of all faces; it determines whether faces exist in various images or image sequences, and determines the number and spatial distribution of the faces. Face detection methods can generally be divided into four categories: (1) knowledge-based methods, which encode typical faces into a rule base and locate faces through the relationships between facial features; (2) feature-invariant methods, which find features that are stable under changes in pose, viewing angle or lighting conditions, and then use these features to determine faces; (3) template matching methods, which store several standard face patterns describing the whole face and the facial features respectively, and then compute the correlation between an input image and the stored patterns for detection; and (4) appearance-based methods, which, in contrast to template matching, learn models from a set of training images and use these models for detection. One implementation of method (4) can be used here to illustrate the face detection process: first, features need to be extracted to build a model. This embodiment uses Haar features as the key features for judging a face. A Haar feature is a simple rectangular feature that can be extracted quickly; the feature template used in the calculation of a general Haar feature is a simple combination of two or more congruent rectangles, in which there are two kinds of rectangles, black and white. Then, the AdaBoost algorithm is used to find, from a large number of Haar features, the portion of features that play a key role, and these features are used to generate an effective classifier; the constructed classifier can then detect faces in the image. During face detection, multiple facial feature points may be detected; typically, 106 feature points can be used to identify a face.
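The rectangle features described above can be sketched in a few lines. The following is our own minimal illustration (not the patent's implementation) of a two-rectangle Haar-like feature computed with an integral image, the kind of weak feature from which AdaBoost selects a classifier:

```python
def integral_image(img):
    """Summed-area table: ii[y][x] = sum of img[0..y][0..x]."""
    h, w = len(img), len(img[0])
    ii = [[0] * w for _ in range(h)]
    for y in range(h):
        row_sum = 0
        for x in range(w):
            row_sum += img[y][x]
            ii[y][x] = row_sum + (ii[y - 1][x] if y > 0 else 0)
    return ii

def rect_sum(ii, x, y, w, h):
    """Sum of the w*h rectangle whose top-left corner is (x, y), in O(1)."""
    a = ii[y + h - 1][x + w - 1]
    b = ii[y - 1][x + w - 1] if y > 0 else 0
    c = ii[y + h - 1][x - 1] if x > 0 else 0
    d = ii[y - 1][x - 1] if x > 0 and y > 0 else 0
    return a - b - c + d

def haar_two_rect(ii, x, y, w, h):
    """White (left) half minus black (right) half of a w*h window (w even)."""
    half = w // 2
    return rect_sum(ii, x, y, half, h) - rect_sum(ii, x + half, y, half, h)
```

With the integral image precomputed, every rectangle sum costs four lookups, which is why these features can be "extracted quickly" as the text notes.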
After the face image is detected, the face image may be further preprocessed so that the facial expression can be recognized in the next step. The quality of image preprocessing directly affects the accuracy of expression feature extraction and the effect of expression classification, and thus affects the accuracy of expression recognition. Face image preprocessing mainly includes denoising and normalization of scale and grayscale. The input image usually has a relatively complex scene, and the face images obtained by face detection usually differ in size, aspect ratio, lighting conditions, partial occlusion, and head deflection. For unified processing in subsequent feature extraction, their size, lighting, and head pose need to be normalized and corrected to improve image quality and prepare for further analysis and understanding of the facial expression.
After preprocessing, facial expression features are extracted. There are many methods of facial expression feature extraction, which are divided, according to whether the source picture is static or dynamic, into motion-based and deformation-based expression feature extraction. Motion-based feature extraction methods describe expression changes mainly according to changes in the relative positions and distances of facial feature points in a sequence of images; specific methods include optical flow, motion models, and feature-point tracking, and such methods have good robustness. Deformation-based feature extraction methods are mainly used to extract features from static pictures, obtaining model features by comparing appearance or texture with a natural expression model; typical algorithms are based on the active appearance model (AAM) and the point distribution model (PDM), or based on the texture features of the Gabor transform and the local binary pattern (LBP).
After the facial expression features are extracted, facial expression classification is performed. Expression classification means feeding the expression features extracted in the previous stage into a trained classifier or regressor, which gives a predicted value to determine the expression category corresponding to the expression features. Common expression classification algorithms at present mainly include linear classifiers, neural network classifiers, support vector machines (SVM), hidden Markov models, and other classification and recognition methods.
It can be understood that the above-mentioned methods of face detection, face image preprocessing, expression feature extraction, and facial expression classification are all examples given for ease of understanding. In fact, any method that can recognize facial expressions can be used in the technical solutions of the present disclosure, which is not repeated here.
Step S103: In response to recognizing that the facial expression is a first facial expression, perform first processing on the face image to obtain a first face image.
In one embodiment, the performing first processing on the face image in response to recognizing that the facial expression is the first facial expression, to obtain the first face image, includes: in response to recognizing that the facial expression is the first facial expression, obtaining a processing configuration file corresponding to the first facial expression; and performing first processing on the face image according to the processing configuration file to obtain the first face image. In this embodiment, there may be many kinds of facial expressions, typically smiling, sadness, anger, and so on, and a different processing configuration file may be set for each facial expression so that each expression is processed differently. Optionally, when the facial expression is recognized as a smile, the face is enlarged to obtain an enlarged face; optionally, when the facial expression is recognized as sadness, a teardrop sticker or a dark-cloud-and-lightning sticker is added to the face to obtain a face with a sticker; optionally, when the facial expression is recognized as anger, the face is rendered red and the nostrils are enlarged.
In one embodiment, the obtaining a processing configuration file corresponding to the first facial expression in response to recognizing that the facial expression is the first facial expression includes: recognizing that the facial expression is the first facial expression; and when the level of the first facial expression reaches a preset level, obtaining the processing configuration file corresponding to the first facial expression. In this embodiment, after the facial expression is recognized, the level of the facial expression needs to be further judged. The level represents the degree of the facial expression; taking a smile as an example, a slight smile is a lower-level smile, a laugh is a higher-level smile, and so on for other expressions. In this embodiment, judging the level of the facial expression includes: comparing the facial expression with preset template expressions; and taking the level of the template expression with the highest matching degree with the facial expression as the level of the facial expression. Optionally, the expression is a smile, and the smile may be divided into multiple levels, for example into 100 levels, with each level corresponding to a standard template facial expression image. When judging the level of the facial expression, the facial expression recognized in the previous step is compared with the template facial expression images of these 100 levels, and the level corresponding to the template facial expression image with the highest matching degree is taken as the level of the facial expression. Optionally, judging the level of the facial expression includes: comparing the facial expression with a preset template expression; and taking the similarity between the facial expression and the preset template expression as the level of the facial expression. In this embodiment, there may be only one template facial expression image; the recognized facial expression is compared with the template facial expression image, and the result of the comparison is a similarity percentage. For example, if the similarity between the facial expression and the template facial expression image is 90% after comparison, the level of the facial expression is level 90. In this embodiment, an expression level is preset as the condition for triggering the first processing; optionally, a level-50 smile is set as the preset expression level, and then, when the first expression is recognized as a smile above level 50, the processing configuration file corresponding to the smile is obtained.
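The two level-judgment schemes and the preset trigger level described above can be sketched as follows; the function names and the similarity scale are our own illustrative choices, not part of the patent:

```python
def level_from_templates(scores):
    """scores: {level: matching degree}; the level of the best-matching
    template expression is taken as the expression's level."""
    return max(scores, key=scores.get)

def level_from_similarity(similarity):
    """Single-template variant: a similarity in [0.0, 1.0] against one
    template maps directly to a level in [0, 100] (0.9 -> level 90)."""
    return round(similarity * 100)

def should_trigger(level, preset_level=50):
    """First processing is triggered only once the level reaches the
    preset level (e.g. a level-50 smile)."""
    return level >= preset_level
```

For example, a 90% match against a single smile template yields level 90, which clears the preset level-50 threshold and triggers the processing configuration file lookup.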
In one embodiment, the obtaining a processing configuration file corresponding to the first facial expression in response to recognizing that the facial expression is the first facial expression includes: recognizing that the facial expression is the first facial expression; obtaining the processing configuration file corresponding to the first facial expression; judging the level of the first facial expression; and setting processing parameters in the processing configuration file according to the level of the first facial expression. In this embodiment, the level of the first facial expression may be judged in the same way as in the foregoing embodiment, which is not repeated here. In this embodiment, the level of the first facial expression is used as a reference for setting the processing parameters in the processing configuration file, so that the expression can be used to control the processing effect. Optionally, the first facial expression is a smile; when a smile is recognized, the processing configuration file corresponding to the smile is obtained, in which cutting out and enlarging the face is configured. In addition, a magnification factor needs to be set to control the magnification, and the level of the smile can be used to control the magnification factor. The control here may directly use the level as the magnification factor, or may use a correspondence between level and factor; optionally, smile levels 1-10 magnify by 1x, smile levels 11-20 magnify by 1.1x, and so on, so that the higher the degree of the smile, the more the face is enlarged. It can be understood that the above expressions, levels and processing parameters are all examples and do not constitute a limitation on the present disclosure; in fact, the level of an expression can be used to control any processing parameter to form a wide variety of control effects, which is not repeated here.
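The optional level-to-magnification correspondence just described (levels 1-10 at 1x, 11-20 at 1.1x, and so on) can be written as a one-line mapping; the 0.1 step per ten levels follows the text's example and is otherwise an illustrative choice:

```python
def level_to_zoom(level, step=0.1):
    """Every ten smile levels add one `step` to the magnification factor:
    levels 1-10 -> 1.0, 11-20 -> 1.1, 21-30 -> 1.2, ..."""
    if level < 1:
        return 1.0  # no smile, no enlargement
    return 1.0 + step * ((level - 1) // 10)
```

This is one way the expression level can drive a processing parameter; any other monotone mapping would serve the same configurable-effect purpose.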
In one embodiment, the performing first processing on the face image according to the processing configuration file, to obtain the first face image, includes: segmenting the face image from the first image; and enlarging the segmented face image according to the processing configuration file to obtain an enlarged face image. In this embodiment, the face may be segmented from the first image according to the face contour recognized in step S102 to form a cut-out effect. To make the image more natural, the segmented face image may also be preprocessed; the preprocessing may be blurring the edges of the face image, and any blurring method may be used, one optional method being Gaussian blur. It can be understood that any blur processing may be used here, which is not repeated.
For the enlargement processing, the position of a pixel of the enlarged image in the original image may be calculated based on the position of that pixel in the enlarged image, and the color value of the pixel of the enlarged image is then interpolated. Specifically, assuming that the position of a pixel in the original image is (x, y) and the position of a pixel in the enlarged image is (u, v), the (x, y) position corresponding to the (u, v) position can be calculated by the following Formula 1:
x = u/α_1, y = v/α_2 (Formula 1)
where α_1 is the magnification factor of the pixel in the X-axis direction, and α_2 is the magnification factor in the Y-axis direction. In general, α_1 = α_2, for example when a 100*100 image is enlarged to 200*200; but α_1 and α_2 may also be unequal, for example when a 100*100 image is enlarged to 200*300. The following is a calculation example: suppose the coordinates of a pixel in the enlarged image are (10, 20), and the magnification factors in the X-axis and Y-axis directions are both 2; then:
x = 10/2 = 5, y = 20/2 = 10
That is, the pixel (10, 20) in the enlarged image corresponds to the pixel (5, 10) in the original image, and the color value of the pixel (5, 10) in the original image is assigned to the pixel (10, 20) in the enlarged image. Optionally, to make the image smoother, the color value of the pixel at point (x, y) in the original image may be smoothed before being assigned to the pixel of the enlarged image; optionally, the average color of the 2*2 pixels around the point (x, y) may be taken as the color value of the pixel corresponding to point (x, y) in the enlarged image.
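The enlargement just described can be sketched as a nearest-neighbor resampler: each destination pixel (u, v) is mapped back through Formula 1 to (x, y) = (u/α_1, v/α_2) and takes that source pixel's color value (the optional 2*2 averaging is omitted here for brevity):

```python
def enlarge(img, a1, a2):
    """img: 2D list indexed as img[y][x]; a1/a2: X/Y magnification factors.
    Returns the enlarged image, filling each destination pixel from the
    source pixel given by Formula 1."""
    src_h, src_w = len(img), len(img[0])
    dst_h, dst_w = int(src_h * a2), int(src_w * a1)
    out = [[0] * dst_w for _ in range(dst_h)]
    for v in range(dst_h):
        for u in range(dst_w):
            x = int(u / a1)  # Formula 1: x = u / a1
            y = int(v / a2)  # Formula 1: y = v / a2
            out[v][u] = img[y][x]
    return out
```

With a1 = a2 = 2, the destination pixel (10, 20) reads from source pixel (5, 10), matching the worked example above.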
It can be understood that, in this embodiment, the above expressions and processing are merely examples and do not constitute a limitation on the present disclosure; in fact, any expression and any processing can be applied in the technical solutions of the present disclosure.
Step S104: Overlay the first face image on the position of the face image to obtain a first image effect.
In this step, the first face image obtained through the first processing in step S103 is overlaid on the position of the face image to obtain the first image effect.
In one embodiment, the overlaying the first face image on the position of the face image to obtain the first image effect includes: obtaining a first locating feature point on the first face image and a second locating feature point on the face image; and overlaying the first face image on the face image such that the first locating feature point coincides with the second locating feature point, to obtain the first image effect. In this embodiment, the first locating feature point and the second locating feature point may be central feature points on the face images, for example the feature point of the nose tip on the first face image and the feature point of the nose tip on the face image; in this way, the first face image and its corresponding face image can be completely overlaid and fitted. Of course, the first locating feature point and the second locating feature point may also be feature points set according to specific needs to achieve other coverage effects, which is not limited here.
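The feature-point alignment above reduces to computing a paste offset; the following sketch (the function name is our own) chooses the top-left paste position so that the two locating points coincide:

```python
def paste_origin(first_point, second_point):
    """first_point: the locating feature point (x, y) inside the enlarged
    face image; second_point: the matching point in the full frame. Returns
    the top-left corner at which the enlarged face must be pasted so that
    the two points land on the same frame coordinate."""
    (fx, fy), (sx, sy) = first_point, second_point
    return (sx - fx, sy - fy)
```

For example, if the nose tip sits at (40, 50) inside the enlarged face and at (120, 130) in the frame, the face is pasted with its top-left corner at (80, 80), so that (80+40, 80+50) coincides with the frame's nose tip.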
It can be understood that the above manner of overlaying the first face image on the face image by making feature points coincide is merely an example; in fact, any overlay manner can be applied in the present disclosure, which is not repeated here.
FIGS. 2a-2e show a specific example of the above embodiment. As shown in FIG. 2a, a first image is acquired, and the first image includes a face image; in this example, the first image is a video image frame collected by an image sensor, and the video image frame includes a face image. As shown in FIGS. 2a-2e, the facial expression of the face image is recognized; in response to recognizing that the facial expression is a first facial expression, first processing is performed on the face image to obtain a first face image; and the first face image is overlaid on the position of the face image to obtain a first image effect. In this specific example, the facial expression is a smile, and according to the recognized smile, an effect of enlarging the face is generated. As shown in FIG. 2a, at first the face has no smile and the image does not change; as shown in FIG. 2b, the face smiles, but the degree of the smile is not enough to trigger the generation of the image effect; as shown in FIG. 2c, when the degree of the smile becomes higher, the enlargement effect of the face is triggered, the enlarged face is superimposed on the position of the original face, and the smile of the face is strengthened and highlighted, as shown in FIGS. 2d-2e; when the smile disappears, the big-head effect gradually disappears and the image returns to its original state.
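The trigger-and-fade behavior of FIGS. 2a-2e can be sketched as a tiny per-frame state update; the target zoom and fade rate below are our own illustrative choices (the patent only states that the effect appears above the preset level and gradually disappears afterwards):

```python
def update_zoom(current_zoom, smile_level, preset_level=50,
                target_zoom=1.5, fade=0.1):
    """Return the zoom factor to apply on this video frame: ramp toward the
    target while the smile level reaches the preset level, and decay back
    toward 1.0 (no effect) once the smile goes away."""
    if smile_level >= preset_level:
        return min(target_zoom, current_zoom + fade)  # effect ramps in
    return max(1.0, current_zoom - fade)              # effect fades out
```

Called once per frame, this reproduces the sequence in the figures: no change without a smile, no change below the threshold, enlargement once the level is high enough, and a gradual return to the original image when the smile disappears.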
FIG. 3 is a flowchart of Embodiment 2 of the facial expression image processing method provided by an embodiment of the present disclosure. The facial expression image processing method provided in this embodiment may be executed by a facial expression image processing apparatus, which may be implemented as software, or as a combination of software and hardware, and which may be integrated in a device in an image processing system, such as an image processing server or an image processing terminal device. As shown in FIG. 3, the method includes the following steps:
Step S301: Acquire a first image, the first image including at least two face images.
Step S302: Recognize the facial expression of each of the at least two face images.
Step S303: In response to recognizing that at least one of the facial expressions is a first facial expression, perform first processing on the face image corresponding to the first facial expression to obtain a first face image.
Step S304: Overlay the at least one first face image on the position of the face image corresponding to the first face image to obtain a first image effect.
This embodiment involves the recognition of multiple faces, that is, the first image includes multiple face images. In this case, each face image is processed as described in Embodiment 1, and separate image effects can be achieved in the first image for different faces and different expressions.
Further, the performing first processing on the face image corresponding to the first facial expression in response to recognizing that at least one of the facial expressions is the first facial expression, to obtain the first face image, includes:
in response to recognizing that at least one of the facial expressions is the first facial expression, obtaining a first processing configuration file corresponding to the first facial expression of the face image; and
performing first processing on the face image corresponding to the first facial expression according to the first processing configuration file, to obtain the first face image.
In this embodiment, a processing configuration file is set separately for each different expression of each face, so that each different expression of each face is processed independently without mutual interference.
In this step, an independent processing configuration file is generated for each expression of each face. For example, when it is recognized that the first image includes three faces, the faces are numbered face1, face2 and face3. If the expression of face1 is detected as a smile, the processing configuration file corresponding to that expression is named face1.ID1, and the image effect is then displayed according to the configuration parameters in that processing configuration file; if the expression of face2 is detected as anger, the processing configuration file corresponding to that expression is named face2.ID2, and the image effect is then displayed according to the configuration parameters in that processing configuration file; if the expression of face3 is detected as a smile, the processing configuration file corresponding to that expression is named face3.ID1, and the image effect is then displayed according to the configuration parameters in that processing configuration file. In this way, the configuration file for each expression of each face is independent, and the expression of each face can be configured independently, producing different image effects for multiple expressions of multiple faces.
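The per-face, per-expression naming used in the example above (face1.ID1, face2.ID2, face3.ID1) can be sketched as follows; the expression-to-ID table and helper names are our own illustration of the scheme, not part of the patent:

```python
EXPRESSION_IDS = {"smile": "ID1", "anger": "ID2"}  # illustrative mapping

def config_name(face_index, expression):
    """face_index is 1-based; e.g. config_name(1, 'smile') -> 'face1.ID1'."""
    return f"face{face_index}.{EXPRESSION_IDS[expression]}"

def config_names(expressions):
    """One independent processing configuration file name per detected face,
    so each face's expression can be configured without interference."""
    return [config_name(i, expr) for i, expr in enumerate(expressions, start=1)]
```

For the three-face example in the text, `config_names(["smile", "anger", "smile"])` yields the three independent file names, one per face-expression pair.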
It can be understood that, for the expression recognition, level judgment and image-effect generation of a single face, the technical solution of Embodiment 1 can be used, which is not repeated here.
The present disclosure discloses a facial expression image processing method and apparatus, an electronic device, and a computer-readable storage medium. The facial expression image processing method includes: acquiring a first image, the first image including a face image; recognizing the facial expression of the face image; in response to recognizing that the facial expression is a first facial expression, performing first processing on the face image to obtain a first face image; and overlaying the first face image on the position of the face image to obtain a first image effect. The embodiments of the present disclosure control the generation of face image effects through facial expressions, which solves the technical problems in the prior art that image effects are complex to produce, the processing effects are fixed, and the processing effects cannot be flexibly configured.
Although the steps in the above method embodiments are described in the above order, those skilled in the art should understand that the steps in the embodiments of the present disclosure are not necessarily performed in that order, and may also be performed in other orders, such as reverse, parallel, or interleaved orders; moreover, on the basis of the above steps, those skilled in the art may also add other steps, and these obvious variations or equivalent replacements should also be included within the protection scope of the present disclosure, which is not repeated here.
FIG. 4 is a schematic structural diagram of Embodiment 1 of a facial expression image processing apparatus provided by an embodiment of the present disclosure. As shown in FIG. 4, the apparatus 400 includes: a first image acquisition module 401, a facial expression recognition module 402, a first processing module 403, and a facial expression image processing module 404. Specifically:
the first image acquisition module 401 is configured to acquire a first image, the first image including a face image;
the facial expression recognition module 402 is configured to recognize the facial expression of the face image;
the first processing module 403 is configured to perform first processing on the face image in response to recognizing that the facial expression is a first facial expression, to obtain a first face image; and
the facial expression image processing module 404 is configured to overlay the first face image on the position of the face image to obtain a first image effect.
Further, the first image acquisition module 401 further includes:
a first video acquisition module configured to acquire a first video, at least one video frame of the first video including a face image.
Further, the facial expression recognition module 402 further includes:
a face recognition module configured to recognize the face image in the first image;
an expression feature extraction module configured to extract facial expression features from the face image; and
an expression recognition sub-module configured to recognize the facial expression according to the facial expression features.
Further, the first processing module 403 further includes:
a processing configuration file obtaining module configured to obtain a processing configuration file corresponding to the first facial expression in response to recognizing that the facial expression is the first facial expression; and
a first face image processing module configured to perform first processing on the face image according to the processing configuration file to obtain the first face image.
Further, the processing configuration file obtaining module further includes:
a first facial expression recognition module configured to recognize that the facial expression is the first facial expression; and
a first processing configuration file obtaining module configured to obtain the processing configuration file corresponding to the first facial expression when the level of the first facial expression reaches a preset level.
Further, the processing configuration file obtaining module further includes:
a second facial expression recognition module configured to recognize that the facial expression is the first facial expression;
a second processing configuration file obtaining module configured to obtain the processing configuration file corresponding to the first facial expression;
an expression level judgment module configured to judge the level of the first facial expression; and
a processing parameter setting module configured to set processing parameters in the processing configuration file according to the level of the first facial expression.
Further, the first face image processing module further includes:
a face segmentation module configured to segment the face image from the first image; and
an enlargement module configured to enlarge the segmented face image according to the processing configuration file to obtain an enlarged face image.
Further, the facial expression image processing module 404 further includes:
a locating feature point acquisition module configured to obtain a first locating feature point on the first face image and a second locating feature point on the face image; and
an overlay module configured to overlay the first face image on the face image such that the first locating feature point coincides with the second locating feature point, to obtain the first image effect.
The apparatus shown in FIG. 4 can execute the method of the embodiment shown in FIG. 1; for parts not described in detail in this embodiment, refer to the related description of the embodiment shown in FIG. 1. For the execution process and technical effects of this technical solution, refer to the description of the embodiment shown in FIG. 1, which is not repeated here.
FIG. 5 is a schematic structural diagram of Embodiment 2 of the facial expression image processing apparatus provided by an embodiment of the present disclosure. As shown in FIG. 5, the apparatus 500 includes: a second image acquisition module 501, a third facial expression recognition module 502, a second processing module 503, and a first facial expression image processing module 504. Specifically:
the second image acquisition module 501 is configured to acquire a first image, the first image including at least two face images;
the third facial expression recognition module 502 is configured to recognize the facial expression of each of the at least two face images;
the second processing module 503 is configured to perform first processing on the face image corresponding to a first facial expression in response to recognizing that at least one of the facial expressions is the first facial expression, to obtain a first face image; and
the first facial expression image processing module 504 is configured to overlay the at least one first face image on the position of the face image corresponding to the first face image, to obtain a first image effect.
Further, the second processing module 503 further includes:
a corresponding processing configuration file obtaining module configured to obtain a first processing configuration file corresponding to the first facial expression of the face image in response to recognizing that at least one of the facial expressions is the first facial expression; and
a second processing sub-module configured to perform first processing on the face image corresponding to the first facial expression according to the first processing configuration file, to obtain the first face image.
The apparatus shown in FIG. 5 can execute the method of the embodiment shown in FIG. 3; for parts not described in detail in this embodiment, refer to the related description of the embodiment shown in FIG. 3. For the execution process and technical effects of this technical solution, refer to the description of the embodiment shown in FIG. 3, which is not repeated here.
Referring now to FIG. 6, there is shown a schematic structural diagram of an electronic device 600 suitable for implementing the embodiments of the present disclosure. Electronic devices in the embodiments of the present disclosure may include, but are not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), and vehicle-mounted terminals (such as car navigation terminals), as well as fixed terminals such as digital TVs and desktop computers. The electronic device shown in FIG. 6 is merely an example, and should not impose any limitation on the functions and scope of use of the embodiments of the present disclosure.
As shown in FIG. 6, the electronic device 600 may include a processing apparatus (such as a central processing unit, a graphics processor, etc.) 601, which can perform various appropriate actions and processing according to a program stored in a read-only memory (ROM) 602 or a program loaded from a storage apparatus 608 into a random access memory (RAM) 603. The RAM 603 also stores various programs and data required for the operation of the electronic device 600. The processing apparatus 601, the ROM 602, and the RAM 603 are connected to each other through a bus 604. An input/output (I/O) interface 605 is also connected to the bus 604.
Generally, the following apparatuses can be connected to the I/O interface 605: input apparatuses 606 including, for example, a touch screen, touch pad, keyboard, mouse, image sensor, microphone, accelerometer, gyroscope, etc.; output apparatuses 607 including, for example, a liquid crystal display (LCD), speaker, vibrator, etc.; storage apparatuses 608 including, for example, a magnetic tape, hard disk, etc.; and a communication apparatus 609. The communication apparatus 609 may allow the electronic device 600 to perform wireless or wired communication with other devices to exchange data. Although FIG. 6 shows an electronic device 600 having various apparatuses, it should be understood that it is not required to implement or have all the apparatuses shown; more or fewer apparatuses may alternatively be implemented or provided.
In particular, according to the embodiments of the present disclosure, the process described above with reference to the flowchart may be implemented as a computer software program. For example, the embodiments of the present disclosure include a computer program product, which includes a computer program carried on a computer-readable medium, the computer program containing program code for executing the method shown in the flowchart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication apparatus 609, or installed from the storage apparatus 608, or installed from the ROM 602. When the computer program is executed by the processing apparatus 601, the above-mentioned functions defined in the methods of the embodiments of the present disclosure are executed.
It should be noted that the above-mentioned computer-readable medium in the present disclosure may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the two. The computer-readable storage medium may be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples of computer-readable storage media may include, but are not limited to: an electrical connection with one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In the present disclosure, a computer-readable storage medium may be any tangible medium that contains or stores a program, and the program may be used by or in combination with an instruction execution system, apparatus, or device. In the present disclosure, a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, in which computer-readable program code is carried. Such a propagated data signal may take many forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above. The computer-readable signal medium may also be any computer-readable medium other than the computer-readable storage medium, and may send, propagate, or transmit the program for use by or in combination with the instruction execution system, apparatus, or device. The program code contained on the computer-readable medium may be transmitted by any suitable medium, including but not limited to: a wire, an optical cable, RF (radio frequency), etc., or any suitable combination of the above.
The above-mentioned computer-readable medium may be included in the above-mentioned electronic device, or may exist alone without being assembled into the electronic device.
The above-mentioned computer-readable medium carries one or more programs, and when the one or more programs are executed by the electronic device, the electronic device is caused to: acquire a first image, the first image including a face image; recognize the facial expression of the face image; in response to recognizing that the facial expression is a first facial expression, perform first processing on the face image to obtain a first face image; and overlay the first face image on the position of the face image to obtain a first image effect.
The computer program code used to perform the operations of the present disclosure may be written in one or more programming languages or a combination thereof. The above-mentioned programming languages include object-oriented programming languages such as Java, Smalltalk and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code may be executed entirely on the user's computer, partly on the user's computer, as an independent software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In cases involving a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the accompanying drawings illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various embodiments of the present disclosure. In this regard, each block in a flowchart or block diagram may represent a module, program segment, or portion of code, which contains one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the blocks may occur out of the order noted in the figures. For example, two blocks shown in succession may in fact be executed substantially concurrently, or they may sometimes be executed in the reverse order, depending on the functionality involved. It should also be noted that each block of the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented by a special-purpose hardware-based system that performs the specified functions or operations, or by a combination of special-purpose hardware and computer instructions.
The units involved in the embodiments described in the present disclosure may be implemented in software or in hardware. The name of a unit does not, in some cases, constitute a limitation on the unit itself.
The above description is merely an explanation of the preferred embodiments of the present disclosure and of the technical principles employed. Those skilled in the art should understand that the scope of the disclosure involved herein is not limited to technical solutions formed by the specific combination of the above technical features, but should also cover other technical solutions formed by any combination of the above technical features or their equivalent features without departing from the above disclosed concept, for example, technical solutions formed by substituting the above features with technical features having similar functions disclosed in (but not limited to) the present disclosure.
Claims (16)
- A facial expression image processing method, comprising: acquiring a first image, the first image including a facial image; recognizing a facial expression of the facial image; in response to recognizing that the facial expression is a first facial expression, performing first processing on the facial image to obtain a first facial image; and overlaying the first facial image at the position of the facial image to obtain a first image effect.
- The facial expression image processing method according to claim 1, wherein the acquiring a first image, the first image including a facial image, comprises: acquiring a first video, at least one video frame of the first video including a facial image.
- The facial expression image processing method according to claim 1, wherein the recognizing a facial expression of the facial image comprises: recognizing the facial image in the first image; extracting facial expression features from the facial image; and recognizing the facial expression according to the facial expression features.
- The facial expression image processing method according to claim 1, wherein the performing first processing on the facial image in response to recognizing that the facial expression is a first facial expression, to obtain a first facial image, comprises: in response to recognizing that the facial expression is a first facial expression, acquiring a processing configuration file corresponding to the first facial expression; and performing first processing on the facial image according to the processing configuration file to obtain a first facial image.
- The facial expression image processing method according to claim 4, wherein the acquiring, in response to recognizing that the facial expression is a first facial expression, a processing configuration file corresponding to the first facial expression comprises: recognizing that the facial expression is a first facial expression; and when the level of the first facial expression reaches a preset level, acquiring the processing configuration file corresponding to the first facial expression.
- The facial expression image processing method according to claim 4, wherein the acquiring, in response to recognizing that the facial expression is a first facial expression, a processing configuration file corresponding to the first facial expression comprises: recognizing that the facial expression is a first facial expression; acquiring the processing configuration file corresponding to the first facial expression; determining the level of the first facial expression; and setting processing parameters in the processing configuration file according to the level of the first facial expression.
- The facial expression image processing method according to claim 4, wherein the performing first processing on the facial image according to the processing configuration file, to obtain a first facial image, comprises: segmenting the facial image from the first image; and performing enlargement processing on the segmented facial image according to the processing configuration file to obtain an enlarged facial image.
- The facial expression image processing method according to claim 1, wherein the overlaying the first facial image at the position of the facial image, to obtain a first image effect, comprises: acquiring a first positioning feature point on the first facial image and a second positioning feature point on the facial image; and overlaying the first facial image on the facial image such that the first positioning feature point coincides with the second positioning feature point, to obtain a first image effect.
- The facial expression image processing method according to claim 1, wherein the acquiring a first image, the first image including a facial image, comprises: acquiring a first image, the first image including at least two facial images.
- The facial expression image processing method according to claim 9, wherein the recognizing a facial expression of the facial image comprises: recognizing a facial expression of each of the at least two facial images.
- The facial expression image processing method according to claim 10, wherein the performing first processing on the facial image in response to recognizing that the facial expression is a first facial expression, to obtain a first facial image, comprises: in response to recognizing that at least one of the facial expressions is a first facial expression, performing first processing on the facial image corresponding to the first facial expression to obtain a first facial image.
- The facial expression image processing method according to claim 11, wherein the performing first processing on the facial image corresponding to the first facial expression in response to recognizing that at least one of the facial expressions is a first facial expression, to obtain a first facial image, comprises: in response to recognizing that at least one of the facial expressions is a first facial expression, acquiring a first processing configuration file corresponding to the first facial expression of the facial image; and performing first processing on the facial image corresponding to the first facial expression according to the first processing configuration file to obtain a first facial image.
- The facial expression image processing method according to claim 11 or 12, wherein the overlaying the first facial image at the position of the facial image, to obtain a first image effect, comprises: overlaying the at least one first facial image at the position of the facial image corresponding to the first facial image, to obtain a first image effect.
- A facial expression image processing apparatus, comprising: a first image acquisition module configured to acquire a first image, the first image including a facial image; a facial expression recognition module configured to recognize a facial expression of the facial image; a first processing module configured to, in response to recognizing that the facial expression is a first facial expression, perform first processing on the facial image to obtain a first facial image; and a facial expression image processing module configured to overlay the first facial image at the position of the facial image to obtain a first image effect.
- An electronic device, comprising: a memory configured to store non-transitory computer-readable instructions; and a processor configured to run the computer-readable instructions such that, when executing them, the processor implements the facial expression image processing method according to any one of claims 1-13.
- A computer-readable storage medium configured to store non-transitory computer-readable instructions which, when executed by a computer, cause the computer to perform the facial expression image processing method according to any one of claims 1-13.
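Claim 8 above aligns the processed face with the original via matching positioning feature points. Below is a minimal sketch of one way to realize that alignment, assuming a pure translation (the claim does not fix the transform) and images represented as grids of pixel values; all function names are illustrative, not from the disclosure.

```python
# Hedged sketch of the landmark-alignment overlay of claim 8: paste the
# processed face so its first positioning feature point coincides with
# the second positioning feature point on the original face.

def align_offset(first_point, second_point):
    # Translation mapping the first positioning feature point (x, y)
    # onto the second positioning feature point.
    dx = second_point[0] - first_point[0]
    dy = second_point[1] - first_point[1]
    return dx, dy

def overlay(base, face, face_anchor, base_anchor):
    # base, face: 2-D grids of pixel values (lists of rows).
    # Paste `face` onto a copy of `base` so that face_anchor lands
    # exactly on base_anchor, clipping at the image borders.
    dx, dy = align_offset(face_anchor, base_anchor)
    out = [row[:] for row in base]
    for y, row in enumerate(face):
        for x, px in enumerate(row):
            ty, tx = y + dy, x + dx
            if 0 <= ty < len(out) and 0 <= tx < len(out[0]):
                out[ty][tx] = px
    return out
```

A real implementation would typically use more than one feature point (to recover rotation and scale as well as translation), but a single coinciding point pair is all the claim language requires.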
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US17/426,840 US20220207917A1 (en) | 2019-01-31 | 2019-12-27 | Facial expression image processing method and apparatus, and electronic device |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910101335.5A CN111507142A (zh) | 2019-01-31 | 2019-01-31 | Facial expression image processing method and apparatus, and electronic device
CN201910101335.5 | 2019-01-31 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2020155984A1 (zh) | 2020-08-06 |
Family
ID=71841614
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2019/129140 WO2020155984A1 (zh) | Facial expression image processing method and apparatus, and electronic device | 2019-01-31 | 2019-12-27 |
Country Status (3)
Country | Link |
---|---|
US (1) | US20220207917A1 (zh) |
CN (1) | CN111507142A (zh) |
WO (1) | WO2020155984A1 (zh) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112307942A (zh) * | 2020-10-29 | 2021-02-02 | 广东富利盛仿生机器人股份有限公司 | Facial expression quantitative representation method, system, and medium
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112532896A (zh) * | 2020-10-28 | 2021-03-19 | 北京达佳互联信息技术有限公司 | Video production method and apparatus, electronic device, and storage medium
US11763496B2 (en) * | 2021-09-30 | 2023-09-19 | Lemon Inc. | Social networking based on asset items |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104780339A (zh) * | 2015-04-16 | 2015-07-15 | 美国掌赢信息科技有限公司 | Method for loading expression special-effect animation in instant video, and electronic device
CN104780458A (zh) * | 2015-04-16 | 2015-07-15 | 美国掌赢信息科技有限公司 | Method for loading special effects in instant video, and electronic device
US20180115746A1 (en) * | 2015-11-17 | 2018-04-26 | Tencent Technology (Shenzhen) Company Limited | Video calling method and apparatus
CN108229269A (zh) * | 2016-12-31 | 2018-06-29 | 深圳市商汤科技有限公司 | Face detection method and apparatus, and electronic device
CN108495049A (zh) * | 2018-06-15 | 2018-09-04 | Oppo广东移动通信有限公司 | Photographing control method and related product
CN108734126A (zh) * | 2018-05-21 | 2018-11-02 | 深圳市梦网科技发展有限公司 | Facial beautification method, facial beautification apparatus, and terminal device
Family Cites Families (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106372622A (zh) * | 2016-09-30 | 2017-02-01 | 北京奇虎科技有限公司 | Facial expression classification method and apparatus
CN107705356A (zh) * | 2017-09-11 | 2018-02-16 | 广东欧珀移动通信有限公司 | Image processing method and apparatus
CN108022206A (zh) * | 2017-11-30 | 2018-05-11 | 广东欧珀移动通信有限公司 | Image processing method and apparatus, electronic device, and computer-readable storage medium
CN108198159A (zh) * | 2017-12-28 | 2018-06-22 | 努比亚技术有限公司 | Image processing method, mobile terminal, and computer-readable storage medium
CN108830784A (zh) * | 2018-05-31 | 2018-11-16 | 北京市商汤科技开发有限公司 | Image processing method and apparatus, and computer storage medium
CN108985241B (zh) * | 2018-07-23 | 2023-05-02 | 腾讯科技(深圳)有限公司 | Image processing method and apparatus, computer device, and storage medium
CN109034063A (zh) * | 2018-07-27 | 2018-12-18 | 北京微播视界科技有限公司 | Multi-face tracking method and apparatus for facial special effects, and electronic device
2019
- 2019-01-31: CN application CN201910101335.5A filed; published as CN111507142A (status: Pending)
- 2019-12-27: US application US17/426,840 filed; published as US20220207917A1 (status: Pending)
- 2019-12-27: PCT application PCT/CN2019/129140 filed; published as WO2020155984A1 (Application Filing)
Also Published As
Publication number | Publication date |
---|---|
US20220207917A1 (en) | 2022-06-30 |
CN111507142A (zh) | 2020-08-07 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10599914B2 (en) | Method and apparatus for human face image processing | |
WO2019242416A1 (zh) | 视频图像处理方法及装置、计算机可读介质和电子设备 | |
CN112070015B (zh) | 一种融合遮挡场景的人脸识别方法、系统、设备及介质 | |
WO2021213067A1 (zh) | 物品显示方法、装置、设备及存储介质 | |
WO2020155984A1 (zh) | 人脸表情图像处理方法、装置和电子设备 | |
US11409794B2 (en) | Image deformation control method and device and hardware device | |
JP7383714B2 (ja) | 動物顔部の画像処理方法と装置 | |
US11042259B2 (en) | Visual hierarchy design governed user interface modification via augmented reality | |
US20120154638A1 (en) | Systems and Methods for Implementing Augmented Reality | |
AU2021333957B2 (en) | Information display method and device, and storage medium | |
CN110619656B (zh) | 基于双目摄像头的人脸检测跟踪方法、装置及电子设备 | |
WO2020192195A1 (zh) | 图像处理方法、装置和电子设备 | |
US20230087489A1 (en) | Image processing method and apparatus, device, and storage medium | |
CN111199169A (zh) | 图像处理方法和装置 | |
US20240095886A1 (en) | Image processing method, image generating method, apparatus, device, and medium | |
CN110059739B (zh) | 图像合成方法、装置、电子设备和计算机可读存储介质 | |
CN110222576B (zh) | 拳击动作识别方法、装置和电子设备 | |
CN111507139A (zh) | 图像效果生成方法、装置和电子设备 | |
WO2020155981A1 (zh) | 表情图像效果生成方法、装置和电子设备 | |
CN111107264A (zh) | 图像处理方法、装置、存储介质以及终端 | |
WO2020215854A1 (zh) | 渲染图像的方法、装置、电子设备和计算机可读存储介质 | |
CN111353929A (zh) | 图像处理方法、装置和电子设备 | |
CN111079662A (zh) | 一种人物识别方法、装置、机器可读介质及设备 | |
CN111079472A (zh) | 图像对比方法和装置 | |
US20240193851A1 (en) | Generation of a 360-degree object view by leveraging available images on an online platform |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 19913354 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
32PN | Ep: public notification in the ep bulletin as address of the addressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 03/12/2021) |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 19913354 Country of ref document: EP Kind code of ref document: A1 |