WO2019128507A1 - Image processing method and apparatus, storage medium and electronic device - Google Patents


Info

Publication number
WO2019128507A1
Authority
WO
WIPO (PCT)
Prior art keywords
face image
target
image
sample
obtaining
Prior art date
Application number
PCT/CN2018/115467
Other languages
French (fr)
Chinese (zh)
Inventor
陈岩 (Chen Yan)
刘耀勇 (Liu Yaoyong)
Original Assignee
Oppo广东移动通信有限公司 (Guangdong OPPO Mobile Telecommunications Corp., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Oppo广东移动通信有限公司 (Guangdong OPPO Mobile Telecommunications Corp., Ltd.)
Publication of WO2019128507A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/12 Edge-based segmentation
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/20 Image preprocessing
    • G06V 10/30 Noise filtering
    • G06V 10/40 Extraction of image or video features
    • G06V 10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V 10/56 Extraction of image or video features relating to colour
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/161 Detection; Localisation; Normalisation
    • G06V 40/168 Feature extraction; Face representation
    • G06V 40/171 Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships
    • G06V 40/172 Classification, e.g. identification
    • G06V 40/174 Facial expression recognition

Definitions

  • the present application relates to the field of image processing technologies, and in particular, to an image processing method, apparatus, storage medium, and electronic device.
  • Existing electronic devices generally have a photographing function, and the light intensity at the time of shooting has a considerable influence on the photos taken. For example, in low light at night, photos taken by the camera of an electronic device suffer from severe noise and poor picture quality; when the light is ideal, as during the day, the noise in the photos is weaker and the picture quality is better.
  • the embodiment of the present application provides an image processing method, device, storage medium, and electronic device, which can reduce noise and improve image quality.
  • an embodiment of the present application provides an image processing method, including:
  • the face image in the target picture is processed based on the target face image.
  • an image processing apparatus including:
  • An information acquiring module configured to acquire posture information of a face image in a target image
  • An image obtaining module configured to acquire a target sample face image from the preset face image set according to the posture information
  • An adjustment module configured to extract an expression feature of the face image, and adjust a target sample face image according to the expression feature to obtain a target face image
  • a processing module configured to process the face image in the target image based on the target face image.
  • the embodiment of the present application further provides a storage medium, where the storage medium stores a plurality of instructions, and the instructions are adapted to be loaded by a processor to perform the following steps:
  • the face image in the target picture is processed based on the target face image.
  • an embodiment of the present application further provides an electronic device, including a processor and a memory, where the processor is electrically connected to the memory, the memory is used to store instructions and data, and the processor is configured to perform the following steps:
  • the face image in the target picture is processed based on the target face image.
  • FIG. 1 is a schematic diagram of a scenario structure of an electronic device for implementing deep learning according to an embodiment of the present application.
  • FIG. 2 is a schematic flowchart of an image processing method provided by an embodiment of the present application.
  • FIG. 3 is a schematic diagram of an application scenario of an image processing method provided by an embodiment of the present application.
  • FIG. 4 is another application scenario diagram of an image processing method provided by an embodiment of the present application.
  • FIG. 5 is still another application scenario diagram of an image processing method provided by an embodiment of the present application.
  • FIG. 6 is another schematic flowchart of an image processing apparatus according to an embodiment of the present application.
  • FIG. 7 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application.
  • FIG. 8 is another schematic structural diagram of an image processing apparatus according to an embodiment of the present application.
  • FIG. 9 is a schematic diagram of still another structure of an image processing apparatus according to an embodiment of the present application.
  • FIG. 10 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
  • FIG. 11 is another schematic structural diagram of an electronic device according to an embodiment of the present application.
  • the embodiment of the present application provides an image processing method, device, storage medium, and electronic device. The details will be described separately below.
  • FIG. 1 is a schematic diagram of a scenario in which an electronic device implements deep learning according to an embodiment of the present disclosure.
  • the electronic device can record the input and output data during the processing.
  • the electronic device may include a data collection and statistics system and a prediction system with feedback adjustment.
  • the electronic device can acquire a large amount of the user's image classification result data through the data acquisition system, compile corresponding statistics, extract image features from the images, and analyze and process the extracted image features based on machine deep learning.
  • the electronic device predicts the classification result of the image through the prediction system.
  • the prediction system back-propagates corrections to the weights of its weighted terms according to the final result of the user behavior. After a number of iterative corrections, the weights of the prediction system converge, forming a learned database.
  • the electronic device may be a mobile terminal, such as a mobile phone, a tablet computer, or the like, or may be a conventional PC (Personal Computer), etc., which is not limited in this embodiment of the present application.
  • An embodiment of the present application provides an image processing method, including:
  • the face image in the target picture is processed based on the target face image.
  • the step of determining posture information of the face image in the target image includes:
  • the posture information includes a deflection angle; and the step of acquiring a corresponding sample facial image from the preset facial image set according to the posture information comprises:
  • the sample face image corresponding to the target sample deflection angle is used as the target sample face image.
  • the step of extracting the expression feature of the face image and adjusting the target sample face image according to the expression feature includes:
  • the expression feature is processed based on a preset algorithm model to obtain an expression feature parameter
  • the target sample face image is adjusted according to the expression feature parameter.
  • the step of processing the face image in the target picture based on the target face image includes:
  • the face image in the target picture is processed based on the position information and the target face image.
  • the step of processing the face image in the target picture based on the location information and the target face image includes:
  • the face image in the target picture is replaced with the processed target face image.
  • the method further includes:
  • the color of the current face image is adjusted based on the color adjustment parameter.
  • an image processing method is provided. As shown in FIG. 2, the flow may be as follows:
  • Specifically, the target picture may be an image collected by the electronic device through its camera.
  • the camera can be a digital camera or an analog camera.
  • A digital camera converts the analog image signal generated by the image sensor into a digital signal, which is then stored in a computer.
  • The image signal captured by an analog camera must be converted to digital form by a dedicated image capture card, and typically compressed, before it can be used by a computer.
  • A digital camera captures the image directly and transmits it to the computer via a serial, parallel, or USB interface.
  • Electronic devices generally adopt a digital camera, converting the collected image into data in real time and displaying it on the display interface of the electronic device (i.e., the camera preview frame).
  • the image processing method provided by the embodiment of the present application is mainly applied to scenes affected by noise during nighttime image capture. This is especially true for self-portraits: taking a mobile phone as an example, the front camera generally has lower resolution, and shooting at night with insufficient light is more likely to cause serious noise.
  • the target picture includes one or more person images, and at least one recognizable face image exists.
  • the target image may further include a scene image such as a building, an animal or a plant.
  • the target picture is tracked and analyzed in real time, the face image is recognized based on image recognition technology, and the key points in the face image are detected to determine the posture of the face. That is, in some embodiments, the step of "determining the pose information of the face image in the target picture" may include the following process:
  • the facial feature point may specifically be a feature point obtained according to "two eyes + mouth” or "two eyes + nose” in the face image.
  • the preset facial feature vector may be a feature vector when the face pose is positive.
  • the posture information may be a posture relative to the frontal pose. For example, referring to FIG. 3, A denotes the eyebrows, B1 the left eye, B2 the right eye, C the nose, D the mouth, E the ear, and F the face contour, wherein the two eyes B1 and B2 and the nose C are used as features; feature points are further selected from these features as the facial feature points.
  • a vector formed by a positional relationship between feature points is used as a feature vector.
  • the feature points selected in FIG. 3 are the inner corner of eye B1, the inner corner of eye B2, and the tip of nose C (because the eyeball rotates, it is preferable to select marker points with fixed positions) as the facial feature points. Between them, three vectors forming a triangular region are constructed, and these three vectors can be used as the preset facial feature vectors.
  • the facial feature vector of the real-time face can be detected and compared with the preset facial feature vector.
  • the posture information of the current face image can then be determined from the difference between the two, for example deflection to the left, right, up, down, upper right, lower left, and so on.
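The landmark-comparison idea described above can be sketched in Python. This is illustrative only: the normalized landmark coordinates, the frontal reference values, and the angle formulas below are assumptions for the sketch, not the patent's actual computation.

```python
import math

# Hypothetical frontal-pose reference points (inner eye corners, nose tip),
# in normalized image coordinates; real values would come from a frontal shot.
FRONTAL = {"eye_l": (0.40, 0.40), "eye_r": (0.60, 0.40), "nose": (0.50, 0.55)}

def pose_from_landmarks(eye_l, eye_r, nose):
    """Estimate a coarse yaw/pitch from three fixed facial feature points,
    by comparing the vectors they form against the frontal reference."""
    # Horizontal offset of the nose tip from the eye midline -> yaw.
    mid_x = (eye_l[0] + eye_r[0]) / 2.0
    eye_span = eye_r[0] - eye_l[0]
    yaw = math.degrees(math.atan2(nose[0] - mid_x, eye_span))
    # Vertical nose drop relative to the frontal reference -> pitch.
    ref_drop = FRONTAL["nose"][1] - FRONTAL["eye_l"][1]
    drop = nose[1] - (eye_l[1] + eye_r[1]) / 2.0
    pitch = math.degrees(math.atan2(drop - ref_drop, eye_span))
    return yaw, pitch

# A face turned away from frontal: the nose tip is shifted off the eye midline.
yaw, pitch = pose_from_landmarks((0.40, 0.40), (0.60, 0.40), (0.47, 0.55))
```

The sign and magnitude of `yaw` then play the role of the deflection angle used for sample lookup below.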
  • the face image set includes sample face images of a plurality of different postures of the same person, and the sample image is an expressionless image, that is, a face image that does not express emotions.
  • the embodiment of the present application is mainly directed to the noise influence problem of nighttime image capturing. Therefore, the sample face images in the constructed face image set are images with higher image quality. In practice, these high-resolution sample face images can be taken by the user during daylight hours.
  • The first step is to collect a plurality of photos of the face in different postures, specifically photos taken from different angles.
  • The deflection angle of the face relative to the plane of the camera lens can then be derived from the camera's shooting parameters or from the positional relationship between the lens and the subject.
  • The collected face images are used as the sample face images and the corresponding deflection angles as the sample deflection angles, and a mapping relationship is established between each captured face image and its deflection angle. The sample face images, the sample deflection angles, and the mapping relationships between them are then added to the preset face image set to complete the construction of the set.
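A minimal sketch of this set construction in Python (the file names, angles, and dictionary layout are illustrative assumptions; a real implementation would store actual image data):

```python
# The "preset face image set": each entry pairs a high-quality sample face
# image with its measured sample deflection angle.
samples = []

def add_sample(image_path, deflection_deg):
    """Record one sample face image together with its sample deflection angle,
    establishing the mapping relationship between the two."""
    samples.append({"image": image_path, "angle": deflection_deg})

# Photos of the same person taken at different angles, e.g. during daylight.
for path, angle in [("front.jpg", 0.0), ("left15.jpg", -15.0),
                    ("right15.jpg", 15.0), ("left30.jpg", -30.0),
                    ("right30.jpg", 30.0)]:
    add_sample(path, angle)
```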
  • the pose information includes a deflection angle.
  • the step of “acquiring the corresponding sample face image from the preset face image set according to the posture information” may include the following processes:
  • the sample face image corresponding to the target sample deflection angle is used as the target sample face image.
  • the deflection angle may be a deflection angle in six degrees of freedom.
  • a large number of face images of different postures can be collected to increase the density of deflection angles among the sample face images and reduce the interval between adjacent deflection angles.
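The nearest-angle lookup above (select the sample deflection angle with the smallest difference from the detected deflection angle) can be sketched as follows; the data layout is an assumption carried over from the set-construction sketch:

```python
def pick_target_sample(samples, deflection_deg):
    """Choose the sample face image whose sample deflection angle has the
    smallest absolute difference from the detected deflection angle."""
    return min(samples, key=lambda s: abs(s["angle"] - deflection_deg))

# Illustrative set: angle/image pairs; real entries would hold image data.
samples = [{"angle": a, "image": n} for a, n in
           [(0.0, "front.jpg"), (-15.0, "left15.jpg"), (15.0, "right15.jpg")]]
best = pick_target_sample(samples, 11.0)  # detected yaw of about 11 degrees
```

With a denser set of sample angles, the chosen sample's posture matches the detected posture more closely.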
  • the expression of the face image in the target picture needs to be migrated to the expressionless face image in real time.
  • the deep learning technology of the electronic device may be used to dynamically migrate the expression in the face image onto the expressionless sample face image in the preset face image set, thereby better retaining the expression information and high-frequency information of the original image.
  • the step of "extracting the expression features of the face image and adjusting the target sample face image according to the expression features” may include the following process:
  • the expression feature is processed based on a preset algorithm model to obtain an expression feature parameter
  • the target sample face image is adjusted according to the expression feature parameter.
  • the expression features are extracted; specifically, the color features, texture features, shape features, and spatial relationship features in the face image are extracted, and the facial features of the face, such as the eyes, mouth, nose, eyebrows, and ears, are identified according to the extracted image features. To improve recognition accuracy, the electronic device can be trained based on machine deep learning technology to obtain a high-precision algorithm model, and the expression features are analyzed and processed to obtain accurate expression feature parameters.
  • the expression of the target sample face image may be adjusted according to the obtained expression feature parameters, so that it is consistent with the expression of the original face image in the target picture, thereby migrating the expression of the face image in the target picture onto the target sample face image.
  • a color feature is a global feature that describes the surface properties of a scene corresponding to an image or image area.
  • a texture feature is also a global feature that also describes the surface properties of a scene corresponding to an image or image region.
  • the shape feature is a local feature. There are two types of representation methods, one is the contour feature, which is mainly for the outer boundary of the object, and the other is the regional feature, which is related to the entire shape region.
  • the spatial relationship feature refers to the mutual spatial position or relative direction relationship between multiple objects segmented in the image. These relationships can also be divided into connection/adjacency relationship, overlap/overlap relationship, and inclusion/inclusion relationship.
  • Image feature extraction is the use of a computer to extract image information and determine whether each point of the image belongs to an image feature.
  • the result of feature extraction is a division of the points on the image into different subsets, which often take the form of isolated points, continuous curves, or continuous regions.
  • Features are the starting point for many computer image analysis algorithms.
  • One of the most important properties of feature extraction is "repeatability": features extracted from different images of the same scene should be the same.
  • the image features of the face image can be extracted using the Fourier transform method, the windowed Fourier transform method, the wavelet transform method, the least squares method, the boundary direction histogram method, or texture feature extraction based on Tamura texture features.
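As one concrete illustration of the Fourier transform method named above, the sketch below computes a crude texture descriptor: the average spectral energy in concentric frequency bands. The band partitioning and the use of numpy are assumptions for the sketch, not the patent's specified feature extractor.

```python
import numpy as np

def fourier_texture_features(gray, n_bands=4):
    """Average Fourier-magnitude energy in concentric frequency bands;
    low bands capture smooth structure, high bands capture fine texture/noise."""
    f = np.fft.fftshift(np.fft.fft2(gray.astype(float)))
    mag = np.abs(f)
    h, w = gray.shape
    yy, xx = np.mgrid[0:h, 0:w]
    r = np.hypot(yy - h // 2, xx - w // 2)  # distance from the DC component
    rmax = r.max()
    feats = []
    for i in range(n_bands):
        band = (r >= rmax * i / n_bands) & (r < rmax * (i + 1) / n_bands)
        feats.append(mag[band].mean())
    return np.array(feats)

# A flat patch concentrates energy at low frequencies; noise adds high-band energy.
flat = np.full((32, 32), 128.0)
noisy = flat + np.random.default_rng(0).normal(0, 25, size=(32, 32))
```

Descriptors like this can feed the classification or expression-analysis stages described in the surrounding text.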
  • the step of "processing the face image in the target image based on the target face image” may include the following process:
  • the face image in the target picture is processed based on the position information and the target face image.
  • for the edge feature points of the face image in the target picture, refer to FIG. 4. Relative position information of these edge feature points with respect to each other is then obtained.
  • the step of "processing the face image in the target image based on the location information and the target face image” may include the following process:
  • the target face image is processed based on the face mask to obtain the processed target face image
  • the face image in the target picture is replaced with the processed target face image.
  • the right image in FIG. 4 is a human face mask generated based on the position information of the edge feature points in the left figure.
  • the face mask is used to process the high-definition target face image, which is then swapped in for the face image in the target picture that is more seriously affected by noise.
  • the specific method may be: overlaying the face mask on the high-definition target face image, and extracting an overlapping area image of the target face image overlapping the face mask area as the processed target face image.
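The overlap extraction just described (keep only the target-face pixels inside the mask region) can be sketched with numpy; the toy array sizes and the binary-mask representation are assumptions:

```python
import numpy as np

def apply_face_mask(target_face, mask):
    """Overlay the face mask on the high-definition target face image and
    keep only the overlapping region; pixels outside the mask are zeroed."""
    keep = mask.astype(bool)
    out = np.zeros_like(target_face)
    out[keep] = target_face[keep]
    return out

face = np.full((4, 4, 3), 200, dtype=np.uint8)   # stand-in HD target face image
mask = np.zeros((4, 4), dtype=np.uint8)
mask[1:3, 1:3] = 1                               # toy face-shaped mask region
cut = apply_face_mask(face, mask)
```

`cut` then plays the role of the "processed target face image" that replaces the noisy face in the target picture.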
  • In the figure, the left image is the face image in the target picture, and the right image is the face image in the target picture after the face has been replaced.
  • the processed target face image may be merged with the target image based on the Poisson fusion technique to cover the original face image in the target image.
  • the face image in the target picture is replaced with the processed target face image.
  • Poisson fusion technology can better eliminate the boundary between the target face image and the target image, making the picture more natural and unobtrusive, achieving seamless splicing.
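A minimal single-channel sketch of Poisson fusion, assuming a numpy-only Jacobi iteration: interior mask pixels are solved so their Laplacian matches the pasted face's, while the target picture fixes the boundary. Production code would typically use OpenCV's `seamlessClone` rather than this toy solver; the array sizes and iteration count are illustrative.

```python
import numpy as np

def poisson_blend(src, dst, mask, iters=300):
    """Toy Poisson fusion: inside the mask, match the Laplacian of `src`
    with `dst` held fixed on the boundary (Jacobi iteration).
    The mask must not touch the array edges (np.roll wraps around)."""
    out = dst.astype(float).copy()
    src = src.astype(float)
    inside = mask.astype(bool)
    # Discrete Laplacian of the source (the gradient field to reproduce).
    lap = 4 * src - (np.roll(src, 1, 0) + np.roll(src, -1, 0) +
                     np.roll(src, 1, 1) + np.roll(src, -1, 1))
    for _ in range(iters):
        nb = (np.roll(out, 1, 0) + np.roll(out, -1, 0) +
              np.roll(out, 1, 1) + np.roll(out, -1, 1))
        out[inside] = (nb[inside] + lap[inside]) / 4.0
    return out

src = np.zeros((9, 9)); src[4, 4] = 4.0   # pasted detail (a small bump)
dst = np.full((9, 9), 50.0)               # target picture background
mask = np.zeros((9, 9)); mask[3:6, 3:6] = 1
blended = poisson_blend(src, dst, mask)
```

The blended result keeps the source's detail inside the mask while matching the surrounding picture at the seam, which is what makes the splice look natural.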
  • an embodiment of the present application provides an image processing method: acquiring posture information of a face image in a target picture; acquiring a target sample face image from a preset face image set according to the posture information; extracting expression features of the face image and adjusting the target sample face image according to the expression features to obtain a target face image; and processing the face image in the target picture based on the target face image.
  • This scheme can replace the face image in a picture seriously affected by noise with a pre-stored high-definition face image, which reduces the influence of noise and improves the picture quality.
  • another image processing method is also provided. As shown in FIG. 6, the flow may be as follows:
  • the face image set includes sample face images of a plurality of different postures of the same person, and the sample image is an expressionless image, that is, a face image that does not express emotions.
  • the embodiment of the present application is mainly directed to the noise influence problem of nighttime image capturing. Therefore, the sample face images in the constructed face image set are images with higher image quality. In practice, these high-resolution sample face images can be taken by the user during daylight hours.
  • The first step is to collect a plurality of photos of the face in different postures, specifically photos taken from different angles.
  • The deflection angle of the face relative to the plane of the camera lens can then be derived from the camera's shooting parameters or from the positional relationship between the lens and the subject.
  • The collected face images are used as the sample face images and the corresponding deflection angles as the sample deflection angles, and a mapping relationship is established between each captured face image and its deflection angle. The sample face images, the sample deflection angles, and the mapping relationships between them are then added to the preset face image set to complete the construction of the set.
  • the image processing method provided by the embodiment of the present application is mainly applied to a scene in which noise is affected during nighttime image capturing.
  • Specifically, the target picture may be an image collected by the electronic device through its camera.
  • the target picture includes one or more person images, and at least one recognizable face image exists.
  • the target image may further include a scene image such as a building, an animal or a plant.
  • the target image is tracked and analyzed in real time, the face image is recognized based on the image recognition technology, and the key points in the face image are detected to determine the posture of the face.
  • the attitude information includes a deflection angle.
  • the sample deflection angle corresponding to each sample face image in the preset face image set may be obtained, yielding a plurality of sample deflection angles. The target sample deflection angle with the smallest difference from the detected deflection angle is then selected from the plurality of sample deflection angles, and the sample face image corresponding to the target sample deflection angle is used as the target sample face image.
  • the deflection angle may be a deflection angle in six degrees of freedom.
  • a large number of face images of different postures can be obtained to increase the density of the deflection angle in the sample face image and reduce the interval value between the deflection angles.
  • Extract an expression feature of the face image and process the expression feature based on the preset algorithm model to obtain an expression feature parameter.
  • the expression of the face image in the target picture needs to be migrated to the expressionless face image in real time.
  • the deep learning technology of the electronic device may be used to dynamically migrate the expression in the face image onto the expressionless sample face image in the preset face image set, thereby better retaining the expression information and high-frequency information of the original image.
  • the expression features are extracted; specifically, the color features, texture features, shape features, and spatial relationship features in the face image are extracted, and the facial features of the face, such as the eyes, mouth, nose, eyebrows, and ears, are identified according to the extracted image features. To improve recognition accuracy, the electronic device can be trained based on machine deep learning technology to obtain a high-precision algorithm model, and the expression features are analyzed and processed to obtain accurate expression feature parameters.
  • the expression of the target sample face image may be adjusted according to the obtained expression feature parameters, so that it is consistent with the expression of the original face image in the target picture, thereby migrating the expression of the face image in the target picture onto the target sample face image.
  • the acquired location information is relative location information between the edge feature points.
  • the face mask is used to process the high-definition target face image, which is then swapped in for the face image in the target picture that is more seriously affected by noise.
  • the specific method may be: overlaying the face mask on the high-definition target face image, and extracting an overlapping area image of the target face image overlapping the face mask area.
  • the processed target face image may be merged with the target picture based on the Poisson fusion technique to cover the original face image in the target picture, thereby replacing the face image in the target picture with the processed target face image.
  • Poisson fusion technology can better eliminate the boundary between the target face image and the target image, making the picture more natural and unobtrusive, achieving seamless splicing.
  • the acquired color information may include color temperature, hue, brightness, saturation, and the like. Specifically, the color information acquired from the original face image may be analyzed and processed based on a correlation algorithm to obtain a first color parameter. Then, the color information of the target face image is acquired and likewise analyzed and processed to obtain a second color parameter. Finally, the difference between the first and second color parameters is obtained, and this difference is used as the final color adjustment parameter.
  • the color of the current face image is adjusted according to the color adjustment parameter, so that the lighting and color of the face appear more natural and closer to the real scene.
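A simple stand-in for this color adjustment can be sketched as a per-channel mean shift; using channel means instead of full color-temperature/hue/saturation analysis is an assumption of the sketch, as are the toy image values:

```python
import numpy as np

def color_adjust_params(original_face, replaced_face):
    """Difference between the original face region's per-channel means and
    the replacement's: the final color adjustment parameter of the sketch."""
    return (original_face.astype(float).mean(axis=(0, 1)) -
            replaced_face.astype(float).mean(axis=(0, 1)))

def apply_color_adjust(face, params):
    """Shift the replaced face toward the original scene's colors."""
    return np.clip(face.astype(float) + params, 0, 255).astype(np.uint8)

night = np.full((8, 8, 3), (60, 50, 40), dtype=np.uint8)    # dim original face
day = np.full((8, 8, 3), (180, 170, 160), dtype=np.uint8)   # bright HD replacement
adjusted = apply_color_adjust(day, color_adjust_params(night, day))
```

After the shift, the replacement face's overall color matches the dim nighttime scene, so the swap is less conspicuous.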
  • the image processing method provided by the embodiment of the present application constructs a face image set containing high-resolution sample face images, matches a target sample face image according to the posture information of the face image in the target picture, and migrates the expression of the face image in the target picture onto the sample face image to obtain the target face image.
  • edge feature point detection is performed on the face image in the target picture, a face mask is generated according to the position information of the edge feature points, the target face image is processed based on the face mask, and the face image in the target picture is replaced.
  • the color information of the original face image before the face image replacement is acquired, the color adjustment parameter is generated according to the color information, and the color of the replaced face image is adjusted based on the color adjustment parameter.
  • the scheme can replace the face image in a picture seriously affected by noise with a pre-stored high-definition face image, which reduces the influence of noise and improves the picture quality.
  • an image processing apparatus which may be integrated in an electronic device in the form of software or hardware, and the electronic device may specifically include a mobile phone, a tablet computer, a notebook computer, and the like.
  • the image processing apparatus 30 may include an information acquisition module 31, an image acquisition module 32, an adjustment module 33, and a processing module 34, wherein:
  • the information acquiring module 31 is configured to acquire posture information of a face image in the target image
  • the image obtaining module 32 is configured to obtain a target sample face image from the preset face image set according to the posture information
  • the adjusting module 33 is configured to extract an expression feature of the face image, and adjust the target sample face image according to the expression feature to obtain the target face image;
  • the processing module 34 is configured to process the face image in the target image based on the target face image.
  • the adjustment module 33 can include:
  • the first processing sub-module 332 is configured to process the expression feature based on the preset algorithm model to obtain an expression feature parameter
  • the adjustment sub-module 333 is configured to adjust the target sample face image according to the expression feature parameter.
  • the processing module 34 can include:
  • the obtaining sub-module 341 is configured to perform edge feature point detection on the face image, and acquire location information of the edge feature point;
• the second processing sub-module 342 is configured to process the face image in the target picture based on the location information and the target face image.
  • the information acquisition module 31 can be used to:
• the posture information includes a deflection angle
  • the image acquisition module 32 can be configured to:
  • the sample face image corresponding to the target sample deflection angle is used as the target sample face image.
• the image processing apparatus acquires the posture information of the face image in the target picture; acquires the target sample face image from the preset face image set according to the posture information; extracts the expression feature of the face image and adjusts the target sample face image according to the expression feature to obtain the target face image; and processes the face image in the target picture based on the target face image.
• the scheme can replace the face image in a picture severely affected by noise with a pre-stored high-definition face image, thereby reducing the influence of noise and improving image quality.
  • an electronic device is further provided, and the electronic device may be a device such as a smart phone or a tablet computer.
  • the electronic device 400 includes a processor 401 and a memory 402.
  • the processor 401 is electrically connected to the memory 402.
• the processor 401 is the control center of the electronic device 400; it connects various parts of the entire electronic device using various interfaces and lines, and executes various functions of the electronic device and processes data by running or loading applications stored in the memory 402 and calling data stored in the memory 402, thereby monitoring the electronic device as a whole.
• the processor 401 in the electronic device 400 loads instructions corresponding to the processes of one or more applications into the memory 402, and runs the applications stored in the memory 402, thereby implementing various functions according to the following steps:
  • the face image in the target picture is processed based on the target face image.
  • the processor 401 is configured to perform the following steps:
• the posture information includes a deflection angle; the processor 401 further performs the following steps:
  • the sample face image corresponding to the target sample deflection angle is used as the target sample face image.
  • processor 401 also performs the following steps:
  • the expression feature is processed based on a preset algorithm model to obtain an expression feature parameter
  • the target sample face image is adjusted according to the expression feature parameter.
• the processor 401 further performs the following steps: performing edge feature point detection on the face image, and acquiring location information of the edge feature points; and processing the face image in the target picture based on the location information and the target face image.
  • processor 401 also performs the following steps:
• the target face image is processed based on the face mask to obtain the processed target face image;
• the face image in the target picture is replaced with the processed target face image.
• after replacing the face image in the target picture with the processed target face image, the processor 401 further performs the following steps:
  • the color of the current face image is adjusted based on the color adjustment parameter.
  • Memory 402 can be used to store applications and data.
• the application stored in the memory 402 contains instructions that are executable by the processor.
  • Applications can form various functional modules.
  • the processor 401 executes various functional applications and data processing by running an application stored in the memory 402.
  • the electronic device 400 further includes a display screen 403, a control circuit 404, a radio frequency circuit 405, an input unit 406, an audio circuit 407, a sensor 408, and a power source 409.
  • the processor 401 is electrically connected to the display screen 403, the control circuit 404, the radio frequency circuit 405, the input unit 406, the audio circuit 407, the sensor 408, and the power source 409, respectively.
  • the display screen 403 can be used to display information entered by the user or information provided to the user as well as various graphical user interfaces of the electronic device, which can be composed of images, text, icons, video, and any combination thereof.
  • the display screen 403 can be used as a screen in the embodiment of the present application for displaying information.
  • the control circuit 404 is electrically connected to the display screen 403 for controlling the display screen 403 to display information.
• the radio frequency circuit 405 is configured to transmit and receive radio frequency signals, so as to establish wireless communication with network devices or other electronic devices and to exchange signals with them.
  • the input unit 406 can be configured to receive input digits, character information, or user characteristic information (eg, fingerprints), and to generate keyboard, mouse, joystick, optical, or trackball signal inputs related to user settings and function controls.
  • the input unit 406 can include a fingerprint identification module.
  • the audio circuit 407 can provide an audio interface between the user and the electronic device through a speaker and a microphone.
  • Sensor 408 is used to collect external environmental information.
  • Sensor 408 can include ambient brightness sensors, acceleration sensors, light sensors, motion sensors, and other sensors.
  • Power source 409 is used to power various components of electronic device 400.
• the power supply 409 can be logically coupled to the processor 401 through a power management system, so that functions such as charging, discharging, and power consumption management are implemented through the power management system.
  • the camera 410 is used for collecting external images, and can be a digital camera or an analog camera. In some embodiments, camera 410 may convert the acquired external picture into data for transmission to processor 401 to perform image processing operations.
  • the electronic device 400 may further include a Bluetooth module or the like, and details are not described herein again.
• the electronic device acquires the posture information of the face image in the target picture; acquires the target sample face image from the preset face image set according to the posture information; extracts the expression feature of the face image and adjusts the target sample face image according to the expression feature to obtain the target face image; and processes the face image in the target picture based on the target face image.
• the scheme can replace the face image in a picture severely affected by noise with a pre-stored high-definition face image, thereby reducing the influence of noise and improving image quality.
• an embodiment of the present application further provides a storage medium storing a plurality of instructions adapted to be loaded by a processor to perform the steps of any of the image processing methods described above.
• the program may be stored in a computer readable storage medium, and the storage medium may include: a Read Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disc.

Abstract

Disclosed are an image processing method and apparatus, a storage medium and an electronic device. The image processing method comprises: acquiring posture information of a facial image in a target picture; acquiring a target sample facial image from a pre-set facial image set according to the posture information; extracting an expression feature of the facial image, and adjusting the target sample facial image according to the expression feature to obtain a target facial image; and processing the facial image in the target picture based on the target facial image.

Description

Image processing method, apparatus, storage medium and electronic device
This application claims priority to Chinese Patent Application No. 201711466330.X, filed with the Chinese Patent Office on December 28, 2017 and entitled "Image processing method, apparatus, storage medium and electronic device", the entire contents of which are incorporated herein by reference.
Technical Field
The present application relates to the field of image processing technologies, and in particular, to an image processing method and apparatus, a storage medium, and an electronic device.
Background
With the development of the Internet and mobile communication networks, and with the rapid growth of the processing and storage capabilities of electronic devices, a large number of applications have spread and come into use rapidly. Existing electronic devices generally have a photographing function, and the intensity of light at the time of photographing affects the photographs taken. For example, in low-light conditions at night, photos taken by the camera of an electronic device are seriously affected by noise and have poor quality, whereas photos taken in the daytime, when the light is ideal, contain less noise and have better quality.
Summary
Embodiments of the present application provide an image processing method and apparatus, a storage medium, and an electronic device, which can reduce noise and improve image quality.
In a first aspect, an embodiment of the present application provides an image processing method, including:
acquiring posture information of a face image in a target picture;
acquiring a target sample face image from a preset face image set according to the posture information;
extracting an expression feature of the face image, and adjusting the target sample face image according to the expression feature to obtain a target face image;
processing the face image in the target picture based on the target face image.
In a second aspect, an embodiment of the present application provides an image processing apparatus, including:
an information acquiring module, configured to acquire posture information of a face image in a target picture;
an image acquiring module, configured to acquire a target sample face image from a preset face image set according to the posture information;
an adjusting module, configured to extract an expression feature of the face image, and adjust the target sample face image according to the expression feature to obtain a target face image;
a processing module, configured to process the face image in the target picture based on the target face image.
In a third aspect, an embodiment of the present application further provides a storage medium storing a plurality of instructions adapted to be loaded by a processor to perform the following steps:
acquiring posture information of a face image in a target picture;
acquiring a target sample face image from a preset face image set according to the posture information;
extracting an expression feature of the face image, and adjusting the target sample face image according to the expression feature to obtain a target face image;
processing the face image in the target picture based on the target face image.
In a fourth aspect, an embodiment of the present application further provides an electronic device, including a processor and a memory, the processor being electrically connected to the memory, the memory being configured to store instructions and data, and the processor being configured to perform the following steps:
acquiring posture information of a face image in a target picture;
acquiring a target sample face image from a preset face image set according to the posture information;
extracting an expression feature of the face image, and adjusting the target sample face image according to the expression feature to obtain a target face image;
processing the face image in the target picture based on the target face image.
Brief Description of the Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings required in the description of the embodiments are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present application, and those skilled in the art can obtain other drawings from these drawings without creative effort.
FIG. 1 is a schematic diagram of a scenario architecture in which an electronic device implements deep learning according to an embodiment of the present application.
FIG. 2 is a schematic flowchart of an image processing method according to an embodiment of the present application.
FIG. 3 is a diagram of an application scenario of an image processing method according to an embodiment of the present application.
FIG. 4 is a diagram of another application scenario of an image processing method according to an embodiment of the present application.
[Corrected under Rule 91 08.01.2019]
FIG. 5 is a diagram of still another application scenario of an image processing method according to an embodiment of the present application.
FIG. 6 is another schematic flowchart of an image processing apparatus according to an embodiment of the present application.
FIG. 7 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application.
FIG. 8 is another schematic structural diagram of an image processing apparatus according to an embodiment of the present application.
FIG. 9 is still another schematic structural diagram of an image processing apparatus according to an embodiment of the present application.
FIG. 10 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
FIG. 11 is another schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application are described clearly and completely below with reference to the drawings in the embodiments of the present application. Obviously, the described embodiments are only a part of the embodiments of the present application, not all of them. All other embodiments obtained by those skilled in the art based on the embodiments of the present application without creative effort fall within the protection scope of the present application.
Embodiments of the present application provide an image processing method and apparatus, a storage medium, and an electronic device, which are described in detail below.
Referring to FIG. 1, FIG. 1 is a schematic diagram of a scenario in which an electronic device implements deep learning according to an embodiment of the present application.
When a user processes an image through the image processing function of an electronic device, the electronic device can record the input and output data of the processing. The electronic device may include a data collection and statistics system and a prediction system with feedback adjustment. Through the data collection system, the electronic device can acquire a large amount of image classification result data from users, compile corresponding statistics, extract image features, and analyze and process the extracted image features based on deep machine learning. When an image is input, the electronic device predicts the classification result of the image through the prediction system. After the user makes a final selection, the prediction system adjusts the weight of each weight item through feedback according to the final result of the user's behavior. After many iterative corrections, the weights of the weight items of the prediction system finally converge, forming a learned database.
The electronic device may be a mobile terminal, such as a mobile phone or a tablet computer, or a conventional PC (Personal Computer), which is not limited in the embodiments of the present application.
An embodiment of the present application provides an image processing method, including:
acquiring posture information of a face image in a target picture;
acquiring a target sample face image from a preset face image set according to the posture information;
extracting an expression feature of the face image, and adjusting the target sample face image according to the expression feature to obtain a target face image;
processing the face image in the target picture based on the target face image.
In some embodiments, the step of determining the posture information of the face image in the target picture includes:
determining facial feature points of the face image in the target picture;
generating a facial feature vector according to the facial feature points;
obtaining a difference value between the facial feature vector and a preset facial feature vector;
obtaining the posture information of the face image according to the difference value.
In some embodiments, the posture information includes a deflection angle, and the step of acquiring a corresponding sample face image from the preset face image set according to the posture information includes:
obtaining a sample deflection angle corresponding to each sample face image in the preset face image set, to obtain a plurality of sample deflection angles;
selecting, from the plurality of sample deflection angles, a target sample deflection angle whose difference from the deflection angle is the smallest;
using the sample face image corresponding to the target sample deflection angle as the target sample face image.
In some embodiments, the step of extracting the expression feature of the face image and adjusting the target sample face image according to the expression feature includes:
extracting the expression feature of the face image;
processing the expression feature based on a preset algorithm model to obtain an expression feature parameter;
adjusting the target sample face image according to the expression feature parameter.
In some embodiments, the step of processing the face image in the target picture based on the target face image includes:
performing edge feature point detection on the face image, and acquiring location information of the edge feature points;
processing the face image in the target picture based on the location information and the target face image.
In some embodiments, the step of processing the face image in the target picture based on the location information and the target face image includes:
generating a face mask according to the location information;
processing the target face image based on the face mask to obtain a processed target face image;
replacing the face image in the target picture with the processed target face image.
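As an illustrative sketch of the mask-and-replace flow above (not the patent's exact method): the mask here is simplified to the bounding box of the edge feature points, and the compositing is a hard binary replacement; the image sizes and pixel values are hypothetical.

```python
import numpy as np

def face_mask(shape, edge_points):
    """Build a binary mask from edge feature point locations; as a
    simplification this uses the points' bounding box rather than a
    true polygon fill along the face contour."""
    mask = np.zeros(shape, dtype=bool)
    ys, xs = zip(*edge_points)
    mask[min(ys):max(ys) + 1, min(xs):max(xs) + 1] = True
    return mask

def replace_face(target_picture, target_face_image, mask):
    """Replace the masked face region of the target picture with the
    corresponding pixels of the (already aligned) target face image."""
    result = target_picture.copy()
    result[mask] = target_face_image[mask]
    return result

# Hypothetical 6x6 grayscale frames: noisy picture, clean HD sample face.
noisy = np.full((6, 6), 50, dtype=np.uint8)
clean = np.full((6, 6), 200, dtype=np.uint8)
mask = face_mask((6, 6), [(1, 1), (1, 4), (4, 4), (4, 1)])
out = replace_face(noisy, clean, mask)
```

In practice the masked region would be blended (e.g. feathered edges) rather than hard-replaced, so the seam between the sample face and the original picture is not visible.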
In some embodiments, after replacing the face image in the target picture with the processed target face image, the method further includes:
obtaining color information of the original face image before the face image replacement;
generating a color adjustment parameter according to the color information;
adjusting the color of the current face image based on the color adjustment parameter.
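One simple way to realize the color adjustment described above is per-channel mean matching; this concrete choice of "color adjustment parameter" is an assumption for illustration, not taken from the patent, and the sample patches are hypothetical.

```python
import numpy as np

def color_adjustment_params(original_face, replaced_face):
    """Per-channel mean offsets between the original face image (before
    replacement) and the replaced face image -- one simple realization
    of a color adjustment parameter."""
    return (original_face.reshape(-1, 3).mean(axis=0)
            - replaced_face.reshape(-1, 3).mean(axis=0))

def adjust_color(face, params):
    """Shift each channel by the computed offset, clipped to the valid range."""
    adjusted = face.astype(float) + params
    return np.clip(adjusted, 0, 255).astype(np.uint8)

# Hypothetical 2x2 RGB patches: warm original, cooler replacement face.
original = np.full((2, 2, 3), (120, 100, 80), dtype=np.uint8)
replaced = np.full((2, 2, 3), (110, 100, 90), dtype=np.uint8)
params = color_adjustment_params(original, replaced)
adjusted = adjust_color(replaced, params)
```

Matching the means keeps the replaced face's tone consistent with the lighting of the original shot, so the substitution is less noticeable.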
In an embodiment, an image processing method is provided. As shown in FIG. 2, the flow may be as follows:
101. Acquire posture information of a face image in a target picture.
In some embodiments, the target picture may specifically be a picture collected by the electronic device through a camera. The camera may be a digital camera or an analog camera. A digital camera converts the analog image signal generated by the image acquisition device into a digital signal and stores it in a computer. The image signal captured by an analog camera must be converted into digital form by a dedicated image capture card and compressed before it can be used on a computer. A digital camera can capture images directly and transfer them to a computer through a serial port, a parallel port, or a USB interface. In the embodiments of the present application, the electronic device generally uses a digital camera, so that the collected picture can be converted into data and displayed in real time on the display interface of the electronic device (i.e., the preview frame of the camera).
The image processing method provided by the embodiments of the present application is mainly applied to scenes affected by noise during nighttime image capture. Especially for self-portraits, taking a mobile phone as an example, the pixels of its front camera are generally poor, and shooting at night with insufficient light is more likely to produce serious noise. The target picture includes one or more person images, and there is at least one recognizable face image. In addition, the target picture may further include scene images, such as buildings, animals, and plants.
In the embodiments of the present application, the target picture needs to be tracked and analyzed in real time: the face image in it is recognized based on image recognition technology, and the key points in the face image are detected to determine the pose of the face. That is, in some embodiments, the step of "determining the posture information of the face image in the target picture" may include the following process:
determining facial feature points of the face image in the target picture;
generating a facial feature vector according to the facial feature points;
obtaining a difference value between the facial feature vector and a preset facial feature vector;
obtaining the posture information of the face image according to the difference value.
The facial feature points may specifically be feature points obtained from "two eyes + mouth" or "two eyes + nose" in the face image. The preset facial feature vector may be the feature vector when the face pose is frontal, and the posture information may be the pose relative to that frontal pose. For example, referring to FIG. 3, A is the eyebrows, B1 and B2 are the left and right eyes, C is the nose, D is the mouth, E is the ears, and F is the face. The two eyes B1 and B2 and the nose C are taken as feature parts, and feature points are further selected from these feature parts as the facial feature points. The vectors formed by the positional relationships between the feature points are used as the feature vector.
The feature points selected in FIG. 3 are the inner corner of eye B1, the inner corner of eye B2, and the tip of nose C (since the eyeball rotates, landmark points with fixed positions are chosen) as the facial feature points, forming the three vectors of a triangular region; these three vectors can serve as the preset facial feature vector. In practical applications, once the pose of the face changes, the magnitude and/or direction of the three vectors also change. Therefore, the facial feature vector of the real-time face can be detected and compared with the preset facial feature vector, and the posture information of the current face image can be determined according to the calculated difference value between the two, such as leaning left, right, up, down, upper-right, lower-left, and so on.
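The vector comparison described above can be sketched as follows. This is a minimal illustration rather than the patent's implementation: the landmark coordinates, the frontal reference, and the scalar difference measure are all hypothetical choices.

```python
import numpy as np

def feature_vectors(landmarks):
    """Build the two vectors of the eye-corner/nose-tip triangle:
    left inner eye corner -> nose tip and right inner eye corner -> nose tip."""
    left_eye, right_eye, nose = (np.asarray(p, dtype=float) for p in landmarks)
    return nose - left_eye, nose - right_eye

def pose_difference(landmarks, frontal_landmarks):
    """Sum of Euclidean differences between the detected feature vectors
    and the preset (frontal) feature vectors; 0 means a frontal pose."""
    v1, v2 = feature_vectors(landmarks)
    f1, f2 = feature_vectors(frontal_landmarks)
    return float(np.linalg.norm(v1 - f1) + np.linalg.norm(v2 - f2))

# Hypothetical frontal reference: eye corners symmetric about the nose tip.
frontal = [(-1.0, 0.0), (1.0, 0.0), (0.0, -1.0)]
# Hypothetical detection with the nose tip shifted right (head turned).
turned = [(-0.8, 0.0), (1.2, 0.0), (0.3, -1.0)]
```

A real system would map such difference values (per axis) to deflection angles; only the comparison step is shown here.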
102. Acquire a target sample face image from the preset face image set according to the posture information.
In the embodiments of the present application, a face image set needs to be constructed in advance. It should be noted that the face image set includes sample face images of the same person in a plurality of different poses, and the sample images are expressionless images, that is, face images that do not show emotions. Since the embodiments of the present application are mainly directed at the noise problem of nighttime image capture, the sample face images in the constructed face image set are all images of high quality. In practical applications, these high-definition sample face images can be taken by the user in well-lit daytime conditions.
When constructing the preset face image set, a plurality of photos in different poses are first collected; specifically, photos from different angles may be obtained. Then, the deflection angle of the face relative to the plane of the camera lens can be derived from the shooting parameters of the camera or the positional relationship between the lens and the subject. Finally, the collected face image is taken as a sample face image and the corresponding deflection angle as a sample deflection angle; after the mapping relationship between the captured face image and the deflection angle is established, the sample face image, the sample deflection angle, and the mapping relationship between them are added to the preset face image set to complete the construction of the set.
That is, in some embodiments, the posture information includes a deflection angle. In this case, the step of "acquiring the corresponding sample face image from the preset face image set according to the posture information" may include the following process:
obtaining a sample deflection angle corresponding to each sample face image in the preset face image set, to obtain a plurality of sample deflection angles;
selecting, from the plurality of sample deflection angles, a target sample deflection angle whose difference from the deflection angle is the smallest;
using the sample face image corresponding to the target sample deflection angle as the target sample face image.
The deflection angle may be a deflection angle in six degrees of freedom. In order to improve the matching between the face image and the sample face images, a large number of face images in different poses can be acquired, increasing the density of deflection angles among the sample face images and reducing the interval between deflection angles.
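The selection rule of step 102 can be sketched in a few lines, assuming for simplicity that the posture information is reduced to a single yaw angle in degrees (the patent allows deflection angles in six degrees of freedom); the sample set below is a hypothetical stand-in for the pre-stored HD photos.

```python
def select_target_sample(deflection_angle, face_image_set):
    """face_image_set maps sample deflection angle -> sample face image;
    return the image whose sample angle is closest to deflection_angle."""
    target_angle = min(face_image_set,
                       key=lambda sample_angle: abs(sample_angle - deflection_angle))
    return face_image_set[target_angle]

# Hypothetical set: angle -> image identifier (stand-ins for HD sample photos).
samples = {-30.0: "face_left30.png", 0.0: "face_frontal.png", 30.0: "face_right30.png"}
```

The denser the sampled angles, the smaller the worst-case gap between the detected pose and the chosen sample, which is why the text recommends collecting many poses.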
103. Extract an expression feature of the face image, and adjust the target sample face image according to the expression feature to obtain a target face image.
In this embodiment, the expression of the face image in the target picture needs to be migrated in real time to the expressionless sample face image. In specific implementations, the deep learning capability of the electronic device can be used to migrate the expression in the face image in real time to the expressionless sample face image in the preset face image set, better preserving the expression information and high-frequency information of the original image.
That is, in some embodiments, the step of "extracting the expression feature of the face image and adjusting the target sample face image according to the expression feature" may include the following process:
提取人脸图像的表情特征;Extracting facial expression features of the face image;
基于预设算法模型对表情特征进行处理,得到表情特征参数;The expression feature is processed based on a preset algorithm model to obtain an expression feature parameter;
根据表情特征参数对目标样本人脸图像进行调整。The target sample face image is adjusted according to the expression feature parameter.
In some embodiments, extracting the expression feature may specifically be extracting color features, texture features, shape features, and spatial relationship features from the face image, so as to identify the facial features of the face, such as the eyes, mouth, nose, eyebrows, and ears, according to the extracted image features. To improve recognition accuracy, the electronic device may be trained based on machine deep learning to obtain a high-accuracy algorithm model that analyzes and processes the expression features, thereby obtaining accurate expression feature parameters.
Then, the expressionless target sample face image may be adjusted according to the obtained expression feature parameter, so that the expression of the target sample face image becomes consistent with the expression of the original face image in the target picture, thereby migrating the expression of the face image in the target picture to the target sample face image.
A color feature is a global feature that describes the surface properties of the scene corresponding to an image or image region. A texture feature is also a global feature that likewise describes the surface properties of the scene corresponding to an image or image region. A shape feature is a local feature with two types of representation: contour features, which mainly target the outer boundary of an object, and region features, which relate to the entire shape region. A spatial relationship feature refers to the mutual spatial position or relative direction relationship between multiple objects segmented from an image; these relationships may be divided into connection/adjacency, overlap/overlay, and inclusion/containment relationships, among others.
Image feature extraction uses a computer to extract image information and decide whether each image point belongs to an image feature. The result of feature extraction is a division of the image points into different subsets, which often correspond to isolated points, continuous curves, or continuous regions. Features are the starting point of many computer image analysis algorithms. The most important property of feature extraction is repeatability: the features extracted from different images of the same scene should be the same.
In a specific implementation, the image features of the face image may be extracted using the Fourier transform method, the windowed Fourier transform method, the wavelet transform method, the least squares method, the boundary direction histogram method, texture feature extraction based on Tamura texture features, and the like.
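As a rough, dependency-light illustration of two of the global features named above, the sketch below computes a per-channel color histogram (a color feature) and a mean gradient magnitude (a crude texture cue) with NumPy; the transform-based methods listed in the text (Fourier, wavelet, Tamura) are not reproduced here.

```python
import numpy as np

def color_histogram(img, bins=8):
    # Global color feature: per-channel intensity histogram, normalized
    # so that the full feature vector sums to 1.
    hists = [np.histogram(img[..., c], bins=bins, range=(0, 256))[0]
             for c in range(img.shape[-1])]
    h = np.concatenate(hists).astype(float)
    return h / h.sum()

def gradient_energy(gray):
    # Crude texture cue: mean magnitude of finite-difference gradients.
    gy, gx = np.gradient(gray.astype(float))
    return float(np.mean(np.hypot(gx, gy)))

rng = np.random.default_rng(0)
face = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)  # stand-in image
feat = color_histogram(face)            # 3 channels x 8 bins = 24 values
energy = gradient_energy(face.mean(axis=-1))
```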
104. Processing the face image in the target picture based on the target face image.
In some embodiments, the step of "processing the face image in the target picture based on the target face image" may include the following process:
performing edge feature point detection on the face image, and obtaining position information of the edge feature points;
processing the face image in the target picture based on the position information and the target face image.
The edge feature points of the face image in the target picture are shown in the left drawing of FIG. 4. The relative position information of these edge feature points with respect to each other is obtained.
In some embodiments, the step of "processing the face image in the target picture based on the position information and the target face image" may include the following process:
generating a face mask according to the position information;
processing the target face image based on the face mask, to obtain a processed target face image;
replacing the face image in the target picture with the processed target face image.
With continued reference to FIG. 4, the right drawing in FIG. 4 shows a face mask generated based on the position information of the edge feature points in the left drawing. The face mask is used to swap the high-definition target face image onto the face image in the target picture that is severely affected by noise. Specifically, the face mask may be overlaid on the high-definition target face image, and the overlapping region of the target face image that coincides with the face mask region is extracted as the processed target face image. For example, referring to FIG. 5, the left drawing shows the face image in the target picture, and the right drawing shows the face image in the target picture after the face has been replaced.
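A minimal sketch of the mask-and-extract step described above, under simplifying assumptions: an axis-aligned bounding-box mask stands in for the true polygonal face mask of FIG. 4, and small synthetic arrays stand in for the high-definition target face image.

```python
import numpy as np

def bbox_mask(shape, edge_points):
    """Binary mask covering the bounding box of the edge feature points.
    A real face mask would fill the polygon the points outline; the box
    keeps this sketch dependency-free."""
    mask = np.zeros(shape, dtype=bool)
    ys, xs = zip(*edge_points)
    mask[min(ys):max(ys) + 1, min(xs):max(xs) + 1] = True
    return mask

def extract_overlap(hd_face, mask):
    # Keep only the HD target-face pixels that fall inside the mask.
    out = np.zeros_like(hd_face)
    out[mask] = hd_face[mask]
    return out

hd_face = np.full((10, 10), 200, dtype=np.uint8)  # stand-in HD face image
edge_points = [(2, 3), (2, 7), (8, 5)]            # hypothetical edge points
mask = bbox_mask(hd_face.shape, edge_points)
patch = extract_overlap(hd_face, mask)            # processed target face
```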
When replacing the face image in the target picture with the processed target face image, the processed target face image may be fused with the target picture based on the Poisson fusion technique, covering the original face image in the target picture and thereby replacing it with the processed target face image. The Poisson fusion technique can well eliminate the boundary between the target face image and the target picture, making the picture more natural and less abrupt and achieving seamless blending.
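In practice, OpenCV's `cv2.seamlessClone` implements Poisson fusion directly. The dependency-free sketch below illustrates the gradient-domain idea behind it: inside the mask, a Jacobi iteration drives the result toward the source's Laplacian while inheriting the target picture's boundary values, which is what removes the visible seam. The flat constant images are hypothetical stand-ins for real faces.

```python
import numpy as np

def poisson_blend(src, dst, mask, iters=800):
    """Gradient-domain blend: inside `mask`, pixels satisfy the Poisson
    equation with the source's Laplacian as guidance; outside, dst is kept."""
    out = dst.astype(float).copy()
    s = src.astype(float)
    lap = (4 * s - np.roll(s, 1, 0) - np.roll(s, -1, 0)
           - np.roll(s, 1, 1) - np.roll(s, -1, 1))
    for _ in range(iters):
        nb = (np.roll(out, 1, 0) + np.roll(out, -1, 0)
              + np.roll(out, 1, 1) + np.roll(out, -1, 1))
        out[mask] = (nb[mask] + lap[mask]) / 4.0  # Jacobi update
    return out

dst = np.full((11, 11), 100.0)       # stand-in noisy target picture
src = np.full((11, 11), 50.0)        # stand-in processed target face
mask = np.zeros((11, 11), dtype=bool)
mask[3:8, 3:8] = True                # hypothetical face-mask region

blended = poisson_blend(src, dst, mask)
# With a flat source the interior relaxes to the boundary value,
# leaving no visible seam at the mask edge.
```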
As can be seen from the above, an embodiment of the present application provides an image processing method: obtaining pose information of a face image in a target picture; obtaining a target sample face image from a preset face image set according to the pose information; extracting an expression feature of the face image and adjusting the target sample face image according to the expression feature, to obtain a target face image; and processing the face image in the target picture based on the target face image. This scheme can replace the face image in a picture severely affected by noise with a pre-stored high-definition face image, which can reduce the influence of noise and improve image quality.
In an embodiment, another image processing method is further provided. As shown in FIG. 6, the flow may be as follows:
201. Constructing a face image database.
In the embodiment of the present application, a face image set needs to be constructed in advance. It should be noted that the face image set includes sample face images of a plurality of different poses of the same person, and the sample images are expressionless images, that is, face images that do not exhibit emotions. Since the embodiment of the present application is mainly directed to the problem of noise in nighttime image capturing, the sample face images in the constructed face image set are all images of high quality. In practical applications, these high-definition sample face images may be captured by the user during well-lit daytime.
When constructing the preset face image set, a plurality of photos of different poses are first collected; specifically, photos may be obtained from different angles. The deflection angle of the face relative to the plane of the camera lens can then be analyzed from the camera's shooting parameters or from the positional relationship between the lens and the subject. Finally, the collected face image is used as a sample face image and the corresponding deflection angle as a sample deflection angle, a mapping relationship between the captured face image and the deflection angle is established, and the sample face image, the sample deflection angle, and the mapping relationship between the two are added to the preset face image set, to complete the construction of the set.
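The database construction above can be sketched as a small container that keeps the mapping between sample deflection angles and neutral-expression sample images. The single-angle key and the class name are assumptions for illustration; the text describes angles in six degrees of freedom.

```python
class FaceImageSet:
    """Minimal preset face image set: neutral-expression sample images
    keyed by their estimated deflection angle."""

    def __init__(self):
        self._samples = {}  # mapping: sample deflection angle -> image

    def add(self, angle, image):
        # Establish the mapping between a captured face image and its angle.
        self._samples[angle] = image

    def angles(self):
        return list(self._samples)

    def get(self, angle):
        return self._samples[angle]

face_set = FaceImageSet()
face_set.add(0.0, "front_sample")      # hypothetical daytime HD captures
face_set.add(15.0, "right_15_sample")
```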
202. Obtaining pose information of a face image in a target picture, and obtaining a target sample face image from the preset face image set according to the pose information.
The image processing method provided by the embodiment of the present application is mainly applied to scenes affected by noise during nighttime image capturing. The target picture may specifically be one collected by the electronic device through a camera. The target picture includes one or more person images, among which there is at least one recognizable face image. In addition, the target picture may further include scene images, such as buildings, animals, and plants.
In the embodiment of the present application, the target picture needs to be tracked and analyzed in real time, the face image therein is recognized based on image recognition technology, and key points in the face image are detected to determine the pose of the face.
In some embodiments, the pose information includes a deflection angle. In this case, the sample deflection angle corresponding to each sample face image in the preset face image set may be obtained, to obtain a plurality of sample deflection angles; a target sample deflection angle having the smallest difference from the deflection angle is then selected from the plurality of sample deflection angles, and the sample face image corresponding to the target sample deflection angle is used as the target sample face image.
The deflection angle may be a deflection angle in six degrees of freedom. To improve the matching degree between the face image and the sample face images, a large number of face images of different poses may be collected, so as to increase the density of deflection angles among the sample face images and reduce the interval between adjacent deflection angles.
203. Extracting an expression feature of the face image, and processing the expression feature based on a preset algorithm model, to obtain an expression feature parameter.
In this embodiment, the expression of the face image in the target picture needs to be migrated in real time to the expressionless sample face image. In a specific implementation, a deep learning technique of the electronic device may be used to migrate the expression in the face image in real time to the expressionless sample face image in the preset face image set, thereby better retaining the expression information and high-frequency information of the original image.
In some embodiments, extracting the expression feature may specifically be extracting color features, texture features, shape features, and spatial relationship features from the face image, so as to identify the facial features of the face, such as the eyes, mouth, nose, eyebrows, and ears, according to the extracted image features. To improve recognition accuracy, the electronic device may be trained based on machine deep learning to obtain a high-accuracy algorithm model that analyzes and processes the expression features, thereby obtaining accurate expression feature parameters.
204. Adjusting the target sample face image according to the expression feature parameter, to obtain a target face image.
Specifically, the expressionless target sample face image may be adjusted according to the obtained expression feature parameter, so that the expression of the target sample face image becomes consistent with the expression of the original face image in the target picture, thereby migrating the expression of the face image in the target picture to the target sample face image.
205. Performing edge feature point detection on the face image in the target picture, to obtain position information of the edge feature points.
The obtained position information is the relative position information of the edge feature points with respect to each other.
206. Generating a face mask according to the position information, and processing the target face image based on the face mask.
A face mask is generated based on the position information of the edge feature points. The face mask is used to swap the high-definition target face image onto the face image in the target picture that is severely affected by noise. Specifically, the face mask may be overlaid on the high-definition target face image, and the overlapping region of the target face image that coincides with the face mask region is extracted.
207. Replacing the face image in the target picture with the processed target face image.
Specifically, when replacing the face image in the target picture with the processed target face image, the processed target face image may be fused with the target picture based on the Poisson fusion technique, covering the original face image in the target picture and thereby replacing it with the processed target face image. The Poisson fusion technique can well eliminate the boundary between the target face image and the target picture, making the picture more natural and less abrupt and achieving seamless blending.
208. Obtaining color information of the original face image before the face image replacement, and generating a color adjustment parameter according to the color information.
The obtained color information may include various items, such as color temperature, hue, brightness, and saturation. Specifically, the obtained color information of the original face image may be analyzed and processed based on a relevant algorithm, to obtain an original color parameter. Then, the color information of the target face image is obtained and likewise analyzed and processed, to obtain a target color parameter. Finally, the difference between the original color parameter and the target color parameter is calculated, and this difference value is used as the final color adjustment parameter.
209. Adjusting the color of the current face image based on the color adjustment parameter.
Specifically, after the color adjustment parameter is obtained, the color of the current face image is adjusted according to the color adjustment parameter, so that the lighting and color of the face appear more natural and closer to the real scene.
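A minimal sketch of steps 208 and 209, assuming the color adjustment parameter is simply the per-channel mean difference between the original face and the replacement face; the text's analysis of color temperature, hue, brightness, and saturation is more involved than this.

```python
import numpy as np

def color_adjust_params(original_face, replacement_face):
    # Per-channel mean difference: how far the replacement drifts from
    # the colors of the original night-time shot.
    return (original_face.mean(axis=(0, 1))
            - replacement_face.mean(axis=(0, 1)))

def apply_adjust(face, params):
    # Shift each channel by the adjustment parameter, clipped to 8-bit range.
    return np.clip(face.astype(float) + params, 0, 255).astype(np.uint8)

orig = np.full((4, 4, 3), 90, dtype=np.uint8)    # darker original face
repl = np.full((4, 4, 3), 120, dtype=np.uint8)   # brighter HD replacement
delta = color_adjust_params(orig, repl)          # color adjustment parameter
adjusted = apply_adjust(repl, delta)             # matches the original tone
```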
As can be seen from the above, in the image processing method provided by the embodiment of the present application, a face image database having high-definition sample face images is constructed, a target sample face image is matched from the database according to the pose information of the face image in the target picture, and the expression of the face image in the target picture is migrated to the sample face image, to obtain a target face image. Then, edge feature point detection is performed on the face image in the target picture, a face mask is generated according to the position information of the edge feature points, the target face image is processed based on the face mask, and the face image in the target picture is replaced with the processed target face image. Finally, the color information of the original face image before the replacement is obtained, a color adjustment parameter is generated according to the color information, and the color of the replaced face image is adjusted based on the color adjustment parameter. This scheme can replace the face image in a picture severely affected by noise with a pre-stored high-definition face image, which can reduce the influence of noise and improve image quality.
In yet another embodiment of the present application, an image processing apparatus is further provided. The image processing apparatus may be integrated in an electronic device in the form of software or hardware, and the electronic device may specifically include a mobile phone, a tablet computer, a notebook computer, or the like. As shown in FIG. 7, the image processing apparatus 30 may include an information obtaining module 31, an image obtaining module 32, an adjusting module 33, and a processing module 34, wherein:
the information obtaining module 31 is configured to obtain pose information of a face image in a target picture;
the image obtaining module 32 is configured to obtain a target sample face image from a preset face image set according to the pose information;
the adjusting module 33 is configured to extract an expression feature of the face image and adjust the target sample face image according to the expression feature, to obtain a target face image; and
the processing module 34 is configured to process the face image in the target picture based on the target face image.
In some embodiments, referring to FIG. 8, the adjusting module 33 may include:
an extraction sub-module 331, configured to extract an expression feature of the face image;
a first processing sub-module 332, configured to process the expression feature based on a preset algorithm model, to obtain an expression feature parameter; and
an adjustment sub-module 333, configured to adjust the target sample face image according to the expression feature parameter.
In some embodiments, referring to FIG. 9, the processing module 34 may include:
an obtaining sub-module 341, configured to perform edge feature point detection on the face image and obtain position information of the edge feature points; and
a second processing sub-module 342, configured to process the face image in the target picture based on the position information and the target face image.
In some embodiments, the information obtaining module 31 may be configured to:
determine facial feature points of the face image in the target picture;
generate a facial feature vector according to the facial feature points;
obtain a difference value between the facial feature vector and a preset facial feature vector; and
obtain the pose information of the face image according to the difference value.
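The four operations above can be sketched as a single function: the deviation of the detected facial feature vector from a preset (frontal-pose) vector serves as a proxy for the pose. The norm-based mapping and the scale factor are assumptions; the text does not specify how the difference value is converted into pose information.

```python
import numpy as np

def pose_from_feature_vector(face_vec, preset_vec, scale=1.0):
    # Difference value between the facial feature vector and the preset
    # facial feature vector, reduced to a scalar pose proxy.
    diff = np.asarray(face_vec, float) - np.asarray(preset_vec, float)
    return scale * float(np.linalg.norm(diff))

pose_proxy = pose_from_feature_vector([3.0, 4.0], [0.0, 0.0])
```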
In some embodiments, the pose information includes a deflection angle, and the image obtaining module 32 may be configured to:
obtain a sample deflection angle corresponding to each sample face image in the preset face image set, to obtain a plurality of sample deflection angles;
select, from the plurality of sample deflection angles, a target sample deflection angle having the smallest difference from the deflection angle; and
use the sample face image corresponding to the target sample deflection angle as the target sample face image.
As can be seen from the above, the image processing apparatus provided by the embodiment of the present application obtains pose information of a face image in a target picture; obtains a target sample face image from a preset face image set according to the pose information; extracts an expression feature of the face image and adjusts the target sample face image according to the expression feature, to obtain a target face image; and processes the face image in the target picture based on the target face image. This scheme can replace the face image in a picture severely affected by noise with a pre-stored high-definition face image, which can reduce the influence of noise and improve image quality.
In yet another embodiment of the present application, an electronic device is further provided. The electronic device may be a smart phone, a tablet computer, or the like. As shown in FIG. 10, the electronic device 400 includes a processor 401 and a memory 402, wherein the processor 401 is electrically connected to the memory 402.
The processor 401 is the control center of the electronic device 400. It connects the various parts of the entire electronic device using various interfaces and lines, and performs the various functions of the electronic device and processes data by running or loading applications stored in the memory 402 and calling data stored in the memory 402, thereby monitoring the electronic device as a whole.
In this embodiment, the processor 401 in the electronic device 400 loads instructions corresponding to the processes of one or more applications into the memory 402 according to the following steps, and runs the applications stored in the memory 402, thereby implementing various functions:
obtaining pose information of a face image in a target picture;
obtaining a target sample face image from a preset face image set according to the pose information;
extracting an expression feature of the face image, and adjusting the target sample face image according to the expression feature, to obtain a target face image; and
processing the face image in the target picture based on the target face image.
In some embodiments, the processor 401 is configured to perform the following steps:
determining facial feature points of the face image in the target picture;
generating a facial feature vector according to the facial feature points;
obtaining a difference value between the facial feature vector and a preset facial feature vector; and
obtaining the pose information of the face image according to the difference value.
In some embodiments, the pose information includes a deflection angle, and the processor 401 further performs the following steps:
obtaining a sample deflection angle corresponding to each sample face image in the preset face image set, to obtain a plurality of sample deflection angles;
selecting, from the plurality of sample deflection angles, a target sample deflection angle having the smallest difference from the deflection angle; and
using the sample face image corresponding to the target sample deflection angle as the target sample face image.
In some embodiments, the processor 401 further performs the following steps:
extracting an expression feature of the face image;
processing the expression feature based on a preset algorithm model, to obtain an expression feature parameter; and
adjusting the target sample face image according to the expression feature parameter.
In some embodiments, the processor 401 further performs the following steps: performing edge feature point detection on the face image, and obtaining position information of the edge feature points; and processing the face image in the target picture based on the position information and the target face image.
In some embodiments, the processor 401 further performs the following steps:
generating a face mask according to the position information;
processing the target face image based on the face mask, to obtain a processed target face image; and
replacing the face image in the target picture with the processed target face image.
In some embodiments, after replacing the face image in the target picture with the processed target face image, the processor 401 further performs the following steps:
obtaining color information of the original face image before the face image replacement;
generating a color adjustment parameter according to the color information; and
adjusting the color of the current face image based on the color adjustment parameter.
The memory 402 may be used to store applications and data. The applications stored in the memory 402 contain instructions executable by the processor. The applications may constitute various functional modules. The processor 401 executes various functional applications and data processing by running the applications stored in the memory 402.
In some embodiments, as shown in FIG. 11, the electronic device 400 further includes: a display screen 403, a control circuit 404, a radio frequency circuit 405, an input unit 406, an audio circuit 407, a sensor 408, and a power supply 409. The processor 401 is electrically connected to the display screen 403, the control circuit 404, the radio frequency circuit 405, the input unit 406, the audio circuit 407, the sensor 408, and the power supply 409, respectively.
The display screen 403 may be used to display information entered by the user or provided to the user, as well as various graphical user interfaces of the electronic device, which may be composed of images, text, icons, video, and any combination thereof. The display screen 403 may serve as the screen in the embodiments of the present application for displaying information.
The control circuit 404 is electrically connected to the display screen 403 and is configured to control the display screen 403 to display information.
The radio frequency circuit 405 is configured to transmit and receive radio frequency signals, so as to establish wireless communication with network devices or other electronic devices and to transmit and receive signals with them.
The input unit 406 may be configured to receive input digits, character information, or user characteristic information (for example, fingerprints), and to generate keyboard, mouse, joystick, optical, or trackball signal inputs related to user settings and function control. The input unit 406 may include a fingerprint recognition module.
The audio circuit 407 may provide an audio interface between the user and the electronic device through a speaker and a microphone.
The sensor 408 is configured to collect external environment information. The sensor 408 may include an ambient brightness sensor, an acceleration sensor, a light sensor, a motion sensor, and other sensors.
The power supply 409 is configured to supply power to the various components of the electronic device 400. In some embodiments, the power supply 409 may be logically connected to the processor 401 through a power management system, thereby implementing functions such as managing charging, discharging, and power consumption through the power management system.
The camera 410 is configured to collect external pictures, and may be a digital camera or an analog camera. In some embodiments, the camera 410 may convert the collected external pictures into data and send the data to the processor 401 to perform image processing operations.
Although not shown in FIG. 11, the electronic device 400 may further include a Bluetooth module and the like, which will not be described in detail herein.
由上可知，本申请实施例提供的电子设备，通过获取目标画面中人脸图像的姿态信息；根据姿态信息从预设人脸图像集合中获取目标样本人脸图像；提取人脸图像的表情特征，并根据表情特征对目标样本人脸图像进行调整，得到目标人脸图像；基于目标人脸图像对目标画面中人脸图像进行处理。该方案可将噪声较严重的画面中的人脸图像替换成预存的高清人脸图像，可降低噪声影响，提升图像画质。As can be seen from the above, the electronic device provided by the embodiments of the present application obtains the posture information of the face image in the target picture; obtains a target sample face image from a preset face image set according to the posture information; extracts the expression features of the face image and adjusts the target sample face image according to the expression features to obtain a target face image; and processes the face image in the target picture based on the target face image. This scheme can replace the face image in a heavily noisy picture with a pre-stored high-definition face image, which reduces the influence of noise and improves image quality.
本申请又一实施例中还提供一种存储介质,该存储介质中存储有多条指令,该指令适于由处理器加载以执行上述任一图像处理方法的步骤。A further embodiment of the present application further provides a storage medium having stored therein a plurality of instructions adapted to be loaded by a processor to perform the steps of any of the image processing methods described above.
本领域普通技术人员可以理解上述实施例的各种方法中的全部或部分步骤是可以通过程序来指令相关的硬件来完成，该程序可以存储于一计算机可读存储介质中，存储介质可以包括：只读存储器（ROM，Read Only Memory）、随机存取记忆体（RAM，Random Access Memory）、磁盘或光盘等。A person of ordinary skill in the art may understand that all or part of the steps in the various methods of the above embodiments may be completed by a program instructing related hardware. The program may be stored in a computer-readable storage medium, and the storage medium may include a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disc, or the like.
在描述本申请的概念的过程中使用了术语“一”和“所述”以及类似的词语（尤其是在所附的权利要求书中），应该将这些术语解释为既涵盖单数又涵盖复数。此外，除非本文中另有说明，否则在本文中叙述数值范围时仅仅是通过快捷方法来指代属于相关范围的每个独立的值，而每个独立的值都并入本说明书中，就像这些值在本文中单独进行了陈述一样。另外，除非本文中另有指明或上下文有明确的相反提示，否则本文中所述的所有方法的步骤都可以按任何适当次序加以执行。本申请的改变并不限于描述的步骤顺序。除非另外主张，否则使用本文中所提供的任何以及所有实例或示例性语言（例如，“例如”）都仅仅为了更好地说明本申请的概念，而并非对本申请的概念的范围加以限制。在不脱离精神和范围的情况下，所属领域的技术人员将易于明白多种修改和适应。In describing the concepts of the present application, the terms "a", "an", and "the" and similar words (especially in the appended claims) should be interpreted as covering both the singular and the plural. Furthermore, unless otherwise stated herein, the recitation of a numerical range herein is merely a shorthand way of referring to each separate value falling within that range, and each separate value is incorporated into this specification as if it were individually recited herein. In addition, the steps of all methods described herein may be performed in any suitable order unless otherwise indicated herein or clearly contradicted by context; changes to the present application are not limited to the described order of steps. Unless otherwise claimed, the use of any and all examples or exemplary language provided herein (e.g., "for example") is merely intended to better illustrate the concepts of the present application and does not limit their scope. Numerous modifications and adaptations will be readily apparent to those skilled in the art without departing from the spirit and scope.
以上对本申请实施例所提供的一种图像处理方法、装置、存储介质及电子设备进行了详细介绍，本文中应用了具体个例对本申请的原理及实施方式进行了阐述，以上实施例的说明只是用于帮助理解本申请的方法及其核心思想；同时，对于本领域的技术人员，依据本申请的思想，在具体实施方式及应用范围上均会有改变之处，综上所述，本说明书内容不应理解为对本申请的限制。The image processing method, apparatus, storage medium, and electronic device provided by the embodiments of the present application have been described in detail above. Specific examples are applied herein to explain the principles and implementations of the present application, and the description of the above embodiments is only intended to help understand the method of the present application and its core idea. Meanwhile, those skilled in the art may make changes to the specific implementations and the scope of application according to the idea of the present application. In summary, the contents of this specification should not be construed as limiting the present application.

Claims (20)

  1. 一种图像处理方法,其中,包括:An image processing method, comprising:
    获取目标画面中人脸图像的姿态信息;Obtaining posture information of a face image in the target screen;
    根据所述姿态信息从预设人脸图像集合中获取目标样本人脸图像;Obtaining a target sample face image from the preset face image set according to the posture information;
    提取所述人脸图像的表情特征,并根据所述表情特征对目标样本人脸图像进行调整,得到目标人脸图像;Extracting an expression feature of the face image, and adjusting the target sample face image according to the expression feature to obtain a target face image;
    基于目标人脸图像对所述目标画面中人脸图像进行处理。The face image in the target picture is processed based on the target face image.
  2. 如权利要求1所述的图像处理方法,其中,确定目标画面中人脸图像的姿态信息的步骤,包括:The image processing method according to claim 1, wherein the determining the posture information of the face image in the target image comprises:
    确定目标画面中人脸图像的面部特征点;Determining facial feature points of the face image in the target picture;
    根据所述面部特征点生成面部特征向量;Generating a facial feature vector according to the facial feature point;
    获取所述面部特征向量与预设面部特征向量之间的差异值;Obtaining a difference value between the facial feature vector and the preset facial feature vector;
    根据所述差异值获取所述人脸图像的姿态信息。Obtaining the posture information of the face image according to the difference value.
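The pose-estimation steps of claim 2 can be illustrated with a minimal sketch. The claim does not specify how the difference value between the facial feature vector and the preset (frontal) feature vector maps to posture information; the landmark layout, the scale factor `K`, and the linear mapping below are all illustrative assumptions, not the patented implementation:

```python
import numpy as np

def estimate_yaw(landmarks, frontal_landmarks):
    """Estimate a deflection (yaw) angle from the difference between a face's
    landmark vector and a preset frontal-face landmark vector.

    landmarks, frontal_landmarks: sequences of (x, y) facial feature points,
    assumed normalized to the face bounding box. The linear scale K mapping
    the difference value to degrees is a placeholder assumption.
    """
    v = np.asarray(landmarks, dtype=float).ravel()        # facial feature vector
    v0 = np.asarray(frontal_landmarks, dtype=float).ravel()
    diff = float(np.linalg.norm(v - v0))                  # difference value
    K = 90.0                                              # assumed degrees-per-unit scale
    return min(diff * K, 90.0)

# A frontal face yields a zero difference, hence a zero deflection angle.
frontal = [(0.3, 0.4), (0.7, 0.4), (0.5, 0.6)]
print(estimate_yaw(frontal, frontal))  # 0.0
```

A shifted landmark set would produce a positive difference value and thus a nonzero estimated deflection angle.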
  3. 如权利要求1所述的图像处理方法,其中,所述姿态信息包括偏转角度;根据所述姿态信息从预设人脸图像集合中获取对应的样本人脸图像的步骤包括:The image processing method according to claim 1, wherein the posture information comprises a deflection angle; and the step of acquiring a corresponding sample face image from the preset face image set according to the posture information comprises:
    获取预设人脸图像集合中每一样本人脸图像对应的样本偏转角度,得到多个样本偏转角度;Obtaining a sample deflection angle corresponding to each sample face image in the preset face image set, to obtain a plurality of sample deflection angles;
    从多个样本偏转角度中选中与所述偏转角度之间差值最小的目标样本偏转角度;Selecting a target sample deflection angle that is the smallest difference from the deflection angle from a plurality of sample deflection angles;
    将所述目标样本偏转角度对应的样本人脸图像作为目标样本人脸图像。The sample face image corresponding to the target sample deflection angle is used as the target sample face image.
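The selection rule of claim 3 (pick the sample whose deflection angle is closest to the detected one) reduces to a nearest-neighbor search over the sample angles. The sketch below assumes the preset face image set is available as (angle, image) pairs; the data layout is illustrative:

```python
def select_target_sample(samples, angle):
    """samples: list of (sample_deflection_angle, face_image) pairs from the
    preset face image set. Returns the sample face image whose deflection
    angle has the smallest absolute difference from the given angle."""
    best_angle, best_image = min(samples, key=lambda s: abs(s[0] - angle))
    return best_image

# Hypothetical sample set; file names are placeholders.
samples = [(0.0, "front.png"), (15.0, "left15.png"), (30.0, "left30.png")]
print(select_target_sample(samples, 17.0))  # left15.png
```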
  4. 如权利要求1所述的图像处理方法,其中,提取所述人脸图像的表情特征,并根据所述表情特征对目标样本人脸图像进行调整的步骤,包括:The image processing method according to claim 1, wherein the step of extracting the expression feature of the face image and adjusting the target sample face image according to the expression feature comprises:
    提取所述人脸图像的表情特征;Extracting an expression feature of the face image;
    基于预设算法模型对所述表情特征进行处理,得到表情特征参数;The expression feature is processed based on a preset algorithm model to obtain an expression feature parameter;
    根据所述表情特征参数对目标样本人脸图像进行调整。The target sample face image is adjusted according to the expression feature parameter.
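Claim 4 leaves the "preset algorithm model" unspecified. One common realization of expression transfer, shown here purely as an assumed sketch, is to treat the expression feature parameters as blend-shape weights over a set of landmark offset bases and adjust the target sample face's landmarks accordingly:

```python
import numpy as np

def adjust_sample(sample_landmarks, expression_params, expression_bases):
    """Adjust a target sample face's landmarks with expression feature
    parameters. Assumes the model outputs one weight per expression basis
    (blend-shape style); this is an illustration, not the patented model."""
    out = np.asarray(sample_landmarks, dtype=float).copy()
    for w, basis in zip(expression_params, expression_bases):
        out += w * np.asarray(basis, dtype=float)   # weighted landmark offsets
    return out

# One hypothetical "smile" basis lowering a mouth-corner landmark.
neutral = np.zeros((3, 2))
smile = [[0.0, 0.0], [0.0, 0.0], [0.0, -0.1]]
adjusted = adjust_sample(neutral, [1.0], [smile])
```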
  5. 如权利要求1所述的图像处理方法,其中,基于目标人脸图像对所述目标画面中人脸图像进行处理的步骤,包括:The image processing method according to claim 1, wherein the step of processing the face image in the target image based on the target face image comprises:
    对所述人脸图像进行边缘特征点检测,并获取所述边缘特征点的位置信息;Performing edge feature point detection on the face image, and acquiring location information of the edge feature point;
    基于所述位置信息和所述目标人脸图像对所述目标画面中人脸图像进行处理。The face image in the target picture is processed based on the position information and the target face image.
  6. 如权利要求5所述的图像处理方法,其中,基于所述位置信息和所述目标人脸图像对所述目标画面中人脸图像进行处理的步骤,包括:The image processing method according to claim 5, wherein the step of processing the face image in the target image based on the position information and the target face image comprises:
    根据所述位置信息生成人脸掩膜;Generating a face mask according to the location information;
    基于所述人脸掩膜对目标人脸图像进行处理,得到处理后的目标人脸图像;Processing the target face image based on the face mask to obtain the processed target face image;
    将目标画面中人脸图像替换为处理后的目标人脸图像。The face image in the target screen is replaced with the processed target face image.
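The mask-based replacement of claim 6 can be sketched with plain array operations: the face mask (built from the edge feature point positions) selects which pixels of the processed target face overwrite the target picture. The rectangular paste region and binary mask here are simplifying assumptions; a production system would typically also blend the seam:

```python
import numpy as np

def replace_face(frame, target_face, mask, top_left):
    """Paste the processed target face into the frame, keeping only the
    pixels selected by the face mask.

    frame: (H, W, 3) image; target_face: (h, w, 3) image; mask: (h, w)
    binary array; top_left: (row, col) paste position in the frame."""
    y, x = top_left
    h, w = mask.shape
    region = frame[y:y + h, x:x + w]
    m = mask.astype(bool)[..., None]            # broadcast mask over channels
    frame[y:y + h, x:x + w] = np.where(m, target_face, region)
    return frame

frame = np.zeros((4, 4, 3), dtype=np.uint8)
face = np.full((2, 2, 3), 255, dtype=np.uint8)
mask = np.array([[1, 0], [0, 1]])
out = replace_face(frame, face, mask, (1, 1))
```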
  7. 如权利要求6所述的图像处理方法,其中,将目标画面中人脸图像替换为处理后的目标人脸图像之后,还包括:The image processing method according to claim 6, wherein after the face image in the target image is replaced with the processed target face image, the method further includes:
    获取人脸图像替换前原始人脸图像的颜色信息;Obtaining color information of the original face image before the face image is replaced;
    根据所述颜色信息生成颜色调整参数;Generating a color adjustment parameter according to the color information;
    基于所述颜色调整参数对当前人脸图像的颜色进行调整。The color of the current face image is adjusted based on the color adjustment parameter.
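Claim 7 does not define how the color adjustment parameters are derived from the original face's color information. A simple plausible choice, shown here as an assumption, is a per-channel gain matching the channel means of the replaced face to those of the original:

```python
import numpy as np

def match_color(current_face, original_face):
    """Derive per-channel color adjustment parameters from the original face
    image's color information (its channel means) and apply them to the
    current (replacement) face image. Both inputs are (h, w, 3) uint8."""
    cur = current_face.astype(float)
    gains = original_face.reshape(-1, 3).mean(axis=0) / np.maximum(
        cur.reshape(-1, 3).mean(axis=0), 1e-6)   # color adjustment parameters
    return np.clip(cur * gains, 0, 255).astype(np.uint8)

# A uniformly darker replacement face is brightened back toward the original.
original = np.full((2, 2, 3), 100, dtype=np.uint8)
current = np.full((2, 2, 3), 50, dtype=np.uint8)
matched = match_color(current, original)
```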
  8. 一种图像处理装置,其中,包括:An image processing apparatus, comprising:
    信息获取模块,用于获取目标画面中人脸图像的姿态信息;An information acquiring module, configured to acquire posture information of a face image in a target image;
    图像获取模块,用于根据所述姿态信息从预设人脸图像集合中获取目标样本人脸图像;An image obtaining module, configured to acquire a target sample face image from the preset face image set according to the posture information;
    调整模块,用于提取所述人脸图像的表情特征,并根据所述表情特征对目标样本人脸图像进行调整,得到目标人脸图像;An adjustment module, configured to extract an expression feature of the face image, and adjust a target sample face image according to the expression feature to obtain a target face image;
    处理模块,用于基于目标人脸图像对所述目标画面中人脸图像进行处理。And a processing module, configured to process the face image in the target image based on the target face image.
  9. 如权利要求8所述的图像处理装置,其中,所述信息获取模块用于:The image processing device according to claim 8, wherein said information acquisition module is configured to:
    确定目标画面中人脸图像的面部特征点;Determining facial feature points of the face image in the target picture;
    根据所述面部特征点生成面部特征向量;Generating a facial feature vector according to the facial feature point;
    获取所述面部特征向量与预设面部特征向量之间的差异值;Obtaining a difference value between the facial feature vector and the preset facial feature vector;
    根据所述差异值获取所述人脸图像的姿态信息。Obtaining the posture information of the face image according to the difference value.
  10. 如权利要求8所述的图像处理装置,其中,所述姿态信息包括偏转角度,所述图像获取模块用于:The image processing device according to claim 8, wherein the posture information includes a deflection angle, and the image acquisition module is configured to:
    获取预设人脸图像集合中每一样本人脸图像对应的样本偏转角度,得到多个样本偏转角度;Obtaining a sample deflection angle corresponding to each sample face image in the preset face image set, to obtain a plurality of sample deflection angles;
    从多个样本偏转角度中选中与所述偏转角度之间差值最小的目标样本偏转角度;Selecting a target sample deflection angle that is the smallest difference from the deflection angle from a plurality of sample deflection angles;
    将所述目标样本偏转角度对应的样本人脸图像作为目标样本人脸图像。The sample face image corresponding to the target sample deflection angle is used as the target sample face image.
  11. 如权利要求8所述的图像处理装置,其中,所述调整模块包括:The image processing device according to claim 8, wherein said adjustment module comprises:
    提取子模块,用于提取所述人脸图像的表情特征;Extracting a sub-module, configured to extract an expression feature of the face image;
    第一处理子模块,用于基于预设算法模型对所述表情特征进行处理,得到表情特征参数;a first processing submodule, configured to process the expression feature based on a preset algorithm model to obtain an expression feature parameter;
    调整子模块,用于根据所述表情特征参数对目标样本人脸图像进行调整。The adjustment submodule is configured to adjust the target sample face image according to the expression feature parameter.
  12. 如权利要求8所述的图像处理装置,其中,所述处理模块包括:The image processing device of claim 8, wherein the processing module comprises:
    获取子模块,用于对所述人脸图像进行边缘特征点检测,并获取所述边缘特征点的位置信息;Obtaining a sub-module, configured to perform edge feature point detection on the face image, and acquire location information of the edge feature point;
    第二处理子模块,用于基于所述位置信息和所述目标人脸图像对所述目标画面中人脸图像进行处理。a second processing submodule configured to process the face image in the target picture based on the location information and the target face image.
  13. 一种存储介质,其中,所述存储介质中存储有多条指令,所述指令适于由处理器加载以执行以下步骤:A storage medium, wherein the storage medium stores a plurality of instructions adapted to be loaded by a processor to perform the following steps:
    获取目标画面中人脸图像的姿态信息;Obtaining posture information of a face image in the target screen;
    根据所述姿态信息从预设人脸图像集合中获取目标样本人脸图像;Obtaining a target sample face image from the preset face image set according to the posture information;
    提取所述人脸图像的表情特征,并根据所述表情特征对目标样本人脸图像进行调整,得到目标人脸图像;Extracting an expression feature of the face image, and adjusting the target sample face image according to the expression feature to obtain a target face image;
    基于目标人脸图像对所述目标画面中人脸图像进行处理。The face image in the target picture is processed based on the target face image.
  14. 一种电子设备,其中,包括处理器和存储器,所述处理器与所述存储器电性连接,所述存储器用于存储指令和数据;所述处理器用于执行以下步骤:An electronic device, comprising a processor and a memory, the processor being electrically connected to the memory, the memory for storing instructions and data; the processor for performing the following steps:
    获取目标画面中人脸图像的姿态信息;Obtaining posture information of a face image in the target screen;
    根据所述姿态信息从预设人脸图像集合中获取目标样本人脸图像;Obtaining a target sample face image from the preset face image set according to the posture information;
    提取所述人脸图像的表情特征,并根据所述表情特征对目标样本人脸图像进行调整,得到目标人脸图像;Extracting an expression feature of the face image, and adjusting the target sample face image according to the expression feature to obtain a target face image;
    基于目标人脸图像对所述目标画面中人脸图像进行处理。The face image in the target picture is processed based on the target face image.
  15. 如权利要求14所述的电子设备,其中,在确定目标画面中人脸图像的姿态信息时,所述处理器用于执行以下步骤:The electronic device of claim 14, wherein the processor is configured to perform the following steps when determining posture information of the face image in the target picture:
    确定目标画面中人脸图像的面部特征点;Determining facial feature points of the face image in the target picture;
    根据所述面部特征点生成面部特征向量;Generating a facial feature vector according to the facial feature point;
    获取所述面部特征向量与预设面部特征向量之间的差异值;Obtaining a difference value between the facial feature vector and the preset facial feature vector;
    根据所述差异值获取所述人脸图像的姿态信息。Obtaining the posture information of the face image according to the difference value.
  16. 如权利要求14所述的电子设备，其中，所述姿态信息包括偏转角度；在根据所述姿态信息从预设人脸图像集合中获取对应的样本人脸图像时，所述处理器用于执行以下步骤：The electronic device of claim 14, wherein the posture information comprises a deflection angle, and when obtaining the corresponding sample face image from the preset face image set according to the posture information, the processor is configured to perform the following steps:
    获取预设人脸图像集合中每一样本人脸图像对应的样本偏转角度,得到多个样本偏转角度;Obtaining a sample deflection angle corresponding to each sample face image in the preset face image set, to obtain a plurality of sample deflection angles;
    从多个样本偏转角度中选中与所述偏转角度之间差值最小的目标样本偏转角度;Selecting a target sample deflection angle that is the smallest difference from the deflection angle from a plurality of sample deflection angles;
    将所述目标样本偏转角度对应的样本人脸图像作为目标样本人脸图像。The sample face image corresponding to the target sample deflection angle is used as the target sample face image.
  17. 如权利要求14所述的电子设备,其中,在提取所述人脸图像的表情特征,并根据所述表情特征对目标样本人脸图像进行调整时,所述处理器用于执行以下步骤:The electronic device according to claim 14, wherein the processor is configured to perform the following steps when extracting an expression feature of the face image and adjusting the target sample face image according to the expression feature:
    提取所述人脸图像的表情特征;Extracting an expression feature of the face image;
    基于预设算法模型对所述表情特征进行处理,得到表情特征参数;The expression feature is processed based on a preset algorithm model to obtain an expression feature parameter;
    根据所述表情特征参数对目标样本人脸图像进行调整。The target sample face image is adjusted according to the expression feature parameter.
  18. 如权利要求14所述的电子设备,其中,在基于目标人脸图像对所述目标画面中人脸图像进行处理时,所述处理器用于执行以下步骤:The electronic device of claim 14, wherein the processor is configured to perform the following steps when processing the face image in the target picture based on the target face image:
    对所述人脸图像进行边缘特征点检测,并获取所述边缘特征点的位置信息;Performing edge feature point detection on the face image, and acquiring location information of the edge feature point;
    基于所述位置信息和所述目标人脸图像对所述目标画面中人脸图像进行处理。The face image in the target picture is processed based on the position information and the target face image.
  19. 如权利要求18所述的电子设备,其中,在基于所述位置信息和所述目标人脸图像对所述目标画面中人脸图像进行处理时,所述处理器用于执行以下步骤:The electronic device according to claim 18, wherein, when the face image in the target picture is processed based on the position information and the target face image, the processor is configured to perform the following steps:
    根据所述位置信息生成人脸掩膜;Generating a face mask according to the location information;
    基于所述人脸掩膜对目标人脸图像进行处理,得到处理后的目标人脸图像;Processing the target face image based on the face mask to obtain the processed target face image;
    将目标画面中人脸图像替换为处理后的目标人脸图像。The face image in the target screen is replaced with the processed target face image.
  20. 如权利要求19所述的电子设备,其中,在将目标画面中人脸图像替换为处理后的目标人脸图像之后,所述处理器用于执行以下步骤:The electronic device of claim 19, wherein after replacing the face image in the target picture with the processed target face image, the processor is configured to perform the following steps:
    获取人脸图像替换前原始人脸图像的颜色信息;Obtaining color information of the original face image before the face image is replaced;
    根据所述颜色信息生成颜色调整参数;Generating a color adjustment parameter according to the color information;
    基于所述颜色调整参数对当前人脸图像的颜色进行调整。The color of the current face image is adjusted based on the color adjustment parameter.
PCT/CN2018/115467 2017-12-28 2018-11-14 Image processing method and apparatus, storage medium and electronic device WO2019128507A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201711466330.XA CN109977739A (en) 2017-12-28 2017-12-28 Image processing method, device, storage medium and electronic equipment
CN201711466330.X 2017-12-28

Publications (1)

Publication Number Publication Date
WO2019128507A1 true WO2019128507A1 (en) 2019-07-04

Family

ID=67063079

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/115467 WO2019128507A1 (en) 2017-12-28 2018-11-14 Image processing method and apparatus, storage medium and electronic device

Country Status (2)

Country Link
CN (1) CN109977739A (en)
WO (1) WO2019128507A1 (en)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110490067A (en) * 2019-07-11 2019-11-22 平安科技(深圳)有限公司 A kind of face identification method and device based on human face posture
CN110796075A (en) * 2019-10-28 2020-02-14 深圳前海微众银行股份有限公司 Method, device and equipment for acquiring face diversity data and readable storage medium
CN110889894A (en) * 2019-10-25 2020-03-17 中国科学院深圳先进技术研究院 Three-dimensional face reconstruction method and device and terminal equipment
CN111639216A (en) * 2020-06-05 2020-09-08 上海商汤智能科技有限公司 Display method and device of face image, computer equipment and storage medium
CN112102383A (en) * 2020-09-18 2020-12-18 深圳市赛为智能股份有限公司 Image registration method and device, computer equipment and storage medium
CN112330529A (en) * 2020-11-03 2021-02-05 上海镱可思多媒体科技有限公司 Dlid-based face aging method, system and terminal
CN113012042A (en) * 2019-12-20 2021-06-22 海信集团有限公司 Display device, virtual photo generation method, and storage medium
CN113643392A (en) * 2020-05-11 2021-11-12 北京达佳互联信息技术有限公司 Face generation model training method, face image generation method and device

Families Citing this family (12)

Publication number Priority date Publication date Assignee Title
CN113569790B (en) * 2019-07-30 2022-07-29 北京市商汤科技开发有限公司 Image processing method and device, processor, electronic device and storage medium
CN110543826A (en) * 2019-08-06 2019-12-06 尚尚珍宝(北京)网络科技有限公司 Image processing method and device for virtual wearing of wearable product
CN110516598B (en) * 2019-08-27 2022-03-01 北京百度网讯科技有限公司 Method and apparatus for generating image
CN110503703B (en) * 2019-08-27 2023-10-13 北京百度网讯科技有限公司 Method and apparatus for generating image
CN110738595B (en) * 2019-09-30 2023-06-30 腾讯科技(深圳)有限公司 Picture processing method, device and equipment and computer storage medium
CN110956580B (en) * 2019-11-28 2024-04-16 广州方硅信息技术有限公司 Method, device, computer equipment and storage medium for changing face of image
CN111191564A (en) * 2019-12-26 2020-05-22 三盟科技股份有限公司 Multi-pose face emotion recognition method and system based on multi-angle neural network
CN111582180B (en) * 2020-05-09 2023-04-18 浙江大华技术股份有限公司 License plate positioning method, image processing device and device with storage function
CN111599002A (en) * 2020-05-15 2020-08-28 北京百度网讯科技有限公司 Method and apparatus for generating image
CN112001874A (en) * 2020-08-28 2020-11-27 四川达曼正特科技有限公司 Image fusion method based on wavelet decomposition and Poisson fusion and application thereof
CN112069993B (en) * 2020-09-04 2024-02-13 西安西图之光智能科技有限公司 Dense face detection method and system based on five-sense organ mask constraint and storage medium
CN116433809A (en) * 2022-01-04 2023-07-14 脸萌有限公司 Expression driving method and model training method

Citations (3)

Publication number Priority date Publication date Assignee Title
CN105320954A (en) * 2014-07-30 2016-02-10 北京三星通信技术研究有限公司 Human face authentication device and method
CN107330904A (en) * 2017-06-30 2017-11-07 北京金山安全软件有限公司 Image processing method, image processing device, electronic equipment and storage medium
CN107358207A (en) * 2017-07-14 2017-11-17 重庆大学 A kind of method for correcting facial image

Family Cites Families (7)

Publication number Priority date Publication date Assignee Title
CN104123749A (en) * 2014-07-23 2014-10-29 邢小月 Picture processing method and system
CN105118082B (en) * 2015-07-30 2019-05-28 科大讯飞股份有限公司 Individualized video generation method and system
CN105303161A (en) * 2015-09-21 2016-02-03 广东欧珀移动通信有限公司 Method and device for shooting multiple people
CN107292811A (en) * 2016-04-01 2017-10-24 掌赢信息科技(上海)有限公司 A kind of method and electronic equipment of migration of expressing one's feelings
CN107341784A (en) * 2016-04-29 2017-11-10 掌赢信息科技(上海)有限公司 A kind of expression moving method and electronic equipment
CN106599817A (en) * 2016-12-07 2017-04-26 腾讯科技(深圳)有限公司 Face replacement method and device
CN108122271A (en) * 2017-12-15 2018-06-05 南京变量信息科技有限公司 A kind of description photo automatic generation method and device


Also Published As

Publication number Publication date
CN109977739A (en) 2019-07-05

Similar Documents

Publication Publication Date Title
WO2019128507A1 (en) Image processing method and apparatus, storage medium and electronic device
CN108229369B (en) Image shooting method and device, storage medium and electronic equipment
WO2019128508A1 (en) Method and apparatus for processing image, storage medium, and electronic device
WO2019137131A1 (en) Image processing method, apparatus, storage medium, and electronic device
CN108764091B (en) Living body detection method and apparatus, electronic device, and storage medium
CN108198130B (en) Image processing method, image processing device, storage medium and electronic equipment
JP6636154B2 (en) Face image processing method and apparatus, and storage medium
CN108717524B (en) Gesture recognition system based on double-camera mobile phone and artificial intelligence system
US11163978B2 (en) Method and device for face image processing, storage medium, and electronic device
CN108200334B (en) Image shooting method and device, storage medium and electronic equipment
US8866931B2 (en) Apparatus and method for image recognition of facial areas in photographic images from a digital camera
CN110210276A (en) A kind of motion track acquisition methods and its equipment, storage medium, terminal
JP7286010B2 (en) Human body attribute recognition method, device, electronic device and computer program
CN108076290B (en) Image processing method and mobile terminal
JP4597391B2 (en) Facial region detection apparatus and method, and computer-readable recording medium
CN108712603B (en) Image processing method and mobile terminal
CN114049681A (en) Monitoring method, identification method, related device and system
WO2021175071A1 (en) Image processing method and apparatus, storage medium, and electronic device
WO2020140723A1 (en) Method, apparatus and device for detecting dynamic facial expression, and storage medium
Loke et al. Indian sign language converter system using an android app
CN109190456B (en) Multi-feature fusion overlook pedestrian detection method based on aggregated channel features and gray level co-occurrence matrix
JP6157165B2 (en) Gaze detection device and imaging device
US8526673B2 (en) Apparatus, system and method for recognizing objects in images using transmitted dictionary data
CN115410240A (en) Intelligent face pockmark and color spot analysis method and device and storage medium
JP4496005B2 (en) Image processing method and image processing apparatus

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18897054

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18897054

Country of ref document: EP

Kind code of ref document: A1