CN111445417A - Image processing method, image processing apparatus, electronic device, and medium


Info

Publication number
CN111445417A
Authority
CN
China
Prior art keywords: target, intensity, face, image, initial image
Legal status
Granted
Application number
CN202010240796.3A
Other languages
Chinese (zh)
Other versions
CN111445417B (en)
Inventor
李镕镕
Current Assignee
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Application filed by Vivo Mobile Communication Co Ltd
Priority to CN202010240796.3A
Publication of CN111445417A
Application granted
Publication of CN111445417B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/77 Retouching; Inpainting; Scratch removal
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10004 Still image; Photographic image
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30196 Human being; Person
    • G06T 2207/30201 Face


Abstract

Embodiments of the present invention disclose an image processing method, an image processing apparatus, an electronic device, and a medium. The method includes the following steps: acquiring an initial image; when the initial image includes face information, generating a target mapping relation between the intensity of a target expression parameter and facial features according to the face information of a target face in the initial image, and acquiring a target intensity; and adjusting the features of the target face to the target facial features corresponding to the target intensity based on the target mapping relation and the target intensity, to obtain a target image. Embodiments of the present invention can solve the problem that processing facial expressions in an image is complex.

Description

Image processing method, image processing apparatus, electronic device, and medium
Technical Field
Embodiments of the present invention relate to the field of image processing, and in particular, to an image processing method and apparatus, an electronic device, and a medium.
Background
At present, when a user wants to obtain a face image with a natural expression, the face image often has to be modified with retouching software, and when the image contains multiple faces in particular, the operation of modifying each face is very complex and difficult.
Existing image processing methods therefore make processing the facial expressions in an image cumbersome rather than simple and convenient.
Disclosure of Invention
Embodiments of the present invention provide an image processing method, an image processing apparatus, an electronic device, and a medium, to solve the problem that processing facial expressions in an image is complex.
In a first aspect, an embodiment of the present invention provides an image processing method applied to an electronic device, including:
acquiring an initial image;
under the condition that the initial image comprises face information, generating a target mapping relation between the intensity of a target expression parameter and facial features according to the face information of a target face in the initial image, and acquiring the target intensity;
and adjusting the characteristics of the target face to the target face characteristics corresponding to the target intensity based on the target mapping relation and the target intensity to obtain a target image.
In a second aspect, an embodiment of the present invention further provides an image processing apparatus, including:
the image acquisition module is used for acquiring an initial image;
the mapping module is used for generating a target mapping relation between the intensity of the target expression parameters and the facial features according to the facial information of the target face in the initial image under the condition that the initial image comprises the facial information, and acquiring the target intensity;
and the adjusting module is used for adjusting the features of the target face to the target facial features corresponding to the target intensity based on the target mapping relation and the target intensity, to obtain a target image.
In a third aspect, an embodiment of the present invention provides an electronic device, including a processor, a memory, and a computer program stored in the memory and executable on the processor, where the computer program, when executed by the processor, implements the steps of the image processing method according to the first aspect.
In a fourth aspect, the present invention provides a computer-readable storage medium, on which a computer program is stored, which, when executed by a processor, implements the steps of the image processing method according to the first aspect.
In the embodiments of the present invention, a target mapping relation between the intensity of a target expression parameter and facial features, for example a mapping between lip curvature and the overall facial features of a smiling face, can be generated according to the face information of the target face in the initial image. A target intensity is then acquired, and the features of the target face in the initial image are adjusted to the facial features corresponding to the target intensity. In this way, the user does not need separate retouching software to modify the image; the facial expression in the image is adjusted directly according to the acquired target intensity and the generated target mapping relation. The operation is simple, and the process of processing facial expressions in an image is simplified for the user.
Drawings
The present invention will be better understood from the following description of specific embodiments thereof taken in conjunction with the accompanying drawings, in which like or similar reference characters designate like or similar features.
Fig. 1 is a schematic flowchart of an image processing method according to an embodiment of the present invention;
fig. 2 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present invention;
fig. 3 is a schematic diagram of a hardware structure of an electronic device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
As noted in the background, a user often wants a face image with a natural expression, so the face image has to be retouched. For example, in daily life most users do not have professional shooting skills and may not be at ease in front of the lens, so the expression is stiff at the time of shooting and a natural smile is hard to produce. In particular, when the image contains multiple faces, modifying every face is very troublesome and difficult to operate.
Current methods for processing facial expressions in face images mainly fall into the following two schemes: in the first scheme, a face model is constructed from the captured face image to generate an expression; in the second scheme, shooting auxiliary information such as voice or multimedia is output to prompt the photographed subject to adjust his or her current expression.
However, the first scheme adjusts the expression of a single face only and does not handle images containing multiple faces; if the single-face adjustment strategy is applied to every face in a multi-face image, every face is set to the same expression, which makes the image look strange and unnatural. As for the second scheme, outputting shooting auxiliary information is not effective for everyone.
Therefore, to solve the above technical problem, an embodiment of the present invention provides an image processing method. Referring to fig. 1, which shows a schematic flowchart of the image processing method provided by an embodiment of the present invention, the method is applied to an electronic device and includes the following steps:
S101, acquiring an initial image;
the initial image here refers to a face image to be expression-adjusted. Here acquiring the initial image may include: and receiving the trigger input of the user to the shooting control, controlling the shooting component to shoot, and taking the picture shot by the shooting component as an initial image. Or acquiring the initial image may further comprise: and receiving image selection input of a user, and calling the pre-stored face image selected by the image selection input as an initial image. Of course, the above are only two specific embodiments, and S101 may also be implemented in other manners, which is not limited in the present invention.
Further, to distinguish facial expression processing from other image processing, before S101 the method may further include: receiving a user's trigger input on an expression processing control, and entering an expression processing mode in response to the trigger input. In that case, S101 is specifically: acquiring the initial image in the expression processing mode.
Then the faces in the initial image are detected by a face detection algorithm; after faces are detected and their number is determined, the process proceeds to S102.
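As an illustration of the detection step, the following is a minimal sketch assuming OpenCV's bundled Haar cascade; the patent does not prescribe any particular face detection algorithm, so the library choice, function name, and parameters here are assumptions.

```python
# Minimal face-detection sketch (assumption: OpenCV's bundled Haar cascade;
# the patent does not name a specific detection algorithm).
import cv2

def detect_faces(initial_image):
    """Return bounding boxes (x, y, w, h), one per face found in the image."""
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    gray = cv2.cvtColor(initial_image, cv2.COLOR_BGR2GRAY)
    # Each returned box is one candidate target face; the number of boxes
    # decides between the single-face and multi-face branches described below.
    return cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
```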
S102, under the condition that the initial image comprises face information, generating a target mapping relation between the intensity of the target expression parameters and facial features according to the facial information of the target face in the initial image;
in this case, the expression processing operation may be performed only when the initial image includes the image of the face.
The target expression parameter here is an expression parameter that affects the facial expression and may include, for example, at least one of the following: lip curvature, the ratio of the length to the width of the eyes, and eyebrow spacing. Lip curvature affects the smile expression: the larger the lip curvature, the larger the smile intensity value. The ratio of the length to the width of the eyes affects the panic expression: the smaller the ratio, the larger the panic intensity value. Eyebrow spacing affects the angry expression: the smaller the spacing, the larger the anger intensity value. The intensity of the target expression parameter can be set to the range 1-10; of course, different intensity ranges may also be set for different expression parameters.
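To make the parameter-to-intensity relation concrete, here is a minimal sketch; the linear mapping, the parameter ranges, and the function names are illustrative assumptions, not values given by the patent.

```python
# Illustrative mapping from raw expression parameters to a 1-10 intensity.
# The linear form and the parameter ranges below are assumptions.
INTENSITY_MIN, INTENSITY_MAX = 1, 10

def smile_intensity(lip_curvature_deg, max_curvature_deg=20.0):
    """Larger lip curvature -> larger smile intensity value."""
    frac = min(max(lip_curvature_deg / max_curvature_deg, 0.0), 1.0)
    return INTENSITY_MIN + frac * (INTENSITY_MAX - INTENSITY_MIN)

def anger_intensity(eyebrow_spacing_px, max_spacing_px=60.0):
    """Smaller eyebrow spacing -> larger anger intensity value."""
    frac = 1.0 - min(max(eyebrow_spacing_px / max_spacing_px, 0.0), 1.0)
    return INTENSITY_MIN + frac * (INTENSITY_MAX - INTENSITY_MIN)
```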
Optionally, different expression modes may be set for different target expression parameters, for example, "smiling face mode", "panic mode", "anger mode", and the like, and in a case where a selection input of the target expression mode by the user is received, the target expression parameter is determined according to the target expression mode.
In addition, because the muscles of the whole face change under different expressions, adjusting only the target expression parameter of the face is not enough to keep the face natural; the features of the whole face need to be adjusted. Therefore, the facial features of the whole face under different intensities (for example, different lip curvatures) need to be determined, and the subsequent expression adjustment is performed according to these facial features. The facial features here include a plurality of facial feature points.
S103, acquiring target intensity; fig. 1 is only an example, and there is no restriction on the order between S102 and S103, and both may be executed in parallel.
Each target intensity corresponds to one target face; when a plurality of target faces are included, a plurality of target intensities need to be acquired, one per target face. On this basis, in this embodiment the expression of each face in a multi-face image can be adjusted individually, which avoids adjusting multiple faces to the same expression and preserves the naturalness of the adjustment.
And S104, adjusting the characteristics of the target face to the target face characteristics corresponding to the target intensity based on the target mapping relation and the target intensity to obtain a target image.
In the embodiments of the present invention, a target mapping relation between the intensity of a target expression parameter and facial features, for example a mapping between lip curvature and the overall facial features of a smiling face, can be generated according to the face information of the target face in the initial image. A target intensity is then acquired, and the features of the target face in the initial image are adjusted to the facial features corresponding to the target intensity. In this way, the user does not need separate retouching software to modify the image; the facial expression in the image is adjusted directly according to the acquired target intensity and the generated target mapping relation. The operation is simple, and the process of processing facial expressions in an image is simplified for the user.
In some embodiments of the present invention, the initial image may be a single face image or multiple face images.
In an embodiment of the present invention, the S102 may include:
constructing a face model according to the target face; such as a 2D or 3D face model.
And generating a target mapping relation by adjusting the intensity of the target expression parameters on the face model. For example, the target mapping relationship may include lip curvature values and their corresponding facial features arranged in order from small to large.
In this embodiment, the purpose of constructing the face model is to determine the facial features of the face when the target expression parameter is at different intensities, so that the target mapping relation between intensities and facial features can be constructed accurately. When constructing the target mapping relation, the intensities can be set according to a preset range, for example a lip curvature of 0-20 degrees with the intensities spaced 5 degrees apart, which enlarges the range from which the user can choose an adjustment.
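A minimal sketch of how S102 might build the mapping follows; the `face_model` interface (`set_param`, `feature_points`) is a hypothetical stand-in for whatever 2D/3D face model the implementation actually uses.

```python
# Sketch of generating the target mapping relation on a face model.
# The face_model interface (set_param, feature_points) is hypothetical.
def build_target_mapping(face_model, param="lip_curvature",
                         intensities=range(1, 11)):
    """Return {intensity: whole-face feature points}, ordered small to large."""
    mapping = {}
    for intensity in intensities:
        # Changing one expression parameter deforms the whole model, so the
        # stored features cover the entire face, not just the lips.
        face_model.set_param(param, intensity)
        mapping[intensity] = face_model.feature_points()
    return mapping
```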
In an embodiment of the present invention, in a case that the initial image is a multi-face image, the step S102 may include:
constructing a face model according to the target face;
and adjusting the intensity of the target expression parameters on the face model corresponding to the target face to the target intensity corresponding to the target face to obtain a target mapping relation between the target intensity and the target facial features.
In this example, a face model is constructed for each target face, a corresponding target intensity is acquired for each of the different target faces, and each target face is then adjusted according to the facial features corresponding to its own target intensity, so that the adjusted expressions differ from face to face and the resulting target image looks natural as a whole.
In other embodiments of the present invention, in the case that the initial image is a single face image, the step S103 may include:
taking a preset intensity as the target intensity, where the target mapping relation includes the preset intensity.
That is, in this embodiment the facial expression in the photo is modified according to a preset intensity value, so the electronic device can adjust the expression automatically, improving convenience for the user. For example, a preset default intensity of 5 may be set, and corresponding preset default intensities may be set separately for different target expression parameters.
In another implementation manner of the present invention, in the case that the initial image is a single face image, the S103 may include:
receiving a first input of a first intensity in the target mapping relationship from a user;
in response to the first input, the first intensity is taken as the target intensity.
In this embodiment, which intensity of the target expression parameter, and hence which facial features, to use for the expression adjustment is determined by the user's input, that is, selected manually by the user. This improves the user's autonomy and enlarges the adjustable range of the expression adjustment.
To make selection convenient, the user needs to be able to see the selectable intensity range and the facial features corresponding to each intensity. Optionally, the target mapping relation may be displayed as in the above embodiment, or this may be implemented in the following manner.
Optionally, the receiving a first input of a first intensity in the target mapping relationship by the user may include:
receiving a user's first input on a sliding control on a preset sliding shaft, where the preset sliding shaft and the sliding control are displayed on the display interface;
and taking the intensity corresponding to the position of the sliding control on the preset sliding shaft as the target intensity.
In this embodiment, a visible preset sliding shaft is displayed on the display interface, and the user can slide the sliding control along it. Take the preset sliding shaft as a slide bar and the sliding control as a slider, for example. Different positions on the slide bar are associated with different intensity values, and the user's movement of the slider is the first input. The facial features corresponding to the intensity associated with the current slider position can be displayed on the display interface, i.e., the expression of the target face changes as the user moves the slider, which makes it easy for the user to choose suitable facial features. For example, sliding the slider up increases the lip curvature and strengthens the smile; sliding it down decreases the lip curvature and weakens the smile.
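The slider interaction can be sketched as below; the linear position-to-intensity map and the `display.show` call are assumptions for illustration.

```python
# Sketch of mapping the slider position on the preset sliding shaft to an
# intensity and previewing the matching facial features. Names are hypothetical.
def slider_to_intensity(slider_pos, shaft_length, intensity_range=(1, 10)):
    """Linear map: bottom of the shaft -> weakest expression, top -> strongest."""
    lo, hi = intensity_range
    frac = min(max(slider_pos / shaft_length, 0.0), 1.0)
    return round(lo + frac * (hi - lo))

def on_slider_moved(slider_pos, shaft_length, target_mapping, display):
    # The preview tracks the slider, so the target face's expression changes
    # while the user drags, before the first input is confirmed.
    intensity = slider_to_intensity(slider_pos, shaft_length)
    display.show(target_mapping[intensity])  # hypothetical display API
```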
In some embodiments of the present invention, in the case that the initial image is a multi-face image, the step S103 may include:
calculating the reference intensity according to the intensity of the target expression parameters on the reference face in the initial image;
acquiring a normal distribution random number according to the reference intensity and the number of the target faces, and taking the normal distribution random number as the target intensity; each target face corresponds to a target intensity, and the difference value between the normally distributed random number and the reference intensity is smaller than a preset threshold value.
In this embodiment, normally distributed random numbers are generated with the intensity of the target expression parameter on the reference face as the mean, and these random numbers are used as the target intensities. This ensures that the target intensities differ from one another and from the reference intensity while staying close to it, so the faces in the final target image do not all end up with the same expression; the naturalness and overall harmony of the target image are improved while the expression of every face is improved. Specifically, each normally distributed random number may be randomly assigned to one target face.
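A minimal sketch of this step, assuming the standard-library Gaussian generator; the sigma value and the re-draw loop that enforces the preset threshold are illustrative choices.

```python
# Sketch: one normally distributed target intensity per target face, with the
# reference intensity as the mean. sigma and threshold are assumed values.
import random

def draw_target_intensities(reference, num_faces, sigma=0.5, threshold=2.0):
    """Each value stays within `threshold` of the reference, so every face
    gets a slightly different expression close to the reference one."""
    intensities = []
    for _ in range(num_faces):
        value = random.gauss(reference, sigma)
        while abs(value - reference) >= threshold:  # enforce the preset bound
            value = random.gauss(reference, sigma)
        intensities.append(value)
    # One intensity per target face; the assignment to faces may be random.
    return intensities
```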
In still other embodiments of the present invention, the calculating the reference intensity according to the intensity of the target expression parameter on the reference face in the initial image may include:
receiving a second input of the first face in the initial image from the user; namely, the first face selected by the user is the reference face;
and responding to the second input, acquiring a first intensity of the target expression parameter in the first face, and taking the first intensity as a reference intensity.
In this embodiment, the user selects the first face himself, and the first intensity of the target expression parameter of that face is used as the reference intensity. This ensures that the selected reference intensity meets the user's requirements, so the result of the expression adjustment better matches the user's preference.
In other embodiments of the present invention, in a case that the initial image is a multi-face image, the calculating the reference intensity according to the intensity of the target expression parameter on the reference face in the initial image may include:
determining an intensity interval in which the intensity of the target expression parameter on each face in the initial image is located; namely, all the faces in the initial image are used as reference faces;
setting the intensity interval containing the largest number of intensities as the target intensity interval;
and taking the most frequent intensity in the target intensity interval as the reference intensity.
In this embodiment, the intensity interval containing the intensity of the target expression parameter of each face is determined first, and the number of intensities falling into each interval is counted. The interval containing the most intensities indicates that its corresponding expression, for example a smile, matches the expression most people want when photographed, so that interval is selected as the target intensity interval; the most frequent intensity within the target intensity interval is then selected as the reference intensity, ensuring that the chosen reference intensity matches the expression most people want when photographed. For ease of understanding, suppose there are 5 faces and the preset intensity intervals are a first interval 1-5 and a second interval 6-10. The intensities of the target expression parameter of two faces fall into the first interval and those of the remaining 3 faces fall into the second interval, so the first interval contains 2 intensities and the second contains 3; the target intensity interval is therefore the second interval. The 3 intensities in the second interval are 7, 7, and 9; 7 occurs most often, so the reference intensity is 7.
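The worked example translates directly into a short sketch; the interval bounds are the ones from the example and are otherwise an assumption.

```python
# Sketch of picking the reference intensity in a multi-face image, following
# the 5-face example above. The interval bounds are illustrative.
from collections import Counter

def reference_intensity(face_intensities, intervals=((1, 5), (6, 10))):
    grouped = {iv: [x for x in face_intensities if iv[0] <= x <= iv[1]]
               for iv in intervals}
    # Target interval: the interval containing the most intensities.
    target_iv = max(grouped, key=lambda iv: len(grouped[iv]))
    # Reference: the most frequent intensity inside the target interval.
    return Counter(grouped[target_iv]).most_common(1)[0][0]

# With the example's five faces: reference_intensity([3, 4, 7, 7, 9]) -> 7
```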
Of course, in other embodiments, the intensity average of the target expression parameters of each face may also be used as the reference intensity, which is not limited in the present invention.
In further embodiments of the present invention, before S102, the method may further include:
and determining a target face.
In some specific implementations, the determining the target face may include:
receiving a third input of a second face in the initial image by the user;
and responding to the third input, and taking the second face as the target face.
In this embodiment, the target face is selected by the user, which improves the user's autonomy; because the user picks which faces need expression adjustment, the final target image better meets the user's requirements. One or more second faces may be selected here.
In other specific implementations, in the case that the initial image is a multi-face image, the determining the target face may include:
taking each face in the initial image whose target expression parameter intensity lies outside the target intensity interval as a target face.
In this embodiment, by default every face whose target expression parameter intensity lies outside the target intensity interval is used as a target face. This mode needs no manual selection by the user, is more convenient, and makes the target image more harmonious as a whole.
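This default selection rule fits in one function; representing each face by a precomputed intensity keyed by a face identifier is an assumption of the sketch.

```python
# Sketch of the default target-face selection: every face whose target
# expression parameter intensity lies outside the target intensity interval.
def select_target_faces(face_intensities, target_interval):
    """face_intensities: {face_id: intensity} (hypothetical representation)."""
    lo, hi = target_interval
    return [face_id for face_id, intensity in face_intensities.items()
            if not (lo <= intensity <= hi)]
```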
Optionally, after obtaining the target image, the method may further include:
receiving a user's save input on the target image, and saving the target image in response, for example to an album. In other embodiments, the target image may be saved directly without receiving a save input from the user.
Optionally, after obtaining the target image and before saving the target image, the method may further include:
and displaying the target image on a preview display interface.
In this embodiment, the target image is previewed before it is saved, so the user can judge in advance whether the current target image meets his or her requirements and, if not, return to continue the image processing operation. This prevents the user from saving too many images whose expression adjustment failed, reduces the storage space such failed images would occupy, and makes expression adjustment more convenient.
Based on the image processing method provided by the above embodiments, an embodiment of the present invention correspondingly further provides an image processing apparatus. Referring to fig. 2, which shows a schematic structural diagram of an image processing apparatus provided by an embodiment of the present invention, the image processing apparatus includes:
an image obtaining module 201, configured to obtain an initial image;
the mapping module 202 is configured to, under the condition that the initial image includes face information, generate a target mapping relationship between the intensity of the target expression parameter and the facial features according to the facial information of the target face in the initial image, and acquire target intensity;
and the adjusting module 203 is configured to adjust the feature of the target face to a target face feature corresponding to the target intensity based on the target mapping relationship and the target intensity, so as to obtain a target image.
In the embodiments of the present invention, a target mapping relation between the intensity of a target expression parameter and facial features, for example a mapping between lip curvature and the overall facial features of a smiling face, can be generated according to the face information of the target face in the initial image. A target intensity is then acquired, and the features of the target face in the initial image are adjusted to the facial features corresponding to the target intensity. In this way, the user does not need separate retouching software to modify the image; the facial expression in the image is adjusted directly according to the acquired target intensity and the generated target mapping relation. The operation is simple, and the process of processing facial expressions in an image is simplified for the user.
Optionally, the image obtaining module 201 may be specifically configured to: receive a user's trigger input on the shooting control, control the shooting component to shoot, and use the picture captured by the shooting component as the initial image. Alternatively, the image obtaining module 201 may be specifically configured to: receive a user's image selection input, and retrieve the pre-stored face image selected by that input as the initial image.
In addition, to distinguish facial expression processing from other image processing, the image obtaining module 201 may further be configured to: receive a user's trigger input on the expression processing control, enter an expression processing mode in response to the trigger input, and acquire the initial image in the expression processing mode.
In some embodiments of the present invention, the initial image may be a single face image or multiple face images.
In a specific embodiment of the present invention, the mapping module 202 may include:
the mapping generation unit is used for constructing a face model according to the target face; and generating a target mapping relation by adjusting the intensity of the target expression parameters on the face model.
In this embodiment, the purpose of constructing the face model is to determine the facial features of the face when the target expression parameter is at different intensities, so that the target mapping relation between intensities and facial features can be constructed accurately. When constructing the target mapping relation, the intensities can be set according to a preset range, which enlarges the range from which the user can choose an adjustment.
In an embodiment of the present invention, in a case that the initial image is a multi-face image, the mapping generating unit may be configured to:
constructing a face model according to the target face; and adjusting the intensity of the target expression parameters on the face model corresponding to the target face to the target intensity corresponding to the target face to obtain a target mapping relation between the target intensity and the target facial features.
In this example, a face model is constructed for each target face, a corresponding target intensity is acquired for each of the different target faces, and each target face is then adjusted according to the facial features corresponding to its own target intensity, so that the adjusted expressions differ from face to face and the resulting target image looks natural as a whole.
In other embodiments of the present invention, in the case that the initial image is a single face image, the mapping module 202 may include:
an intensity acquiring unit configured to take a preset intensity as the target intensity, where the target mapping relation includes the preset intensity.
That is, in this embodiment the facial expression in the photo is modified according to a preset intensity value, so the electronic device can adjust the expression automatically, improving convenience for the user. Corresponding preset default intensities may be set separately for different target expression parameters.
In another implementation manner of the present invention, in the case that the initial image is a single face image, the intensity acquiring unit may be configured to:
receiving a first input of a first intensity in the target mapping relationship from a user; in response to the first input, the first intensity is taken as the target intensity.
In this embodiment, which intensity of the target expression parameter, and hence which facial features, to use for the expression adjustment is determined by the user's input, that is, selected manually by the user. This improves the user's autonomy and enlarges the adjustable range of the expression adjustment.
To make selection convenient, the user needs to be able to see the selectable intensity range and the facial features corresponding to each intensity. Optionally, the target mapping relation may be displayed as in the above embodiment, or this may be implemented in the following manner.
Optionally, the intensity obtaining unit may be specifically configured to:
receiving a user's first input on a sliding control on a preset sliding shaft, where the preset sliding shaft and the sliding control are displayed on the display interface; and taking the intensity corresponding to the position of the sliding control on the preset sliding shaft as the target intensity.
In this embodiment, a visible preset sliding shaft is displayed on the display interface, and the user can slide the sliding control along it. Take the preset sliding shaft as a slide bar and the sliding control as a slider, for example. Different positions on the slide bar are associated with different intensity values, and the user's movement of the slider is the first input. The facial features corresponding to the intensity associated with the current slider position can be displayed on the display interface, i.e., the expression of the target face changes as the user moves the slider, which makes it easy for the user to choose suitable facial features.
In some embodiments of the present invention, in a case that the initial image is a multi-face image, the intensity obtaining unit may specifically include:
a reference intensity calculating unit for calculating a reference intensity according to the intensity of the target expression parameter on the reference face in the initial image;
the intensity determining unit is used for acquiring a normal distribution random number according to the reference intensity and the number of the target faces, and taking the normal distribution random number as the target intensity; each target face corresponds to a target intensity, and the difference value between the normally distributed random number and the reference intensity is smaller than a preset threshold value.
In this embodiment, normally distributed random numbers are generated with the intensity of the target expression parameter on the reference face as the mean, and these random numbers are used as the target intensities. This ensures that the target intensities differ from one another and from the reference intensity while staying close to it, so the faces in the final target image do not all end up with the same expression; the naturalness and overall harmony of the target image are improved while the expression of every face is improved. Specifically, each normally distributed random number may be randomly assigned to one target face.
In still other embodiments of the present invention, the reference intensity calculating unit may be configured to:
receiving a second input of the first face in the initial image from the user; and responding to the second input, acquiring a first intensity of the target expression parameter in the first face, and taking the first intensity as a reference intensity.
In this embodiment, the user selects the first face himself, and the first intensity of the target expression parameter of that face is used as the reference intensity. This ensures that the selected reference intensity meets the user's requirements, so the result of the expression adjustment better matches the user's preference.
In other embodiments of the present invention, in a case that the initial image is a multi-face image, the reference intensity calculating unit may be configured to:
determining the intensity interval in which the intensity of the target expression parameter on each face in the initial image is located, i.e., taking all the faces in the initial image as reference faces; setting the intensity interval containing the largest number of intensities as the target intensity interval; and taking the most frequent intensity in the target intensity interval as the reference intensity.
In this embodiment, the intensity interval containing the intensity of the target expression parameter of each face is determined first, and the number of intensities falling into each interval is counted. The interval containing the most intensities indicates that its corresponding expression, for example a smile, matches the expression most people want when photographed, so that interval is selected as the target intensity interval; the most frequent intensity within the target intensity interval is then selected as the reference intensity, ensuring that the chosen reference intensity matches the expression most people want when photographed.
Of course, in other embodiments, the intensity average of the target expression parameters of each face may also be used as the reference intensity, which is not limited in the present invention.
In still other embodiments of the present invention, the apparatus may further comprise:
and the target face determining module is used for determining a target face.
In some specific implementations, the target face determination module may be configured to:
receiving a third input of a second face in the initial image by the user; and responding to the third input, and taking the second face as the target face.
In this embodiment, the target face is selected by the user, which improves the user's autonomy; because the user picks which faces need expression adjustment, the final target image better meets the user's requirements. One or more second faces may be selected here.
In some other specific implementations, in the case that the initial image is a multi-face image, the target face determining module may be configured to:
taking each face in the initial image whose target expression parameter intensity lies outside the target intensity interval as a target face.
In this embodiment, by default every face whose target expression parameter intensity lies outside the target intensity interval is used as a target face. This mode needs no manual selection by the user, is more convenient, and makes the target image more harmonious as a whole.
Optionally, the apparatus may further include:
the saving module is used for receiving a user's save input on the target image and saving the target image in response. In other embodiments, the target image may be saved directly without receiving a save input from the user.
Optionally, the apparatus may further include:
and the preview module is used for displaying the target image on a preview display interface.
In this embodiment, the target image is previewed before it is saved, so the user can judge in advance whether the current target image meets his or her requirements and, if not, return to continue the image processing operation. This prevents the user from saving too many images whose expression adjustment failed, reduces the storage space such failed images would occupy, and makes expression adjustment more convenient.
The image processing apparatus provided in the embodiment of the present invention can implement each method step implemented in the method embodiment of fig. 1, and is not described herein again to avoid repetition.
Fig. 3 is a schematic diagram illustrating a hardware structure of an electronic device according to an embodiment of the present invention.
The electronic device 300 includes, but is not limited to: radio frequency unit 301, network module 302, audio output unit 303, input unit 304, sensor 305, display unit 306, user input unit 307, interface unit 308, memory 309, processor 310, and power supply 311. Those skilled in the art will appreciate that the electronic device configuration shown in fig. 3 does not constitute a limitation of the electronic device, and that the electronic device may include more or fewer components than shown, or combine certain components, or a different arrangement of components. In the embodiment of the present invention, the electronic device includes, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted terminal, a wearable device, a pedometer, and the like.
The processor 310 is configured to acquire an initial image; under the condition that the initial image comprises face information, generating a target mapping relation between the intensity of the target expression parameters and facial features according to the facial information of the target face in the initial image, and acquiring the target intensity; and adjusting the characteristics of the target face to the target face characteristics corresponding to the target intensity based on the target mapping relation and the target intensity to obtain a target image.
In the embodiments of the present invention, a target mapping relation between the intensity of a target expression parameter and facial features, for example a mapping between lip curvature and the overall facial features of a smiling face, can be generated according to the face information of the target face in the initial image. A target intensity is then acquired, and the features of the target face in the initial image are adjusted to the facial features corresponding to the target intensity. In this way, the user does not need separate retouching software to modify the image; the facial expression in the image is adjusted directly according to the acquired target intensity and the generated target mapping relation. The operation is simple, and the process of processing facial expressions in an image is simplified for the user.
It should be understood that, in the embodiment of the present invention, the radio frequency unit 301 may be used to receive and send signals during messaging or a call; specifically, it receives downlink data from a base station and passes it to the processor 310 for processing, and it transmits uplink data to the base station. In general, the radio frequency unit 301 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 301 can also communicate with a network and other devices through a wireless communication system.
The electronic device provides wireless broadband internet access to the user via the network module 302, such as assisting the user in sending and receiving e-mails, browsing web pages, and accessing streaming media.
The audio output unit 303 may convert audio data received by the radio frequency unit 301 or the network module 302 or stored in the memory 309 into an audio signal and output as sound. Also, the audio output unit 303 may also provide audio output related to a specific function performed by the electronic apparatus 300 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output unit 303 includes a speaker, a buzzer, a receiver, and the like.
The input unit 304 is used to receive audio or video signals. The input unit 304 may include a Graphics Processing Unit (GPU) 3041 and a microphone 3042. The graphics processor 3041 processes image data of still pictures or video obtained by an image capturing apparatus (e.g., a camera) in a video capturing mode or an image capturing mode. The processed image frames may be displayed on the display unit 306, stored in the memory 309 (or another storage medium), or transmitted via the radio frequency unit 301 or the network module 302. The microphone 3042 can receive sound and process it into audio data. In the phone call mode, the processed audio data may be converted into a format that can be transmitted to a mobile communication base station via the radio frequency unit 301.
The electronic device 300 also includes at least one sensor 305, such as a light sensor, motion sensor, and other sensors. Specifically, the light sensor includes an ambient light sensor that adjusts the brightness of the display panel 3061 according to the brightness of ambient light, and a proximity sensor that turns off the display panel 3061 and/or the backlight when the electronic device 300 is moved to the ear. As one of the motion sensors, the accelerometer sensor can detect the magnitude of acceleration in various directions (generally three axes), detect the magnitude and direction of gravity when stationary, and can be used to determine the posture of the electronic device (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), determine the related functions of vibration (such as pedometer, tapping), and the like; the sensors 305 may also include fingerprint sensors, pressure sensors, iris sensors, molecular sensors, gyroscopes, barometers, hygrometers, thermometers, infrared sensors, etc., which are not described in detail herein.
The display unit 306 may include a display panel 3061, and the display panel 3061 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like.
The user input unit 307 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the electronic device. Specifically, the user input unit 307 includes a touch panel 3071 and other input devices 3072. The touch panel 3071, also referred to as a touch screen, may collect touch operations by a user on or near the touch panel 3071 (e.g., operations by a user on or near the touch panel 3071 using a finger, a stylus, or any suitable object or attachment). The touch panel 3071 may include two parts of a touch detection device and a touch controller. The touch detection device detects the touch direction of a user, detects a signal brought by touch operation and transmits the signal to the touch controller; the touch controller receives touch information from the touch sensing device, converts the touch information into touch point coordinates, sends the touch point coordinates to the processor 310, and receives and executes commands sent by the processor 310. In addition, the touch panel 3071 may be implemented using various types, such as resistive, capacitive, infrared, and surface acoustic wave. The user input unit 307 may include other input devices 3072 in addition to the touch panel 3071. Specifically, the other input devices 3072 may include, but are not limited to, a physical keyboard, function keys (such as volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described herein.
Further, the touch panel 3071 may be overlaid on the display panel 3061, and when the touch panel 3071 detects a touch operation on or near the touch panel, the touch operation is transmitted to the processor 310 to determine the type of the touch event, and then the processor 310 provides a corresponding visual output on the display panel 3061 according to the type of the touch event. Although the touch panel 3071 and the display panel 3061 are shown in fig. 3 as two separate components to implement the input and output functions of the electronic device, in some embodiments, the touch panel 3071 and the display panel 3061 may be integrated to implement the input and output functions of the electronic device, which is not limited herein.
The interface unit 308 is an interface for connecting an external device to the electronic apparatus 300. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having a determination module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 308 may be used to receive input (e.g., data information, power, etc.) from an external device and transmit the received input to one or more elements within the electronic apparatus 300 or may be used to transmit data between the electronic apparatus 300 and the external device.
The memory 309 may be used to store software programs as well as various data. The memory 309 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data (such as audio data, a phonebook, etc.) created according to the use of the cellular phone, and the like. Further, the memory 309 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device.
The processor 310 is a control center of the electronic device, connects various parts of the whole electronic device by using various interfaces and lines, performs various functions of the electronic device and processes data by operating or executing software programs and/or modules stored in the memory 309 and calling data stored in the memory 309, thereby performing overall monitoring of the electronic device. Processor 310 may include one or more processing units; preferably, the processor 310 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 310.
The electronic device 300 may further include a power supply 311 (such as a battery) for supplying power to various components, and preferably, the power supply 311 may be logically connected to the processor 310 through a power management system, so as to implement functions of managing charging, discharging, and power consumption through the power management system.
In addition, the electronic device 300 includes some functional modules that are not shown, and are not described in detail herein.
Preferably, an embodiment of the present invention further provides an electronic device, which includes a processor 310, a memory 309, and a computer program stored in the memory 309 and capable of running on the processor 310, where the computer program is executed by the processor 310 to implement each process of the above-mentioned embodiment of the image processing method, and can achieve the same technical effect, and in order to avoid repetition, details are not described here again.
The embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the computer program implements each process of the embodiment of the image processing method, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here. The computer-readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
While the present invention has been described with reference to the embodiments shown in the drawings, the present invention is not limited to the embodiments, which are illustrative and not restrictive, and it will be apparent to those skilled in the art that various changes and modifications can be made therein without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (10)

1. An image processing method applied to an electronic device, comprising:
acquiring an initial image;
under the condition that the initial image comprises face information, generating a target mapping relation between the intensity of a target expression parameter and facial features according to the face information of a target face in the initial image, and acquiring the target intensity;
and adjusting the characteristics of the target face to the target face characteristics corresponding to the target intensity based on the target mapping relation and the target intensity to obtain a target image.
2. The method of claim 1, wherein the generating a target mapping relationship between the intensity of the target expression parameter and the facial features according to the face information of the target face in the initial image comprises:
constructing a face model according to the target face;
and generating the target mapping relationship by adjusting the intensity of the target expression parameter on the face model.
3. The method of claim 1, wherein, in a case that the initial image is a single-face image, the obtaining the target intensity comprises:
taking a preset intensity as the target intensity, wherein the target mapping relationship comprises the preset intensity;
or,
receiving a first input of a first intensity in the target mapping relationship from a user;
in response to the first input, taking the first intensity as the target intensity;
and wherein, in a case that the initial image is a multi-face image, the obtaining the target intensity comprises:
calculating a reference intensity according to the intensity of the target expression parameter on a reference face in the initial image;
obtaining a normally distributed random number according to the reference intensity and the number of target faces, and taking the normally distributed random number as the target intensity, wherein each target face corresponds to one target intensity, and a difference between the normally distributed random number and the reference intensity is smaller than a preset threshold value.
4. The method according to claim 3, wherein, in a case that the initial image is a multi-face image, the generating a target mapping relationship between the intensity of the target expression parameter and the facial features according to the face information of the target face in the initial image comprises:
constructing a face model according to the target face;
and adjusting the intensity of the target expression parameter on the face model corresponding to the target face to the target intensity corresponding to the target face, to obtain a target mapping relationship between the target intensity and target facial features.
5. The method according to claim 3, wherein calculating a reference intensity according to the intensity of the target expression parameter on the reference face in the initial image comprises:
receiving a second input of a first face in the initial image from a user;
in response to the second input, acquiring a first intensity of the target expression parameter in the first face, and taking the first intensity as the reference intensity;
or,
determining, for each face in the initial image, an intensity interval within which the intensity of the target expression parameter falls;
taking the intensity interval with the highest intensity as a target intensity interval;
and taking the most frequently occurring intensity in the target intensity interval as the reference intensity.
6. The method of claim 5, wherein before generating the target mapping relationship between the intensity of the target expression parameter and the facial features according to the face information of the target face in the initial image, the method further comprises:
receiving a third input of a second face in the initial image from a user;
in response to the third input, taking the second face as the target face;
or,
and taking, as the target face, a face that is contained in the initial image and on which the intensity of the target expression parameter lies outside the target intensity interval.
7. The method of claim 1, wherein the target expression parameter comprises at least one of: lip curvature, ratio of length to width of the eyes, and eyebrow spacing.
8. An image processing apparatus, comprising:
the image acquisition module is used for acquiring an initial image;
the mapping module is used for generating, in a case that the initial image comprises face information, a target mapping relationship between the intensity of a target expression parameter and facial features according to the face information of a target face in the initial image, and acquiring a target intensity;
and the adjusting module is used for adjusting features of the target face to target facial features corresponding to the target intensity based on the target mapping relationship and the target intensity, to obtain a target image.
9. An electronic device, comprising a processor, a memory and a computer program stored on the memory and executable on the processor, the computer program, when executed by the processor, implementing the steps of the image processing method according to any one of claims 1 to 7.
10. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the image processing method according to any one of claims 1 to 7.
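
The intensity-selection logic recited in claims 3 and 5 lends itself to a compact illustration. The Python sketch below is purely illustrative and is not part of the patent disclosure: it assumes per-face expression intensities have already been measured (the claims leave face detection, face-model construction, and feature adjustment abstract), and the function names, the interval width bin_width, the spread sigma, and the threshold value are all hypothetical choices, since the claims specify none of them.

    # Illustrative sketch only (not from the patent): selecting per-face target
    # intensities as described in claims 3 and 5. Assumes numpy; all names and
    # numeric defaults (bin_width, sigma, threshold) are hypothetical choices.
    import numpy as np

    def reference_intensity(intensities, bin_width=0.1):
        # Claim 5, second alternative: bucket the per-face intensities into
        # intervals, take the interval with the highest intensity as the
        # target interval, and return the most frequent intensity inside it.
        intensities = np.asarray(intensities, dtype=float)
        bins = np.floor(intensities / bin_width).astype(int)
        in_target = intensities[bins == bins.max()]
        values, counts = np.unique(in_target, return_counts=True)
        return values[np.argmax(counts)]

    def target_intensities(ref, n_faces, threshold=0.05, sigma=0.02, rng=None):
        # Claim 3, multi-face branch: draw one normally distributed random
        # intensity per target face, centred on the reference intensity;
        # clipping keeps each draw within the preset threshold of the reference.
        rng = rng or np.random.default_rng()
        draws = rng.normal(loc=ref, scale=sigma, size=n_faces)
        return np.clip(draws, ref - threshold, ref + threshold)

    # Example: five detected faces with measured smile intensities in [0, 1].
    measured = [0.62, 0.65, 0.65, 0.18, 0.60]
    ref = reference_intensity(measured)   # 0.65: most frequent value in the top interval
    print(target_intensities(ref, n_faces=len(measured)))

Under these assumptions, the sketch mirrors the claimed behaviour: a single reference intensity is derived from the crowd, yet each face receives its own normally distributed target intensity near that reference, so the adjusted expressions vary naturally instead of being identical copies.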
CN202010240796.3A 2020-03-31 2020-03-31 Image processing method, device, electronic equipment and medium Active CN111445417B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010240796.3A CN111445417B (en) 2020-03-31 2020-03-31 Image processing method, device, electronic equipment and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010240796.3A CN111445417B (en) 2020-03-31 2020-03-31 Image processing method, device, electronic equipment and medium

Publications (2)

Publication Number Publication Date
CN111445417A true CN111445417A (en) 2020-07-24
CN111445417B CN111445417B (en) 2023-12-19

Family

ID=71652610

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010240796.3A Active CN111445417B (en) 2020-03-31 2020-03-31 Image processing method, device, electronic equipment and medium

Country Status (1)

Country Link
CN (1) CN111445417B (en)

Patent Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101021943A (en) * 2007-04-06 2007-08-22 北京中星微电子有限公司 Image regulating method and system
CN106023067A (en) * 2016-05-17 2016-10-12 珠海市魅族科技有限公司 Image processing method and device
CN106056533A (en) * 2016-05-26 2016-10-26 维沃移动通信有限公司 Photographing method and terminal
US20180144185A1 (en) * 2016-11-21 2018-05-24 Samsung Electronics Co., Ltd. Method and apparatus to perform facial expression recognition and training
CN107833177A (en) * 2017-10-31 2018-03-23 维沃移动通信有限公司 A kind of image processing method and mobile terminal
CN110096925A (en) * 2018-01-30 2019-08-06 普天信息技术有限公司 Enhancement Method, acquisition methods and the device of Facial Expression Image
CN108256505A (en) * 2018-02-12 2018-07-06 腾讯科技(深圳)有限公司 Image processing method and device
CN108765264A (en) * 2018-05-21 2018-11-06 深圳市梦网科技发展有限公司 Image U.S. face method, apparatus, equipment and storage medium
CN108985241A (en) * 2018-07-23 2018-12-11 腾讯科技(深圳)有限公司 Image processing method, device, computer equipment and storage medium
US20200090392A1 (en) * 2018-09-19 2020-03-19 XRSpace CO., LTD. Method of Facial Expression Generation with Data Fusion
CN109961496A (en) * 2019-02-22 2019-07-02 厦门美图之家科技有限公司 Expression driving method and expression driving device
CN110177205A (en) * 2019-05-20 2019-08-27 深圳壹账通智能科技有限公司 Terminal device, photographic method and computer readable storage medium based on micro- expression
CN110225196A (en) * 2019-05-30 2019-09-10 维沃移动通信有限公司 Terminal control method and terminal device
CN110264544A (en) * 2019-05-30 2019-09-20 腾讯科技(深圳)有限公司 Image processing method and device, storage medium and electronic device
CN110298329A (en) * 2019-07-03 2019-10-01 北京字节跳动网络技术有限公司 Expression degree prediction model acquisition methods and device, storage medium and terminal

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
余重基 (Yu Chongji) et al.: "Artificial expression synthesis algorithm based on expression decomposition and warping deformation" *

Also Published As

Publication number Publication date
CN111445417B (en) 2023-12-19

Similar Documents

Publication Publication Date Title
CN109361865B (en) Shooting method and terminal
CN107817939B (en) Image processing method and mobile terminal
CN109600550B (en) Shooting prompting method and terminal equipment
CN110365907B (en) Photographing method and device and electronic equipment
CN108989672B (en) Shooting method and mobile terminal
CN109461117B (en) Image processing method and mobile terminal
CN109361867B (en) Filter processing method and mobile terminal
CN108492246B (en) Image processing method and device and mobile terminal
CN110198413B (en) Video shooting method, video shooting device and electronic equipment
CN109788204A (en) Shoot processing method and terminal device
CN111147752B (en) Zoom factor adjusting method, electronic device, and medium
CN109102555B (en) Image editing method and terminal
CN108683850B (en) Shooting prompting method and mobile terminal
CN107644396B (en) Lip color adjusting method and device
CN108513067B (en) Shooting control method and mobile terminal
CN108881782B (en) Video call method and terminal equipment
CN109819167B (en) Image processing method and device and mobile terminal
CN111182211B (en) Shooting method, image processing method and electronic equipment
CN109819166B (en) Image processing method and electronic equipment
CN109448069B (en) Template generation method and mobile terminal
CN109656636B (en) Application starting method and device
CN108495036B (en) Image processing method and mobile terminal
CN110708475B (en) Exposure parameter determination method, electronic equipment and storage medium
CN109639981B (en) Image shooting method and mobile terminal
CN109903218B (en) Image processing method and terminal

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant