CN111445417B - Image processing method, device, electronic equipment and medium


Info

Publication number
CN111445417B
CN111445417B
Authority
CN
China
Prior art keywords
target
intensity
face
image
initial image
Prior art date
Legal status
Active
Application number
CN202010240796.3A
Other languages
Chinese (zh)
Other versions
CN111445417A (en)
Inventor
李镕镕
Current Assignee
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date
Application filed by Vivo Mobile Communication Co Ltd filed Critical Vivo Mobile Communication Co Ltd
Priority to CN202010240796.3A priority Critical patent/CN111445417B/en
Publication of CN111445417A publication Critical patent/CN111445417A/en
Application granted granted Critical
Publication of CN111445417B publication Critical patent/CN111445417B/en

Classifications

    • G06T5/77
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/10: Image acquisition modality
    • G06T2207/10004: Still image; Photographic image
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/30: Subject of image; Context of image processing
    • G06T2207/30196: Human being; Person
    • G06T2207/30201: Face

Abstract

The embodiment of the invention discloses an image processing method, an image processing device, electronic equipment and a medium. The method comprises the following steps: acquiring an initial image; in the case that the initial image includes face information, generating a target mapping relation between the intensity of a target expression parameter and facial features according to the face information of a target face in the initial image, and acquiring a target intensity; and adjusting the features of the target face to the target facial features corresponding to the target intensity based on the target mapping relation and the target intensity, so as to obtain a target image. The embodiment of the invention can thus solve the problem that the process of processing facial expressions in an image is complex.

Description

Image processing method, device, electronic equipment and medium
Technical Field
The embodiment of the invention relates to the field of image processing, and in particular to an image processing method, an image processing device, electronic equipment and a medium.
Background
At present, when a user wants to obtain a face image with a natural expression, the image often has to be retouched with image-editing software; especially when the image contains multiple faces, the retouching operation is complex and difficult.
Therefore, with existing image processing methods, the process of processing facial expressions in an image is complex and not simple enough.
Disclosure of Invention
The embodiment of the invention provides an image processing method, an image processing device, electronic equipment and a medium, to solve the problem that the process of processing facial expressions in images is complex.
In a first aspect, an embodiment of the present invention provides an image processing method, which is applied to an electronic device, including:
acquiring an initial image;
under the condition that the initial image comprises face information, generating a target mapping relation between the intensity of target expression parameters and facial features according to the face information of a target face in the initial image, and acquiring target intensity;
and adjusting the characteristics of the target face to the target facial characteristics corresponding to the target intensity based on the target mapping relation and the target intensity to obtain a target image.
In a second aspect, an embodiment of the present invention further provides an image processing apparatus, including:
the image acquisition module is used for acquiring an initial image;
the mapping module is used for generating, in the case that the initial image includes face information, a target mapping relation between the intensity of the target expression parameter and facial features according to the face information of the target face in the initial image, and acquiring a target intensity;
and the adjusting module is used for adjusting the features of the target face to the target facial features corresponding to the target intensity based on the target mapping relation and the target intensity, so as to obtain a target image.
In a third aspect, an embodiment of the present invention provides an electronic device, including a processor, a memory, and a computer program stored on the memory and executable on the processor, wherein the computer program, when executed by the processor, implements the steps of the image processing method according to the first aspect.
In a fourth aspect, embodiments of the present invention provide a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the image processing method according to the first aspect.
In the embodiment of the invention, the target mapping relation between the intensity of the target expression parameter and facial features, such as the mapping between the lip radian of a smile and the features of the whole face, can be generated according to the face information of the target face in the initial image. A target intensity is then acquired, and the features of the target face in the initial image are adjusted to the facial features corresponding to that intensity. In this embodiment the user therefore does not need additional retouching software to repair the image; the facial expression in the image is adjusted directly according to the acquired target intensity and the generated target mapping relation. The operation is simple, and the process of processing facial expressions in images is simplified for the user.
Drawings
The invention will be better understood from the following description of specific embodiments thereof taken in conjunction with the accompanying drawings in which like or similar reference characters designate like or similar features.
Fig. 1 is a schematic flow chart of an image processing method according to an embodiment of the present invention;
fig. 2 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present invention;
fig. 3 is a schematic hardware structure of an electronic device according to an embodiment of the present invention.
Detailed Description
The following clearly and completely describes the technical solutions in the embodiments of the present invention with reference to the accompanying drawings. It is apparent that the described embodiments are some rather than all of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
As noted in the background, a user often wants to obtain a face image with a natural expression and therefore needs to retouch the face image. For example, in daily life most users have not reached a professional level of being photographed and may not be at ease in front of the lens, so their expression stiffens when a photo is taken and a natural smile is difficult to produce. Retouching a face image requires relatively complex operations, and when the image contains multiple faces, modifying each face is very troublesome and difficult to operate.
Existing methods for processing the expression in a face image mainly fall into the following two schemes: the first scheme constructs a face model from the photographed face image to generate an expression; the second scheme outputs shooting auxiliary information, such as voice or multimedia prompts, to trigger the photographed subject to adjust the current expression.
However, the first scheme adjusts the expression of a single face and does not handle the case where multiple faces exist in an image; if the single-face adjustment strategy were applied to each face in a multi-face image, every face would be set to the same expression, making the image look strange and unnatural. As for the second scheme, outputting shooting auxiliary information is not effective for everyone.
Therefore, in order to solve the above technical problems, an embodiment of the present invention provides an image processing method. Referring to fig. 1, fig. 1 shows a flow chart of the image processing method provided by the embodiment of the present invention. The method is applied to an electronic device and comprises the following steps:
s101, acquiring an initial image;
the initial image here refers to a face image to be subjected to expression adjustment. Here, acquiring the initial image may include: and receiving triggering input of a user to the shooting control, controlling the shooting assembly to shoot, and taking a picture shot by the shooting assembly as an initial image. Or acquiring the initial image may further include: and receiving an image selection input of a user, and calling a pre-stored face image selected by the image selection input as an initial image. Of course, the above is only two specific embodiments, and S101 may be implemented in other manners, which is not limited by the present invention.
In addition, in order to distinguish the facial expression processing from other image processing, before S101, it may further include: and receiving trigger input of a user on the expression processing control, and responding to the trigger input, and entering an expression processing mode. Specifically, S101 is: in the expression processing mode, an initial image is acquired.
Then, the face in the initial image needs to be detected by a face detection algorithm, and after the face is detected and the number of faces is determined, the process goes to S102.
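The patent does not name a particular detection algorithm, so the following sketch uses OpenCV's Haar-cascade detector purely as one possible stand-in; only the function name and the decision to skip processing when no face is found follow the description above.

    # Illustrative only: the source does not specify a detector, so
    # OpenCV's Haar cascade is assumed here as one possible choice.
    import cv2

    def detect_faces(initial_image_path: str):
        """Return bounding boxes of faces found in the initial image."""
        image = cv2.imread(initial_image_path)
        gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
        cascade = cv2.CascadeClassifier(
            cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
        # An empty result means the initial image contains no face, in
        # which case the expression processing of S102 is skipped.
        return list(cascade.detectMultiScale(gray, scaleFactor=1.1,
                                             minNeighbors=5))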
S102, in the case that the initial image includes face information, generating a target mapping relation between the intensity of the target expression parameter and facial features according to the face information of the target face in the initial image;
Because a user may misoperate and select an initial image that contains no face, the subsequent expression processing is unnecessary in that case; the expression processing operation is performed only when the initial image includes the image of a face.
The target expression parameter here refers to an expression parameter that affects the facial expression, and may include at least one of the following: lip radian, ratio of the length to the width of the eyes, eyebrow spacing, and the like. The lip radian affects the smiling expression of the face: the larger the lip radian, the larger the smile intensity value. The ratio of eye length to width affects the panic expression of the face: the smaller the ratio, the larger the panic intensity value. The eyebrow spacing affects the angry expression of the face: the smaller the spacing, the larger the anger intensity value. The intensity of the target expression parameter here may be set, for example, to a range of 1 to 10; of course, different intensity ranges may be set for different expression parameters. A sketch of one possible representation follows.
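As a sketch only, the three parameters and their directions could be represented as follows; the field names and the shared 1 to 10 scale are assumptions for illustration, not part of the claimed method.

    # Sketch of the expression parameters named above. Field names and
    # the shared 1-10 scale are illustrative assumptions.
    from dataclasses import dataclass

    @dataclass
    class ExpressionParameter:
        name: str             # geometric quantity, e.g. "lip_radian"
        expression: str       # expression it drives, e.g. "smile"
        min_intensity: int = 1
        max_intensity: int = 10
        # True when a larger geometric value means a larger intensity
        # (lip radian -> smile); False when smaller means larger
        # (eye length/width ratio -> panic, eyebrow spacing -> anger).
        increases_with_value: bool = True

    TARGET_EXPRESSION_PARAMETERS = [
        ExpressionParameter("lip_radian", "smile",
                            increases_with_value=True),
        ExpressionParameter("eye_length_width_ratio", "panic",
                            increases_with_value=False),
        ExpressionParameter("eyebrow_spacing", "anger",
                            increases_with_value=False),
    ]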
Optionally, different expression modes, such as a "smile mode", a "panic mode" or an "anger mode", may be set for different target expression parameters; if a user's selection input of a target expression mode is received, the target expression parameter is determined according to the target expression mode.
In addition, because the muscles of the whole face change under different expressions, adjusting only the target expression parameter of the face would not keep the face natural; the features of the whole face need to be adjusted. It is therefore necessary to determine the facial features of the full face under different lip radians and subsequently make the expression adjustment based on those facial features. The facial features here include a plurality of facial feature points.
S103, acquiring the target intensity. Fig. 1 imposes no order between S102 and S103; the two may be executed in parallel.
The target intensities correspond to the target faces one to one: when the initial image includes a plurality of target faces, a plurality of target intensities need to be acquired, each corresponding to one target face. On this basis, in this embodiment the expression of each face in a multi-face image can be adjusted separately, which avoids adjusting all faces to the same expression and preserves the naturalness of the adjustment.
S104, based on the target mapping relation and the target intensity, adjusting the features of the target face to the target facial features corresponding to the target intensity, so as to obtain a target image.
In the embodiment of the invention, the target mapping relation between the intensity of the target expression parameter and facial features, such as the mapping between the lip radian of a smile and the features of the whole face, can be generated according to the face information of the target face in the initial image. A target intensity is then acquired, and the features of the target face in the initial image are adjusted to the facial features corresponding to that intensity. In this embodiment the user therefore does not need additional retouching software to repair the image; the facial expression in the image is adjusted directly according to the acquired target intensity and the generated target mapping relation. The operation is simple, and the process of processing facial expressions in images is simplified for the user.
In some embodiments of the present invention, the initial image may be a single face image or a multi-face image.
In a specific embodiment of the present invention, the step S102 may include:
constructing a face model according to the target face; such as a 2D or 3D face model.
and generating the target mapping relation by adjusting the intensity of the target expression parameter on the face model. For example, the target mapping relation may include lip radian values and their corresponding facial features arranged in order from small to large.
In this embodiment, the purpose of building a face model is to determine the facial features of the face when the target expression parameter takes different intensities, so as to accurately build the target mapping relation between intensity and facial features. When constructing the target mapping relation, the intensities can be set according to a preset range, for example a lip radian of 0 to 20 degrees with 5 degrees per intensity level, which enlarges the range of adjustment available to the user. A runnable sketch of this sweep is given below.
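The following sketch sweeps the intensity levels on a face model and records the resulting feature points, which is the mapping-generation step described above. The model-building and deformation helpers are placeholder stubs, since the source leaves the 2D/3D modelling unspecified.

    # Sketch: build the target mapping by sweeping intensity levels on a
    # face model. The two helpers are placeholder stubs for whatever
    # 2D/3D modelling a real implementation would use.
    from typing import Dict, List, Tuple

    Point = Tuple[float, float]

    def build_face_model(target_face: List[Point]) -> List[Point]:
        # Placeholder: a real system would fit a 2D or 3D model to the
        # detected face; here the face is already a feature-point list.
        return list(target_face)

    def apply_intensity(model: List[Point], intensity: int) -> List[Point]:
        # Placeholder deformation: nudge the feature points by an amount
        # proportional to the intensity (stand-in for, e.g., lip radian).
        return [(x, y + 0.1 * intensity) for x, y in model]

    def generate_target_mapping(target_face: List[Point],
                                levels=range(1, 11)
                                ) -> Dict[int, List[Point]]:
        """Intensity level -> full-face feature points, smallest first."""
        model = build_face_model(target_face)
        return {i: apply_intensity(model, i) for i in levels}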
In a specific embodiment of the present invention, in a case where the initial image is a multi-face image, S102 may include:
constructing a face model according to the target face;
and adjusting the intensity of the target expression parameter on the face model corresponding to the target face to be the target intensity corresponding to the target face, so as to obtain a target mapping relation between the target intensity and the target facial feature.
In this example, face models are respectively built for the target faces, corresponding target intensities are respectively obtained for different target faces, and then the intensities of the target faces are respectively adjusted according to facial features corresponding to the target intensities, so that the expressions of the adjusted target faces are different, and the overall natural degree of the obtained target image is high.
In other embodiments of the present invention, in the case where the initial image is a single face image, S103 may include:
taking a preset intensity as the target intensity, where the target mapping relation includes the preset intensity.
In this embodiment, the facial expression in the photo is modified according to a preset intensity value, so the electronic device can perform the expression adjustment automatically, which improves convenience for the user. For example, the preset default intensity may be set to 5, and corresponding preset default intensities can be set for different target expression parameters respectively.
In another implementation manner of the present invention, in a case where the initial image is a single face image, S103 may include:
receiving a first input of a first intensity in a target mapping relation from a user;
in response to the first input, the first intensity is taken as the target intensity.
In this embodiment, which intensity of the target expression parameter is used for the expression adjustment is determined according to the user's input, that is, selected manually by the user. This improves the user's autonomy and also widens the adjustable range of the expression adjustment.
To facilitate the selection, the user needs to be able to see the selectable intensity range and the facial features corresponding to each intensity. Optionally, the target mapping relation may be displayed as in the above embodiment, or the following manner may be used.
Optionally, the receiving the first input of the first intensity in the target mapping relationship by the user may include:
receiving a first input of a user on a sliding control on a preset sliding axis, where the preset sliding axis and the sliding control need to be displayed on the display interface;
and taking the intensity corresponding to the position of the sliding control on the preset sliding shaft as the target intensity.
In this embodiment, a visible preset sliding axis is shown on the display interface, and the user can slide the sliding control along it. For ease of introduction, take the preset sliding axis to be a slider bar and the sliding control to be a slider block. Different positions on the bar are associated with different intensity values, and the user's movement of the block is the first input. The facial features corresponding to the intensity associated with the current block position can be displayed on the interface, that is, the expression of the target face changes as the user moves the block, which makes it convenient to choose suitable facial features. For example, sliding the block upwards increases the lip radian and strengthens the smile of the face; sliding it downwards decreases the lip radian and weakens the smile. A sketch of the position-to-intensity mapping follows.
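As a small sketch, the slider position can be mapped linearly onto the intensity range; the 1 to 10 bounds and the linear mapping are assumptions, since the source only requires that positions be associated with intensity values.

    # Sketch: linear mapping from slider position to intensity. The
    # bounds and the linearity are illustrative assumptions.
    def slider_to_intensity(slider_pos: float, slider_len: float,
                            min_i: int = 1, max_i: int = 10) -> int:
        """Map a position in [0, slider_len] to [min_i, max_i]."""
        frac = min(max(slider_pos / slider_len, 0.0), 1.0)
        return round(min_i + frac * (max_i - min_i))

    # Sliding up (larger position) yields a larger smile intensity:
    assert slider_to_intensity(0.0, 100.0) == 1
    assert slider_to_intensity(100.0, 100.0) == 10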
In some embodiments of the present invention, in a case where the initial image is a multi-face image, S103 may include:
calculating reference intensity according to the intensity of the target expression parameter on the reference face in the initial image;
acquiring normal-distribution random numbers according to the reference intensity and the number of the target faces, and taking the normal-distribution random numbers as the target intensities, where each target face corresponds to one target intensity and the difference between each normal-distribution random number and the reference intensity is smaller than a preset threshold.
In this embodiment, normal-distribution random numbers are generated with the intensity of the target expression parameter on the reference face as the mean, and the random numbers generated based on the reference intensity are used as the target intensities. The target intensities are therefore generally unequal to one another and to the reference intensity, while still staying close to the reference intensity. Specifically, each normal-distribution random number can be randomly assigned to a target face, as in the sketch below.
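A minimal sketch of this sampling step follows; the standard deviation and the threshold value are assumptions, since the source only requires each sample to stay within a preset threshold of the reference intensity.

    # Sketch: per-face target intensities drawn around the reference
    # intensity. Sigma and threshold are illustrative assumptions.
    import random

    def sample_target_intensities(reference: float, num_faces: int,
                                  sigma: float = 1.0,
                                  threshold: float = 2.0) -> list:
        intensities = []
        for _ in range(num_faces):
            while True:  # rejection-sample until within the threshold
                value = random.gauss(reference, sigma)
                if abs(value - reference) < threshold:
                    intensities.append(value)
                    break
        random.shuffle(intensities)  # random assignment to target faces
        return intensities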
In still other embodiments of the present invention, the calculating the reference intensity according to the intensity of the target expression parameter on the reference face in the initial image may include:
receiving a second input of a user on a first face in the initial image, where the first face selected by the user is the reference face;
and responding to the second input, acquiring first intensity of the target expression parameter in the first face, and taking the first intensity as reference intensity.
In this embodiment, the user selects the first face by himself or herself, and the first intensity of the target expression parameter on the selected first face is taken as the reference intensity. This ensures that the selected reference intensity meets the user's requirement, so the result after the expression adjustment better matches the user's preference.
In other embodiments of the present invention, in the case where the initial image is a multi-face image, the calculating the reference intensity according to the intensity of the target expression parameter on the reference face in the initial image may include:
determining the intensity interval in which the intensity of the target expression parameter on each face in the initial image falls; that is, all faces in the initial image are used as reference faces;
taking the intensity interval containing the largest number of intensities as the target intensity interval;
taking the most frequent intensity within the target intensity interval as the reference intensity.
In this embodiment, the intensity interval of the target expression parameter on each face is determined first, and the number of intensities falling into each interval is counted. The interval with the largest count indicates the expression most people show when photographed, for example a smile, so that interval is selected as the target intensity interval; the most frequent intensity within it is then selected as the reference intensity, so the chosen reference intensity reflects the expression of the majority when photographed. For ease of understanding, assume there are 5 faces and the preset intensity intervals are a first interval of 1 to 5 and a second interval of 6 to 10. The intensities of the target expression parameter of two faces fall into the first interval and those of the other 3 faces fall into the second interval, so the first interval contains 2 intensities and the second contains 3; the target intensity interval is therefore the second interval. If the 3 intensities in the second interval are 7, 9 and 7, the most frequent value is 7, and the reference intensity is therefore 7. This selection is sketched below.
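The following sketch implements the interval count plus in-interval mode just described; the interval bounds follow the 1 to 5 and 6 to 10 example, and the two first-interval values 3 and 4 are invented for illustration since the text does not give them.

    # Sketch: pick the interval holding the most faces, then the most
    # frequent intensity inside it. Interval bounds follow the example.
    from collections import Counter

    def reference_from_intervals(intensities,
                                 intervals=((1, 5), (6, 10))):
        by_interval = {iv: [i for i in intensities
                            if iv[0] <= i <= iv[1]] for iv in intervals}
        target_interval = max(by_interval,
                              key=lambda iv: len(by_interval[iv]))
        # Mode of the intensities inside the winning interval.
        return Counter(by_interval[target_interval]).most_common(1)[0][0]

    # Worked example: 5 faces; 3 and 4 are assumed first-interval values,
    # while 7, 9, 7 come from the text. The second interval holds 3 faces
    # and 7 is its most frequent intensity, so the reference is 7.
    assert reference_from_intervals([3, 4, 7, 9, 7]) == 7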
Of course, in other embodiments, the average value of the intensities of the target expression parameters of the faces may be used as the reference intensity, which is not limited by the present invention.
In still other embodiments of the present invention, prior to S102, the method may further include:
and determining the target face.
In some specific implementations, the determining the target face may include:
receiving a third input of a user to a second face in the initial image;
in response to the third input, the second face is taken as the target face.
In this embodiment, the target face is selected by the user, which improves the user's autonomy; because the user selects the target faces whose expressions need adjustment, the finally obtained target image meets the user's requirement. One or more second faces may be selected here.
In other specific implementations, in a case where the initial image is a multi-face image, determining the target face may include:
and taking the face, which is contained in the initial image and has the intensity of the target expression parameter outside the target intensity interval, as the target face.
In this embodiment, the faces whose target expression parameter intensity falls outside the target intensity interval are taken as the target faces by default. This manner requires no manual adjustment by the user, is more convenient, and can make the whole target image more harmonious. A sketch of this default selection follows.
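As a sketch of the default selection rule, keep only the faces whose current intensity lies outside the target interval; the dictionary layout and the interval bounds are assumptions for illustration.

    # Sketch: default target-face selection. Faces whose parameter
    # intensity lies outside the target interval get adjusted.
    def select_target_faces(face_intensities: dict,
                            target_interval=(6, 10)) -> list:
        """face_intensities: face id -> current parameter intensity."""
        lo, hi = target_interval
        return [face for face, i in face_intensities.items()
                if not (lo <= i <= hi)]

    # Example: faces "a" and "b" fall outside the 6-10 interval.
    assert select_target_faces({"a": 3, "b": 4, "c": 7}) == ["a", "b"]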
Optionally, after the target image is obtained, the method may further include:
receiving a saving input of the user on the target image and, in response to the saving input, saving the target image, for example into an album. In other embodiments, the target image may be saved directly without receiving the user's saving input.
Optionally, after obtaining the target image and before saving the target image, the method may further include:
the target image is displayed on the preview display interface.
By previewing the target image before saving it, the user can judge in advance whether the current target image meets his or her requirement and, if not, continue the image processing operation. This prevents the user from saving images whose expression adjustment failed, reduces the storage space occupied by failed images, and improves the convenience of the user's expression adjustment.
Based on the image processing method embodiment provided in the foregoing embodiment, correspondingly, the embodiment of the present invention further provides an image processing apparatus, referring to fig. 2, and fig. 2 shows a schematic structural diagram of the image processing apparatus provided in the embodiment of the present invention. The image processing apparatus includes:
An image acquisition module 201 for acquiring an initial image;
the mapping module 202 is configured to generate, in the case that the initial image includes face information, a target mapping relation between the intensity of the target expression parameter and facial features according to the face information of the target face in the initial image, and to acquire a target intensity;
and the adjusting module 203 is configured to adjust the features of the target face to the target facial features corresponding to the target intensity based on the target mapping relation and the target intensity, so as to obtain a target image.
In the embodiment of the invention, the target mapping relation between the intensity of the target expression parameter and facial features, such as the mapping between the lip radian of a smile and the features of the whole face, can be generated according to the face information of the target face in the initial image. A target intensity is then acquired, and the features of the target face in the initial image are adjusted to the facial features corresponding to that intensity. In this embodiment the user therefore does not need additional retouching software to repair the image; the facial expression in the image is adjusted directly according to the acquired target intensity and the generated target mapping relation. The operation is simple, and the process of processing facial expressions in images is simplified for the user.
Optionally, the image acquisition module 201 may specifically be configured to: and receiving triggering input of a user to the shooting control, controlling the shooting assembly to shoot, and taking a picture shot by the shooting assembly as an initial image. Alternatively, the image acquisition module 201 may specifically be configured to: and receiving an image selection input of a user, and calling a pre-stored face image selected by the image selection input as an initial image.
Furthermore, in order to distinguish facial expression processing from other image processing, the image acquisition module 201 may also be configured to: receive a trigger input of a user on the expression processing control and, in response to the trigger input, enter an expression processing mode; and, in the expression processing mode, acquire the initial image.
In some embodiments of the present invention, the initial image may be a single face image or a multi-face image.
In a specific embodiment of the present invention, the mapping module 202 may include:
the mapping generation unit is used for constructing a face model according to the target face, and generating the target mapping relation by adjusting the intensity of the target expression parameter on the face model.
In this embodiment, the purpose of building a face model is to determine the facial features of the face when the target expression parameter takes different intensities, so as to accurately build the target mapping relation between intensity and facial features. When constructing the target mapping relation, the intensities can be set according to a preset range, which enlarges the range of adjustment available to the user.
In a specific embodiment of the present invention, in a case where the initial image is a multi-face image, the map generating unit may be configured to:
constructing a face model according to the target face; and adjusting the intensity of the target expression parameter on the face model corresponding to each target face to the target intensity corresponding to that target face, so as to obtain the target mapping relation between the target intensity and the target facial features.
In this example, a face model is built for each target face, and a corresponding target intensity is obtained for each target face separately; each face is then adjusted according to the facial features corresponding to its own target intensity, so that the adjusted target faces have different expressions and the resulting target image looks natural as a whole.
In other embodiments of the present invention, in the case where the initial image is a single face image, the mapping module 202 may include:
an intensity obtaining unit for taking the preset intensity as a target intensity; the target mapping relation comprises preset intensity.
In this embodiment, that is, in this embodiment, facial expressions in the photo are modified according to preset intensity values, so that the electronic device can automatically perform expression adjustment, and convenience in use is improved for a user. Corresponding preset default intensities can be set for different target expression parameters respectively.
In another implementation manner of the present invention, in a case where the initial image is a single face image, the intensity obtaining unit may be configured to:
receiving a first input of a first intensity in a target mapping relation from a user; in response to the first input, the first intensity is taken as the target intensity.
In this embodiment, which intensity of the target expression parameter is used for the expression adjustment is determined according to the user's input, that is, selected manually by the user. This improves the user's autonomy and also widens the adjustable range of the expression adjustment.
To facilitate the selection, the user needs to be able to see the selectable intensity range and the facial features corresponding to each intensity. Optionally, the target mapping relation may be displayed as in the above embodiment, or the following manner may be used.
Optionally, the intensity obtaining unit may specifically be configured to:
receiving a first input of a user on a sliding control on a preset sliding axis, where the preset sliding axis and the sliding control need to be displayed on the display interface; and taking the intensity corresponding to the position of the sliding control on the preset sliding axis as the target intensity.
In this embodiment, a visible preset sliding axis is shown on the display interface, and the user can slide the sliding control along it. For ease of introduction, take the preset sliding axis to be a slider bar and the sliding control to be a slider block. Different positions on the bar are associated with different intensity values, and the user's movement of the block is the first input. The facial features corresponding to the intensity associated with the current block position can be displayed on the interface, that is, the expression of the target face changes as the user moves the block, which makes it convenient to choose suitable facial features.
In some embodiments of the present invention, in a case where the initial image is a multi-face image, the intensity obtaining unit may specifically include:
the reference intensity calculating unit is used for calculating reference intensity according to the intensity of the target expression parameter on the reference face in the initial image;
the intensity determining unit is used for obtaining normal-distribution random numbers according to the reference intensity and the number of the target faces, and taking the normal-distribution random numbers as the target intensities, where each target face corresponds to one target intensity and the difference between each normal-distribution random number and the reference intensity is smaller than a preset threshold.
In this embodiment, normal-distribution random numbers are generated with the intensity of the target expression parameter on the reference face as the mean, and the random numbers generated based on the reference intensity are used as the target intensities. The target intensities are therefore generally unequal to one another and to the reference intensity, while still staying close to the reference intensity. Specifically, each normal-distribution random number can be randomly assigned to a target face.
In still other embodiments of the present invention, the above reference intensity calculating unit may be configured to:
receiving a second input of a user to a first face in the initial image; and responding to the second input, acquiring first intensity of the target expression parameter in the first face, and taking the first intensity as reference intensity.
In this embodiment, the user selects the first face by himself or herself, and the first intensity of the target expression parameter on the selected first face is taken as the reference intensity. This ensures that the selected reference intensity meets the user's requirement, so the result after the expression adjustment better matches the user's preference.
In other embodiments of the present invention, in the case where the initial image is a multi-face image, the reference intensity calculating unit may be configured to:
determining the intensity interval in which the intensity of the target expression parameter on each face in the initial image falls, that is, taking all faces in the initial image as reference faces; taking the intensity interval containing the largest number of intensities as the target intensity interval; and taking the most frequent intensity within the target intensity interval as the reference intensity.
In this embodiment, the intensity interval of the target expression parameter on each face is determined first, and the number of intensities falling into each interval is counted. The interval with the largest count indicates the expression most people show when photographed, for example a smile, so that interval is selected as the target intensity interval; the most frequent intensity within it is then selected as the reference intensity, so the chosen reference intensity reflects the expression of the majority when photographed.
Of course, in other embodiments, the average value of the intensities of the target expression parameters of the faces may be used as the reference intensity, which is not limited by the present invention.
In still other embodiments of the present invention, the apparatus may further include:
and the target face determining module is used for determining the target face.
In some specific implementations, the above-mentioned target face determining module may be configured to:
receiving a third input of a user to a second face in the initial image; in response to the third input, the second face is taken as the target face.
In this embodiment, the target face is selected by the user, which improves the user's autonomy; because the user selects the target faces whose expressions need adjustment, the finally obtained target image meets the user's requirement. One or more second faces may be selected here.
In other specific implementations, in a case where the initial image is a multi-face image, the target face determining module may be configured to:
and taking the face, which is contained in the initial image and has the intensity of the target expression parameter outside the target intensity interval, as the target face.
In this embodiment, the faces whose target expression parameter intensity falls outside the target intensity interval are taken as the target faces by default. This manner requires no manual adjustment by the user, is more convenient, and can make the whole target image more harmonious.
Optionally, the apparatus may further include:
the storage module is used for receiving a saving input of the user on the target image and, in response to the saving input, saving the target image. In other embodiments, the target image may be saved directly without receiving the user's saving input.
Optionally, the apparatus may further include:
and the preview module is used for displaying the target image on a preview display interface.
By previewing the target image before saving it, the user can judge in advance whether the current target image meets his or her requirement and, if not, continue the image processing operation. This prevents the user from saving images whose expression adjustment failed, reduces the storage space occupied by failed images, and improves the convenience of the user's expression adjustment.
The image processing apparatus provided in the embodiment of the present invention can implement each method step implemented in the method embodiment of fig. 1; to avoid repetition, details are not repeated here.
Fig. 3 shows a schematic hardware structure of an electronic device according to an embodiment of the present invention.
The electronic device 300 includes, but is not limited to: radio frequency unit 301, network module 302, audio output unit 303, input unit 304, sensor 305, display unit 306, user input unit 307, interface unit 308, memory 309, processor 310, and power supply 311. Those skilled in the art will appreciate that the electronic device structure shown in fig. 3 does not constitute a limitation of the electronic device, and the electronic device may include more or fewer components than shown, or may combine certain components, or may have a different arrangement of components. In the embodiment of the invention, the electronic equipment comprises, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted terminal, a wearable device, a pedometer and the like.
The processor 310 is configured to acquire an initial image; in the case that the initial image includes face information, generate a target mapping relation between the intensity of the target expression parameter and facial features according to the face information of the target face in the initial image, and acquire a target intensity; and adjust the features of the target face to the target facial features corresponding to the target intensity based on the target mapping relation and the target intensity, so as to obtain a target image.
In the embodiment of the invention, the target mapping relation between the strength of the target expression parameter and the facial features, such as the mapping relation between the radian of the lips and the whole facial features of the face when smiling, can be generated according to the facial information of the target face in the initial image. And then, acquiring target intensity, and adjusting the characteristics of the target face in the initial image to facial characteristics corresponding to the target intensity. Therefore, in the embodiment, the user does not need to use additional image repairing software to repair the image, but can directly adjust the facial expression in the image according to the acquired target strength and the generated target mapping relation, so that the operation is simple, and the process of processing the facial expression in the image by the user is simplified.
It should be understood that, in the embodiment of the present invention, the radio frequency unit 301 may be used to receive and send signals during information transmission or a call; specifically, it receives downlink data from a base station and delivers it to the processor 310 for processing, and it transmits uplink data to the base station. Typically, the radio frequency unit 301 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 301 may also communicate with networks and other devices through a wireless communication system.
The electronic device provides wireless broadband internet access to the user through the network module 302, such as helping the user to send and receive e-mail, browse web pages, and access streaming media, etc.
The audio output unit 303 may convert audio data received by the radio frequency unit 301 or the network module 302 or stored in the memory 309 into an audio signal and output as sound. Also, the audio output unit 303 may also provide audio output (e.g., a call signal reception sound, a message reception sound, etc.) related to a specific function performed by the electronic device 300. The audio output unit 303 includes a speaker, a buzzer, a receiver, and the like.
The input unit 304 is used to receive an audio or video signal. The input unit 304 may include a graphics processor (Graphics Processing Unit, GPU) 3041 and a microphone 3042. The graphics processor 3041 processes image data of still pictures or video obtained by an image capturing device (e.g., a camera) in a video capturing mode or an image capturing mode. The processed image frames may be displayed on the display unit 306. The image frames processed by the graphics processor 3041 may be stored in the memory 309 (or other storage medium) or transmitted via the radio frequency unit 301 or the network module 302. The microphone 3042 may receive sound and process it into audio data. In a telephone call mode, the processed audio data may be converted into a format that can be transmitted to a mobile communication base station via the radio frequency unit 301 and output.
The electronic device 300 further comprises at least one sensor 305, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor includes an ambient light sensor and a proximity sensor: the ambient light sensor can adjust the brightness of the display panel 3061 according to the brightness of the ambient light, and the proximity sensor can turn off the display panel 3061 and/or the backlight when the electronic device 300 is moved to the ear. As a kind of motion sensor, the accelerometer sensor can detect the magnitude of acceleration in all directions (generally three axes) and, when stationary, the magnitude and direction of gravity; it can be used to identify the posture of the electronic device (such as portrait and landscape switching, related games, and magnetometer posture calibration) and for vibration-recognition functions (such as a pedometer or tapping). The sensor 305 may further include a fingerprint sensor, a pressure sensor, an iris sensor, a molecular sensor, a gyroscope, a barometer, a hygrometer, a thermometer, an infrared sensor, etc., which are not described here.
The display unit 306 is used to display information input by a user or information provided to the user. The display unit 306 may include a display panel 3061, and the display panel 3061 may be configured in the form of a liquid crystal display (Liquid Crystal Display, LCD), an Organic Light-Emitting Diode (OLED), or the like.
The user input unit 307 may be used to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the electronic device. Specifically, the user input unit 307 includes a touch panel 3071 and other input devices 3072. The touch panel 3071, also referred to as a touch screen, may collect touch operations on or near it by a user (for example, operations by the user on or near the touch panel 3071 using a finger, a stylus, or any other suitable object or accessory). The touch panel 3071 may include two parts: a touch detection device and a touch controller. The touch detection device detects the position of the user's touch, detects the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch point coordinates, sends them to the processor 310, and receives and executes commands sent by the processor 310. In addition, the touch panel 3071 may be implemented in various types such as resistive, capacitive, infrared and surface acoustic wave. Besides the touch panel 3071, the user input unit 307 may include other input devices 3072. Specifically, other input devices 3072 may include, but are not limited to, a physical keyboard, function keys (such as volume control keys and switch keys), a trackball, a mouse and a joystick, which are not described in detail here.
Further, the touch panel 3071 may be overlaid on the display panel 3061, and when the touch panel 3071 detects a touch operation thereon or thereabout, the touch operation is transmitted to the processor 310 to determine a type of touch event, and then the processor 310 provides a corresponding visual output on the display panel 3061 according to the type of touch event. Although in fig. 3, the touch panel 3071 and the display panel 3061 are two independent components for implementing the input and output functions of the electronic device, in some embodiments, the touch panel 3071 and the display panel 3061 may be integrated to implement the input and output functions of the electronic device, which is not limited herein.
The interface unit 308 is an interface through which an external device is connected to the electronic apparatus 300. For example, the external devices may include a wired or wireless headset port, an external power (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 308 may be used to receive input (e.g., data information, power, etc.) from an external device and transmit the received input to one or more elements within the electronic apparatus 300, or may be used to transmit data between the electronic apparatus 300 and an external device.
Memory 309 may be used to store software programs as well as various data. The memory 309 may mainly include a storage program area that may store an operating system, application programs required for at least one function (such as a sound playing function, an image playing function, etc.), and a storage data area; the storage data area may store data (such as audio data, phonebook, etc.) created according to the use of the handset, etc. In addition, memory 309 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid-state storage device.
The processor 310 is a control center of the electronic device, connects various parts of the entire electronic device using various interfaces and lines, and performs various functions of the electronic device and processes data by running or executing software programs and/or modules stored in the memory 309, and calling data stored in the memory 309, thereby performing overall monitoring of the electronic device. Processor 310 may include one or more processing units; preferably, the processor 310 may integrate an application processor that primarily handles operating systems, user interfaces, applications, etc., with a modem processor that primarily handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 310.
The electronic device 300 may also include a power supply 311 (e.g., a battery) for powering the various components, and preferably the power supply 311 may be logically coupled to the processor 310 via a power management system that performs functions such as managing charge, discharge, and power consumption.
In addition, the electronic device 300 includes some functional modules, which are not shown, and will not be described herein.
Preferably, the embodiment of the present invention further provides an electronic device, including a processor 310, a memory 309, and a computer program stored in the memory 309 and capable of running on the processor 310, where the computer program, when executed by the processor 310, implements each process of the above image processing method embodiment and can achieve the same technical effects; to avoid repetition, details are not repeated here.
The embodiment of the invention also provides a computer-readable storage medium on which a computer program is stored; when executed by a processor, the computer program implements each process of the above image processing method embodiment and can achieve the same technical effects, which are not repeated here to avoid repetition. The computer-readable storage medium may be, for example, a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disk.
It should be noted that, in this document, the terms "comprises", "comprising", or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article or apparatus. Without further limitation, an element preceded by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article or apparatus that comprises that element.
From the above description of the embodiments, it will be clear to those skilled in the art that the above-described embodiment method may be implemented by means of software plus a necessary general hardware platform, but of course may also be implemented by means of hardware, but in many cases the former is a preferred embodiment. Based on such understanding, the technical solution of the present invention may be embodied essentially or in a part contributing to the prior art in the form of a software product stored in a storage medium (e.g. ROM/RAM, magnetic disk, optical disk) comprising instructions for causing a terminal (which may be a mobile phone, a computer, a server, an air conditioner, or a network device, etc.) to perform the method according to the embodiments of the present invention.
The embodiments of the present invention have been described above with reference to the accompanying drawings, but the present invention is not limited to the above-described embodiments, which are merely illustrative and not restrictive, and many forms may be made by those having ordinary skill in the art without departing from the spirit of the present invention and the scope of the claims, which are to be protected by the present invention.

Claims (8)

1. An image processing method applied to an electronic device, comprising:
acquiring an initial image;
when the initial image includes face information, generating a target mapping relation between the intensity of a target expression parameter and facial features according to the face information of a target face in the initial image, and acquiring the target intensity, wherein the target expression parameter includes at least one of the following: lip radian, ratio of length and width of eyes, and eyebrow spacing;
based on the target mapping relation and the target intensity, adjusting the characteristics of the target face to target facial characteristics corresponding to the target intensity to obtain a target image;
in the case that the initial image is a multi-face image, the acquiring the target intensity includes:
calculating a reference intensity according to the intensity of the target expression parameter on the reference face in the initial image;
acquiring a normal distribution random number according to the reference intensity and the number of the target faces, and taking the normal distribution random number as the target intensity; and each target face corresponds to one target intensity, and the difference value between the normal distribution random number and the reference intensity is smaller than a preset threshold value.
2. The method according to claim 1, wherein the generating a target mapping relationship between the intensity of the target expression parameter and the facial feature according to the face information of the target face in the initial image includes:
constructing a face model according to the target face;
and generating the target mapping relation by adjusting the intensity of the target expression parameter on the face model.
3. The method according to claim 1, wherein, in the case where the initial image is a multi-face image, the generating a target mapping relationship between the intensity of a target expression parameter and facial features according to the face information of a target face in the initial image includes:
constructing a face model according to the target face;
and adjusting the intensity of the target expression parameter on the face model corresponding to the target face to the target intensity corresponding to that face, to obtain a target mapping relation between the target intensity and the target facial feature.
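
As a hedged sketch of how such a mapping might be generated (claims 2 and 3), the snippet below sweeps an expression-parameter intensity over a toy face model and records the resulting features. The linear interpolation between neutral and fully expressive feature values, and all names and numbers, are assumptions for illustration; the claims do not prescribe a particular face model or deformation scheme.

```python
def build_intensity_mapping(neutral_features, full_features, steps=11):
    """Return {intensity: features}, linearly interpolating each feature
    (e.g. lip curvature, eye length-to-width ratio, eyebrow spacing)
    between its neutral value and its fully expressive value."""
    mapping = {}
    for i in range(steps):
        t = i / (steps - 1)  # intensity swept over [0, 1]
        mapping[round(t, 2)] = {
            name: (1 - t) * neutral_features[name] + t * full_features[name]
            for name in neutral_features
        }
    return mapping

neutral = {"lip_curvature": 0.0, "eye_ratio": 2.8, "eyebrow_spacing": 34.0}
smiling = {"lip_curvature": 0.6, "eye_ratio": 3.4, "eyebrow_spacing": 35.5}
mapping = build_intensity_mapping(neutral, smiling)
print(mapping[0.5])  # facial features at medium expression intensity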
4. The method of claim 1, wherein the calculating the reference intensity from the intensity of the target expression parameter on the reference face in the initial image comprises:
receiving a second input of a user to a first face in the initial image;
in response to the second input, acquiring a first intensity of the target expression parameter on the first face, and taking the first intensity as the reference intensity;
or,
determining, for each face in the initial image, the intensity interval in which the intensity of the target expression parameter is located;
taking the intensity interval with the highest intensity as the target intensity interval; and
taking the most frequent intensity within the target intensity interval as the reference intensity.
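
A minimal sketch of this interval-based branch follows, under stated assumptions: the interval width is an illustrative choice, and "the intensity interval with the highest intensity" is read here as the topmost occupied bin; the claim fixes neither.

```python
from collections import Counter

def reference_intensity(face_intensities, interval_width=0.2):
    """Bin per-face intensities into fixed-width intervals, take the topmost
    occupied interval, and return its most frequent intensity."""
    bins = {}
    for intensity in face_intensities:
        bins.setdefault(int(intensity / interval_width), []).append(intensity)
    target_bin = max(bins)               # topmost occupied intensity interval
    counts = Counter(bins[target_bin])   # frequency of each intensity value
    return counts.most_common(1)[0][0]

# Five faces; 0.72 is the most frequent intensity in the topmost interval.
print(reference_intensity([0.31, 0.55, 0.72, 0.72, 0.68]))  # -> 0.72
```
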
5. The method of claim 4, further comprising, prior to generating the target mapping relationship between the intensity of the target expression parameter and the facial feature according to the facial information of the target face in the initial image:
receiving a third input of a user to a second face in the initial image;
in response to the third input, taking the second face as the target face;
or,
and taking a face in the initial image whose intensity of the target expression parameter lies outside the target intensity interval as the target face.
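
For the second branch of claim 5, a short sketch of the automatic selection is given below: faces whose expression intensity falls outside the target intensity interval become the faces to adjust. The half-open interval bounds and all identifiers are assumptions; the claim does not specify them.

```python
def select_target_faces(face_intensities, interval_low, interval_high):
    """Return the IDs of faces whose expression intensity falls outside the
    target intensity interval [interval_low, interval_high)."""
    return [face_id for face_id, intensity in face_intensities.items()
            if not (interval_low <= intensity < interval_high)]

# Faces "a" and "d" lie outside the target interval [0.6, 0.8) and would be adjusted.
print(select_target_faces({"a": 0.31, "b": 0.72, "c": 0.68, "d": 0.95}, 0.6, 0.8))
```
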
6. An image processing apparatus, comprising:
the image acquisition module is used for acquiring an initial image;
the mapping module is used for generating a target mapping relation between the intensity of a target expression parameter and facial features according to the face information of a target face in the initial image and acquiring a target intensity when the initial image includes face information, wherein the target expression parameter includes at least one of the following: lip curvature, the length-to-width ratio of the eyes, and the eyebrow spacing;
the adjusting module is used for adjusting the characteristics of the target face to the target facial characteristics corresponding to the target intensity based on the target mapping relation and the target intensity to obtain a target image;
in the case that the initial image is a multi-face image, the mapping module includes:
a reference intensity calculating unit, configured to calculate a reference intensity according to the intensity of the target expression parameter on a reference face in the initial image; and
an intensity determining unit, configured to obtain normally distributed random numbers according to the reference intensity and the number of target faces, and take the normally distributed random numbers as the target intensities, wherein each target face corresponds to one target intensity, and the difference between each normally distributed random number and the reference intensity is smaller than a preset threshold.
7. An electronic device comprising a processor, a memory, and a computer program stored on the memory and executable on the processor, wherein the computer program, when executed by the processor, implements the steps of the image processing method according to any one of claims 1 to 5.
8. A computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the steps of the image processing method according to any one of claims 1 to 5.
CN202010240796.3A 2020-03-31 2020-03-31 Image processing method, device, electronic equipment and medium Active CN111445417B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010240796.3A CN111445417B (en) 2020-03-31 2020-03-31 Image processing method, device, electronic equipment and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010240796.3A CN111445417B (en) 2020-03-31 2020-03-31 Image processing method, device, electronic equipment and medium

Publications (2)

Publication Number Publication Date
CN111445417A CN111445417A (en) 2020-07-24
CN111445417B true CN111445417B (en) 2023-12-19

Family

ID=71652610

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010240796.3A Active CN111445417B (en) 2020-03-31 2020-03-31 Image processing method, device, electronic equipment and medium

Country Status (1)

Country Link
CN (1) CN111445417B (en)

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20180057096A (en) * 2016-11-21 2018-05-30 삼성전자주식회사 Device and method to perform recognizing and training face expression
US20200090392A1 (en) * 2018-09-19 2020-03-19 XRSpace CO., LTD. Method of Facial Expression Generation with Data Fusion

Patent Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101021943A (en) * 2007-04-06 2007-08-22 北京中星微电子有限公司 Image regulating method and system
CN106023067A (en) * 2016-05-17 2016-10-12 珠海市魅族科技有限公司 Image processing method and device
CN106056533A (en) * 2016-05-26 2016-10-26 维沃移动通信有限公司 Photographing method and terminal
CN107833177A (en) * 2017-10-31 2018-03-23 维沃移动通信有限公司 A kind of image processing method and mobile terminal
CN110096925A (en) * 2018-01-30 2019-08-06 普天信息技术有限公司 Enhancement Method, acquisition methods and the device of Facial Expression Image
CN108256505A (en) * 2018-02-12 2018-07-06 腾讯科技(深圳)有限公司 Image processing method and device
CN108765264A (en) * 2018-05-21 2018-11-06 深圳市梦网科技发展有限公司 Image U.S. face method, apparatus, equipment and storage medium
CN108985241A (en) * 2018-07-23 2018-12-11 腾讯科技(深圳)有限公司 Image processing method, device, computer equipment and storage medium
CN109961496A (en) * 2019-02-22 2019-07-02 厦门美图之家科技有限公司 Expression driving method and expression driving device
CN110177205A (en) * 2019-05-20 2019-08-27 深圳壹账通智能科技有限公司 Terminal device, photographic method and computer readable storage medium based on micro- expression
CN110225196A (en) * 2019-05-30 2019-09-10 维沃移动通信有限公司 Terminal control method and terminal device
CN110264544A (en) * 2019-05-30 2019-09-20 腾讯科技(深圳)有限公司 Image processing method and device, storage medium and electronic device
CN110298329A (en) * 2019-07-03 2019-10-01 北京字节跳动网络技术有限公司 Expression degree prediction model acquisition methods and device, storage medium and terminal

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Yu Chongji et al. An artificial facial expression synthesis algorithm based on expression decomposition and warp deformation. Journal of Image and Graphics, 2006, Vol. 11, No. 3, pp. 372-378. *

Also Published As

Publication number Publication date
CN111445417A (en) 2020-07-24

Similar Documents

Publication Publication Date Title
CN109361865B (en) Shooting method and terminal
CN109361869B (en) Shooting method and terminal
CN107621738B (en) Control method of mobile terminal and mobile terminal
CN109688322B (en) Method and device for generating high dynamic range image and mobile terminal
CN110365907B (en) Photographing method and device and electronic equipment
CN109361867B (en) Filter processing method and mobile terminal
CN109461117B (en) Image processing method and mobile terminal
CN108492246B (en) Image processing method and device and mobile terminal
CN108989672B (en) Shooting method and mobile terminal
CN111147752B (en) Zoom factor adjusting method, electronic device, and medium
CN108040209B (en) Shooting method and mobile terminal
CN108683850B (en) Shooting prompting method and mobile terminal
CN107730460B (en) Image processing method and mobile terminal
CN110198413B (en) Video shooting method, video shooting device and electronic equipment
CN109462745B (en) White balance processing method and mobile terminal
CN108848309B (en) Camera program starting method and mobile terminal
CN107644396B (en) Lip color adjusting method and device
CN109819166B (en) Image processing method and electronic equipment
CN109727212B (en) Image processing method and mobile terminal
CN109448069B (en) Template generation method and mobile terminal
CN109474784B (en) Preview image processing method and terminal equipment
CN107959755B (en) Photographing method, mobile terminal and computer readable storage medium
CN110855901A (en) Camera exposure time control method and electronic equipment
CN110708475B (en) Exposure parameter determination method, electronic equipment and storage medium
CN110602387B (en) Shooting method and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant