CN110248104B - Image processing method and device and electronic equipment - Google Patents


Info

Publication number
CN110248104B
Authority
CN
China
Prior art keywords
user
image
adjustment
users
frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910661458.4A
Other languages
Chinese (zh)
Other versions
CN110248104A (en
Inventor
吴谷河
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Lenovo Beijing Ltd
Original Assignee
Lenovo Beijing Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Lenovo Beijing Ltd
Priority to CN201910661458.4A
Publication of CN110248104A
Application granted
Publication of CN110248104B
Anticipated expiration
Legal status: Active

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/61 Control of cameras or camera modules based on recognised objects
    • H04N23/611 Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body
    • H04N23/64 Computer-aided capture of images, e.g. transfer from script file into camera, check of taken image quality, advice or proposal for image composition or decision on when to take image
    • H04N23/80 Camera processing pipelines; Components thereof

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Processing Or Creating Images (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application discloses an image processing method, an image processing apparatus, and an electronic device. The method includes: obtaining a frame of image, where the frame of image contains at least one display object; identifying biometric information of the at least one display object in the frame of image to obtain user information of at least one user; determining, based on the user information, an adjustment parameter corresponding to each user of the at least one user; and adjusting, in the frame of image, the display object corresponding to each user of the at least one user based on that user's adjustment parameter, so that each adjusted display object in the frame of image has a presentation effect after adjustment that differs from its presentation effect before adjustment.

Description

Image processing method and device and electronic equipment
Technical Field
The present application relates to the field of image processing technologies, and in particular, to an image processing method and apparatus, and an electronic device.
Background
At present, when taking a picture with a mobile phone, a user can achieve a desired beautifying effect by adjusting the camera's beautification function.
However, the user has to readjust the beautification template many times each time the camera is opened to take a picture, which results in low imaging efficiency.
Disclosure of Invention
In view of the above, the present application provides an image processing method, an image processing apparatus and an electronic device, so as to improve imaging efficiency.
The application provides an image processing method, which comprises the following steps:
obtaining a frame of image, where the frame of image contains at least one display object;
identifying biometric information of the at least one display object in the frame of image to obtain user information of at least one user;
determining, based on the user information, an adjustment parameter corresponding to each user of the at least one user;
adjusting, in the frame of image, the display object corresponding to each user of the at least one user based on that user's adjustment parameter, so that each adjusted display object in the frame of image has a presentation effect after adjustment that differs from its presentation effect before adjustment.
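As an illustrative sketch only, the four steps above might be expressed as follows. The parameter store, user identifiers, and parameter names are assumptions made for illustration and are not part of the claims:

```python
# Hypothetical per-user store of preset adjustment parameters (step 3's input).
PARAM_STORE = {
    "userA": {"nose_height_mm": 2},
    "userB": {"skin_tone": "bright"},
}

def process_frame(frame_objects):
    """Identify the users of the display objects (step 2), look up each
    user's preset adjustment parameters (step 3), and pair each display
    object with the parameters used to adjust it (step 4)."""
    user_ids = {obj["user_id"] for obj in frame_objects}
    params = {uid: PARAM_STORE.get(uid, {}) for uid in user_ids}
    return [(obj["user_id"], params[obj["user_id"]]) for obj in frame_objects]
```

Here each frame object is reduced to a dictionary carrying a user identifier; a real pipeline would carry pixel regions and run the biometric identification on them.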
In the above method, optionally, the adjustment parameter corresponding to each user of the at least one user corresponds to an adjustment target in the frame of image, where:
the adjustment targets corresponding to different users of the at least one user are different.
In the above method, optionally, the adjustment parameter corresponding to each user of the at least one user corresponds to an adjustment target in the frame of image, where:
the adjustment targets corresponding to the users of the at least one user are the same, and the adjustment parameters on that adjustment target differ between users.
In the above method, optionally, each user of the at least one user corresponds to one or more adjustment parameters, and the adjustment parameters correspond to adjustment targets in the frame of image; wherein:
among the adjustment targets corresponding to the users of the at least one user, at least some adjustment targets differ between users.
The above method, optionally, wherein:
on the same adjustment target, the adjustment parameters differ between the users of the at least one user.
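The claim variant above (same adjustment target, different parameter values per user) can be pictured with a small nested mapping. The structure and values below are illustrative assumptions, not prescribed by the patent:

```python
# Assumed nested mapping: user -> adjustment target -> parameter values.
adjustment_params = {
    "userA": {"nose": {"height_mm": 2}},
    "userB": {"nose": {"height_mm": 4}},
}

def param_value(user, target, name):
    """Look up one user's parameter value on one adjustment target.

    Returns None when the user, target, or parameter is absent."""
    return adjustment_params.get(user, {}).get(target, {}).get(name)
```

Both users share the target "nose" while carrying different height values, which is exactly the distinction the claim draws.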
In the above method, optionally, before adjusting the display objects corresponding to the users of the at least one user based on their respective adjustment parameters, the method further includes:
judging whether the frame of image contains an image area acquired by capturing a real user corresponding to a display object;
if so, executing the adjustment, in the frame of image, of the display objects corresponding to the users of the at least one user based on their respective adjustment parameters;
and if not, processing the display objects corresponding to the users of the at least one user in the frame of image according to preset processing parameters.
In the above method, optionally, the processing parameters are parameters obtained by modifying the adjustment parameters respectively corresponding to the users of the at least one user;
wherein the adjustment increment value of a processing parameter for a display object is smaller than the adjustment increment value of the corresponding adjustment parameter for that display object.
Optionally, in the above method, determining, based on the user information, the adjustment parameters respectively corresponding to the users of the at least one user includes:
obtaining historical operation data of each user of the at least one user based on the user information, where the historical operation data includes operation data of adjusting images when the user performed image processing;
determining, based on the historical operation data, the adjustment parameter corresponding to each user of the at least one user.
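One way the step above could be realized is to aggregate each user's historical adjustment operations into a preset value per target. Averaging is an illustrative policy chosen here for concreteness; the patent does not prescribe it:

```python
def params_from_history(history):
    """Derive a per-target adjustment parameter from a user's historical
    operation records by averaging the values applied to each target.

    history: list of {target: numeric value} operation records."""
    totals, counts = {}, {}
    for record in history:
        for target, value in record.items():
            totals[target] = totals.get(target, 0) + value
            counts[target] = counts.get(target, 0) + 1
    return {target: totals[target] / counts[target] for target in totals}
```

A user who repeatedly raised the nose height by 2 mm and 4 mm in past sessions would thus get a preset of 3 mm.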
The present application also provides an image processing apparatus including:
an obtaining unit configured to obtain a frame of image, where the frame of image contains at least one display object;
an identification unit configured to identify biometric information of the at least one display object in the frame of image to obtain user information of at least one user;
a determining unit configured to determine, based on the user information, an adjustment parameter corresponding to each user of the at least one user;
an adjusting unit configured to adjust, in the frame of image, the display object corresponding to each user of the at least one user based on that user's adjustment parameter, so that each adjusted display object in the frame of image has a presentation effect after adjustment that differs from its presentation effect before adjustment.
The present application further provides an electronic device, including:
an image acquisition device for acquiring a frame of image, where the frame of image contains at least one display object;
an image display device for presenting the frame of image;
a processor for identifying biometric information of the at least one display object in the frame of image to obtain user information of at least one user; determining, based on the user information, an adjustment parameter corresponding to each user of the at least one user; and adjusting, in the frame of image, the display object corresponding to each user based on that user's adjustment parameter, so that each adjusted display object in the frame of image has a presentation effect after adjustment that differs from its presentation effect before adjustment.
According to the above technical solution, after an image is obtained, the biometric information of the display objects in the image is identified to obtain user information of at least one user; adjustment parameters corresponding to each user are determined based on the user information; and the corresponding display objects in the image are adjusted based on those per-user adjustment parameters, so that each adjusted display object has a presentation effect that differs from its presentation effect before adjustment. In this way, corresponding adjustment parameters are obtained in advance for each user appearing in an image, so that when a picture is taken the adjustment parameters for each user can be applied directly to that user's display object, without the user performing multiple manual adjustments each time. This reduces the time consumed by adjustment and improves imaging efficiency.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present application; other drawings can be obtained from them by those skilled in the art without creative effort.
Fig. 1 is a flowchart of an image processing method according to an embodiment of the present application;
FIGS. 2-4 are exemplary diagrams of embodiments of the present application, respectively;
fig. 5 is another flowchart of an image processing method according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of an image processing apparatus according to a second embodiment of the present application;
fig. 7 is a schematic structural diagram of an electronic device according to a third embodiment of the present application;
fig. 8 is another exemplary diagram of an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Fig. 1 is a flowchart of an implementation of an image processing method according to a first embodiment of the present application. The method is applied to an electronic device capable of image processing, such as a mobile phone, a tablet, or another terminal device with an image capture component. This embodiment mainly adjusts, while the electronic device is capturing images, the display objects corresponding to the users in an image, so as to reduce the time consumed by adjustment and improve imaging efficiency.
Specifically, the method in this embodiment may include the following steps:
step 101: a frame image is obtained.
The frame of image contains at least one display object. A display object may be an image region object of one or more users. A user's region object may cover the user's entire region, such as the whole body or whole head, or only a partial region of the user, such as the side face, the left half of the body, or the corners of the mouth, as shown in fig. 2.
It should be noted that a frame of image in this embodiment may be a single frame of still-image data or a single frame of video data. Accordingly, this embodiment is applicable to image processing during still-image capture and also to image processing during video capture; in the latter case, the image processing scheme of this embodiment is executed on each frame of the video data.
In addition, in this embodiment, one frame of image may contain at least one display object of a single user, or at least one display object of each of a plurality of users. That is, one frame of image may be formed by shooting one user or by shooting a plurality of users, and accordingly, the display objects in one frame of image may correspond to one user or to a plurality of users, such as user A and user B shown in fig. 3.
It should be noted that a user in this embodiment may be a real user, such as a real person or animal (for example, a person in a self-portrait), or a virtually imaged person or animal, such as a person or animal in a poster or advertisement image. An image may also contain both a real user and a virtual user, for example a real fan together with a virtually imaged idol in a photo of the fan taken with an idol poster, as shown in fig. 4.
Step 102: biometric information of at least one display object in one frame of image is identified to obtain user information of at least one user.
Accordingly, in this embodiment, a face recognition algorithm, a visual recognition algorithm, or the like may be used to recognize the biometric information of the display objects in the frame of image, so as to identify the user to which each of the at least one display object belongs and then obtain the user information of those users.
It should be noted that the display objects in one frame of image may correspond to one user or to a plurality of users. Therefore, when identifying the display objects, this embodiment can identify either one user or a plurality of users to which they belong, and the obtained user information may accordingly belong to one user or to a plurality of users.
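The identification step can be pictured as matching an extracted feature vector against enrolled users. The toy matcher below is a stand-in for a real face-recognition model; the enrolled vectors, distance metric, and threshold are all assumptions made for illustration:

```python
# Hypothetical enrolled "feature vectors" per user (a real system would
# produce these with a face-recognition model, not store them literally).
KNOWN_USERS = {"userA": (0.1, 0.9), "userB": (0.8, 0.2)}

def identify(feature, threshold=0.25):
    """Return the enrolled user whose feature vector is nearest to
    `feature` (Euclidean distance), or None if no user is within
    `threshold` of it."""
    best_user, best_dist = None, float("inf")
    for user, ref in KNOWN_USERS.items():
        dist = sum((a - b) ** 2 for a, b in zip(feature, ref)) ** 0.5
        if dist < best_dist:
            best_user, best_dist = user, dist
    return best_user if best_dist <= threshold else None
```

The threshold makes the matcher return None for display objects that belong to no enrolled user, which corresponds to a display object for which no user information can be obtained.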
Step 103: based on the user information, adjustment parameters respectively corresponding to users of the at least one user are determined.
In this embodiment, adjustment parameters are preset for users; each adjustment parameter corresponds to the user to which it belongs, and the adjustment parameters may be the same or different for different users. For example, user A has adjustment parameter 1 and user B has adjustment parameter 2.
It should be noted that one user may correspond to one or more adjustment parameters, each used to adjust a corresponding adjustment target, such as the nose, the eyes, or the color tone.
Step 104: adjust, in the frame of image, the display objects corresponding to the users of the at least one user based on their respective adjustment parameters, so that each adjusted display object in the frame of image has a presentation effect after adjustment that differs from its presentation effect before adjustment.
If the display objects in one frame of image belong to a single user, they are adjusted with that user's adjustment parameter, so that the display objects present different effects before and after adjustment and the user's adjustment needs are met.
If the display objects in one frame of image belong to a plurality of users, the display object corresponding to each user is adjusted with that user's own adjustment parameter: the display object of user A is adjusted with user A's adjustment parameter 1, the display object of user B with user B's adjustment parameter 2, and so on. As a result, the display object of user A presents an effect after adjustment that differs from the effect before adjustment and satisfies user A's requirements, and likewise the display object of user B presents an effect that satisfies user B's requirements.
As can be seen from the foregoing, in the image processing method provided in this embodiment, after an image is obtained, the biometric information of the display objects in the image is identified to obtain user information of at least one user; adjustment parameters corresponding to each user are determined based on the user information; and the corresponding display objects in the image are adjusted based on those per-user adjustment parameters, so that each adjusted display object has a presentation effect that differs from its effect before adjustment. Because corresponding adjustment parameters are obtained for each user in the image, the parameters can be applied directly when a user takes a picture, without multiple manual adjustments each time, thereby reducing the time consumed by adjustment and improving imaging efficiency.
In one implementation manner, the adjustment parameter in this embodiment corresponds to an adjustment target in the image, where the adjustment target may be one display object, or an object combination composed of a plurality of display objects, or a partial region in one display object.
When each display object in one frame image corresponds to a plurality of users, the adjustment targets corresponding to the different users may be the same or different.
That is, after obtaining the user information of the one or more users identified in the image, the adjustment parameters differ between users. In one case, the adjustment targets corresponding to the adjustment parameters differ: for example, adjustment parameter 1 of user A targets the nose, while adjustment parameter 2 of user B targets the lips.
In another case, the adjustment targets corresponding to the adjustment parameters are the same, but the parameter values on that target differ. For example, the adjustment parameters of user A and user B both target the nose, but user A's parameter increases the nose height by 2 mm while user B's increases it by 4 mm. Or both parameters target skin color, with user A's adjusting the skin toward bright white and user B's toward dark red, and so on.
In one implementation, a user in the image may have one or more adjustment parameters, and a user's different adjustment parameters correspond to different adjustment targets in that user's display object. For example, adjustment parameter 1 of user A includes adjustment parameter 11 and adjustment parameter 12; adjustment parameter 11 corresponds to adjustment target 11 in the image, such as user A's nose, and adjustment parameter 12 corresponds to adjustment target 12, such as user A's skin color, including the skin color of the nose, cheeks, and ears, and so on.
When the display objects in one frame of image correspond to a plurality of users, each user's adjustment parameters correspond to one or more adjustment targets, and the adjustment targets of different users may overlap or differ. That is, different users may share some adjustment targets while having others that differ. For example, the adjustment parameters of user A correspond to adjustment targets 11 and 12, while the adjustment parameters of user B correspond to adjustment target 11, which is shared with user A, and to adjustment target 21, which differs from every adjustment target of user A.
For example, the adjustment parameters of the user a correspond to the adjustment target nose and eyes, and the adjustment parameters of the user B correspond to the adjustment target nose and lips.
Correspondingly, the adjustment parameters of different users on the same adjustment target may be the same or different, and so may their parameter values. For example, user A's parameter on the nose adjusts the height, while user B's adjusts both the width and the height; the height values may both be an increase of 2 mm, or user A's height increase may be 2 mm while user B's is 4 mm.
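The overlapping and distinct adjustment targets described above reduce to simple set operations over each user's target map. The user names and targets below are the illustrative ones used in the text:

```python
# Illustrative per-user target maps: "nose" is shared, "eyes"/"lips" are not.
userA_params = {"nose": {"height_mm": 2}, "eyes": {"enlarge": 1.1}}
userB_params = {"nose": {"height_mm": 4, "width_mm": 1}, "lips": {"tint": 1}}

shared_targets = set(userA_params) & set(userB_params)    # targets both users adjust
distinct_targets = set(userA_params) ^ set(userB_params)  # targets only one user adjusts
```

On the shared target "nose", the two users carry different parameter values (2 mm versus 4 mm height), matching the second variant described above.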
As can be seen from the above, this embodiment can implement image adjustment that better matches each user's characteristics and present an adjustment effect that better satisfies each user's personalized requirements.
In one implementation, after identifying a user in the image and obtaining the user's information, this embodiment determines whether the user in the image is a real user or a virtually imaged user, i.e., whether the image was captured of a real user or of a virtually imaged user such as a figure in a picture or movie (for example, a real person versus a poster figure). Then, the display objects of both real users and virtually imaged users may be adjusted in the image according to the users' adjustment parameters; or only the display objects of real users may be adjusted according to the real users' adjustment parameters; or only the display objects of virtually imaged users may be adjusted according to those users' adjustment parameters. When adjusting the display objects of real and virtually imaged users, the adjustment parameters may be appropriately modified and optimized according to the differences between real and virtually imaged users, and the optimized adjustment parameters are then used to adjust the corresponding display objects in the image.
For example, in this embodiment, before performing step 104, a performing step may also be performed, as shown in fig. 5:
step 105: and judging whether one frame of image contains an image area acquired by a real user corresponding to the display object, if so, executing the step 104, and otherwise, executing the step 106.
In this embodiment, the identification and determination may be performed according to light, brightness, iris, and the like corresponding to the display object in the image, so as to determine whether the image contains an image region for collecting a real user, that is, whether the display object in the image is a display object of a real user or a display object of a virtual imaging user, and if the display object in the image is determined to be nose and eyes of a real person, or nose and eyes of a virtual imaging person in the image, the real user in the image is determined.
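A toy check echoing the cues named above (lighting, brightness, iris) might look like the following. The statistics, field names, and threshold are assumptions; a production system would derive such measurements from the pixels with a liveness-detection model rather than receive them precomputed:

```python
def looks_real(region_stats, min_brightness_var=0.02):
    """Heuristically judge whether a display object's region was captured
    from a real user: require a detected iris and enough natural
    brightness variation (flat prints tend to have less).

    region_stats: dict of precomputed measurements for one region."""
    if not region_stats.get("iris_detected", False):
        return False
    return region_stats.get("brightness_variance", 0.0) >= min_brightness_var
```

A frame would then take the step-104 branch if any display object passes this check, and the step-106 branch otherwise.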
It should be noted that the display objects in one frame of image may belong to one or more users, that is, one or more users may be identified in one frame of image. These users may all be real users, may all be virtually imaged users, or may be partly real and partly virtually imaged; that is, the image may be captured entirely from real users, entirely from virtually imaged users, or from both together. For example, user A and user B are identified in the image, where user A is a real user and user B is a virtually imaged user in a poster, as in a photo of a fan together with an idol poster.
Step 106: process the display objects respectively corresponding to the users of the at least one user in the frame of image according to preset processing parameters.
In this embodiment, after it is determined that the image contains an image area formed by capturing a real user, step 104 may be executed: for each real user's display object, the corresponding display object in the image is adjusted based on that real user's adjustment parameter, so that the adjusted display object presents an effect different from the effect before adjustment.
For the display object of a virtually imaged user, the corresponding display object in the image may be adjusted based on that user's adjustment parameter; or it may be adjusted with a preset processing parameter, which may be a parameter obtained by modifying the virtually imaged user's adjustment parameter; or no adjustment may be made to the virtually imaged user's display object at all.
If it is determined that the image contains no image area formed by capturing a real user, step 106 may be executed directly to process the display objects in the image according to the preset processing parameters.
In that case, the display object of the virtually imaged user may be adjusted according to that user's adjustment parameter; or the adjustment parameter may be modified to obtain a processing parameter, which then replaces the original adjustment parameter when adjusting the virtually imaged user's display object; or, if it is determined that the image was not formed by capturing a real user, no image processing is performed at all.
It should be noted that the processing parameters are obtained by modifying the adjustment parameters of the user to which they belong; specifically, the adjustment increment value for a display object in a processing parameter is smaller than the adjustment increment value for that display object in the original adjustment parameter. That is, when it is found that the image was not captured from a real user, the adjustment parameter of the user in the image may be weakened, i.e., its adjustment increment value reduced. For example, if the adjustment parameter increases the nose height by 2 mm, the corresponding processing parameter increases it by only 1 mm, or not at all. Similarly, if the adjustment parameter raises the skin tone brightness by 3 levels, the corresponding processing parameter raises it by only 1 level, or not at all.
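The derivation of processing parameters from adjustment parameters can be sketched as scaling down each numeric increment. The halving factor is an assumption for illustration; the patent only requires the processing increment to be smaller than the original:

```python
def derive_processing_params(adjustment_params, scale=0.5):
    """Derive processing parameters by shrinking each numeric adjustment
    increment, so a virtually imaged user's display object is adjusted
    less than a real user's would be. Non-numeric values pass through."""
    return {name: (value * scale if isinstance(value, (int, float)) else value)
            for name, value in adjustment_params.items()}
```

With the 2 mm nose-height example above, this yields a 1 mm processing increment, matching the example in the text.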
That is to say, this embodiment can identify whether a user in the image is a real user or a virtually imaged user, and then adjust the display objects of real and virtually imaged users in different ways. For example, in an image of a fan photographed together with an idol poster, the face or body of the fan, as a real user, is adjusted to present a different effect, while the idol in the poster, as a virtually imaged user, is left unadjusted. By adjusting the display objects of real users while adjusting those of virtually imaged users differently or not at all, the image can be processed more accurately, meeting user requirements while avoiding the poor results caused by over-processing.
In one implementation, the adjustment parameters of the users may be preset. Specifically, the adjustment parameters of all users may be set in advance based on each user's historical operations and stored in a remote server, which reduces the computation required during real-time image processing. After an image needs to be processed on an electronic device such as a mobile phone and the user information has been identified, the adjustment parameter corresponding to the identified user information can be looked up among the preset adjustment parameters of each user in the remote server. Here, the user may be the one who uses the current electronic device for image acquisition or image processing, and the user's historical operations are the image processing operations the user performed on the current electronic device;
or, to improve the timeliness and accuracy of the parameters while avoiding the computation of precalculation and the time consumed by data transmission, the adjustment parameters of the user may be obtained by analyzing, in real time, the historical operation data of the user corresponding to the user information, after the current electronic device needs to process the image and has identified the user information.
For example, in this embodiment, step 103 can be implemented as follows:
based on the user information, obtaining historical operation data of the users of the at least one user, where the historical operation data includes operation data for adjusting an image when the user performs image acquisition or processing. That is, for the users to whom the user information identified in the image belongs, the historical operation data generated when those users performed image processing on the current electronic device or on other electronic devices is obtained, such as operation data of user A performing nose beautification while capturing images with an electronic device, or operation data of user B performing skin-color adjustment while beautifying images with an electronic device;
and then, based on the historical operation data of the users, determining the adjustment parameters respectively corresponding to the users in at least one user.
Specifically, in this embodiment, a historical adjustment parameter of the display object corresponding to a user is extracted from the historical operation data and used directly as that user's adjustment parameter; alternatively, the parameter values in the historical adjustment parameters may be averaged, reduced to their extreme (maximum or minimum) value, or weighted to obtain the user's adjustment parameter.
For example, in this embodiment, operation data of nose-height adjustment performed by user A when capturing images with the electronic device is obtained, the adjustment increment value of the nose height is extracted, and that increment value is used as user A's standard adjustment parameter for the nose in subsequent image processing;
for another example, in this embodiment, operation data of skin-color adjustment performed by user B when beautifying images with the electronic device is obtained, and the skin-color adjustment color values are extracted, yielding the color values corresponding to user B's multiple historical operations. From these, the color value used most frequently (or the largest or smallest value, as appropriate) is selected as user B's standard skin-color adjustment parameter for subsequent image processing.
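As an illustrative sketch of the strategies just listed (averaging, taking an extreme, or picking the most-used value) — the function name and strategy labels are assumptions, not the patent's terminology:

```python
from collections import Counter
from statistics import mean

def derive_adjustment(history, strategy="mode"):
    """Collapse a user's historical values for one adjustment target
    (e.g. skin-color adjustment values) into a single standard parameter."""
    if strategy == "mean":
        return mean(history)
    if strategy == "max":
        return max(history)
    if strategy == "min":
        return min(history)
    # default "mode": the value the user applied most often
    return Counter(history).most_common(1)[0][0]

derive_adjustment([3, 3, 1, 2])          # -> 3, the most frequently used value
derive_adjustment([1, 2, 3], "mean")     # -> 2
```

User B's case corresponds to the default "mode" strategy; user A's single extracted increment is the degenerate case of a one-element history.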
Referring to fig. 6, a schematic structural diagram of an image processing apparatus according to a second embodiment of the present disclosure is shown. The apparatus is suitable for an electronic device capable of performing image processing, such as a mobile phone, a tablet, or a terminal device with an image capturing component. This embodiment mainly adjusts the display object corresponding to a user in an image when the electronic device performs image acquisition, thereby reducing the time consumed by adjustment and improving the imaging efficiency of the image.
Specifically, the apparatus of this embodiment may include the following structure:
an obtaining unit 601, configured to obtain a frame of image, where the frame of image includes at least one display object;
an identifying unit 602, configured to identify biometric information of the at least one display object in the frame of image to obtain user information of at least one user;
a determining unit 603 configured to determine, based on the user information, adjustment parameters respectively corresponding to the users of the at least one user;
an adjusting unit 604, configured to adjust, based on adjustment parameters respectively corresponding to the users of the at least one user, display objects respectively corresponding to the users of the at least one user in the frame of image, so that a rendering effect of the adjusted display objects in the frame of image after adjustment is different from a rendering effect before adjustment.
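For illustration, the cooperation of units 601–604 can be sketched as a single pipeline; the three callables are stand-ins for the recognizer, parameter store, and renderer, none of which the patent fixes to a concrete API:

```python
def process_frame(frame, identify, lookup_params, apply_adjustment):
    """Mirror units 601-604: obtain a frame, identify its users,
    determine each user's parameters, adjust each user's display object."""
    for user in identify(frame):                 # biometric info -> users
        params = lookup_params(user)             # per-user adjustment parameters
        frame = apply_adjustment(frame, user, params)
    return frame

result = process_frame(
    {"alice": "raw", "bob": "raw"},
    identify=lambda f: sorted(f),
    lookup_params=lambda u: {"skin_brightness": 1},
    apply_adjustment=lambda f, u, p: {**f, u: f"brightness{p['skin_brightness']:+d}"},
)
# result: {"alice": "brightness+1", "bob": "brightness+1"}
```

The toy frame is a dictionary only so that the example runs; in the apparatus the frame is pixel data and the adjustment is a rendering operation.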
As can be seen from the above, in the image processing apparatus provided in the second embodiment, after an image is obtained, the biometric information of the display objects in the image is identified to obtain user information of at least one user; based on the user information, the adjustment parameter corresponding to each user is determined; and the corresponding display objects in the image are then adjusted based on those parameters, so that the presentation effect of the adjusted display objects after adjustment differs from the presentation effect before adjustment. Thus, in this embodiment, corresponding adjustment parameters are obtained for each user in the image, so that when a picture is taken the user's own adjustment parameters can be applied directly to the corresponding display object, without the user performing multiple manual adjustments each time, thereby reducing the time consumed by adjustment and improving the imaging efficiency of the image.
In one implementation manner, the adjustment parameters respectively corresponding to the users in the at least one user correspond to adjustment targets in the frame of image, where:
the users in the at least one user have different corresponding adjustment targets;
or the adjustment targets respectively corresponding to the users in the at least one user are the same and the adjustment parameters on the adjustment targets are different.
In another implementation manner, the number of the adjustment parameters respectively corresponding to the users in the at least one user is one or more, and the adjustment parameters correspond to adjustment targets in the frame of image; wherein:
different adjustment targets exist in the adjustment targets respectively corresponding to the users in the at least one user.
Further, the adjustment parameters on the same adjustment target are different between the users of the at least one user.
In addition, in this embodiment, the adjusting unit 604 is further configured to, before adjusting the display objects respectively corresponding to the users of the at least one user in the frame image based on the adjustment parameters respectively corresponding to the users of the at least one user:
judging whether the frame of image contains an image area acquired by capturing the real user corresponding to the display object; if so, adjusting the display objects respectively corresponding to the users of the at least one user in the frame of image based on the adjustment parameters respectively corresponding to those users; and if not, processing the display objects respectively corresponding to the users of the at least one user in the frame of image according to preset processing parameters.
For example, the processing parameters are: parameters obtained by modifying the adjustment parameters respectively corresponding to the users of the at least one user;
wherein the adjustment increment value of the processing parameter for the display object is smaller than the adjustment increment value of the adjustment parameter for the display object.
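The real/virtual branch above can be sketched as follows; `is_real_user` stands in for the unspecified real/virtual classifier, and the attenuation factor is an assumption for illustration:

```python
def choose_params(region, user_params, is_real_user, attenuation=0.5):
    """A real user's image region gets the user's adjustment parameters
    unchanged; a virtual imaging user's region (e.g. a poster) gets
    processing parameters whose increments are attenuated."""
    if is_real_user(region):
        return dict(user_params)
    return {k: v * attenuation for k, v in user_params.items()}

live = lambda region: region == "live"
choose_params("live", {"nose_height_mm": 2.0}, live)    # -> {"nose_height_mm": 2.0}
choose_params("poster", {"nose_height_mm": 2.0}, live)  # -> {"nose_height_mm": 1.0}
```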
In an implementation manner, when determining, based on the user information, the adjustment parameters respectively corresponding to the users of the at least one user, the determining unit 603 may specifically be implemented in the following manner:
obtaining historical operation data of the user in the at least one user based on the user information, wherein the historical operation data comprises operation data for adjusting an image when the user performs image processing; determining adjustment parameters respectively corresponding to the users of the at least one user based on the historical operating data;
or, in this embodiment, the historical operation data of each user may be obtained in advance and stored in a remote server such as a cloud server, where the historical operation data includes operation data for adjusting an image when the user performs image processing; the adjustment parameters respectively corresponding to each user are derived from this historical operation data, and the determining unit 603 may then select, from among those adjustment parameters and based on the user information, the adjustment parameters respectively corresponding to the users of the at least one user.
Referring to fig. 7, a schematic structural diagram of an electronic device according to a third embodiment of the present disclosure is shown. The electronic device may be one capable of performing image processing, such as a mobile phone, a tablet, or a terminal device with an image capturing device. This embodiment mainly adjusts the display object corresponding to a user in an image when the electronic device performs image acquisition, thereby reducing the time consumed by adjustment and improving the imaging efficiency of the image.
Specifically, the electronic device may include the following structure:
the image capturing device 701 is configured to obtain a frame of image, where the frame of image includes at least one display object.
The image capturing device 701 may be a camera or a video camera for obtaining an image; alternatively, the image capturing device 701 may be an image data transmission interface for reading an image from a memory of the electronic device.
An image display device 702, such as a display or a touch screen, configured to present the frame of image.
A processor 703 for identifying the biometric information of the at least one display object in the frame of image to obtain user information of at least one user; determining adjustment parameters respectively corresponding to the users of the at least one user based on the user information; adjusting display objects respectively corresponding to the users of the at least one user in the frame of image based on adjustment parameters respectively corresponding to the users of the at least one user, so that the adjusted display objects in the frame of image have a different presentation effect after adjustment from the presentation effect before adjustment.
As can be seen from the above, in the electronic device provided in the third embodiment, after an image is obtained, the biometric information of the display objects in the image is identified to obtain user information of at least one user; based on the user information, the adjustment parameter corresponding to each user is determined; and the corresponding display objects in the image are then adjusted based on those parameters, so that the presentation effect of the adjusted display objects after adjustment differs from the presentation effect before adjustment. Thus, in this embodiment, corresponding adjustment parameters are obtained for each user in the image, so that when a picture is taken the user's own adjustment parameters can be applied directly to the corresponding display object, without the user performing multiple manual adjustments each time, thereby reducing the time consumed by adjustment and improving the imaging efficiency of the image.
In one implementation manner, the adjustment parameters respectively corresponding to the users in the at least one user correspond to adjustment targets in the frame of image, where:
the users in the at least one user have different corresponding adjustment targets;
or the adjustment targets respectively corresponding to the users in the at least one user are the same and the adjustment parameters on the adjustment targets are different.
In another implementation manner, the number of the adjustment parameters respectively corresponding to the users in the at least one user is one or more, and the adjustment parameters correspond to adjustment targets in the frame of image; wherein:
different adjustment targets exist in the adjustment targets respectively corresponding to the users in the at least one user.
Further, the adjustment parameters on the same adjustment target are different between the users of the at least one user.
In addition, in this embodiment, before adjusting the display objects respectively corresponding to the users of the at least one user in the frame of image based on the adjustment parameters respectively corresponding to the users of the at least one user, the processor 703 is further configured to:
judging whether the frame of image contains an image area acquired by capturing the real user corresponding to the display object; if so, adjusting the display objects respectively corresponding to the users of the at least one user in the frame of image based on the adjustment parameters respectively corresponding to those users; and if not, processing the display objects respectively corresponding to the users of the at least one user in the frame of image according to preset processing parameters.
For example, the processing parameters are: parameters obtained by modifying the adjustment parameters respectively corresponding to the users of the at least one user;
wherein the adjustment increment value of the processing parameter for the display object is smaller than the adjustment increment value of the adjustment parameter for the display object.
In an implementation manner, when determining, based on the user information, the adjustment parameters respectively corresponding to the users of the at least one user, the processor 703 may specifically be implemented in the following manner:
obtaining historical operation data of the user in the at least one user based on the user information, wherein the historical operation data comprises operation data for adjusting an image when the user performs image processing; determining adjustment parameters respectively corresponding to the users of the at least one user based on the historical operating data;
or, in this embodiment, the historical operation data of each user may be obtained in advance and stored in a remote server such as a cloud server, where the historical operation data includes operation data for adjusting an image when the user performs image processing; the adjustment parameters respectively corresponding to each user are derived from this historical operation data, and the processor 703 may then select, from among those adjustment parameters and based on the user information, the adjustment parameters respectively corresponding to the users of the at least one user.
The following takes the processing of a preview image or a captured image when a mobile phone takes a picture as an example to illustrate the technical solution in the embodiments of the present application:
With reference to fig. 8, in this embodiment, a user's historical retouching operations are learned intelligently from the places the user adjusts when retouching pictures. These historical operations reflect the user's retouching points of interest, and for these points of interest a set of beauty parameters suited to the retouched effect is automatically generated and output based on the historical retouching results. The generation of the beauty parameters can be performed before or during the photographing process on the mobile phone, or on a remote server such as a cloud server, which reduces the computation on the mobile phone and avoids lag. Thus, when shooting with the camera, the person in the frame can be identified intelligently and automatically, and the beauty parameters suited to that person are retrieved and applied, achieving the user's preferred effect in one step and realizing truly intelligent, personalized beautification.
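As a minimal sketch of this learning step — the log format, target names, and recurrence threshold are all illustrative assumptions, not details fixed by the patent:

```python
from collections import defaultdict

def learn_beauty_params(edit_log, min_uses=2):
    """Learn retouching points of interest from a log of (target, increment)
    edits and emit one averaged beauty parameter per target the user edited
    at least `min_uses` times."""
    by_target = defaultdict(list)
    for target, increment in edit_log:
        by_target[target].append(increment)
    return {t: sum(v) / len(v)                    # average increment per target
            for t, v in by_target.items() if len(v) >= min_uses}

log = [("skin_brightness", 3), ("skin_brightness", 1), ("nose_height_mm", 2)]
learn_beauty_params(log)   # -> {"skin_brightness": 2.0}: the only recurring target
```

Run on the phone or the cloud server, such a pass over the edit history yields the per-user parameter set that the camera then applies automatically at capture time.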
In this way, the region where a user is located in an image can be retouched according to the user's habits or historical operations, meeting the user's personalized requirements. Moreover, the user does not need to retouch and adjust the image each time a picture is taken or processed; the preferred effect is achieved at once and no further retouching is required, which improves imaging efficiency.
The embodiments in the present description are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other. The device disclosed by the embodiment corresponds to the method disclosed by the embodiment, so that the description is simple, and the relevant points can be referred to the method part for description.
Those of skill would further appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both, and that the various illustrative components and steps have been described above generally in terms of their functionality in order to clearly illustrate this interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in Random Access Memory (RAM), memory, Read Only Memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (8)

1. An image processing method comprising:
obtaining a frame of image, wherein the frame of image comprises at least one display object;
identifying the biological characteristic information of the at least one display object in the frame of image to obtain user information of at least one user;
determining adjustment parameters respectively corresponding to the users of the at least one user based on the user information;
adjusting display objects respectively corresponding to the users of the at least one user in the frame of image based on adjustment parameters respectively corresponding to the users of the at least one user, so that the adjusted display objects in the frame of image have a different presentation effect after adjustment from the presentation effect before adjustment;
wherein, before adjusting the display objects respectively corresponding to the users of the at least one user in the frame of image based on the adjustment parameters respectively corresponding to the users of the at least one user, the method further comprises:
judging whether the frame of image contains an image area acquired by capturing a real user corresponding to the display object;
if so, executing the adjustment of the display objects respectively corresponding to the users in the at least one user in the frame of image based on the adjustment parameters respectively corresponding to the users in the at least one user;
if not, processing display objects respectively corresponding to the users in the at least one user in the frame of image according to preset processing parameters; the preset processing parameters are parameters obtained by modifying the corresponding adjustment parameters of the user;
wherein determining adjustment parameters respectively corresponding to the users of the at least one user comprises:
obtaining historical operation data of the user in the at least one user based on the user information, wherein the historical operation data comprises operation data for adjusting an image when the user performs image processing;
determining a retouching point of interest of the user based on the historical operation data;
and generating and outputting, based on the retouching point of interest, an adjustment parameter suited to the retouched effect.
2. The method of claim 1, wherein the adjustment parameters respectively corresponding to the users in the at least one user correspond to adjustment targets in the frame of image, and wherein:
the respective corresponding adjustment targets are different between the users of the at least one user.
3. The method of claim 1, wherein the adjustment parameters respectively corresponding to the users in the at least one user correspond to adjustment targets in the frame of image, and wherein:
the adjustment targets respectively corresponding to the users in the at least one user are the same and the adjustment parameters on the adjustment targets are different.
4. The method according to claim 1, wherein the at least one user respectively corresponds to one or more adjustment parameters, and the adjustment parameters correspond to adjustment targets in the frame of image; wherein:
different adjustment targets exist in the adjustment targets respectively corresponding to the users in the at least one user.
5. The method of claim 4, wherein:
the adjustment parameters are different between the users of the at least one user on the same adjustment target.
6. The method of claim 1, the processing parameters being: parameters obtained by modifying the adjustment parameters respectively corresponding to the users of the at least one user;
wherein the adjustment increment value of the processing parameter for the display object is smaller than the adjustment increment value of the adjustment parameter for the display object.
7. An image processing apparatus comprising:
the device comprises an obtaining unit, a display unit and a processing unit, wherein the obtaining unit is used for obtaining a frame of image, and the frame of image comprises at least one display object;
the identification unit is used for identifying the biological characteristic information of the at least one display object in the frame of image so as to obtain user information of at least one user;
a determining unit configured to determine adjustment parameters respectively corresponding to the users of the at least one user based on the user information;
an adjusting unit, configured to adjust, in the frame of image, display objects respectively corresponding to the users of the at least one user based on adjustment parameters respectively corresponding to the users of the at least one user, so that a rendering effect of the adjusted display objects in the frame of image after adjustment is different from a rendering effect before adjustment;
wherein, before adjusting the display objects respectively corresponding to the users of the at least one user in the frame of image based on the adjustment parameters respectively corresponding to the users of the at least one user, the method further comprises:
judging whether the frame of image contains an image area acquired by capturing a real user corresponding to the display object;
if so, executing the adjustment of the display objects respectively corresponding to the users in the at least one user in the frame of image based on the adjustment parameters respectively corresponding to the users in the at least one user;
if not, processing display objects respectively corresponding to the users in the at least one user in the frame of image according to preset processing parameters; the preset processing parameters are parameters obtained by modifying the corresponding adjustment parameters of the user;
wherein determining adjustment parameters respectively corresponding to the users of the at least one user comprises:
obtaining historical operation data of the user in the at least one user based on the user information, wherein the historical operation data comprises operation data for adjusting an image when the user performs image processing;
determining a retouching point of interest of the user based on the historical operation data;
and generating and outputting, based on the retouching point of interest, an adjustment parameter suited to the retouched effect.
8. An electronic device, comprising:
the image acquisition device is used for acquiring a frame of image, wherein the frame of image comprises at least one display object;
image display means for presenting the one frame image;
a processor for identifying the biometric information of the at least one display object in the frame of image to obtain user information of at least one user; determining adjustment parameters respectively corresponding to the users of the at least one user based on the user information; adjusting display objects respectively corresponding to the users of the at least one user in the frame of image based on adjustment parameters respectively corresponding to the users of the at least one user, so that the adjusted display objects in the frame of image have a different presentation effect after adjustment from the presentation effect before adjustment;
wherein, before adjusting the display objects respectively corresponding to the users of the at least one user in the frame of image based on the adjustment parameters respectively corresponding to the users of the at least one user, the method further comprises:
judging whether the frame of image contains an image area acquired by capturing a real user corresponding to the display object;
if so, executing the adjustment of the display objects respectively corresponding to the users in the at least one user in the frame of image based on the adjustment parameters respectively corresponding to the users in the at least one user;
if not, processing display objects respectively corresponding to the users in the at least one user in the frame of image according to preset processing parameters; the preset processing parameters are parameters obtained by modifying the corresponding adjustment parameters of the user;
wherein determining adjustment parameters respectively corresponding to the users of the at least one user comprises:
obtaining historical operation data of the user in the at least one user based on the user information, wherein the historical operation data comprises operation data for adjusting an image when the user performs image processing;
determining a retouching point of interest of the user based on the historical operation data;
and generating and outputting, based on the retouching point of interest, an adjustment parameter suited to the retouched effect.
CN201910661458.4A 2019-07-22 2019-07-22 Image processing method and device and electronic equipment Active CN110248104B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910661458.4A CN110248104B (en) 2019-07-22 2019-07-22 Image processing method and device and electronic equipment

Publications (2)

Publication Number Publication Date
CN110248104A CN110248104A (en) 2019-09-17
CN110248104B true CN110248104B (en) 2021-03-19

Family

ID=67893037

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910661458.4A Active CN110248104B (en) 2019-07-22 2019-07-22 Image processing method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN110248104B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110798614A (en) * 2019-10-14 2020-02-14 珠海格力电器股份有限公司 Photo shooting method, device, equipment and storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101540844A (en) * 2008-03-19 2009-09-23 索尼株式会社 Composition determination device, composition determination method, and program
CN105809637A (en) * 2016-02-29 2016-07-27 广东欧珀移动通信有限公司 Control method, control apparatus and electronic apparatus
CN109872273A (en) * 2019-02-26 2019-06-11 上海上湖信息技术有限公司 A kind of image processing method and device

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5997545B2 (en) * 2012-08-22 2016-09-28 キヤノン株式会社 Signal processing method and signal processing apparatus
CN104159032B (en) * 2014-08-20 2018-05-29 广东欧珀移动通信有限公司 A kind of real-time adjustment camera is taken pictures the method and device of U.S. face effect
CN104503749B (en) * 2014-12-12 2017-11-21 广东欧珀移动通信有限公司 Photo processing method and electronic equipment
CN104715236A (en) * 2015-03-06 2015-06-17 广东欧珀移动通信有限公司 Face beautifying photographing method and device
CN107123081A (en) * 2017-04-01 2017-09-01 北京小米移动软件有限公司 image processing method, device and terminal
CN107563976B (en) * 2017-08-24 2020-03-27 Oppo广东移动通信有限公司 Beauty parameter obtaining method and device, readable storage medium and computer equipment

Also Published As

Publication number Publication date
CN110248104A (en) 2019-09-17

Similar Documents

Publication Publication Date Title
US11265523B2 (en) Illuminant estimation referencing facial color features
USRE47960E1 (en) Methods and devices of illuminant estimation referencing facial color features for automatic white balance
US9712743B2 (en) Digital image processing using face detection and skin tone information
US8948468B2 (en) Modification of viewing parameters for digital images using face detection information
US8761449B2 (en) Method of improving orientation and color balance of digital images using face detection information
US8224108B2 (en) Digital image processing using face detection information
US7616233B2 (en) Perfecting of digital image capture parameters within acquisition devices using face detection
US7317815B2 (en) Digital image processing composition using face detection information
US8326066B2 (en) Digital image adjustable compression and resolution using face detection information
US7702136B2 (en) Perfecting the effect of flash within an image acquisition devices using face detection
US7362368B2 (en) Perfecting the optics within a digital image acquisition device using face detection
EP1703436A2 (en) Image processing system, image processing apparatus and method, recording medium, and program
CN107341762B (en) Photographing processing method and device and terminal equipment
CN109919866B (en) Image processing method, device, medium and electronic equipment
WO2012000800A1 (en) Eye beautification
CN113610723B (en) Image processing method and related device
WO2008102296A2 (en) Method for enhancing the depth sensation of an image
CN110248104B (en) Image processing method and device and electronic equipment
JP2005316743A (en) Image processing method and device
JP4984247B2 (en) Image processing apparatus, image processing method, and program
CN111866407A (en) Image processing method and device based on an action camera
CN112714251A (en) Shooting method and shooting terminal
CN112949392B (en) Image processing method and device, storage medium and terminal
CN116017178A (en) Image processing method and device and electronic equipment
CN113781292A (en) Image processing method and device, electronic device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant