CN112785490B - Image processing method and device and electronic equipment - Google Patents


Info

Publication number
CN112785490B
CN112785490B (application CN202011630767.4A)
Authority
CN
China
Prior art keywords
human body
image
target
target object
target human
Prior art date
Legal status
Active
Application number
CN202011630767.4A
Other languages
Chinese (zh)
Other versions
CN112785490A (en
Inventor
段霞霖
Current Assignee
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd
Priority to CN202011630767.4A
Publication of CN112785490A
Application granted
Publication of CN112785490B

Classifications

    • G06T3/18
    • G06T5/77
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/103 Static body considered as a whole, e.g. static pedestrian or occupant recognition

Abstract

The application discloses an image processing method, an image processing apparatus and an electronic device, belonging to the field of communication technology. The method generally includes: receiving a first input to a first image, the first image including at least one object; in response to the first input, when a target human feature model corresponding to a target object among the at least one object is stored, processing the target object according to the target human feature model; and displaying a second image, the second image being the image obtained after the target object is processed. The target human feature model includes feature data of human body parts, and the feature data are used to process the target body part of the target object in the first image. The method provided by the embodiments of the application can solve the problem of low image processing efficiency.

Description

Image processing method and device and electronic equipment
Technical Field
The application belongs to the technical field of communication, and particularly relates to an image processing method, an image processing device and electronic equipment.
Background
As electronic devices have developed, more and more users shoot images and post-process them with the image beautification functions the devices provide, in pursuit of better image effects.
However, these beautification functions apply one uniform set of parameters to every image. Because all users beautify their images with the same parameters, the result is poorly tailored to any individual, and users must manually fine-tune the parameters afterwards as needed, which is cumbersome. Moreover, when the same person appearing in multiple images is beautified, each image must be processed with the same parameters; once the user's manual adjustments differ between images, the same person looks different from one image to the next, which degrades the beautification effect.
Disclosure of Invention
An embodiment of the application aims to provide an image processing method, an image processing device and electronic equipment, which can solve the problem of low image processing efficiency.
To solve the above technical problem, the present application is implemented as follows:
in a first aspect, an embodiment of the present application provides an image processing method, which may include:
receiving a first input to a first image, the first image comprising at least one object;
in response to the first input, in the case where a target human feature model corresponding to a target object of the at least one object is stored, processing the target object according to the target human feature model;
displaying a second image, wherein the second image is the first image with the target object adjusted;
the target human body characteristic model comprises characteristic data of a human body part, and the characteristic data are used for processing the target human body part of a target object in the first image.
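The three first-aspect steps can be sketched in code. This is a minimal illustration only: the data structures and names, and the idea of representing feature data as per-part scale factors, are assumptions for exposition, not part of the patent.

```python
from dataclasses import dataclass, field

@dataclass
class FeatureModel:
    """Hypothetical target human feature model: per-part feature data."""
    identity: str
    feature_data: dict          # e.g. {"eyes": 1.2} -> scale the eyes by 1.2

@dataclass
class DetectedObject:
    """An object recognized in the first image, tagged with identity information."""
    identity: str
    parts: dict = field(default_factory=dict)   # part name -> measured value

def process_first_input(objects, stored_models):
    """If a model is stored for an object, process it; otherwise pass it through."""
    processed = []
    for obj in objects:
        model = stored_models.get(obj.identity)
        if model is not None:
            obj = DetectedObject(obj.identity,
                                 {part: value * model.feature_data.get(part, 1.0)
                                  for part, value in obj.parts.items()})
        processed.append(obj)
    return processed            # the "second image": objects after processing

stored = {"person_A": FeatureModel("person_A", {"eyes": 1.2})}
first_image_objects = [DetectedObject("person_A", {"eyes": 10.0}),
                       DetectedObject("person_B", {"eyes": 10.0})]
second_image_objects = process_first_input(first_image_objects, stored)
```

Note that person B, for whom no model is stored, passes through unchanged, matching the conditional "in the case where a target human feature model ... is stored" in the claim.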
In a second aspect, embodiments of the present application provide an image processing apparatus, which may include:
a receiving module for receiving a first input to a first image, the first image comprising at least one object;
a processing module for processing the target object according to the target human body feature model in response to the first input in the case that the target human body feature model corresponding to the target object in the at least one object is stored;
a display module for displaying a second image, wherein the second image is the first image with the target object adjusted;
the target human body characteristic model comprises characteristic data of a human body part, and the characteristic data are used for processing the target human body part of a target object in the first image.
In a third aspect, embodiments of the present application provide an electronic device including a processor, a memory, and a program or instructions stored on the memory and executable on the processor, the program or instructions implementing the steps of the image processing method as shown in the first aspect when executed by the processor.
In a fourth aspect, embodiments of the present application provide a computer readable storage medium having stored thereon a program or instructions which, when executed by a processor, implement the steps of the image processing method as shown in the first aspect.
In a fifth aspect, embodiments of the present application provide a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to execute a program or instructions to implement an image processing method as shown in the first aspect.
In the embodiments of the application, the target body part of the target object in the first image is processed specifically through the target human feature model that corresponds to that object, so a tailored beautification scheme can be provided for each target object. Moreover, because the target human feature model processes every target body part of the target object in the first image, batch processing of portrait images no longer requires the user to adjust each body part manually; user operations are reduced, images of the same user show a consistent deformation effect, and image processing efficiency is improved.
Drawings
Fig. 1 is a schematic diagram of an image processing architecture according to an embodiment of the present application;
FIG. 2 is a schematic diagram of a position change in image processing according to an embodiment of the present disclosure;
FIG. 3 is a schematic diagram of another image processing architecture according to an embodiment of the present application;
fig. 4 is a flowchart of an image processing method according to an embodiment of the present application;
fig. 5 is a flowchart of an image processing method based on a user A and a user B according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application;
fig. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
fig. 8 is a schematic hardware structure of an electronic device according to an embodiment of the present application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are some, but not all, of the embodiments of the present application. All other embodiments, which can be made by one of ordinary skill in the art based on the embodiments herein without making any inventive effort, are intended to be within the scope of the present application.
The terms "first", "second", and the like in the description and claims are used to distinguish between similar objects, not to describe a particular sequence or chronological order. It should be understood that the data so used are interchangeable where appropriate, so that the embodiments of the application can be implemented in orders other than those illustrated or described herein. Objects identified by "first", "second", etc. are generally of one type, and the number of objects is not limited; for example, the first object may be one object or multiple objects. In addition, in the description and claims, "and/or" denotes at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the associated objects.
Based on this, the image processing method provided in the embodiment of the present application is described in detail below with reference to fig. 1 to 3 through a specific embodiment and an application scenario thereof.
The present application proposes two image processing architectures, a first image processing architecture may comprise an electronic device and a second image processing architecture may comprise an electronic device and a server. The image processing methods provided in the embodiments of the present application are respectively described below by taking these two architectures as examples.
First, the image processing architecture of fig. 1, which includes an electronic device 10, is described. A user can shoot person images with the electronic device, and beautification is applied to the person images the user has shot; the specific process is as follows.
First, a human feature model is constructed from the person images shot by the user. Here, a plurality of images taken by the user in the past, such as image 1 to image 7, can be acquired; face recognition is performed on each of them, at least one object in the images is determined, and each of the at least one object is tagged with identity information. For example, if object 1 is recognized, its identity information is tagged as person A; similarly, if object 2, different from object 1, is recognized, its identity information is tagged as person B. The person in a self-portrait image is typically the regular user of the electronic device.
Next, third feature data of object 1, that is, the actual feature data of its body parts in the plurality of images containing object 1, and fourth feature data, that is, the feature data after those body parts were adjusted in the images, are determined. Here, the fourth feature data may be at least one of the following: data of body parts manually adjusted by the user, and data of body parts adjusted with preset parameters and confirmed by the user. A preset human feature model is then trained on the third and fourth feature data until a preset training condition is met, yielding the human feature model corresponding to object 1.
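A possible shape for this training step is sketched below, where the "model" is simply the mean per-part ratio of adjusted (fourth) to original (third) feature data; this representation and all names are assumptions for illustration, not the patent's actual training procedure.

```python
def fit_feature_model(third_data, fourth_data):
    """third_data / fourth_data: one {part: value} dict per image,
    holding original and adjusted feature data respectively."""
    model = {}
    for part in third_data[0]:
        ratios = [adj[part] / orig[part]
                  for orig, adj in zip(third_data, fourth_data)]
        model[part] = sum(ratios) / len(ratios)   # mean adjustment per part
    return model

# Object 1's original measurements in two images, and the user's adjustments:
third  = [{"eyes": 10.0, "nose": 5.0}, {"eyes": 12.0, "nose": 5.0}]
fourth = [{"eyes": 12.0, "nose": 4.5}, {"eyes": 14.4, "nose": 4.5}]
model_person_A = fit_feature_model(third, fourth)
```

Averaging over several images plays the role of the "preset training condition": the more adjusted images are available, the more stable the learned per-part preference becomes.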
Here, the human feature model corresponding to object 1 may be associated with object 1's identity information, that is, person A, so that object 1 in other images can later be beautified according to the human feature model corresponding to person A.
Object 1 may have its own corresponding human feature model, and object 2 may have its own as well. Of course, in some cases objects 1 and 2 may share one human feature model, as long as the body parts of each object are distinguished.
Based on the constructed human feature model, object 1 in a new image is then beautified according to the human feature model corresponding to object 1.
In this way, the electronic device receives a first input to a first image shot by the user, the first image including at least one object. Here, the first image may be a self-portrait of the user or a person image shot by the user.
Next, in response to the first input, it is determined whether a target human feature model corresponding to a target object among the at least one object, such as object 1, is stored in the electronic device; if such a model is stored, the target object is processed according to it. Here, if the first image contains two objects, object 1 and object 2, the target human feature model corresponding to each may be determined separately, namely target human feature model 1 for object 1 and target human feature model 2 for object 2.
The target human feature model may include feature data of a plurality of body parts, such as face data, leg data and abdomen data. Here, the face data may include skin-color data, hair-color data, hair-length data, data on the relative positions of the facial features, shape and size data of the facial features, face-contour data, and so on. On this basis, after the target body part of object 1 in the first image, such as the face, is determined, the corresponding region of the target object in the first image can be adjusted according to feature data such as the shape and size of the facial features. Alternatively, when feature data such as the hair-length data are inconsistent with the hair length of the target object in the current first image, whether to adjust the hair length of the target object in the first image to the length indicated by the feature data may be determined according to the user's selection.
Then, a second image, obtained after processing the target object, is displayed. As shown in fig. 2, taking eye size as an example, the eye occupies area 1 in the first image, and its region is adjusted from area 1 to area 2 according to the feature data.
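The area-1-to-area-2 adjustment of fig. 2 can be illustrated as scaling a bounding box about its centre. The box representation and scale factor below are illustrative assumptions, not taken from the patent.

```python
def scale_region(box, factor):
    """Scale an (x0, y0, x1, y1) box about its centre by `factor`."""
    x0, y0, x1, y1 = box
    cx, cy = (x0 + x1) / 2, (y0 + y1) / 2
    half_w = (x1 - x0) / 2 * factor
    half_h = (y1 - y0) / 2 * factor
    return (cx - half_w, cy - half_h, cx + half_w, cy + half_h)

area_1 = (40.0, 50.0, 60.0, 60.0)      # eye region in the first image
area_2 = scale_region(area_1, 1.2)     # region after applying the feature data
```

In a real implementation the pixels inside area 1 would additionally be warped into area 2; the sketch only shows how the target region itself moves.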
In addition, in one possible embodiment, the image processing architecture of fig. 3 includes an electronic device and a server. Unlike the architecture of fig. 1, where the first image is shot by the user, the first image under fig. 3 may be downloaded by the user from the server of a platform; the object in it may be the regular user of the electronic device or a person related to that user. The downloaded first image can then be beautified to obtain beautification information for the object it contains, and a target human feature model corresponding to that object can be constructed. The construction of the human feature model under the architecture of fig. 3, and the beautification of the target object in the first image, may follow the procedure described for fig. 1.
In another possible embodiment based on the architecture of fig. 3, the human feature model may instead be constructed on the server side. The server may construct a human feature model for each object from a plurality of images uploaded by the electronic device. Alternatively, based on identity information of an object uploaded by the user, the server may determine that it stores a plurality of images corresponding to that identity, and construct the human feature model of the object from those images.
On this basis, when the electronic device receives a first input of the user to the first image, it sends an image processing request to the server. On receiving the request, the server processes the target object according to the previously constructed target human feature model corresponding to object 1 in the first image to obtain a second image, and sends the second image to the electronic device for display.
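The request/response exchange in this server-side variant could be sketched as follows; the payload fields, status strings and function names are assumptions for illustration, not defined by the patent.

```python
import json

def build_request(image_id, identity):
    """Client side: the first input triggers an image processing request."""
    return json.dumps({"image_id": image_id, "identity": identity})

def handle_request(payload, stored_models):
    """Server side: apply the previously constructed model if one exists."""
    request = json.loads(payload)
    model = stored_models.get(request["identity"])
    if model is None:
        return {"status": "no_model", "image_id": request["image_id"]}
    # ...here the server would process the target object and render image 2...
    return {"status": "processed", "image_id": request["image_id"]}

server_models = {"person_A": {"eyes": 1.2}}
response = handle_request(build_request("img_001", "person_A"), server_models)
```

The `no_model` branch corresponds to the case, discussed later, where no target human feature model is stored for the object and the device falls back to prompting the user.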
The process of adjusting the target object and the process of constructing the human body feature model can refer to the steps in fig. 1, and will not be described herein.
In this way, the body parts of the target object in the image are adjusted specifically through the target human feature model corresponding to that object, so a tailored beautification scheme can be provided for each target object.
In addition, because each target body part of the target object is adjusted through the target human feature model, batch processing of portrait images no longer requires the user to adjust each body part manually; user operations are reduced, images of the same user show a consistent deformation effect, and image processing efficiency is improved.
It should be noted that the image processing method provided in the embodiments of the application applies not only to the above scenario, in which a user beautifies shot person images, but also to beautifying person images in video, for example beautifying a target body part of an actor in a movie or of a model in a shot series of images. By using an object's customized human feature model, the target body part of that object is beautified uniformly in every frame, so batch processing of portrait images does not require manual adjustment of each body part in each frame; user operations are reduced, images of the same user show a consistent deformation effect, and image processing efficiency is improved.
According to the above application scenario, the image processing method provided in the embodiment of the present application is described in detail below with reference to fig. 4 to 5.
Fig. 4 is a flowchart of an image processing method according to an embodiment of the present application.
As shown in fig. 4, the image processing method may be applied to an electronic device or an electronic device and a server, and based on this, may specifically include the following steps:
First, in step 410, a first input to a first image is received, the first image including at least one object. Next, in step 420, in response to the first input, when a target human feature model corresponding to a target object among the at least one object is stored, the target object is processed according to the target human feature model, where the target human feature model includes feature data of body parts and the feature data are used to process the target body part of the target object in the first image. Then, in step 440, a second image is displayed, the second image being the image after the target object is processed.
In this way, the target body part of the target object in the first image is processed specifically through the target human feature model corresponding to that object, so a tailored beautification scheme can be provided for each target object.
In addition, because each target body part of the target object in the first image is processed through the target human feature model, batch processing of portrait images does not require the user to adjust each body part manually; user operations are reduced, images of the same user show a consistent deformation effect, and image processing efficiency is improved.
It should be noted that, in the embodiments of the application, processing the target object may mean, for example, performing beautification operations on it, such as whitening, skin smoothing, face slimming, height lengthening, and eye enlargement.
The following describes the above steps in detail, as follows:
First, before step 420, in one or more alternative embodiments, the image processing method may further include constructing the target human feature model corresponding to the target object, in the following manner:
Acquiring original characteristic data of a third image, wherein an object in the third image comprises a target object, and the target object is marked with identity information;
performing image processing on the third image to obtain processed characteristic data;
and constructing a target human body characteristic model corresponding to the target object according to the original characteristic data and the processed characteristic data of the third image.
Specifically, the processed characteristic data includes at least one of the following:
data of each manual adjustment of each of a plurality of body parts of the target object; and data of each body part of the target object determined each time according to preset parameters. For example, the data of each manual adjustment by the user of the eye size of the target object, or the user's decision to adjust the eye size using preset parameters, suitable for most users, set in an application program that provides the beautification function.
Here, the target human feature model in the embodiments of the application may be a three-dimensional model. It can be constructed as follows: first, a plurality of images of the target object at a plurality of angles, such as head raised 15 degrees, left side face at 45 degrees, or right side face at 30 degrees, are acquired, and an original three-dimensional model of the target object is constructed from them. The processed feature data of each of these images are then acquired, and a processed three-dimensional model of the target object is constructed from the processed feature data. Finally, the target human feature model is built from the original and processed three-dimensional models. When a new image containing the target object is later received, it can be adjusted through this three-dimensional model. The target human feature model thus captures the user's aesthetic preference over the full 360 degrees, so images at any angle can be processed through it with consistent results, avoiding the problem of the same person looking different in every beautified image.
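One simple way to picture the angle-dependent part of such a model is a per-angle table of feature data with nearest-angle lookup; this table-based representation is an assumption used only to illustrate the idea, not the patent's actual three-dimensional construction.

```python
def nearest_view(model_by_angle, angle):
    """Return the feature data stored for the angle closest to `angle`."""
    best = min(model_by_angle, key=lambda a: abs(a - angle))
    return model_by_angle[best]

# Per-angle feature data for one object (degrees; negative = right side face):
model_by_angle = {0: {"eyes": 1.20}, 45: {"eyes": 1.15}, -45: {"eyes": 1.15}}
view = nearest_view(model_by_angle, 40)   # left side face at about 45 degrees
```

A true three-dimensional model would interpolate between views rather than snap to the nearest one, but the lookup shows why processing stays consistent: every image of the same pose draws on the same stored feature data.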
Based on this, the target human body part can be adjusted according to the target human body feature model determined as described above.
In addition, for the above-mentioned target human body feature model, the embodiment of the present application provides, in addition to a manner of constructing the target human body feature model, a manner of updating the target human body feature model, which is specifically as follows:
acquiring a plurality of fourth images comprising the target object;
when an input for image processing of a fifth image among the plurality of fourth images is detected, determining the processed feature data of the fifth image corresponding to the original feature data of the fifth image;
and updating the target human body characteristic model through the processed characteristic data of the fifth image corresponding to the original characteristic data of the fifth image.
In this way, even if a body part of the target object changes, for example the hair length changes, the user slims down successfully, or the user starts wearing enlarging contact lenses, the model can adapt to the change, and the body parts of the target object are beautified with the latest target human feature model.
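The update step can be sketched as folding the newest original/adjusted pair into the stored per-part values, here with an exponential moving average; the ratio representation and the `alpha` weight are assumptions for illustration.

```python
def update_feature_model(model, original, adjusted, alpha=0.3):
    """Fold the newest original/adjusted pair into the stored per-part ratios."""
    for part, orig_value in original.items():
        new_ratio = adjusted[part] / orig_value
        old = model.get(part, new_ratio)        # unseen parts start at new_ratio
        model[part] = (1 - alpha) * old + alpha * new_ratio
    return model

model = {"hair_length": 1.0}                        # previously learned ratio
model = update_feature_model(model,
                             {"hair_length": 30.0},   # fifth image, original
                             {"hair_length": 36.0})   # fifth image, adjusted
```

Weighting recent adjustments (via `alpha`) is what lets the model track gradual changes such as growing hair without discarding the preference learned from earlier images.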
Specifically, in one or more alternative embodiments, the image processing method may further include, before processing the target object:
Performing face recognition on the target object, and determining identity information of the target object;
and determining a target human body characteristic model corresponding to the identity information of the target object according to the association information of the identity information and the human body characteristic model.
Based on this, step 420 may specifically include:
determining a target human body part in the target object according to the target object in the first image;
according to the target human body part, determining angle information presented by the target human body part in the first image;
and processing the target human body part through the target human body characteristic model according to the angle information.
In this embodiment of the present application, the target human body part may be adjusted based on the target human body feature model constructed in step 410, and a specific adjustment process is as follows:
According to the angle information of the target body part in the first image, such as left side face at 45 degrees, the two-dimensional feature data corresponding to that angle are acquired from the target human feature model, and the target body part in the first image is processed according to these two-dimensional feature data, for example with face slimming, eye enlargement, skin smoothing, nose-height adjustment or leg lengthening. In this way, images of the target object at various angles can all be processed according to the target human feature model, ensuring consistent results and avoiding the problem of the same person looking different in every beautified image.
Here, if a plurality of body parts of the target object are to be processed, such as nose-height adjustment together with eye-size adjustment, step 420 may specifically include:
in the case that the target human body part is a plurality of target human body parts, determining a target human body feature model corresponding to each target human body part according to each target human body part of the plurality of target human body parts;
and respectively processing each target human body part according to the characteristic data of the human body part in the target human body characteristic model corresponding to each target human body part to obtain a second image.
Here, when the target body parts are the eyes, the nose and the skin, the state of each of them in the first image changes, so the target object in the first image is deformed to a certain extent. The adjusted image can therefore be smoothed, so that the deformed body parts are displayed plausibly and the display effect of the second image is improved.
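The smoothing pass can be illustrated with a 3-point moving average over one row of pixel intensities; the kernel choice and 1-D simplification are assumptions, a real implementation would blend in two dimensions around the deformed regions.

```python
def smooth_row(row):
    """3-point moving average over one row of pixel intensities,
    leaving the border pixels unchanged."""
    out = list(row)
    for i in range(1, len(row) - 1):
        out[i] = (row[i - 1] + row[i] + row[i + 1]) / 3
    return out

# A sharp edge left by deforming a body part, before and after smoothing:
row = [0.0, 0.0, 3.0, 0.0, 0.0]
smoothed = smooth_row(row)
```

The isolated spike spreads into a gentle ramp, which is the effect wanted at the seams between an adjusted region and its surroundings.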
Additionally, in one or more alternative embodiments, before determining the target human body feature model corresponding to each of the plurality of target human body parts from each of the plurality of target human body parts, the image processing method may further include:
Displaying first prompt information, wherein the first prompt information comprises a plurality of first options, and the first options are options of human body parts of target objects in a first image;
receiving a second input for the plurality of first options;
determining a target option selected from the plurality of first options in response to the second input;
and determining the human body part corresponding to the target option as a target human body part.
For example, after the eyes, nose, skin, hair, lips and face of the target object in the first image are recognized, options may be generated from these body parts so that the user can select which of them to adjust. If an input selecting all of them is received, all recognized body parts are determined to be the target body parts to adjust; if an input selecting the eye option is received, the eyes corresponding to that option are determined to be the target body part.
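The prompt-and-select flow above can be sketched as follows; the "all" sentinel and the option names are illustrative assumptions.

```python
def select_target_parts(recognized_parts, second_input):
    """second_input: the string "all", or a set of chosen option names."""
    if second_input == "all":
        return list(recognized_parts)
    return [part for part in recognized_parts if part in second_input]

# First prompt information: options built from the recognized body parts.
first_options = ["eyes", "nose", "skin", "hair", "lips", "face"]
targets = select_target_parts(first_options, {"eyes"})    # user picks eyes only
all_targets = select_target_parts(first_options, "all")   # user selects all
```

Filtering against the recognized parts (rather than trusting the input directly) ensures only body parts actually present in the first image can become target parts.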
Additionally, prior to step 420, in one or more possible alternative embodiments, the image processing method may further include:
when the target human feature model corresponding to the target object is not stored, displaying prompt information asking whether to adjust the target object with preset parameters and whether to build a target human feature model corresponding to the target object.
It should be noted that the preset parameters may include preset parameters suitable for most users, such as parameters in an image filter, set in an application program of the image beautifying function.
To better illustrate the image processing method, the embodiments of the application are further described below with reference to fig. 5, taking the electronic-device architecture and a user A and a user B as an example.
Here, user A may beautify his or her existing photos. Deformation information of user A's entire face and/or body is determined by acquiring data of each of user A's human body parts from a plurality of historical images, so as to construct the target human body feature model of user A. In this way, when user A is identified after a photograph is taken, the target human body feature model of user A is mapped into the current image to process and beautify user A's human body parts in the current image.
If the current image also includes user B, it is determined whether a target human body feature model corresponding to user B is stored in the electronic device. When no target human body feature model of user B is found in the electronic device, the user is prompted whether to process user B's human body parts in the current image by applying the human body feature model corresponding to the preset parameters. After the user confirms this, the human body parts are processed according to the preset parameters.
It should be noted that, if the user continues to manually adjust user B after user B's human body parts have been processed with the preset parameters, the electronic device may acquire the original feature data of user B's human body parts in the current image and the feature data after the manual adjustment, so as to construct a target human body feature model of user B, through which user B's human body parts can be beautified next time.
Similarly, if it is detected that user A's human body parts are manually adjusted after being processed by user A's target human body feature model, the electronic device may acquire the manually adjusted feature data from the current image and update user A's target human body feature model accordingly, so that user A's human body parts are beautified through the updated model next time.
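The model construction and application described above can be sketched as follows. This is an illustrative assumption, not the patent's implementation: it represents each part's feature data as a single number and the model as an average per-part adjustment offset, since the patent does not specify the feature encoding.

```python
# Sketch: build a per-user "target human body feature model" from pairs of
# original and manually adjusted feature data, then map it onto a new image.

def build_feature_model(samples):
    """samples: list of (original, adjusted) dicts mapping part -> feature value.

    For each body part, the model stores the mean offset between the original
    value and the value after the user's manual adjustment.
    """
    deltas = {}
    for original, adjusted in samples:
        for part, orig_val in original.items():
            if part in adjusted:
                deltas.setdefault(part, []).append(adjusted[part] - orig_val)
    return {part: sum(ds) / len(ds) for part, ds in deltas.items()}

def apply_feature_model(model, features):
    """Apply the learned per-part offsets to a new image's feature data;
    parts without a stored offset are left unchanged."""
    return {part: val + model.get(part, 0.0) for part, val in features.items()}
```

Under this sketch, "mapping the model into the current image" is simply applying the stored per-part offsets to the feature data extracted from that image.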
The above-described image processing procedure, which includes steps 501 to 508, is described below in detail with reference to fig. 5.
Step 501, determining a target human body characteristic model of each object in at least one object according to at least one object in a plurality of images, wherein the at least one object comprises a user A.
Step 502, a first input is received from a user editing a first image.
In step 503, in response to the first input, facial recognition is performed on the first image, and two target objects in the first image, user A and user B, are determined.
Step 504, it is determined whether a target human body feature model corresponding to each target object is stored.
When it is determined that the target human body feature model corresponding to user A is stored, step 505 is executed; and when it is determined that no target human body feature model corresponding to user B is stored, step 506 is executed.
Step 505, the target human body part of user A is adjusted through the target human body feature model corresponding to user A, and step 508 is performed.
Step 506, prompt information is displayed, where the prompt information is used to prompt whether to adjust user B by using the preset parameters and whether to establish a target human body feature model corresponding to user B; step 507 is then executed.
In step 507, upon receiving an input to adjust user B by using the preset parameters and to establish a target human body feature model corresponding to user B, user B is adjusted by using the preset parameters, and the original feature data of each of user B's human body parts in the first image and the feature data after user B is adjusted by the preset parameters are acquired.
A target human body feature model corresponding to user B is then constructed according to the original feature data of each of user B's human body parts and the feature data after user B is adjusted by the preset parameters, and the model is stored, so that other images including user B's human body parts can be beautified through it next time.
Step 508, displaying a second image, wherein the second image is an image obtained by processing the target object.
In addition, in the case where target human body feature models corresponding to both user A and user B are stored, the following steps 509 to 513 may be performed.
Step 509, a first input from a user editing a first image is received.
Step 510, in response to the first input, facial recognition is performed on the first image, and two target objects in the first image, user A and user B, are determined.
Step 511, it is determined whether a target human body feature model corresponding to each target object is stored.
Step 512 is performed when it is determined that target human body feature models corresponding to both user A and user B are stored.
Step 512, the target human body part of user A is adjusted through the target human body feature model corresponding to user A, the target human body part of user B is adjusted through the target human body feature model corresponding to user B, and step 513 is performed.
In step 513, a second image is displayed, where the second image includes the target object processed by the target human body feature model corresponding to user a and the target object processed by the target human body feature model corresponding to user B.
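The control flow of steps 501 to 513 can be sketched as follows. The prompt of step 506 is modelled as a callback that returns True when the user accepts the preset parameters and the creation of a new model; all names and data shapes are illustrative assumptions rather than the patent's implementation.

```python
# Sketch of the per-object dispatch in fig. 5: apply a stored model when one
# exists; otherwise prompt, fall back to preset parameters, and seed a model.

def process_objects(objects, stored_models, preset_params, accept_preset):
    """Decide, for each recognized object, how its body parts are processed.

    objects:       identities recognized in the first image, e.g. ["A", "B"].
    stored_models: mapping identity -> stored feature model (mutated in place
                   when a new model is created).
    accept_preset: callback standing in for the step-506 prompt.
    """
    decisions = {}
    for obj in objects:
        if obj in stored_models:                  # steps 504-505 / 511-512
            decisions[obj] = ("stored_model", stored_models[obj])
        elif accept_preset(obj):                  # steps 506-507
            decisions[obj] = ("preset", preset_params)
            # Seed a new per-object model from this first adjustment.
            stored_models[obj] = dict(preset_params)
        else:
            decisions[obj] = ("unprocessed", None)
    return decisions
```

With user A's model stored and user B unknown, this dispatch applies A's model directly and, after an accepting prompt, processes B with the presets while creating B's model for future images.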
In this way, the user's behavior habits can be learned and pictures can be processed in batches, with the figures in the processed pictures remaining highly consistent and the user's body features not varying across different periods. This approach is quick and convenient, reduces user operations, and improves the efficiency of image processing.
In summary, in the embodiment of the present application, the human body parts of the target object in the image are adjusted in a targeted manner through the target human body feature model corresponding to the target object, so that a targeted image beautification scheme can be provided for each target object. In addition, because each target human body part of the target object is processed through the target human body feature model, when images of people are processed in batches, the user does not need to manually adjust each human body part; this reduces user operations, allows images of the same user to present a uniform deformation effect, and improves image processing efficiency.
It should be noted that, in the image processing method provided in the embodiment of the present application, the execution subject may be an image processing apparatus, or a control module for executing the image processing method in the image processing apparatus. In the embodiment of the present application, an image processing apparatus that performs an image processing method is taken as an example, and an image processing apparatus provided in the embodiment of the present application is described.
Based on the same inventive concept, the present application provides an image processing apparatus. This is described in detail with reference to fig. 6.
Fig. 6 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application.
As shown in fig. 6, the image processing apparatus 60 may be applied to an electronic device or a server, and may specifically include:
a receiving module 601 for receiving a first input of a first image, the first image comprising at least one object;
a processing module 602, configured to process, in response to the first input, the target object according to the target human feature model when the target human feature model corresponding to the target object in the at least one object is stored;
a display module 603, configured to display a second image, where the second image is an image obtained by processing the target object;
The target human body characteristic model comprises characteristic data of a plurality of human body parts, and the characteristic data are used for processing the target human body parts of the target object.
Therefore, the human body part of the target object in the image is adjusted in a targeted manner through the target human body feature model corresponding to the target object, so that a targeted image beautification scheme can be provided for each target object. In addition, because each target human body part of the target object is adjusted through the target human body feature model, when images of people are processed in batches, the user does not need to manually adjust each human body part; this reduces user operations, allows images of the same user to present a uniform deformation effect, and improves image processing efficiency.
The image processing apparatus 60 will be described in detail, specifically as follows:
in one possible embodiment, the processing module 602 may be further configured to perform facial recognition on the target object to determine identity information of the target object;
and determining a target human body characteristic model corresponding to the identity information of the target object according to the association information of the identity information and the human body characteristic model.
Based on this, the processing module 602 may be specifically configured to determine, according to the target object in the first image, a target human body part in the target object;
According to the target human body part, determining angle information presented by the target human body part in the first image;
and processing the target human body part through the target human body characteristic model according to the angle information.
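One hypothetical way to use the angle information when processing a target human body part is to scale the model's adjustment by the angle at which the part is presented in the first image. The cosine scaling below is purely an assumption for illustration; the patent only states that angle information is used, not how.

```python
# Sketch: attenuate a stored per-part adjustment according to the presented
# yaw angle, so a profile view receives a smaller in-plane adjustment than a
# frontal view. The scaling law is an assumed, illustrative choice.
import math

def adjust_part(part_features, model_offset, yaw_degrees):
    """Apply the model's offset to a part's feature values, scaled by angle."""
    scale = max(0.0, math.cos(math.radians(yaw_degrees)))
    return [value + model_offset * scale for value in part_features]
```

At 0 degrees (frontal) the full offset is applied; at 90 degrees (full profile) the features are left effectively unchanged.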
In another possible embodiment, the processing module 602 may be specifically configured to, in a case where the target human body part is a plurality of target human body parts, determine, according to each target human body part of the plurality of target human body parts, a target human body feature model corresponding to each target human body part;
and respectively processing each target human body part according to the characteristic data of the human body part in the target human body characteristic model corresponding to each target human body part to obtain a second image.
The display module 603 in this embodiment of the present application may be further configured to display a first prompt message, where the first prompt message includes a plurality of first options, and the first option is an option of a human body part of the target object in the first image. Based on this, the receiving module 601 may also be configured to receive a second input for the plurality of first options. The processing module 602 may also be operative to determine a target option selected from the plurality of first options in response to the second input; and determining the human body part corresponding to the target option as a target human body part.
In addition, the image processing apparatus 60 in the embodiment of the present application may further include a construction module, configured to acquire original feature data of a third image, where an object in the third image includes a target object, and the target object is marked with identity information;
performing image processing on the third image to obtain processed characteristic data;
and constructing a target human body characteristic model corresponding to the target object according to the original characteristic data and the processed characteristic data of the third image.
In addition, the image processing apparatus 60 in the embodiment of the present application may further include an updating module configured to acquire a plurality of fourth images including the target object;
determining processed feature data of a fifth image corresponding to original feature data of the fifth image when an input of image processing of the fifth image among the plurality of fourth images is detected;
and updating the target human body characteristic model through the processed characteristic data of the fifth image corresponding to the original characteristic data of the fifth image.
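The updating module's behavior can be sketched as blending each newly observed manual adjustment into the stored model. The exponential-moving-average blend and the alpha value are assumptions chosen for illustration; the patent does not specify the update rule.

```python
# Sketch: fold the delta between a fifth image's original and processed
# feature data into the stored per-part model via an exponential moving
# average. All data shapes and the alpha value are illustrative assumptions.

def update_feature_model(model, original, adjusted, alpha=0.2):
    """Return a new model with each part's observed delta blended in."""
    updated = dict(model)
    for part, orig_val in original.items():
        if part in adjusted:
            delta = adjusted[part] - orig_val
            # A part seen for the first time adopts its delta outright.
            updated[part] = (1 - alpha) * updated.get(part, delta) + alpha * delta
    return updated
```

Repeated updates let the model drift toward the user's recent adjustment habits while retaining earlier behavior.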
The processing module 602 in this embodiment of the present application may be further configured to display, when the target human body feature model corresponding to the target object is not stored, prompt information, where the prompt information is used to prompt whether to adjust the target object by using the preset parameter, and whether to establish the target human body feature model corresponding to the target object.
The image processing apparatus in the embodiment of the present application may be an apparatus, or may be a component, an integrated circuit, or a chip in an electronic device. The device may be a mobile electronic device or a non-mobile electronic device. By way of example, the mobile electronic device may be a cell phone, tablet computer, notebook computer, palm computer, vehicle-mounted electronic device, wearable device, ultra-mobile personal computer (ultra-mobile personal computer, UMPC), netbook or personal digital assistant (personal digital assistant, PDA), etc., and the non-mobile electronic device may be a server, network attached storage (Network Attached Storage, NAS), personal computer (personal computer, PC), television (TV), teller machine or self-service machine, etc., and the embodiments of the present application are not limited in particular.
The image processing apparatus in the embodiment of the present application may be an apparatus having an operating system. The operating system may be an Android operating system, an iOS operating system, or another possible operating system, which is not specifically limited in the embodiments of the present application.
The image processing apparatus provided in this embodiment of the present application can implement each process implemented by the embodiments of the methods of fig. 1 to 5, and in order to avoid repetition, a description is omitted here.
In summary, according to the embodiment of the present application, the target human body part of the target object in the first image is processed in a targeted manner through the target human body feature model corresponding to the target object, so that a targeted image beautification scheme can be provided for each target object. In addition, because each target human body part of the target object in the first image is processed through the target human body feature model, when images of people are processed in batches, the user does not need to manually adjust each human body part; this reduces user operations, allows images of the same user to present a uniform deformation effect, and improves image processing efficiency.
Optionally, as shown in fig. 7, the embodiment of the present application further provides an electronic device 700, including a processor 701, a memory 702, and a program or an instruction stored in the memory 702 and capable of running on the processor 701, where the program or the instruction implements each process of the embodiment of the image processing method when executed by the processor 701, and the process can achieve the same technical effect, and for avoiding repetition, a description is omitted herein.
The electronic device in the embodiment of the application includes the mobile electronic device and the non-mobile electronic device.
Fig. 8 is a schematic hardware structure of an electronic device according to an embodiment of the present application.
The electronic device 800 includes, but is not limited to: radio frequency unit 801, network module 802, audio output unit 803, input unit 804, sensor 805, display unit 806, user input unit 807, interface unit 808, memory 809, and processor 810.
Those skilled in the art will appreciate that the electronic device 800 may also include a power source (e.g., a battery) for powering the various components, which may be logically connected to the processor 810 through a power management system, so that functions such as charge management, discharge management, and power consumption management are performed through the power management system. The electronic device structure shown in fig. 8 does not constitute a limitation of the electronic device; the electronic device may include more or fewer components than shown, combine certain components, or have a different arrangement of components, which is not described in detail herein.
Wherein in the present embodiment, the user input unit 807 is configured to receive a first input of a first image comprising at least one object.
The processor 810 is configured to process a target object according to a target human feature model in response to a first input, in a case where the target human feature model corresponding to a target object of the at least one object is stored. The target human body characteristic model comprises characteristic data of a human body part, and the characteristic data are used for processing the target human body part of a target object in the first image.
A display unit 806, configured to display a second image, where the second image is an image obtained by processing the target object.
Therefore, the target human body part of the target object in the first image is processed in a targeted manner through the target human body feature model corresponding to the target object, so that a targeted image beautification scheme can be provided for each target object. In addition, because each target human body part of the target object in the first image is processed through the target human body feature model, when images of people are processed in batches, the user does not need to manually adjust each human body part; this reduces user operations, allows images of the same user to present a uniform deformation effect, and improves image processing efficiency.
It should be appreciated that the input unit 804 may include a graphics processor (Graphics Processing Unit, GPU) 8041 and a microphone 8042, the graphics processor 8041 processing image data of still pictures or video obtained by an image capturing device (e.g., a camera) in a video capturing mode or an image capturing mode. The display unit 806 may include a display panel 8061, and the display panel 8061 may be configured in the form of a liquid crystal display, an organic light emitting diode, or the like. The user input unit 807 includes a touch panel 8071 and other input devices 8072. Touch panel 8071, also referred to as a touch screen. The touch panel 8071 may include two parts, a touch detection device and a touch controller. Other input devices 8072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, a joystick, and so forth, which are not described in detail herein. The memory 809 may be used to store software programs as well as various data including, but not limited to, application programs and an operating system. The processor 810 may integrate an application processor that primarily handles operating systems, user interfaces, applications, etc., with a modem processor that primarily handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 810.
The embodiment of the present application further provides a readable storage medium, where a program or an instruction is stored, where the program or the instruction realizes each process of the embodiment of the image processing method when executed by a processor, and the same technical effects can be achieved, so that repetition is avoided, and no redundant description is given here.
The processor is a processor in the electronic device in the above embodiment. Among them, the readable storage medium includes a computer readable storage medium such as a Read-Only Memory (ROM), a random access Memory (Random Access Memory, RAM), a magnetic disk or an optical disk, and the like.
In addition, the embodiment of the application further provides a chip, the chip includes a processor and a communication interface, the communication interface is coupled with the processor, the processor is used for running a program or instructions, each process of the embodiment of the image processing method can be implemented, the same technical effect can be achieved, and in order to avoid repetition, the description is omitted here.
It should be understood that the chips referred to in the embodiments of the present application may also be referred to as system-on-chip chips, chip systems, or system-on-chip chips, etc.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
Furthermore, it should be noted that the scope of the methods and apparatus in the embodiments of the present application is not limited to performing the functions in the order shown or discussed, but may also include performing the functions in a substantially simultaneous manner or in an opposite order depending on the functions involved, e.g., the described methods may be performed in an order different from that described, and various steps may also be added, omitted, or combined. Additionally, features described with reference to certain examples may be combined in other examples.
From the above description of the embodiments, it will be clear to those skilled in the art that the above-described method embodiments may be implemented by means of software plus a necessary general-purpose hardware platform, or of course by hardware alone, although in many cases the former is the preferred implementation. Based on such understanding, the technical solution of the present application, or the part thereof contributing to the prior art, may be embodied in the form of a software product stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disk), including several instructions for causing a terminal (which may be a mobile phone, a computer, a server, an air conditioner, a network device, or the like) to perform the methods of the embodiments of the present application.
The embodiments of the present application have been described above with reference to the accompanying drawings, but the present application is not limited to the above-described embodiments, which are merely illustrative and not restrictive, and many forms may be made by those of ordinary skill in the art without departing from the spirit of the present application and the scope of the claims, which are also within the protection of the present application.

Claims (9)

1. An image processing method, comprising:
receiving a first input to a first image, the first image comprising at least one object;
in response to the first input, performing facial recognition on a target object in the at least one object under the condition that a target human body characteristic model corresponding to the target object is stored, and determining identity information of the target object;
determining a target human body characteristic model corresponding to the identity information of the target object according to the association information of the identity information and the human body characteristic model;
processing the target object according to the target human body characteristic model;
displaying a second image, wherein the second image is an image after the target object is processed;
The target human body characteristic model is constructed by original characteristic data of a third image of the target object and characteristic data processed by the third image, the target human body characteristic model comprises characteristic data of a human body part of the target object, and the characteristic data is used for processing the target human body part of the target object in the first image.
2. The method of claim 1, wherein said processing said target object according to said target human body feature model comprises:
determining a target human body part in the target object according to the target object in the first image;
determining angle information presented by the target human body part in the first image according to the target human body part;
and processing the target human body part through the target human body characteristic model according to the angle information.
3. The method of claim 2, wherein said processing said target object according to said target human body feature model comprises:
determining a target human body characteristic model corresponding to each target human body part according to each target human body part in the plurality of target human body parts when the target human body part is a plurality of target human body parts;
And respectively processing each target human body part according to the characteristic data of the human body part in the target human body characteristic model corresponding to each target human body part to obtain the second image.
4. The method of claim 2, wherein prior to determining a target human body feature model corresponding to each of the plurality of target human body parts from each of the target human body parts, the method further comprises:
displaying first prompt information, wherein the first prompt information comprises a plurality of first options, and the first options are options of human body parts of the target object in the first image;
receiving a second input for the plurality of first options;
determining a target option selected from the plurality of first options in response to the second input;
and determining the human body part corresponding to the target option as a target human body part.
5. The method of claim 1, wherein prior to said processing said target object in accordance with said target human body feature model, said method further comprises:
acquiring original characteristic data of a third image, wherein an object in the third image comprises the target object, and the target object is marked with identity information;
Performing image processing on the third image to obtain processed characteristic data;
and constructing a target human body characteristic model corresponding to the target object according to the original characteristic data of the third image and the processed characteristic data.
6. The method of claim 1, wherein after the displaying the second image, the method further comprises:
acquiring a plurality of fourth images comprising the target object;
determining processed feature data of a fifth image of the plurality of fourth images corresponding to original feature data of the fifth image when an input of image processing of the fifth image is detected;
and updating the target human body characteristic model through the processed characteristic data of the fifth image corresponding to the original characteristic data of the fifth image.
7. The method of claim 1, wherein prior to processing the target object according to the target human body feature model, the method further comprises:
and displaying prompt information under the condition that the target human body characteristic model corresponding to the target object is not stored, wherein the prompt information is used for prompting whether to adjust the target object by adopting preset parameters and whether to establish the target human body characteristic model corresponding to the target object.
8. An image processing apparatus, comprising:
a receiving module for receiving a first input to a first image, the first image comprising at least one object;
a processing module, configured to perform facial recognition on a target object in the at least one object in response to the first input, and determine identity information of the target object when a target human feature model corresponding to the target object is stored; determining a target human body characteristic model corresponding to the identity information of the target object according to the association information of the identity information and the human body characteristic model; processing the target object according to the target human body characteristic model;
the display module is used for displaying a second image, wherein the second image is an image after the target object is processed;
the target human body characteristic model is constructed by original characteristic data of a third image of the target object and characteristic data processed by the third image, the target human body characteristic model comprises characteristic data of a human body part of the target object, and the characteristic data is used for processing the target human body part of the target object in the first image.
9. An electronic device comprising a processor, a memory, and a program or instruction stored on the memory and executable on the processor, wherein the program or instruction, when executed by the processor, implements the steps of the image processing method as claimed in any one of claims 1 to 7.
CN202011630767.4A 2020-12-30 2020-12-30 Image processing method and device and electronic equipment Active CN112785490B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011630767.4A CN112785490B (en) 2020-12-30 2020-12-30 Image processing method and device and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011630767.4A CN112785490B (en) 2020-12-30 2020-12-30 Image processing method and device and electronic equipment

Publications (2)

Publication Number Publication Date
CN112785490A CN112785490A (en) 2021-05-11
CN112785490B true CN112785490B (en) 2024-03-05

Family

ID=75754711

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011630767.4A Active CN112785490B (en) 2020-12-30 2020-12-30 Image processing method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN112785490B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113784084B (en) * 2021-09-27 2023-05-23 联想(北京)有限公司 Processing method and device

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106033529A (en) * 2014-09-12 2016-10-19 宏达国际电子股份有限公司 Image processing method and electronic apparatus
WO2017041295A1 (en) * 2015-09-11 2017-03-16 Intel Corporation Real-time face beautification features for video images
CN110827378A (en) * 2019-10-31 2020-02-21 北京字节跳动网络技术有限公司 Virtual image generation method, device, terminal and storage medium
CN111652016A (en) * 2019-03-27 2020-09-11 上海铼锶信息技术有限公司 Method for amplifying face recognition training data

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9491263B2 (en) * 2014-01-10 2016-11-08 Pixtr Ltd. Systems and methods for automatically modifying a picture or a video containing a face

Also Published As

Publication number Publication date
CN112785490A (en) 2021-05-11

Similar Documents

Publication Publication Date Title
US10565763B2 (en) Method and camera device for processing image
CN109087239B (en) Face image processing method and device and storage medium
EP3086275A1 (en) Numerical value transfer method, terminal, cloud server, computer program and recording medium
US20150049924A1 (en) Method, terminal device and storage medium for processing image
US11030733B2 (en) Method, electronic device and storage medium for processing image
WO2016145830A1 (en) Image processing method, terminal and computer storage medium
CN111583154B (en) Image processing method, skin beautifying model training method and related device
CN110263617B (en) Three-dimensional face model obtaining method and device
CN108200337B (en) Photographing processing method, device, terminal and storage medium
CN112532885B (en) Anti-shake method and device and electronic equipment
CN110689479B (en) Face makeup method, device, equipment and medium
CN112333385B (en) Electronic anti-shake control method and device
CN112785490B (en) Image processing method and device and electronic equipment
CN110568770A (en) method for controlling intelligent household equipment and control equipment
CN111353946A (en) Image restoration method, device, equipment and storage medium
CN112153281A (en) Image processing method and device
CN111373409B (en) Method and terminal for obtaining color value change
CN112561787B (en) Image processing method, device, electronic equipment and storage medium
WO2022042502A1 (en) Beautifying function enabling method and apparatus, and electronic device
CN112702533B (en) Sight line correction method and sight line correction device
CN110941977A (en) Image processing method, image processing device, storage medium and electronic equipment
CN112861592B (en) Training method of image generation model, image processing method and device
CN112184540A (en) Image processing method, image processing device, electronic equipment and storage medium
CN106851100B (en) Photo processing method and system
CN114004922B (en) Bone animation display method, device, equipment, medium and computer program product

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant