CN112785490A - Image processing method and device and electronic equipment - Google Patents

Info

Publication number
CN112785490A
Authority
CN
China
Prior art keywords: human body, image, target, target human, target object
Prior art date
Legal status: Granted (assumption; not a legal conclusion)
Application number
CN202011630767.4A
Other languages: Chinese (zh)
Other versions: CN112785490B (en)
Inventor
段霞霖
Current Assignee (the listed assignee may be inaccurate)
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date (assumption; not a legal conclusion)
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd
Priority to CN202011630767.4A
Publication of CN112785490A
Application granted
Publication of CN112785490B
Active legal status
Anticipated expiration

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00: Geometric image transformations in the plane of the image
    • G06T 3/18: Image warping, e.g. rearranging pixels individually
    • G06T 5/00: Image enhancement or restoration
    • G06T 5/77: Retouching; Inpainting; Scratch removal
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/103: Static body considered as a whole, e.g. static pedestrian or occupant recognition

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The application discloses an image processing method, an image processing apparatus, and an electronic device, belonging to the technical field of communication. The method includes: receiving a first input to a first image, the first image including at least one object; in response to the first input, when a target human body feature model corresponding to a target object among the at least one object is stored, processing the target object according to the target human body feature model; and displaying a second image, the second image being an image obtained after the target object is processed. The target human body feature model includes feature data of a human body part, and the feature data is used to process the target human body part of the target object in the first image. The method provided by the embodiments of the application can solve the problem of low image processing efficiency.

Description

Image processing method and device and electronic equipment
Technical Field
The application belongs to the technical field of communication, and particularly relates to an image processing method and device and electronic equipment.
Background
With the development of electronic devices, more and more users perform image shooting and post-processing through the image beautification functions provided by electronic devices, to satisfy their pursuit of better image effects.
However, during image beautification, large numbers of images are processed with the uniform parameters provided by the beautification function. Because all users beautify images with the same parameters, the beautification lacks pertinence, and users must manually fine-tune the parameters afterwards, which is cumbersome. In addition, when people in multiple images are beautified with the same parameters, any variation in the user's manual adjustments makes the same person look different across images, which degrades the beautification effect.
Disclosure of Invention
An object of the embodiments of the present application is to provide an image processing method, an image processing apparatus, and an electronic device, which can solve the problem of low image processing efficiency.
In order to solve the technical problem, the present application is implemented as follows:
in a first aspect, an embodiment of the present application provides an image processing method, which may include:
receiving a first input to a first image, the first image including at least one object;
in response to the first input, in a case where a target human body feature model corresponding to a target object among the at least one object is stored, processing the target object according to the target human body feature model;
displaying a second image, wherein the second image is an image obtained after the state of the target object in the first image is adjusted;
the target human body feature model comprises feature data of a human body part, and the feature data is used for processing the target human body part of the target object in the first image.
In a second aspect, an embodiment of the present application provides an image processing apparatus, which may include:
a receiving module for receiving a first input to a first image, the first image comprising at least one object;
the processing module is used for responding to the first input, and processing the target object according to the target human body feature model under the condition that the target human body feature model corresponding to the target object in the at least one object is stored;
the display module is used for displaying a second image, wherein the second image is an image obtained after the state of the target object in the first image is adjusted;
the target human body feature model comprises feature data of a human body part, and the feature data is used for processing the target human body part of the target object in the first image.
In a third aspect, the present application provides an electronic device, which includes a processor, a memory, and a program or instructions stored on the memory and executable on the processor, where the program or instructions, when executed by the processor, implement the steps of the image processing method according to the first aspect.
In a fourth aspect, the present application provides a computer-readable storage medium on which a program or instructions are stored, and the program or instructions, when executed by a processor, implement the steps of the image processing method according to the first aspect.
In a fifth aspect, an embodiment of the present application provides a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to execute a program or instructions to implement the image processing method according to the first aspect.
In the embodiments of the application, the target human body part of the target object in the first image is processed in a targeted manner through the target human body feature model corresponding to that object, so a tailored image beautification scheme can be provided for each target object. In addition, because each target human body part is processed through the target human body feature model, the user does not need to manually adjust each body part when processing person images in batches; user operations are reduced, images of the same user present a consistent deformation effect, and image processing efficiency is improved.
Drawings
Fig. 1 is a schematic diagram of an image processing architecture according to an embodiment of the present disclosure;
fig. 2 is a schematic diagram of a position change in image processing according to an embodiment of the present disclosure;
FIG. 3 is a schematic diagram of another image processing architecture provided in the embodiments of the present application;
fig. 4 is a flowchart of an image processing method according to an embodiment of the present application;
fig. 5 is a flowchart of an image processing method based on a user A and a user B according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application;
fig. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
fig. 8 is a schematic diagram of a hardware structure of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms "first", "second", and the like in the description and claims of the present application are used to distinguish similar objects, and are not necessarily used to describe a particular sequence or chronological order. It should be understood that the data so used are interchangeable under appropriate circumstances, so that the embodiments of the application can be practiced in sequences other than those illustrated or described herein. Objects distinguished by "first", "second", and the like are usually of one class, and their number is not limited; for example, the first object may be one object or multiple objects. In addition, "and/or" in the description and claims denotes at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the preceding and succeeding objects.
Based on this, the image processing method provided by the embodiment of the present application is described in detail below with reference to fig. 1 to fig. 3 through a specific embodiment and an application scenario thereof.
The application provides two image processing architectures, wherein the first image processing architecture can comprise an electronic device, and the second image processing architecture can comprise an electronic device and a server. The following describes the image processing methods provided in the embodiments of the present application by taking these two architectures as examples.
First, the description is based on the image processing architecture of fig. 1, which includes an electronic device 10. In this architecture, the user shoots a person image with the electronic device, and the captured image is then beautified; the specific process is as follows.
First, a human body feature model is constructed from person images previously taken by the user. A plurality of images taken by the user, such as image 1 to image 7, may be acquired; face recognition is performed on each image to determine at least one object in them, and each object is marked with identification information. For example, if object 1 is recognized, its identification information is marked as person A; similarly, if an object 2 different from object 1 is recognized, its identification information is marked as person B. Here, the person in a self-portrait image is generally the user of the electronic device.
Next, from the images containing object 1, third feature data of object 1's body parts in those images and fourth feature data of those body parts after adjustment are determined. The fourth feature data may be at least one of: data of body parts manually adjusted by the user, and data of body parts adjusted with preset parameters and confirmed by the user. A preset human body feature model is then trained with the third feature data and the fourth feature data until a preset training condition is met, yielding the human body feature model corresponding to object 1.
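The training step above can be sketched as follows. This is a hypothetical illustration only: the patent does not specify a training algorithm, and `build_feature_model`, the part names, and all numbers are invented. Here the learned "feature data" is simply the mean adjustment ratio the user applied to each body part across historical images:

```python
# Hypothetical sketch: learn a per-person feature model as the average
# adjustment ratio (adjusted / original) for each body part across the
# user's historical edits. All names and values are illustrative.

def build_feature_model(original_data, adjusted_data):
    """original_data / adjusted_data: {part_name: [values across images]}.
    Returns {part_name: mean adjustment ratio} learned from past edits."""
    model = {}
    for part, originals in original_data.items():
        adjusted = adjusted_data[part]
        # Ratio of each confirmed adjustment to the original measurement.
        ratios = [adj / orig for orig, adj in zip(originals, adjusted)]
        model[part] = sum(ratios) / len(ratios)
    return model

# Object 1's eye-size values before and after manual adjustment in 3 images.
original = {"eye_size": [1.0, 1.0, 1.0]}
adjusted = {"eye_size": [1.20, 1.18, 1.22]}
model = build_feature_model(original, adjusted)
```

A real implementation would train on richer feature vectors (contours, proportions, skin tone) until the patent's "preset training condition" is met; the averaging here only stands in for that step.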
Here, the human body feature model corresponding to object 1 may be associated with person A, the identity information of object 1, so that object 1 in other images can later be beautified according to the model associated with person A.
Object 1 and object 2 may each have their own human body feature model. Of course, in some cases they may share one model, as long as each object's body parts are distinguished within it.
Based on the constructed human body feature model, the following describes beautifying object 1 in a new image according to the model corresponding to object 1.
In this architecture, the electronic device receives a first input to a first image taken by the user, where the first image includes at least one object. The first image may be a self-portrait of the user or a person image photographed by the user.
Then, in response to the first input, the electronic device determines whether a target human body feature model corresponding to a target object among the at least one object, such as object 1, is stored; if such a model is stored, the target object is processed according to it. If the first image includes two objects, object 1 and object 2, the model corresponding to each object can be determined: target human body feature model 1 for object 1 and target human body feature model 2 for object 2.
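The per-object lookup in this step can be sketched as follows; a minimal illustration in which `stored_models`, the identity labels, and the dictionary layout are all assumptions rather than the patent's implementation:

```python
# Hypothetical sketch: after face recognition labels each object in the
# first image, fetch the stored feature model for each identity, with
# None marking objects that have no stored model.

stored_models = {
    "person_A": {"eye_size": 1.20, "face_width": 0.92},  # invented values
    "person_B": {"eye_size": 1.10},
}

def models_for_objects(recognized_identities):
    """Return {identity: model or None} for every object in the image."""
    return {ident: stored_models.get(ident) for ident in recognized_identities}

# Two objects recognized; only person_A has a stored model.
result = models_for_objects(["person_A", "person_C"])
```

Objects that map to `None` would fall through to the preset-parameter prompt described later in the document.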
The target human body feature model may include feature data of multiple body parts, such as face data, leg data, and abdomen data. The face data may specifically include skin color data, hair length data, data on the proportions, positions, shapes, and sizes of the facial features, face contour data, and the like. On this basis, after the target human body part of object 1, such as the face, is determined in the first image, its position in the first image can be adjusted according to feature data such as the shapes and sizes of the facial features. Alternatively, when feature data such as hair length is inconsistent with the hair length of the target object in the current first image, whether to adjust the hair in the first image to the length given by the feature data may be determined according to the user's selection.
Then, the second image obtained after the target object is processed is displayed. As shown in fig. 2, taking eye size as an example, the eye occupies region 1 in the first image and is adjusted from region 1 to region 2 according to the feature data.
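The region-1-to-region-2 adjustment of fig. 2 can be illustrated with a simple bounding-box scale; this is a hypothetical sketch (`scale_region`, the coordinates, and the 1.2 factor are invented, and a real implementation would warp pixels, not just move a box):

```python
# Hypothetical illustration of Fig. 2: enlarging the eye moves its
# bounding box from region 1 to a larger region 2, scaled about the
# box centre by a factor taken from the feature model's eye-size data.

def scale_region(box, factor):
    """box = (x1, y1, x2, y2); return the box scaled about its centre."""
    cx, cy = (box[0] + box[2]) / 2, (box[1] + box[3]) / 2
    half_w = (box[2] - box[0]) / 2 * factor
    half_h = (box[3] - box[1]) / 2 * factor
    return (cx - half_w, cy - half_h, cx + half_w, cy + half_h)

region_1 = (40, 50, 60, 60)             # eye box in the first image
region_2 = scale_region(region_1, 1.2)  # enlarged per the feature model
```

The centre stays fixed while the box grows, matching the figure's depiction of the eye expanding in place.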
In addition, in one possible embodiment, the image processing architecture of fig. 3 includes an electronic device and a server. Unlike the architecture in fig. 1, where the first image is captured by the user, the first image in fig. 3 may be downloaded by the user from the server of some platform; it may be an image of a user who frequently uses the electronic device, or a person image related to that user. The downloaded first image can be beautified to obtain beautification information for the object in it, and the target human body feature model corresponding to that object is then constructed. Under the architecture of fig. 3, the construction of the model and the beautification of the target object in the first image can refer to the process shown for fig. 1.
In another possible embodiment based on the architecture of fig. 3, the human body feature model is constructed on the server side. The server can construct a human body feature model of each object from a plurality of images uploaded by the electronic device. Alternatively, when the server determines, from identity information of an object uploaded by the user, that a plurality of images corresponding to that identity are stored on the server, it can construct the object's human body feature model from those images.
On this basis, when the electronic device receives the user's first input to the first image, it sends an image processing request to the server. On receiving the request, the server processes the target object according to the previously constructed target human body feature model corresponding to object 1 in the first image to obtain a second image, and sends the second image to the electronic device for display.
Here, the process of adjusting the target object and the process of constructing the human body feature model may refer to the steps involved in fig. 1, and are not described herein again.
In this way, the body parts of the target object in an image are adjusted in a targeted manner through the target human body feature model corresponding to that object, so a tailored image beautification scheme can be provided for each target object.
In addition, because each target body part is adjusted through the target human body feature model, the user does not need to manually adjust each body part when processing person images in batches; user operations are reduced, images of the same user show a consistent deformation effect, and image processing efficiency is improved.
It should be noted that the image processing method provided by the embodiments of the application can be applied not only to the above scene in which a user beautifies captured person images, but also to scenes in which person images in a video are beautified, such as beautifying the body of an actor in a movie or of a model in a series of captured images. The target human body of the object in each frame is beautified uniformly through the object's customized human body feature model, so that when person images are processed in batches there is no need to manually adjust each body part in each frame; user operations are reduced, images of the same person present a consistent deformation effect, and image processing efficiency is improved.
According to the application scenarios, the image processing method provided by the embodiment of the present application is described in detail below with reference to fig. 4 to 5.
Fig. 4 is a flowchart of an image processing method according to an embodiment of the present application.
As shown in fig. 4, the image processing method may be applied to an electronic device or an electronic device and a server, and specifically includes the following steps:
First, in step 410, a first input to a first image is received, the first image including at least one object. Next, in step 420, in response to the first input, in a case where a target human body feature model corresponding to a target object among the at least one object is stored, the target object is processed according to that model; the target human body feature model includes feature data of a human body part, and the feature data is used to process the target human body part of the target object in the first image. Then, in step 440, a second image is displayed, the second image being an image obtained by processing the target object.
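The steps above can be sketched as a single function. This is a hypothetical illustration, not the patent's implementation: `handle_first_input`, the dictionary representation of an image's measured features, and all values are invented:

```python
# Hypothetical sketch of steps 410/420/440: receive the input, process
# the target object with its stored model when one exists, and return
# the second image for display. Images are reduced to per-part
# measurements for illustration only.

def handle_first_input(first_image, target_identity, stored_models):
    model = stored_models.get(target_identity)   # step 420: model lookup
    if model is None:
        return first_image                       # no model stored: unchanged
    second_image = dict(first_image)             # do not mutate the original
    for part, value in model.items():            # apply per-part feature data
        second_image[part] = value
    return second_image                          # step 440: image to display

img = {"eye_size": 1.0, "face_width": 1.0}
out = handle_first_input(img, "person_A", {"person_A": {"eye_size": 1.2}})
```

Only the parts present in the model are touched; parts without feature data pass through unchanged.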
In this way, the target human body part of the target object in the first image is processed in a targeted manner through the target human body feature model corresponding to that object, so a tailored image beautification scheme can be provided for each target object.
In addition, because each target human body part of the target object in the first image is processed through the target human body feature model, the user does not need to manually adjust each body part when processing person images in batches; user operations are reduced, images of the same user present a consistent deformation effect, and image processing efficiency is improved.
It should be noted that, in the embodiments of the application, processing the target object may include, for example, image beautification operations such as whitening, skin smoothing, face slimming, height increasing, and eye enlarging.
The above steps are described in detail below, specifically as follows:
First, before step 420, in one or more optional embodiments, the image processing method may further include constructing the target human body feature model corresponding to the target object in the following manner:
Acquiring original characteristic data of a third image, wherein an object in the third image comprises a target object, and the target object is marked with identity information;
processing the third image to obtain processed characteristic data;
and constructing a target human body feature model corresponding to the target object according to the original feature data and the processed feature data of the third image.
Specifically, the processed feature data includes at least one of the following data:
data from each adjustment of each of the target object's multiple body parts; and data from each adjustment of each body part determined according to preset parameters, for example data from the user manually adjusting the target object's eye size each time, or the user confirming an eye-size adjustment made with preset parameters, suitable for most users, that are set in an application providing an image beautification function.
Here, the target human body feature model in the embodiments of the application may be three-dimensional. A three-dimensional target human body feature model can be constructed as follows: first, acquire multiple images of the target object at multiple angles, such as head-up 15 degrees, left side face 45 degrees, or right side face 30 degrees, and construct an original three-dimensional model of the target object from these multi-angle images. Then acquire the processed feature data of each of those images and construct a processed three-dimensional model of the target object from that data. Finally, construct the target human body feature model from the original and processed three-dimensional models. When a new image including the target object is later received, it can be adjusted with this three-dimensional model. Because the model covers 360 degrees and matches the user's aesthetic preferences, images at any angle can be processed through it with consistent results, avoiding the problem that a user's own ad hoc adjustments leave each image looking different, as if of a different person.
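The multi-angle lookup this enables can be sketched as follows. This is hypothetical: the per-angle dictionary, `features_for_angle`, and all numbers are invented, and a real model would interpolate over a 3D mesh rather than snap to the nearest stored angle:

```python
# Hypothetical sketch of the three-dimensional model: feature data is
# stored per viewing angle, and a query returns the data for the
# closest stored angle so any pose can be processed consistently.

angle_model = {
    0:   {"face_width": 0.92},   # frontal view (invented values)
    45:  {"face_width": 0.95},   # left side face, 45 degrees
    -30: {"face_width": 0.94},   # right side face, 30 degrees
}

def features_for_angle(model, angle):
    """Return the feature data stored for the nearest available angle."""
    nearest = min(model, key=lambda a: abs(a - angle))
    return model[nearest]

# A new image shows the face at roughly 40 degrees; the 45-degree
# entry is the closest match.
data = features_for_angle(angle_model, 40)
```

Consistency across poses comes from every angle's entry being derived from the same underlying 3D model, rather than from independent per-image edits.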
Based on this, the target human body part can be adjusted according to the target human body feature model determined above.
In addition, for the above-mentioned target human body feature model, the embodiment of the present application provides a way to update the target human body feature model in addition to a way to construct the target human body feature model, which is specifically as follows:
acquiring a plurality of fourth images including the target object;
when an input of image processing on a fifth image among the plurality of fourth images is detected, determining the processed feature data of the fifth image corresponding to its original feature data;
and updating the target human body feature model with the processed feature data of the fifth image corresponding to its original feature data.
In this way, even if a body part of the target object changes, for example the hair length changes, the user succeeds in losing weight, or the user wears cosmetic contact lenses, the model adapts to the change, and the body part is beautified with the latest target human body feature model.
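One way to realize this update is a running average that blends the newly observed adjustment into the stored model; a hypothetical sketch in which `update_model`, the blend weight, and the hair-length numbers are all invented rather than taken from the patent:

```python
# Hypothetical update rule: move each stored feature value toward the
# latest observed adjustment, so changes such as a new hair length
# gradually replace stale feature data. weight is an invented parameter.

def update_model(model, new_feature_data, weight=0.3):
    """Blend the latest observed adjustment into the stored model."""
    updated = dict(model)
    for part, new_value in new_feature_data.items():
        old = updated.get(part, new_value)   # unseen parts adopt the new value
        updated[part] = (1 - weight) * old + weight * new_value
    return updated

model = {"hair_length": 30.0}
model = update_model(model, {"hair_length": 10.0})  # the user cut their hair
```

Repeated updates with the new value converge the model to it, which is the adaptive behavior the paragraph above describes.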
Specifically, in one or more optional embodiments, before processing the target object, the image processing method may further include:
carrying out face recognition on a target object and determining identity information of the target object;
and determining a target human body characteristic model corresponding to the identity information of the target object according to the association information of the identity information and the human body characteristic model.
Based on this, step 420 may specifically include:
determining a target human body part in the target object according to the target object in the first image;
according to the target human body part, determining angle information of the target human body part presented in the first image;
and processing the target human body part through the target human body characteristic model according to the angle information.
In the embodiments of the application, the target human body part may be adjusted based on the target human body feature model constructed above; the specific adjustment process is as follows:
According to the angle information at which the target human body part appears in the first image, such as left side face 45 degrees, acquire the feature data corresponding to that angle in the target human body feature model, i.e., the two-dimensional feature data corresponding to the left side face at 45 degrees; then process the target human body part in the first image according to that two-dimensional feature data, for example by face slimming, eye enlarging, skin smoothing, nose height adjustment, or leg lengthening. In this way, images of the target object at various angles can be processed according to the target human body feature model with consistent results, avoiding the problem that a user's own ad hoc adjustments leave each image looking different, as if of a different person.
Here, if multiple body parts of the target object are processed, such as adjusting both the height of the nose and the size of the eyes, step 420 may specifically include:
determining a target human body feature model corresponding to each target human body part according to each target human body part in the target human body parts under the condition that the target human body parts are multiple;
and respectively processing each target human body part according to the characteristic data of the human body part in the target human body characteristic model corresponding to each target human body part to obtain a second image.
Here, when the target human body parts are the eyes, nose, and skin, the change in the state of each part in the first image may cause some deformation of the target object. The adjusted image may therefore be smoothed so that the deformed parts are displayed reasonably, improving the presentation of the second image.
In addition, in one or more alternative embodiments, before determining, according to each target human body part of the plurality of target human body parts, a target human body feature model corresponding to each target human body part, the image processing method may further include:
displaying first prompt information, wherein the first prompt information comprises a plurality of first options, and the first options are options of human body parts of the target object in the first image;
receiving a second input for the plurality of first options;
determining a target option selected among the plurality of first options in response to the second input;
and determining the human body part corresponding to the target option as the target human body part.
For example, when the eyes, nose, skin, hair, lips, and face of the target object are recognized in the first image, options may be generated for these body parts so that the user can select which of them to adjust. If an input selecting all options is received, all recognized body parts are determined as target human body parts to adjust; if an input selecting the eye option is received, the eyes corresponding to that option are determined as the target human body part.
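The prompt-and-select flow can be sketched as follows; a hypothetical illustration in which `resolve_target_parts` and the `"all"` sentinel are invented, not the patent's UI logic:

```python
# Hypothetical sketch of the first-prompt flow: body parts recognized
# in the image become selectable first options, and the user's second
# input picks which become target parts. "all" selects every part.

def resolve_target_parts(recognized_parts, selection):
    """Map the user's selection onto the recognized body parts."""
    if selection == "all":
        return list(recognized_parts)
    return [p for p in recognized_parts if p in selection]

parts = ["eyes", "nose", "skin", "hair", "lips", "face"]
targets = resolve_target_parts(parts, {"eyes"})   # user picked the eye option
```

Filtering against `recognized_parts` rather than trusting the selection directly ensures a stale option can never name a part absent from the image.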
In addition, before step 420, in one or more possible alternative embodiments, the image processing method may further include:
and under the condition that the target human body characteristic model corresponding to the target object is determined not to be stored, displaying prompt information, wherein the prompt information is used for prompting whether to adjust the target object by adopting preset parameters and whether to establish the target human body characteristic model corresponding to the target object.
It should be noted that the preset parameters may include preset parameters suitable for most users set in an application program of the image beautification function, such as parameters in an image filter.
In order to better describe the above image processing method, the embodiment of the present application is further described with reference to fig. 5, and the image processing architecture is taken as an electronic device and an object a and an object B are taken as examples to describe the image processing method provided by the embodiment of the present application.
Here, user A may beautify an existing photo. The electronic device collects data for each human body part of user A from a plurality of historical images and determines deformation information for user A's face and/or body, from which a target human body feature model of user A is constructed. In this way, when user A is recognized after a photo is taken, the target human body feature model of user A is mapped onto the current image to beautify user A's human body parts in the current image.
If the current image also includes user B, the electronic device determines whether a target human body feature model corresponding to user B is stored. When it is determined that no such model exists in the electronic device, the user is prompted whether to apply the human body feature model corresponding to the preset parameters to process user B's human body parts in the current image. After an input confirming the use of the preset parameters is received, processing proceeds according to those parameters.
It should be noted that if, after user B's human body parts are processed with the preset parameters, the user continues to adjust user B manually, the electronic device may acquire the original feature data of user B's human body parts in the current image and the feature data after the manual adjustment, and construct a target human body feature model of user B, so that user B's human body parts are beautified through this model next time.
Similarly, if it is detected that user A's human body parts are manually adjusted after being processed through user A's target human body feature model, the electronic device may acquire the manually adjusted feature data from the current image and update user A's target human body feature model accordingly, so that user A's human body parts are beautified through the updated model next time.
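One plausible way to realize such model construction and updating is to store per-part adjustment ratios derived from the user's manual edits, and to move the stored ratios toward the latest edit. The sketch below is an assumption about the data layout and update rule; the patent does not specify either:

```python
def build_feature_model(original, adjusted):
    """Derive per-part deformation parameters as the ratio between the
    manually adjusted feature data and the original feature data,
    e.g. {"eyes": {"width": 1.1}} meaning 'widen the eyes by 10%'.
    """
    return {part: {k: adjusted[part][k] / v for k, v in feats.items()}
            for part, feats in original.items()}

def update_feature_model(model, original, adjusted, lr=0.5):
    """Move the stored parameters toward the latest manual adjustment so
    the model keeps tracking the user's current preferences.
    """
    latest = build_feature_model(original, adjusted)
    for part, feats in latest.items():
        stored = model.setdefault(part, {})
        for k, v in feats.items():
            prev = stored.get(k, 1.0)  # 1.0 = "no adjustment"
            stored[k] = prev + lr * (v - prev)
    return model
```

Applying the stored ratio to the corresponding measurement in a new image then reproduces the user's preferred adjustment without further manual input.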
The image processing procedure above is illustrated in fig. 5 and includes steps 501 to 508, which are described in detail below.
Step 501, determining a target human body feature model of each object in at least one object according to the at least one object in a plurality of images, wherein the at least one object includes user A.
Step 502, a first input of a user editing a first image is received.
Step 503, in response to the first input, performing facial recognition on the first image and determining the two target objects in the first image, namely user A and user B.
Step 504, determining whether a target human body feature model corresponding to each user is stored.
When it is determined that the target human body feature model corresponding to user A is stored, step 505 is executed; and when it is determined that the target human body feature model corresponding to user B is not stored, step 506 is executed.
Step 505, adjusting the target human body parts of user A through the target human body feature model corresponding to user A, and executing step 508.
Step 506, displaying prompt information, wherein the prompt information is used for prompting whether to adjust the user B by adopting preset parameters and whether to establish a target human body feature model corresponding to the user B, and executing step 507.
Step 507, upon receiving an input confirming adjustment of user B with the preset parameters and establishment of a target human body feature model corresponding to user B, adjusting user B with the preset parameters, and acquiring the original feature data of each human body part of user B in the first image together with the feature data after the adjustment.
A target human body feature model corresponding to user B is then constructed from the original feature data of each human body part of user B and the feature data obtained after adjustment with the preset parameters, and the model is stored, so that other images including user B's human body parts can be beautified through this model next time.
Step 508, displaying a second image, wherein the second image is an image obtained by processing the target objects.
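Steps 504 to 508 amount to a per-object dispatch, which can be sketched as follows (the function names, the confirmation callback standing in for the prompt of step 506, and the returned structure are illustrative assumptions):

```python
def process_image(target_objects, stored_models, preset_params, confirm_preset):
    """Per-object dispatch for steps 504-508: apply the stored model when
    one exists (step 505); otherwise ask via `confirm_preset` (the prompt
    of step 506) and, on confirmation, apply the preset parameters and
    store a new model seeded from them (step 507).

    Returns what was applied to each object, standing in for the
    "second image" of step 508.
    """
    applied = {}
    for obj in target_objects:
        model = stored_models.get(obj)
        if model is not None:
            applied[obj] = ("model", model)
        elif confirm_preset(obj):
            stored_models[obj] = dict(preset_params)  # available next time
            applied[obj] = ("preset", dict(preset_params))
        else:
            applied[obj] = ("unchanged", None)
    return applied
```

In the example above, user A (stored model) takes the step 505 branch while user B (no stored model, preset confirmed) takes the step 506/507 branch and gains a stored model for subsequent images.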
In addition, when both the target human body feature model corresponding to user A and the target human body feature model corresponding to user B are stored, the following steps 509 to 513 may be performed.
Step 509, a first input is received from a user to edit the first image.
Step 510, in response to the first input, performing facial recognition on the first image and determining the two target objects in the first image, namely user A and user B.
Step 511, determining whether a target human body feature model corresponding to each user is stored.
When it is determined that both the target human body feature model corresponding to user A and the target human body feature model corresponding to user B are stored, step 512 is executed.
Step 512, adjusting the target human body parts of user A through the target human body feature model corresponding to user A, adjusting the target human body parts of user B through the target human body feature model corresponding to user B, and executing step 513.
Step 513, displaying a second image, wherein the second image includes the target object processed through the target human body feature model corresponding to user A and the target object processed through the target human body feature model corresponding to user B.
In this way, the user's behavior habits can be learned and pictures can be processed in batches: the figures in the processed pictures are highly consistent, and the human body features presented for a given user do not vary across different periods. The approach is fast and convenient, reducing user operations while improving image processing efficiency.
In summary, in the embodiments of the present application, the target human body feature model corresponding to the target object is used to adjust the human body parts of the target object in the image in a targeted manner, so that a targeted image beautification scheme can be provided for each target object. In addition, because each target human body part of the target object is processed through the target human body feature model, a user does not need to manually adjust each human body part when person images are processed in batches; user operations are reduced, images of the same user present a uniform deformation effect, and image processing efficiency is improved.
It should be noted that the execution body of the image processing method provided in the embodiments of the present application may be an image processing apparatus, or a control module in the image processing apparatus for executing the image processing method. In the embodiments of the present application, an image processing apparatus executing the image processing method is taken as an example to describe the image processing apparatus provided herein.
Based on the same inventive concept, the present application provides an image processing apparatus. The details are described with reference to fig. 6.
Fig. 6 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application.
As shown in fig. 6, the image processing apparatus 60 may be applied to an electronic device or a server, and specifically may include:
a receiving module 601, configured to receive a first input to a first image, where the first image includes at least one object;
a processing module 602, configured to, in response to a first input, process a target object according to a target human body feature model in a case where the target human body feature model corresponding to the target object in the at least one object is stored;
a display module 603, configured to display a second image, where the second image is an image obtained by processing a target object;
the target human body feature model comprises feature data of a plurality of human body parts, and the feature data is used for processing the target human body part of the target object.
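A minimal sketch of what such a stored model might look like, assuming feature data is keyed by body part (the patent does not fix a schema, so the class and field names here are illustrative):

```python
from dataclasses import dataclass, field

@dataclass
class HumanFeatureModel:
    """One model per identity, holding feature data keyed by body part,
    e.g. {"eyes": {"scale": 1.1}}."""
    identity: str
    part_features: dict = field(default_factory=dict)

    def features_for(self, part):
        # Parts without stored data fall back to "no adjustment".
        return self.part_features.get(part, {})
```

The processing module would look the model up by the recognized identity and apply `features_for(part)` to each target human body part.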
In this way, the human body parts of the target object in the image are adjusted in a targeted manner through the target human body feature model corresponding to the target object, so that a targeted image beautification scheme can be provided for each target object. In addition, because each target human body part of the target object is adjusted through the target human body feature model, a user does not need to manually adjust each human body part when person images are processed in batches; user operations are reduced, images of the same user present a uniform deformation effect, and image processing efficiency is improved.
The image processing apparatus 60 is described in detail below.
in a possible embodiment, the processing module 602 may be further configured to perform facial recognition on the target object, and determine identity information of the target object;
and determining a target human body characteristic model corresponding to the identity information of the target object according to the association information of the identity information and the human body characteristic model.
Based on this, the processing module 602 may be specifically configured to determine, according to the target object in the first image, a target human body part in the target object;
according to the target human body part, determining angle information of the target human body part presented in the first image;
and processing the target human body part through the target human body characteristic model according to the angle information.
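Angle-dependent processing could, for instance, attenuate an adjustment as the face turns away from the camera. The cosine falloff below is an assumed heuristic for illustration; the embodiment only states that processing depends on the angle information:

```python
import math

def scale_for_angle(base_scale, yaw_degrees):
    """Attenuate a scaling adjustment by the face's yaw angle so that a
    profile view is warped less strongly than a frontal one.

    base_scale is the frontal-view adjustment (e.g. 1.2 = enlarge 20%);
    at 90 degrees of yaw the adjustment fades to neutral (1.0).
    """
    falloff = max(math.cos(math.radians(yaw_degrees)), 0.0)
    return 1.0 + (base_scale - 1.0) * falloff
```

For example, an eye-enlargement factor of 1.2 would be applied in full at 0 degrees but reduced toward 1.0 as the presented angle approaches a full profile.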
In another possible embodiment, the processing module 602 may be specifically configured to, in a case that the target human body part is a plurality of target human body parts, determine, according to each target human body part of the plurality of target human body parts, a target human body feature model corresponding to each target human body part;
and respectively processing each target human body part according to the characteristic data of the human body part in the target human body characteristic model corresponding to each target human body part to obtain a second image.
The display module 603 in this embodiment may be further configured to display first prompt information, where the first prompt information includes a plurality of first options, and the first option is an option of a human body part of the target object in the first image. Based on this, the receiving module 601 may be further configured to receive a second input for the plurality of first options. The processing module 602 may be further configured to, in response to the second input, determine a target option selected among the plurality of first options; and determining the human body part corresponding to the target option as the target human body part.
In addition, the image processing apparatus 60 in this embodiment may further include a building module, configured to obtain original feature data of a third image, where an object in the third image includes a target object, and the target object is marked with identity information;
processing the third image to obtain processed characteristic data;
and constructing a target human body feature model corresponding to the target object according to the original feature data and the processed feature data of the third image.
In addition, the image processing apparatus 60 in the embodiment of the present application may further include an updating module configured to acquire a plurality of fourth images including the target object;
determining processed feature data of a fifth image corresponding to original feature data of the fifth image when an input of image processing on the fifth image among the plurality of fourth images is detected;
and updating the target human body feature model through the processed feature data of the fifth image corresponding to the original feature data of the fifth image.
The processing module 602 in this embodiment may be further configured to, in a case that the target human body feature model corresponding to the target object is not stored, display a prompt message, where the prompt message is used to prompt whether to adjust the target object by using preset parameters and whether to establish the target human body feature model corresponding to the target object.
The image processing apparatus in the embodiments of the present application may be a stand-alone apparatus, or may be a component, integrated circuit, or chip in an electronic device. The apparatus may be a mobile electronic device or a non-mobile electronic device. By way of example, the mobile electronic device may be a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a vehicle-mounted electronic device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook, or a personal digital assistant (PDA), and the non-mobile electronic device may be a server, a network attached storage (NAS), a personal computer (PC), a television (TV), a teller machine, or a self-service machine; the embodiments of the present application are not particularly limited.
The image processing apparatus in the embodiments of the present application may be an apparatus having an operating system. The operating system may be the Android operating system, the iOS operating system, or another possible operating system; the embodiments of the present application are not specifically limited.
The image processing apparatus provided in the embodiment of the present application can implement each process implemented by the method embodiments of fig. 1 to fig. 5, and is not described herein again to avoid repetition.
In summary, according to the embodiments of the present application, the target human body parts of the target object in the first image are processed in a targeted manner through the target human body feature model corresponding to the target object, so that a targeted image beautification scheme can be provided for each target object. In addition, because each target human body part of the target object in the first image is processed through the target human body feature model, a user does not need to manually adjust each human body part when person images are processed in batches; user operations are reduced, images of the same user present a uniform deformation effect, and image processing efficiency is improved.
Optionally, as shown in fig. 7, an electronic device 700 is further provided in this embodiment of the present application, and includes a processor 701, a memory 702, and a program or an instruction stored in the memory 702 and executable on the processor 701, where the program or the instruction is executed by the processor 701 to implement each process of the above-mentioned embodiment of the image processing method, and can achieve the same technical effect, and no further description is provided here to avoid repetition.
It should be noted that the electronic devices in the embodiments of the present application include the mobile electronic device and the non-mobile electronic device described above.
Fig. 8 is a schematic diagram of a hardware structure of an electronic device according to an embodiment of the present application.
The electronic device 800 includes, but is not limited to: a radio frequency unit 801, a network module 802, an audio output unit 803, an input unit 804, a sensor 805, a display unit 806, a user input unit 807, an interface unit 808, a memory 809, and a processor 810.
Those skilled in the art will appreciate that the electronic device 800 may further include a power source (e.g., a battery) supplying power to the various components; the power source may be logically connected to the processor 810 via a power management system, which manages charging, discharging, and power consumption. The electronic device structure shown in fig. 8 does not constitute a limitation of the electronic device, which may include more or fewer components than shown, combine certain components, or use a different arrangement of components; details are omitted here.
In this embodiment of the application, the user input unit 807 is configured to receive a first input to a first image, the first image including at least one object.
The processor 810 is configured to, in response to the first input, process the target object according to a target human body feature model in a case where the target human body feature model corresponding to the target object of the at least one object is stored. The target human body feature model includes feature data of a human body part, and the feature data is used for processing the target human body part of the target object in the first image.
The display unit 806 is configured to display a second image, where the second image is an image obtained by processing the target object.
In this way, the target human body parts of the target object in the first image are processed in a targeted manner through the target human body feature model corresponding to the target object, so that a targeted image beautification scheme can be provided for each target object. In addition, because each target human body part of the target object in the first image is processed through the target human body feature model, a user does not need to manually adjust each human body part when person images are processed in batches; user operations are reduced, images of the same user present a uniform deformation effect, and image processing efficiency is improved.
It is to be understood that the input unit 804 may include a graphics processing unit (GPU) 8041 and a microphone 8042; the graphics processor 8041 processes image data of still pictures or video obtained by an image capture device (e.g., a camera) in a video capture mode or an image capture mode. The display unit 806 may include a display panel 8061, which may be configured in the form of a liquid crystal display, an organic light-emitting diode, or the like. The user input unit 807 includes a touch panel 8071, also referred to as a touch screen, and other input devices 8072. The touch panel 8071 may include two parts: a touch detection device and a touch controller. The other input devices 8072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys and switch keys), a trackball, a mouse, and a joystick, which are not described in detail here. The memory 809 may be used to store software programs and various data, including but not limited to application programs and an operating system. The processor 810 may integrate an application processor, which mainly handles the operating system, user interface, and application programs, and a modem processor, which mainly handles wireless communication. It will be appreciated that the modem processor may alternatively not be integrated into the processor 810.
The embodiment of the present application further provides a readable storage medium, where a program or an instruction is stored on the readable storage medium, and when the program or the instruction is executed by a processor, the program or the instruction implements each process of the embodiment of the image processing method, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
The processor is the processor in the electronic device in the above embodiment. The readable storage medium includes a computer-readable storage medium, such as a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk.
In addition, an embodiment of the present application further provides a chip, where the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to execute a program or an instruction to implement each process of the above-mentioned embodiment of the image processing method, and the same technical effect can be achieved.
It should be understood that the chips mentioned in the embodiments of the present application may also be referred to as system-on-chip, system-on-chip or system-on-chip, etc.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
Further, it should be noted that the scope of the methods and apparatus of the embodiments of the present application is not limited to performing the functions in the order illustrated or discussed, but may include performing the functions in a substantially simultaneous manner or in a reverse order based on the functions involved, e.g., the methods described may be performed in an order different than that described, and various steps may be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present application may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the methods of the embodiments of the present application.
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the invention is not limited to the precise embodiments described above, which are meant to be illustrative and not restrictive, and that various changes may be made therein by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (10)

1. An image processing method, comprising:
receiving a first input to a first image, the first image comprising at least one object;
in response to the first input, in a case where a target human body feature model corresponding to a target object of the at least one object is stored, processing the target object according to the target human body feature model;
displaying a second image, wherein the second image is an image obtained after the target object is processed;
the target human body feature model comprises feature data of a human body part, and the feature data is used for processing the target human body part of the target object in the first image.
2. The method of claim 1, wherein prior to the processing the target object according to the target human body feature model, the method further comprises:
performing facial recognition on the target object, and determining identity information of the target object;
and determining the target human body characteristic model corresponding to the identity information of the target object according to the association information of the identity information and the human body characteristic model.
3. The method according to claim 1 or 2, wherein the processing the target object according to the target human body feature model comprises:
determining a target human body part in the target object according to the target object in the first image;
according to the target human body part, determining angle information presented by the target human body part in the first image;
and processing the target human body part through the target human body characteristic model according to the angle information.
4. The method of claim 3, wherein the processing the target object according to the target human body feature model comprises:
under the condition that the target human body part is a plurality of target human body parts, determining a target human body feature model corresponding to each target human body part according to each target human body part in the target human body parts;
and respectively processing each target human body part according to the characteristic data of the human body part in the target human body characteristic model corresponding to each target human body part to obtain the second image.
5. The method of claim 3, wherein prior to the determining, according to each target human body part of the plurality of target human body parts, the target human body feature model corresponding to each target human body part, the method further comprises:
displaying first prompt information, wherein the first prompt information comprises a plurality of first options, and the first options are options of the human body part of the target object in the first image;
receiving a second input for the plurality of first options;
determining a target option selected among the plurality of first options in response to the second input;
and determining the human body part corresponding to the target option as a target human body part.
6. The method of claim 1, wherein prior to the processing the target object according to the target human body feature model, the method further comprises:
acquiring original characteristic data of a third image, wherein an object in the third image comprises the target object, and the target object is marked with identity information;
performing image processing on the third image to obtain processed feature data;
and constructing a target human body feature model corresponding to the target object according to the original feature data of the third image and the processed feature data.
7. The method of claim 1, wherein after the displaying the second image, the method further comprises:
acquiring a plurality of fourth images including the target object;
determining processed feature data of a fifth image corresponding to original feature data of the fifth image when an input of image processing on the fifth image among the plurality of fourth images is detected;
and updating the target human body feature model through the processed feature data of the fifth image corresponding to the original feature data of the fifth image.
8. The method of claim 1, wherein prior to the processing the target object according to the target human body feature model, the method further comprises:
and displaying prompt information under the condition that a target human body characteristic model corresponding to the target object is not stored, wherein the prompt information is used for prompting whether to adjust the target object by adopting preset parameters or not and whether to establish the target human body characteristic model corresponding to the target object or not.
9. An image processing apparatus characterized by comprising:
a receiving module for receiving a first input to a first image, the first image comprising at least one object;
a processing module, configured to, in response to the first input, process a target object in the at least one object according to a target human body feature model in a case where the target human body feature model corresponding to the target object is stored;
the display module is used for displaying a second image, wherein the second image is an image obtained by processing the target object;
the target human body feature model comprises feature data of a human body part, and the feature data is used for processing the target human body part of the target object in the first image.
10. An electronic device comprising a processor, a memory, and a program or instructions stored on the memory and executable on the processor, the program or instructions, when executed by the processor, implementing the steps of the image processing method according to any one of claims 1-8.
CN202011630767.4A 2020-12-30 2020-12-30 Image processing method and device and electronic equipment Active CN112785490B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011630767.4A CN112785490B (en) 2020-12-30 2020-12-30 Image processing method and device and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011630767.4A CN112785490B (en) 2020-12-30 2020-12-30 Image processing method and device and electronic equipment

Publications (2)

Publication Number Publication Date
CN112785490A true CN112785490A (en) 2021-05-11
CN112785490B CN112785490B (en) 2024-03-05

Family

ID=75754711

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011630767.4A Active CN112785490B (en) 2020-12-30 2020-12-30 Image processing method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN112785490B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113784084A (en) * 2021-09-27 2021-12-10 联想(北京)有限公司 Processing method and device

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150199558A1 (en) * 2014-01-10 2015-07-16 Pixtr Ltd. Systems and methods for automatically modifying a picture or a video containing a face
CN106033529A (en) * 2014-09-12 2016-10-19 宏达国际电子股份有限公司 Image processing method and electronic apparatus
WO2017041295A1 (en) * 2015-09-11 2017-03-16 Intel Corporation Real-time face beautification features for video images
CN110827378A (en) * 2019-10-31 2020-02-21 北京字节跳动网络技术有限公司 Virtual image generation method, device, terminal and storage medium
CN111652016A (en) * 2019-03-27 2020-09-11 上海铼锶信息技术有限公司 Method for amplifying face recognition training data

Also Published As

Publication number Publication date
CN112785490B (en) 2024-03-05

Similar Documents

Publication Publication Date Title
CN110110118B (en) Dressing recommendation method and device, storage medium and mobile terminal
CN106161939B (en) Photo shooting method and terminal
US11030733B2 (en) Method, electronic device and storage medium for processing image
CN110288716B (en) Image processing method, device, electronic equipment and storage medium
CN108200337B (en) Photographing processing method, device, terminal and storage medium
CN110263617B (en) Three-dimensional face model obtaining method and device
CN112333385B (en) Electronic anti-shake control method and device
CN114007099A (en) Video processing method and device for video processing
CN112532885B (en) Anti-shake method and device and electronic equipment
CN107085823B (en) Face image processing method and device
EP4315265A1 (en) True size eyewear experience in real-time
CN110728621B (en) Face changing method and device of face image, electronic equipment and storage medium
CN112785490B (en) Image processing method and device and electronic equipment
US11812183B2 (en) Information processing device and program
CN111800574B (en) Imaging method and device and electronic equipment
CN111373409B (en) Method and terminal for obtaining color value change
CN112561787B (en) Image processing method, device, electronic equipment and storage medium
WO2022042502A1 (en) Beautifying function enabling method and apparatus, and electronic device
CN110266947B (en) Photographing method and related device
CN113962840A (en) Image processing method, image processing device, electronic equipment and storage medium
CN113891002A (en) Shooting method and device
CN112492211A (en) Shooting method, electronic equipment and storage medium
CN113572955A (en) Image processing method and device and electronic equipment
CN114757836A (en) Image processing method, image processing device, storage medium and computer equipment
CN112532904A (en) Video processing method and device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant