CN112818147A - Picture processing method, device, equipment and storage medium - Google Patents

Picture processing method, device, equipment and storage medium

Info

Publication number
CN112818147A
Authority
CN
China
Prior art keywords
picture
face
input
region
face region
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110196637.2A
Other languages
Chinese (zh)
Inventor
黄佳伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd filed Critical Vivo Mobile Communication Co Ltd
Priority to CN202110196637.2A
Publication of CN112818147A
Priority to PCT/CN2022/077076 (WO2022174826A1)
Legal status: Pending

Classifications

    • G06F 16/535 — Information retrieval of still image data; querying; filtering based on additional data, e.g. user or group profiles
    • G06F 16/538 — Information retrieval of still image data; querying; presentation of query results
    • G06F 16/583 — Information retrieval of still image data; retrieval characterised by using metadata automatically derived from the content
    • G06F 3/04845 — Interaction techniques based on graphical user interfaces [GUI] for image manipulation, e.g. dragging, rotation, expansion or change of colour
    • G06F 3/04847 — Interaction techniques to control parameter settings, e.g. interaction with sliders or dials

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Human Computer Interaction (AREA)
  • Library & Information Science (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The application discloses a picture processing method, apparatus, device, and storage medium, belonging to the field of communication technologies. The picture processing method includes: receiving a first input while a first picture is displayed, the first picture including a first face region; and displaying a second picture in response to the first input, where the second picture is generated by replacing the first face region in the first picture with a second face region, the two face regions corresponding to different expressions. This picture processing method can improve the user experience.

Description

Picture processing method, device, equipment and storage medium
Technical Field
The present application belongs to the field of communication technologies, and in particular relates to a picture processing method, apparatus, device, and storage medium.
Background
With the continuous development of photographing technology, electronic devices such as mobile phones and tablet computers now provide picture-processing functions, such as adjusting a picture's lines, colors, and filters.
In the process of implementing the present application, the inventor found that the prior art has at least the following problem:
existing picture processing is limited to operations such as adjusting a picture's lines, colors, and filters; that is, only the picture's display parameters can be changed. The processing effect is therefore limited and cannot meet users' needs.
Disclosure of Invention
The embodiments of the present application aim to provide a picture processing method, apparatus, device, and storage medium that solve the technical problem that existing picture processing offers only a limited effect and cannot meet users' needs.
In order to solve the technical problem, the present application is implemented as follows:
in a first aspect, an embodiment of the present application provides an image processing method, where the method includes:
receiving a first input while a first picture is displayed, where the first picture includes a first face region;
displaying a second picture in response to the first input, where the second picture is generated by replacing the first face region in the first picture with a second face region, and the first face region and the second face region correspond to different expressions.
In a second aspect, an embodiment of the present application provides an image processing apparatus, including:
a first receiving module, configured to receive a first input while a first picture is displayed, where the first picture includes a first face region;
a first display module, configured to display a second picture in response to the first input, where the second picture is generated by replacing the first face region in the first picture with a second face region, and the first face region and the second face region correspond to different expressions.
In a third aspect, an embodiment of the present application provides an electronic device including a processor, a memory, and a program or instructions stored in the memory and executable on the processor, where the program or instructions, when executed by the processor, implement the steps of the method according to the first aspect.
In a fourth aspect, embodiments of the present application provide a readable storage medium, on which a program or instructions are stored, which when executed by a processor implement the steps of the method according to the first aspect.
In a fifth aspect, an embodiment of the present application provides a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to execute a program or instructions to implement the method according to the first aspect.
In the embodiment of the application, the first face area in the displayed first picture is replaced by the second face area different from the first face area. Because the expressions corresponding to the first face area and the second face area are different, the facial expression in the first picture is replaced, and compared with the picture processing effect in the prior art that only lines, colors, filters and the like of the picture can be adjusted and calibrated to change display parameters of the picture, the picture processing method provided by the embodiment of the application can realize the replacement of the facial expression and provide richer picture processing effects, so that the user requirements can be better met, and the user experience is improved.
Drawings
Fig. 1 is a schematic flowchart of a picture processing method according to an embodiment of the present application;
Fig. 2 is a schematic diagram illustrating a display manner of a first target control according to an embodiment of the present application;
Fig. 3 is a schematic diagram illustrating a display manner of a first target control according to an embodiment of the present application;
Fig. 4 is a schematic diagram illustrating a display manner of a face region of an object and its corresponding control according to an embodiment of the present application;
Fig. 5 is a schematic diagram illustrating a display manner of candidate face regions according to an embodiment of the present application;
Fig. 6 is a schematic diagram illustrating a display manner of candidate face regions according to an embodiment of the present application;
Fig. 7 is a schematic diagram of a second picture according to an embodiment of the present application;
Fig. 8 is a schematic diagram illustrating a display manner of a second target control according to an embodiment of the present application;
Fig. 9 is a schematic diagram illustrating a display manner of a third target control according to an embodiment of the present application;
Fig. 10 is a schematic structural diagram of a picture processing apparatus according to an embodiment of the present application;
Fig. 11 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms "first", "second", and the like in the description and claims of the present application are used to distinguish between similar elements and do not necessarily describe a particular sequence or chronological order. It is to be understood that terms so used are interchangeable under appropriate circumstances, so that the embodiments of the application can operate in sequences other than those illustrated or described herein. In addition, "and/or" in the description and claims denotes at least one of the connected objects, and the character "/" generally indicates that the associated objects before and after it are in an "or" relationship.
As described in the background, in the prior art a picture can only be processed through operations such as adjusting its lines, colors, and filters; that is, only the picture's display parameters can be changed, so the processing effect is limited and cannot meet user needs.
Based on this observation, the embodiments of the present application provide a picture processing method, apparatus, device, and storage medium that can replace a first face region in a displayed first picture with a different second face region. The facial expression in the first picture is thereby replaced; compared with prior-art processing that can only adjust lines, colors, filters, and the like, this method can replace facial expressions, provide richer processing effects, better meet user needs, and improve the user experience.
The following describes in detail a picture processing method, an apparatus, a device, and a storage medium provided in the embodiments of the present application with specific embodiments and application scenarios in conjunction with the accompanying drawings.
Fig. 1 illustrates a picture processing method provided in an embodiment of the present application, where the picture processing method may be applied to an electronic device. As shown in fig. 1, the picture processing method may include the steps of:
s110, receiving a first input when the first picture is displayed.
The first picture may be any picture in the electronic device and may include the first face region. The first face region may be the region where the face image of any object in the first picture is located, for example, the region where the facial expression of any object is located.
As an example, when the user wants to replace a face region in a picture, the user may select one picture, i.e., the first picture. As a specific example, the first picture may be a picture whose background satisfies the user but whose face region of one or more objects does not. The user then enters the picture editing and preview interface and clicks a first target control to trigger the processing, so that the electronic device receives an input instruction, i.e., the first input. As shown in fig. 2, the first target control in fig. 2 is labeled "expression replacement". Alternatively, as shown in fig. 3, the first target control in fig. 3 is labeled "the object has an alternative expression, click to view the replacement effect".
It is to be understood that the first input is not limited to touch input; it may also be voice input or another form of input, as long as the electronic device can receive it. The specific input mode of the first input is not limited in the embodiments of the present application.
And S120, responding to the first input, and displaying the second picture.
The second picture may be a picture generated by replacing the first face region in the first picture with the second face region. The second face region and the first face region correspond to different expressions; "different" here means that the expression types differ, or that the expression type is the same but the expression amplitude differs, or both.
As an example, after receiving a first input of a user, in response to the first input, a second facial region may be obtained, which may be determined according to a picture style type of the first picture, an expression type of the first facial region in the first picture, and historical expression replacement behavior data of the user. Then, the first face region of the first object may be replaced with the second face region, a second picture is obtained, and the second picture is displayed.
In the embodiments of the present application, the first face region in the displayed first picture is replaced with a second face region that differs from it. Because the expressions corresponding to the first and second face regions are different, the facial expression in the first picture is effectively replaced. Compared with the prior art, in which picture processing can only adjust lines, colors, filters, and the like, that is, only change a picture's display parameters, the picture processing method provided by the embodiments of the present application can replace facial expressions and thus offers richer processing effects, better meets users' needs, and improves the user experience.
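At their core, steps S110 and S120 amount to overwriting one image region with another. The minimal sketch below illustrates the replacement step using NumPy arrays; the rectangular-box representation, the helper name, and the hard paste (rather than boundary blending) are illustrative assumptions, not the disclosed implementation:

```python
import numpy as np

def replace_face_region(first_picture, face_box, second_face_region):
    """Return the 'second picture': a copy of `first_picture` in which the
    rectangular area `face_box` = (x, y, w, h) has been overwritten with
    `second_face_region` (an h-by-w pixel array)."""
    x, y, w, h = face_box
    assert second_face_region.shape[:2] == (h, w), "replacement must fit the box"
    second_picture = first_picture.copy()
    second_picture[y:y + h, x:x + w] = second_face_region
    return second_picture

first = np.zeros((8, 8), dtype=np.uint8)      # stand-in for the first picture
smile = np.full((2, 3), 255, dtype=np.uint8)  # stand-in for the second face region
second = replace_face_region(first, (1, 2, 3, 2), smile)
```

A production implementation would blend the pasted region into its surroundings, but the hard copy captures the claimed replacement operation.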
In some embodiments, the first picture may include face regions of a plurality of objects, and accordingly, before the first input to the first target control is received in step S110, the following steps may be further performed:
receiving a second input;
determining a first object in response to a second input;
displaying a first picture; the first face region includes a face region of the first subject.
Wherein the second input may be used to indicate a selection of the first object from the plurality of objects. The first object may be any object in the first picture, such as any person in the picture.
As an example, when the first picture includes the face regions of a plurality of objects and the picture editing and preview interface has been entered, the electronic device may detect the position of each object's face region in the first picture, mark and display each face region, and provide a corresponding control for each face region for the user to select. The user may then click the control corresponding to the first object in the first picture, so that the electronic device receives an instruction input by the user, i.e., the second input, and determines the first object in response to it. As shown in fig. 4, which illustrates a display manner of an object's face region and its corresponding control, the control in fig. 4 is labeled "replace"; after the user clicks the "replace" control, the electronic device receives the second input and determines the first object. After the first object is determined, the first picture may be displayed, and the first face region in the first picture includes the face region of the first object.
In this way, the face region of each object in the first picture is displayed with a marker, so the user can see the face regions of all objects more intuitively. Moreover, when the first picture includes the face regions of a plurality of objects, the user can select which object to process, that is, whose facial expression to replace, as needed, further improving the user experience.
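The marking-and-selection flow described above can be sketched as follows; the detector output format and the control data structure are assumptions made for illustration only:

```python
# Hypothetical face-detector output for a first picture containing several
# objects; each entry carries an object identifier and a face-region box.
detected_faces = [
    {"object_id": "A", "box": (10, 12, 40, 40)},
    {"object_id": "B", "box": (80, 15, 38, 38)},
]

def build_replace_controls(faces):
    """Attach a 'replace' control to each detected face region, mirroring
    fig. 4: tapping a control selects that object as the first object."""
    return [{"label": "replace", "selects": f["object_id"], "anchor": f["box"]}
            for f in faces]

controls = build_replace_controls(detected_faces)
```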
In some embodiments, a plurality of candidate face regions of the first object may be further displayed for selection by the user, and accordingly, after the step S110 and before the step S120 of displaying the second picture, the following steps may be further performed:
in response to a first input, searching a target picture from a picture set;
determining a candidate face area according to an area where a face image of a first object in a target picture is located;
receiving a third input to the second face region;
accordingly, a specific implementation manner of displaying the second picture in step S120 at this time may be:
in response to a third input, a second picture is displayed.
The target picture may be a picture that includes a face image of the first object; there may be one or more target pictures. The expressions of the candidate face regions differ from one another and from the expression of the first face region. The candidate face regions include the second face region, and there may be one or more candidate face regions.
As an example, before the second picture is displayed, the user may select the second face region from the candidate face regions of the first object. As a specific example, in response to the first input, the electronic device may search its picture set, such as its album, for target pictures that include a face image of the first object. A candidate face region may then be determined from the region where the face image of the first object is located in each target picture; for example, those regions may be extracted and de-duplicated to obtain the candidate face regions. The candidate face regions may then be displayed for the user to select, as shown in fig. 5, which shows a display manner taking three candidate face regions as an example. A third input, in which the user selects the second face region from the candidate face regions, may then be received; in response to it, the first face region of the first object in the first picture is replaced with the second face region to obtain the second picture, which is then displayed.
In this way, on the one hand, because the pictures in the user's picture set may be updated over time, the pictures the user has saved generally better represent the face-region expressions the user likes. Searching the picture set for target pictures in real time upon receiving the first input therefore yields candidate face regions that are more accurate, richer, and closer to the user's preferences, further improving the user experience. On the other hand, displaying the candidate face regions of the first object on the interface lets the user see, at a glance, all face regions that can currently replace the first face region, so the user can choose a replacement according to personal preference, again improving the user experience.
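The search-extract-deduplicate procedure above can be sketched as follows, assuming a hypothetical face detector that labels each region with an object identifier and an expression type; de-duplicating by expression label is one plausible reading of "deduplicated", not the patent's stated criterion:

```python
def collect_candidate_face_regions(picture_set, first_object_id, detect_faces):
    """Search the picture set for target pictures containing the first
    object's face, extract those regions, and de-duplicate by expression
    label to obtain the candidate face regions."""
    seen_expressions = set()
    candidates = []
    for picture in picture_set:
        for face in detect_faces(picture):       # hypothetical detector
            if face["object_id"] != first_object_id:
                continue                         # other objects are skipped
            if face["expression"] in seen_expressions:
                continue                         # duplicate expression, dropped
            seen_expressions.add(face["expression"])
            candidates.append(face)
    return candidates

# Toy picture set: each "picture" is just a list of pre-labelled faces.
album = [
    [{"object_id": "A", "expression": "smile"}],
    [{"object_id": "A", "expression": "smile"},   # duplicate, dropped
     {"object_id": "A", "expression": "laugh"},
     {"object_id": "B", "expression": "cry"}],    # other object, skipped
]
candidates = collect_candidate_face_regions(album, "A", lambda p: p)
```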
In some embodiments, the candidate face region may be determined according to at least one of a style type of the first picture, an expression type corresponding to the first face region, and historical expression replacement behavior data of the user, and accordingly, a specific implementation manner of determining the candidate face region according to the face region of the first object included in the target picture may be as follows:
acquiring preset parameters;
and determining a candidate face area from the area where the face image of the first object in the target picture is located according to preset parameters.
The preset parameters may include at least one of a picture style type of the first picture, an expression type corresponding to the first face area, and historical expression replacement behavior data of the user.
As an example, when determining candidate face regions from the regions where the face image of the first object is located in the target pictures, preset parameters may be obtained, including at least one of the picture style type of the first picture, the expression type corresponding to the first face region, and the user's historical expression replacement behavior data. The picture style type of the first picture may be determined, for example, from the expressions and actions of other objects in the first picture and from its background environment, and may be, for example, literary, retro, fresh, and the like. The expression type corresponding to the first face region of the first object may be smile, laugh, exaggerated, funny, cute, crying, angry, and the like. The historical expression replacement behavior data may be data about the user's expression replacements over a past period, such as the style type of the picture in each historical replacement, the expression type of the original face region that was replaced, and the face region selected to replace it.
After the preset parameters are acquired, the regions that the user is likely to use for face-region replacement are selected, according to the preset parameters, from the regions where the face image of the first object is located in the target pictures, and the selected regions are determined as the candidate face regions. For example, the historical expression replacement behavior data may be matched against the picture style type of the first picture and the expression type of the first face region to determine the candidate face regions.
In this way, when determining the candidate face region, at least one of the picture style type of the first picture, the expression type corresponding to the first face region, and the historical expression replacement behavior data of the user may be considered, that is, the determined candidate face region may have at least one attribute of the picture style type, the first expression type of the first face region, and the historical replacement behavior habit, so that the determined candidate face region may better meet the actual replacement requirement of the user, and the user experience may be further improved.
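One way the preset-parameter matching could work is a simple scoring scheme over the candidates; the weights, field names, and data layout below are illustrative assumptions, not the patent's specified method:

```python
def rank_candidates(candidates, picture_style, first_expression, history):
    """Score each candidate face region against the preset parameters:
    the first picture's style type, the first face region's expression
    type, and the user's historical replacement choices."""
    def score(c):
        s = 0
        if c.get("style") == picture_style:
            s += 2                       # matches the first picture's style type
        past = [h for h in history
                if h["from_expression"] == first_expression]
        if any(h["to_expression"] == c["expression"] for h in past):
            s += 3                       # the user has replaced this way before
        return s
    return sorted(candidates, key=score, reverse=True)

history = [{"from_expression": "neutral", "to_expression": "laugh"}]
ranked = rank_candidates(
    [{"expression": "smile", "style": "fresh"},
     {"expression": "laugh", "style": "literary"}],
    picture_style="literary", first_expression="neutral", history=history)
```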
In some embodiments, the time interval between the generation time of the target picture including the facial image of the first object and the generation time of the first picture searched from the picture set may be less than or equal to a preset time.
As an example, when selecting target pictures, the generation time of the first picture may be determined, and pictures whose generation time is within the preset duration of the first picture's generation time may be selected from the picture set; the preset duration is the maximum allowed interval between the generation times of a target picture and the first picture. Target pictures that include a face image of the first object may then be selected from those pictures.
In this way, considering that the picture style types and expression types a user likes may change over time, selecting target pictures generated within the preset duration of the first picture's generation time makes the determined candidate face regions better match the user's current preferences, further improving the user experience.
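The generation-time filter can be sketched as a simple window test; the one-year default for the preset duration is an illustrative assumption, as the patent leaves the value unspecified:

```python
from datetime import datetime, timedelta

def within_preset_window(picture_time, first_picture_time,
                         preset=timedelta(days=365)):
    """Keep only pictures whose generation time is within the preset
    duration of the first picture's generation time."""
    return abs(picture_time - first_picture_time) <= preset

first_time = datetime(2021, 2, 1)   # hypothetical first-picture generation time
recent = within_preset_window(datetime(2020, 8, 1), first_time)   # ~6 months apart
stale = within_preset_window(datetime(2015, 1, 1), first_time)    # years apart
```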
In some embodiments, the second picture may include a third face region of the second object, and at this time, the face regions of other objects in the second picture may be replaced according to the second face region, and accordingly, after the second picture is displayed in response to the first input in step S120, the following steps may be further performed:
and displaying the third picture.
The third picture may be a picture generated after the third face region in the second picture is replaced by the fourth face region of the second object, and the fourth face region may be determined according to the second face region.
The second object is an object other than the first object in the second picture, and the third face region is the face region of the second object in the second picture. The fourth face region is the face region used to replace the third face region.
As an example, in the case that the third face region of the second object is included in the second picture, the electronic device may also automatically determine, after displaying the second picture, a fourth face region for replacing the third face region of the second object according to the second face region of the first object. Then, a third face region of the second object in the second picture may be replaced with a fourth face region of the second object to obtain a third picture, and the third picture is displayed.
In this way, considering that the objects in a single picture share the same atmosphere and generally have similar expression types, the fourth face region of the second object, which belongs to the same picture as the first object, is determined based on the second face region of the first object. The fourth face region therefore better fits the picture's atmosphere and type, the picture after expression replacement better meets user needs, and the user experience is further improved.
In some embodiments, the above process of displaying the third picture may be performed by an electronic device according to a user input, and accordingly, before the displaying the third picture, the following steps may be further included:
receiving a fourth input;
at this time, a specific implementation manner of displaying the third picture may be as follows: in response to a fourth input, a third picture is displayed.
The fourth input may indicate selection, from the second picture, of a second object whose face is to be replaced.
As an example, after the second picture is displayed, if the user wants to replace the expression of an object other than the first object, the user may select that object, i.e., the second object, so that the electronic device receives an input instruction, i.e., the fourth input. In response to the fourth input, the electronic device may determine the fourth face region for replacing the third face region of the second object according to the second face region of the first object; for example, a face region of the second object in the picture set whose expression type is similar to that of the second face region may be determined as the fourth face region. The third face region of the second object in the second picture may then be replaced with the fourth face region to obtain the third picture, which is then displayed.
It is to be understood that, after receiving the fourth input and determining the fourth face region of the second object, the electronic device may further prompt the user to confirm the face-region replacement. The user can control the electronic device through touch, voice, or another input mode, so that the electronic device receives an input instruction. In response to that instruction, the electronic device may replace the third face region of the second object in the second picture with the fourth face region, obtain the third picture, and display it.
It should be noted that the fourth face region may be a face region whose expression type is highly similar to that of the second face region. Where multiple face regions have a high expression-type similarity to the second face region, the one with the highest similarity may be determined as the fourth face region, or the multiple face regions may be displayed for the user to select from.
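The selection rule just described can be sketched as follows. Everything in this sketch is a hypothetical stand-in rather than part of the embodiment: `similarity` represents whatever expression-type comparison routine the device uses, and the dict-based candidates and 0-to-1 threshold are illustrative assumptions.

```python
def select_fourth_face_region(candidates, target_expression, similarity, threshold=0.5):
    """Among a second object's candidate face regions (dicts with an
    'expression' key), keep those whose expression-type similarity to
    `target_expression` exceeds `threshold`.  All candidates tied for the
    top score are returned: a single entry can be applied automatically,
    while several entries can be displayed for the user to choose from."""
    scored = [(similarity(target_expression, c["expression"]), c) for c in candidates]
    eligible = [(score, c) for score, c in scored if score > threshold]
    if not eligible:
        return []
    best = max(score for score, _ in eligible)
    return [c for score, c in eligible if score == best]
```

A caller would auto-apply the result when it holds exactly one region, and present the list to the user otherwise.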
Therefore, on the one hand, determining the fourth face region of the second object, which belongs to the same picture as the first object, based on the second face region of the first object makes the fourth face region better match the picture atmosphere and picture type, so the picture after expression replacement better meets the user's needs, further improving the user experience. On the other hand, the user can freely select which objects need expression replacement, which also further improves the user experience.
In some embodiments, after the step S120, the following steps may be further performed:
receiving a fifth input for the second picture;
and responding to a fifth input, and displaying a fourth picture.
The fourth picture is a picture generated after the second face area in the second picture is replaced by the fifth face area; the fifth face region is a face region other than the second face region among the candidate face regions.
As an example, if the user is still not satisfied with the second picture, a face region other than the second face region, that is, a fifth face region, may be selected from the candidate face regions. The electronic device receives this selection as the fifth input and, in response, replaces the second face region in the second picture with the fifth face region, thereby updating the second picture to the fourth picture, which is displayed.
As shown in fig. 6, the user may select a fifth face region from the candidate face regions in several ways: the user may drag the fifth face region into the picture display region along the arrow direction, whereupon the electronic device automatically replaces the second face region with it; or the user may tap the fifth face region in the candidate-face-region display region to trigger the same replacement. When the candidate-face-region display region cannot show all candidate face regions at once, the user may slide within it to reveal the others. It will be appreciated that fig. 6 is merely illustrative; in practice, selecting the fifth face region and viewing other candidate face regions may also be achieved in other ways.
In this way, when the user is still unsatisfied with the second picture, other face regions in the candidate face regions can be continuously selected, and thus, the generated fourth picture can better accord with the user preference, and the user experience can be further improved.
In some embodiments, after the step S120, at least one of the following steps may be further performed:
updating the first target control into a second target control;
and updating the first target control into a third target control.
The second target control is used for indicating to save the second picture, and the third target control is used for indicating to return to display the first picture.
As one example, after the second picture is displayed, the first target control on the interface may be updated to a second target control for saving the second picture. Assuming that fig. 7 shows the second picture, the second target control may be labeled "save", as shown in fig. 8; if the user is satisfied with the second picture, the user may click the second target control to save it.
Alternatively, after the second picture is displayed, the first target control on the interface may be updated to a third target control for returning to the display of the first picture. The third target control may be labeled "back", as shown in fig. 9; if the user is not satisfied with the second picture, the user may click the third target control to return to the first picture.
It can be understood that, if the user is satisfied with the second picture and saves the face-replacement result, a fourth control may be displayed to prompt the user whether to also replace the face region of an object other than the first object in the first picture; replacing another object's face region is similar to replacing the first object's face region and is not described again here. If the user does not replace another object's face region, or has not confirmed saving the second picture, the second target control or the third target control may be displayed, or both may be displayed on the interface simultaneously. The above processing may likewise be executed after the fourth picture and the fifth picture are displayed.
In this way, if the user is satisfied with the second picture, the picture can be saved; if not, the display can return to the first picture and this round of picture processing is discarded. The user thus keeps only the pictures he or she likes, which further improves the user experience.
It should be noted that, in the picture processing method provided in the embodiments of the present application, the execution subject may be a picture processing apparatus, or a control module in the picture processing apparatus for executing the picture processing method. In the embodiments of the present application, the picture processing method is described taking a picture processing apparatus that executes the method as an example.
Fig. 10 shows a schematic structural diagram of a picture processing apparatus provided in an embodiment of the present application. As shown in fig. 10, the picture processing apparatus 1000 may include:
a first receiving module 1010, configured to receive a first input when a first picture is displayed; the first picture comprises a first face region;
a first display module 1020 for displaying a second picture in response to the first input; the second picture is a picture generated after a first face area in the first picture is replaced by a second face area, and the first face area and the second face area have different corresponding expressions.
In the embodiments of the present application, the first face region in the displayed first picture is replaced with a second face region different from it. Because the expressions corresponding to the first face region and the second face region differ, the facial expression in the first picture is replaced. Compared with prior-art picture processing, which can only adjust or calibrate display parameters of a picture such as lines, colors, and filters, the picture processing method provided in the embodiments of the present application can replace facial expressions and thus provides richer processing effects, better meeting user needs and improving the user experience.
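At its core, the replacement step pastes the pixels of one face region over another at the original region's position. The minimal sketch below is illustrative only: the `FaceRegion` type, the list-of-rows picture representation, and the assumption that the two regions have matching dimensions are ours, not the embodiment's (a real implementation would also align, blend, and color-match the pasted region).

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class FaceRegion:
    """A rectangular face region: its top-left corner in the picture,
    plus its pixel rows (all rows the same width)."""
    x: int
    y: int
    pixels: tuple

def replace_face_region(picture, old, new):
    """Return a copy of `picture` (a list of pixel rows) in which the
    rectangle occupied by `old` has been overwritten with `new`'s pixels,
    pasted at `old`'s position so the rest of the picture is untouched."""
    result = [list(row) for row in picture]
    for dy, row in enumerate(new.pixels):
        for dx, value in enumerate(row):
            result[old.y + dy][old.x + dx] = value
    return result
```

Returning a copy rather than mutating the input mirrors the embodiment's behavior of keeping the first picture available for the "back" control.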
Optionally, the first picture comprises facial regions of a plurality of subjects;
the picture processing apparatus 1000 further includes:
the second receiving module is used for receiving a second input;
a first determining module for determining a first object in response to the second input;
the second display module is used for displaying the first picture; the first facial region includes a facial region of the first subject.
In this way, when the first picture includes the face regions of multiple objects, the user can select, as needed, the first object whose facial expression is to be replaced, which further improves the user experience.
Optionally, the image processing apparatus 1000 further includes:
the searching module is used for responding to the first input and searching a target picture from a picture set; the target picture is a picture including a facial image of a first subject;
the second determining module is used for determining a candidate face area according to the area where the face image of the first object in the target picture is located; the expressions are different from one another between the candidate face regions and between the candidate face region and the first face region, the candidate face regions including the second face region;
a third receiving module for receiving a third input to the second face region;
a third display module to display the second picture in response to the third input.
Thus, on the one hand, because the pictures in the picture set on the user's electronic device may be updated over time, the pictures the user has saved there generally better reflect the facial expressions the user currently likes. Searching the picture set for target pictures in real time upon receiving the first input, and determining the candidate face regions from them, therefore yields candidate face regions that are more accurate, richer, and closer to the user's preferences, further improving the user experience. On the other hand, displaying multiple candidate face regions for the first object on the interface lets the user see at a glance every candidate face region that can currently replace the first face region of the first object, and select the one he or she prefers, which further improves the user experience.
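The search-and-collect flow performed by these modules can be sketched as follows, under assumed interfaces: `contains_face` and `extract_regions` are hypothetical stand-ins for whatever face-recognition routines the device uses, and the dict-based picture and region records are illustrative.

```python
def find_target_pictures(picture_set, first_subject_id, contains_face):
    """Scan the user's picture set and keep only the pictures in which a
    face image of the first subject appears."""
    return [p for p in picture_set if contains_face(p, first_subject_id)]

def collect_candidate_regions(target_pictures, extract_regions, first_region_expr):
    """Gather the first subject's face regions from the target pictures,
    keeping at most one region per expression type and skipping the
    expression already shown in the first face region.  This enforces the
    requirement that candidate expressions differ from each other and
    from the first face region."""
    seen = {first_region_expr}
    candidates = []
    for pic in target_pictures:
        for region in extract_regions(pic):
            if region["expression"] not in seen:
                seen.add(region["expression"])
                candidates.append(region)
    return candidates
```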
Optionally, the second determining module includes:
the acquisition unit is used for acquiring preset parameters; the preset parameters comprise at least one item of picture style type of the first picture, expression type corresponding to the first face area and historical expression replacement behavior data of the user;
and the determining unit is used for determining the candidate face area from the area where the face image of the first object in the target picture is located according to the preset parameters.
In this way, when determining the candidate face region, at least one of the picture style type of the first picture, the expression type corresponding to the first face region, and the historical expression replacement behavior data of the user may be considered, that is, the determined candidate face region may have at least one attribute of the picture style type, the expression type corresponding to the first face region, and the historical replacement behavior habit, so that the determined candidate face region may better meet the actual replacement requirement of the user, and the user experience may be further improved.
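The embodiment leaves open how the preset parameters combine. One plausible reading, sketched below, treats each parameter as an optional filter, matching the "at least one item" wording: any parameter left as `None` imposes no constraint. The dict keys are assumptions for illustration.

```python
def filter_candidates(regions, style_type=None, expression_type=None, history=None):
    """Narrow candidate face regions by whichever preset parameters are
    present: the first picture's style type, an expression type, and the
    set of expression types the user chose in past replacements."""
    def keep(region):
        if style_type is not None and region.get("style") != style_type:
            return False
        if expression_type is not None and region.get("expression") != expression_type:
            return False
        if history is not None and region.get("expression") not in history:
            return False
        return True
    return [r for r in regions if keep(r)]
```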
Optionally, an interval duration between the generation time of the target picture and the generation time of the first picture is less than or equal to a preset duration.
In this way, considering that the picture style types and expression types the user likes may change over time, selecting target pictures whose generation time lies within the preset duration of the first picture's generation time, and determining the candidate face regions from them, makes the determined candidate face regions better match the user's current preferences, which further improves the user experience.
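The generation-time constraint can be sketched directly with the standard library; the dict-based picture records are illustrative assumptions.

```python
from datetime import datetime, timedelta

def filter_by_generation_time(pictures, first_generated, preset):
    """Keep only pictures whose generation time lies within the preset
    duration of the first picture's generation time, on the theory that
    recently saved pictures better reflect the user's current taste."""
    return [p for p in pictures
            if abs(p["generated"] - first_generated) <= preset]
```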
Optionally, the second picture may include a third face region of the second object, and the picture processing apparatus 1000 further includes:
a fourth display module, configured to display a third picture, where the third picture is a picture generated after the third face region in the second picture is replaced by a fourth face region of a second object, and the fourth face region is determined according to the second face region.
In this way, considering that a single picture has one atmosphere and that the expression types of different objects in it are generally similar, the fourth face region of the second object, which belongs to the same picture as the first object, is determined based on the second face region of the first object. The fourth face region therefore better matches the picture atmosphere and picture type, the picture after expression replacement better meets the user's needs, and the user experience is further improved.
The image processing apparatus in the embodiment of the present application may be an apparatus, or may be a component, an integrated circuit, or a chip in a terminal. The device can be mobile electronic equipment or non-mobile electronic equipment. By way of example, the mobile electronic device may be a mobile phone, a tablet computer, a notebook computer, a palm top computer, a vehicle-mounted electronic device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook or a Personal Digital Assistant (PDA), and the like, and the non-mobile electronic device may be a server, a Network Attached Storage (NAS), a Personal Computer (PC), a Television (TV), a teller machine or a self-service machine, and the like, and the embodiments of the present application are not particularly limited.
The image processing apparatus in the embodiment of the present application may be an apparatus having an operating system. The operating system may be an Android (Android) operating system, an ios operating system, or other possible operating systems, and embodiments of the present application are not limited specifically.
The image processing device provided in the embodiment of the present application can implement each process implemented by the image processing device in the method embodiments of fig. 1 to 9, and is not described herein again to avoid repetition.
Optionally, an electronic device is further provided in this embodiment of the present application, and includes a processor 1110, a memory 1109, and a program or an instruction that is stored in the memory 1109 and is executable on the processor 1110, where the program or the instruction is executed by the processor 1110 to implement each process of the above-described image processing method embodiment, and can achieve the same technical effect, and details are not described here to avoid repetition.
It should be noted that the electronic devices in the embodiments of the present application include the mobile electronic devices and the non-mobile electronic devices described above.
Fig. 11 is a schematic diagram of a hardware structure of an electronic device implementing an embodiment of the present application.
The electronic device 1100 includes, but is not limited to: a radio frequency unit 1101, a network module 1102, an audio output unit 1103, an input unit 1104, a sensor 1105, a display unit 1106, a user input unit 1107, an interface unit 1108, a memory 1109, a processor 1110, and the like.
Those skilled in the art will appreciate that the electronic device 1100 may further include a power source (e.g., a battery) for supplying power to the various components; the power source may be logically connected to the processor 1110 via a power management system, which manages charging, discharging, and power consumption. The electronic device structure shown in fig. 11 does not limit the electronic device, which may include more or fewer components than shown, combine some components, or arrange components differently; this is not described again here.
A user input unit 1107, configured to receive a first input when a first picture is displayed;
a display unit 1106 configured to display a second picture in response to the first input.
In the embodiments of the present application, the first face region in the displayed first picture is replaced with a second face region different from it. Because the expressions corresponding to the first face region and the second face region differ, the facial expression in the first picture is replaced. Compared with prior-art picture processing, which can only adjust or calibrate display parameters of a picture such as lines, colors, and filters, the picture processing method provided in the embodiments of the present application can replace facial expressions and thus provides richer processing effects, better meeting user needs and improving the user experience.
Optionally, the first picture comprises facial regions of a plurality of subjects;
a user input unit 1107 for receiving a second input;
a processor 1110 for determining a first object in response to the second input;
a display unit 1106 is configured to display the first picture.
In this way, when the first picture includes the face regions of multiple objects, the user can select, as needed, the first object whose facial expression is to be replaced, which further improves the user experience.
Optionally, the display unit 1106 is further configured to search for a target picture from the picture set in response to the first input; determining a candidate face area according to an area where a face image of a first object in the target picture is located;
a user input unit 1107 for receiving a third input to the second face region;
a display unit 1106 configured to display the second picture in response to the third input.
Thus, on the one hand, because the pictures in the picture set on the user's electronic device may be updated over time, the pictures the user has saved there generally better reflect the facial expressions the user currently likes. Searching the picture set for target pictures in real time upon receiving the first input, and determining the candidate face regions from them, therefore yields candidate face regions that are more accurate, richer, and closer to the user's preferences, further improving the user experience. On the other hand, displaying multiple candidate face regions for the first object on the interface lets the user see at a glance every candidate face region that can currently replace the first face region of the first object, and select the one he or she prefers, which further improves the user experience.
Optionally, the processor 1110 is further configured to:
acquiring preset parameters;
and determining the candidate face area from the area where the face image of the first object in the target picture is located according to the preset parameters.
In this way, when the candidate face regions are determined, at least one of the picture style type of the first picture, the expression type corresponding to the first face region, and the user's historical expression-replacement behavior data may be considered; that is, the determined candidate face regions carry at least one attribute among the picture style type, the expression type corresponding to the first face region, and the user's historical replacement habits, so that they better meet the user's actual replacement needs, further improving the user experience.
Optionally, the display unit 1106 is further configured to display a third picture, where the third picture is a picture generated after the third face region in the second picture is replaced by a fourth face region of a second object, and the fourth face region is determined according to the second face region.
In this way, considering that a single picture has one atmosphere and that the expression types of different objects in it are generally similar, the fourth face region of the second object, which belongs to the same picture as the first object, is determined based on the second face region of the first object. The fourth face region therefore better matches the picture atmosphere and picture type, the picture after expression replacement better meets the user's needs, and the user experience is further improved.
The embodiment of the present application further provides a readable storage medium, where a program or an instruction is stored on the readable storage medium, and when the program or the instruction is executed by a processor, the program or the instruction implements each process of the above-mentioned image processing method embodiment, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
The processor is the processor in the electronic device described in the above embodiments. The readable storage medium includes a computer-readable storage medium, such as a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The embodiment of the present application further provides a chip, where the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to run a program or an instruction to implement each process of the above-mentioned embodiment of the image processing method, and can achieve the same technical effect, and the details are not repeated here to avoid repetition.
It should be understood that the chip mentioned in the embodiments of the present application may also be referred to as a system-level chip, a system chip, a chip system, or a system-on-chip.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element introduced by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element. Further, it should be noted that the scope of the methods and apparatus of the embodiments of the present application is not limited to performing the functions in the order illustrated or discussed; the functions may also be performed substantially simultaneously or in the reverse order, the described methods may be performed in an order different from that described, and various steps may be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present application may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present application.
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the invention is not limited to the precise embodiments described above, which are meant to be illustrative and not restrictive, and that various changes may be made therein by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (14)

1. An image processing method, comprising:
receiving a first input under the condition that a first picture is displayed; the first picture comprises a first face region;
displaying a second picture in response to the first input; the second picture is a picture generated after a first face area in the first picture is replaced by a second face area, and the first face area and the second face area have different corresponding expressions.
2. The method of claim 1, wherein the first picture includes facial regions of a plurality of subjects;
before the receiving the first input, the method further comprises:
receiving a second input;
determining a first object in response to the second input;
displaying the first picture; the first facial region includes a facial region of the first subject.
3. The method of claim 1 or 2, wherein after receiving the first input and before displaying the second picture, further comprising:
in response to the first input, searching a target picture from a picture set; the target picture is a picture including a facial image of a first subject;
determining a candidate face area according to an area where a face image of a first object in the target picture is located; the expressions are different from one another between the candidate face regions and between the candidate face region and the first face region, the candidate face regions including the second face region;
receiving a third input to the second face region;
the displaying the second picture specifically includes: in response to the third input, displaying the second picture.
4. The method according to claim 3, wherein determining the candidate face region according to the region in which the face image of the first object in the target picture is located comprises:
acquiring preset parameters; the preset parameters comprise at least one item of picture style type of the first picture, expression type corresponding to the first face area and historical expression replacement behavior data of the user;
and determining the candidate face area from the area where the face image of the first object in the target picture is located according to the preset parameters.
5. The method according to claim 3, wherein a time interval between the generation time of the target picture and the generation time of the first picture is less than or equal to a preset time.
6. The method of any of claims 2-5, wherein the second picture includes a third facial region of the second object, and wherein, after displaying the second picture in response to the first input, further comprising:
displaying a third picture, wherein the third picture is a picture generated after the third face region in the second picture is replaced by a fourth face region of a second object, and the fourth face region is determined according to the second face region.
7. A picture processing apparatus, comprising:
the first receiving module is used for receiving first input under the condition that a first picture is displayed; the first picture comprises a first face region;
a first display module for displaying a second picture in response to the first input; the second picture is a picture generated after a first face area in the first picture is replaced by a second face area, and the first face area and the second face area have different corresponding expressions.
8. The apparatus of claim 7, wherein the first picture comprises facial regions of a plurality of subjects;
the picture processing apparatus further includes:
the second receiving module is used for receiving a second input;
a first determining module for determining a first object in response to the second input;
the second display module is used for displaying the first picture; the first facial region includes a facial region of the first subject.
9. The apparatus according to claim 7 or 8, wherein the picture processing apparatus further comprises:
the searching module is used for responding to the first input and searching a target picture from a picture set; the target picture is a picture including a facial image of a first subject;
the second determining module is used for determining a candidate face area according to the area where the face image of the first object in the target picture is located; the expressions are different from one another between the candidate face regions and between the candidate face region and the first face region, the candidate face regions including the second face region;
a third receiving module for receiving a third input to the second face region;
a third display module to display the second picture in response to the third input.
10. The apparatus of claim 9, wherein the second determining module comprises:
the acquisition unit is used for acquiring preset parameters; the preset parameters comprise at least one item of picture style type of the first picture, expression type corresponding to the first face area and historical expression replacement behavior data of the user;
and the determining unit is used for determining the candidate face area from the area where the face image of the first object in the target picture is located according to the preset parameters.
11. The apparatus according to claim 9, wherein a time interval between the generation time of the target picture and the generation time of the first picture is less than or equal to a preset time.
12. The apparatus according to any one of claims 8-11, wherein the second picture comprises a third face region of the second object, and the picture processing apparatus further comprises:
a fourth display module, configured to display a third picture, where the third picture is a picture generated after the third face region in the second picture is replaced by a fourth face region of a second object, and the fourth face region is determined according to the second face region.
13. An electronic device comprising a processor, a memory, and a program or instructions stored on the memory and executable on the processor, the program or instructions when executed by the processor implementing the steps of the picture processing method as claimed in any one of claims 1 to 6.
14. A readable storage medium, on which a program or instructions are stored, which, when executed by a processor, carry out the steps of the picture processing method according to any one of claims 1 to 6.
CN202110196637.2A 2021-02-22 2021-02-22 Picture processing method, device, equipment and storage medium Pending CN112818147A (en)

Publications (1)

Publication Number Publication Date
CN112818147A (en) 2021-05-18

Family

ID=75864630

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110196637.2A Pending CN112818147A (en) 2021-02-22 2021-02-22 Picture processing method, device, equipment and storage medium

Country Status (2)

Country Link
CN (1) CN112818147A (en)
WO (1) WO2022174826A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022174826A1 (en) * 2021-02-22 2022-08-25 维沃移动通信有限公司 Image processing method and apparatus, device, and storage medium

Citations (7)

Publication number Priority date Publication date Assignee Title
US20100066840A1 (en) * 2007-02-15 2010-03-18 Sony Corporation Image processing device and image processing method
CN103888658A (en) * 2012-12-21 2014-06-25 索尼公司 Information Processing Device And Recording Medium
CN107123081A (en) * 2017-04-01 2017-09-01 北京小米移动软件有限公司 image processing method, device and terminal
US20180075289A1 (en) * 2015-11-25 2018-03-15 Tencent Technology (Shenzhen) Company Limited Image information processing method and apparatus, and computer storage medium
CN108520493A (en) * 2018-03-30 2018-09-11 广东欧珀移动通信有限公司 Processing method, device, storage medium and the electronic equipment that image is replaced
CN110378840A (en) * 2019-07-23 2019-10-25 厦门美图之家科技有限公司 Image processing method and device
CN112083863A (en) * 2020-09-17 2020-12-15 维沃移动通信有限公司 Image processing method and device, electronic equipment and readable storage medium

Family Cites Families (4)

Publication number Priority date Publication date Assignee Title
CN103258316B (en) * 2013-03-29 2017-02-15 东莞宇龙通信科技有限公司 Method and device for picture processing
US10573349B2 (en) * 2017-12-28 2020-02-25 Facebook, Inc. Systems and methods for generating personalized emoticons and lip synching videos based on facial recognition
CN108985241B (en) * 2018-07-23 2023-05-02 腾讯科技(深圳)有限公司 Image processing method, device, computer equipment and storage medium
CN112818147A (en) * 2021-02-22 2021-05-18 维沃移动通信有限公司 Picture processing method, device, equipment and storage medium

Also Published As

Publication number Publication date
WO2022174826A1 (en) 2022-08-25

Similar Documents

Publication Publication Date Title
CN113079316B (en) Image processing method, image processing device and electronic equipment
CN113596555B (en) Video playing method and device and electronic equipment
CN112449110B (en) Image processing method and device and electronic equipment
CN111857460A (en) Split screen processing method, split screen processing device, electronic equipment and readable storage medium
CN112269522A (en) Image processing method, image processing device, electronic equipment and readable storage medium
CN112083854A (en) Application program running method and device
CN113849092A (en) Content sharing method and device and electronic equipment
CN114374663B (en) Message processing method and message processing device
CN112134987B (en) Information processing method and device and electronic equipment
CN111885298B (en) Image processing method and device
CN112035026B (en) Information display method and device, electronic equipment and storage medium
CN112818147A (en) Picture processing method, device, equipment and storage medium
CN112734661A (en) Image processing method and device
CN112328829A (en) Video content retrieval method and device
CN112328149B (en) Picture format setting method and device and electronic equipment
CN112416143B (en) Text information editing method and device and electronic equipment
CN111796733B (en) Image display method, image display device and electronic equipment
CN113779293A (en) Image downloading method, device, electronic equipment and medium
CN113805997A (en) Information display method and device, electronic equipment and storage medium
CN113010072A (en) Searching method and device, electronic equipment and readable storage medium
CN113127425A (en) Picture processing method and device and electronic equipment
CN112084151A (en) File processing method and device and electronic equipment
CN111858395A (en) Data management method and device
CN111694999A (en) Information processing method and device and electronic equipment
CN112764632B (en) Image sharing method and device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination