CN112734661A - Image processing method and device - Google Patents


Info

Publication number
CN112734661A
Authority
CN
China
Prior art keywords
image
processed
input
person
user
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011606748.8A
Other languages
Chinese (zh)
Inventor
董丽君
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd
Priority to CN202011606748.8A
Publication of CN112734661A
Priority to PCT/CN2021/140738 (published as WO2022143382A1)
Legal status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 - Image enhancement or restoration
    • G06T5/77 - Retouching; Inpainting; Scratch removal
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/30 - Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33 - Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/30 - Subject of image; Context of image processing
    • G06T2207/30196 - Human being; Person
    • G06T2207/30201 - Face

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)

Abstract

The application discloses an image processing method and device, belonging to the technical field of communication. The method comprises: receiving a first input from a user; determining a target reference person image in an image to be processed in response to the first input; receiving a second input from the user; determining a to-be-processed person image in the image to be processed in response to the second input; obtaining makeup information of the target reference person according to the target reference person image; and processing the to-be-processed person image according to the makeup information to obtain a processed person image. The scheme supports makeup sharing and migration within a multi-person group photo, making image processing more intelligent and the makeup across the whole picture more uniform, sparing a bare-faced person in the group photo the embarrassment of appearing unmade-up next to made-up companions, avoiding visual discord in the group photo as a whole, and improving the user experience.

Description

Image processing method and device
Technical Field
The application belongs to the technical field of communication equipment, and particularly relates to an image processing method and device.
Background
At present, to meet user requirements, a beautification effect can be enabled when a picture is taken and applied to all faces that can be detected. However, when all detected faces are beautified in this way, subjects who were originally not wearing makeup gain a stronger enhancement, while subjects who were already wearing makeup end up with heavier, more concentrated makeup. The existing photographing beautification mode therefore produces a noticeable difference in effect between made-up and unmade-up people in the same photo, and the user experience is poor.
Disclosure of Invention
The embodiments of the application aim to provide an image processing method and device, which can solve the problem of poor user experience in existing image processing schemes.
To solve the above technical problem, the present application is implemented as follows:
in a first aspect, an embodiment of the present application provides an image processing method, including:
receiving a first input of a user;
determining a target reference person image in the image to be processed in response to the first input;
receiving a second input of the user;
determining a to-be-processed person image in the to-be-processed image in response to the second input;
obtaining makeup information of the target reference person according to the target reference person image;
and processing the to-be-processed person image according to the makeup information to obtain a processed person image.
In a second aspect, an embodiment of the present application provides an image processing apparatus, including:
the first receiving module is used for receiving a first input of a user;
a first determination module for determining a target reference person image in the image to be processed in response to the first input;
the second receiving module is used for receiving a second input of the user;
a second determining module, configured to determine, in response to the second input, a to-be-processed person image in the to-be-processed image;
the first acquisition module is used for acquiring the makeup information of the target reference person according to the target reference person image;
and the first processing module is used for processing the to-be-processed person image according to the makeup information to obtain a processed person image.
In a third aspect, an embodiment of the present application provides an electronic device, which includes a processor, a memory, and a program or instructions stored on the memory and executable on the processor, and when executed by the processor, the program or instructions implement the steps of the method according to the first aspect.
In a fourth aspect, embodiments of the present application provide a readable storage medium, on which a program or instructions are stored, which when executed by a processor implement the steps of the method according to the first aspect.
In a fifth aspect, an embodiment of the present application provides a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to execute a program or instructions to implement the method according to the first aspect.
In the embodiment of the application, a first input from a user is received; a target reference person image in an image to be processed is determined in response to the first input; a second input from the user is received; a to-be-processed person image in the image to be processed is determined in response to the second input; makeup information of the target reference person is obtained according to the target reference person image; and the to-be-processed person image is processed according to the makeup information to obtain a processed person image. This realizes a method for supporting makeup sharing and migration on a multi-person group photo, making image processing more intelligent and the makeup across the whole picture more uniform, avoiding the embarrassment of a bare-faced person in a group photo and the visual discord of the group photo as a whole, improving the user experience, and thereby well solving the problem of poor user experience in existing image processing schemes.
Drawings
FIG. 1 is a schematic flowchart of an image processing method according to an embodiment of the present application;
FIG. 2 is a schematic diagram illustrating a flowchart of an embodiment of an image processing method;
FIG. 3 is a first diagram illustrating image processing according to an embodiment of the present disclosure;
FIG. 4 is a second diagram of image processing according to an embodiment of the present application;
FIG. 5 is a third schematic diagram of image processing according to an embodiment of the present application;
FIG. 6 is a fourth schematic diagram of image processing according to an embodiment of the present application;
FIG. 7 is a fifth exemplary image processing diagram according to an embodiment of the present disclosure;
FIG. 8 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application;
FIG. 9 is a first schematic structural diagram of an electronic device according to an embodiment of the present application;
FIG. 10 is a second schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms "first", "second" and the like in the description and in the claims of the present application are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It should be appreciated that the data so used may be interchanged under appropriate circumstances, so that embodiments of the application may be practiced in sequences other than those illustrated or described herein. The terms "first", "second" and the like are generally used in a generic sense and do not limit the number of objects; for example, the first object may be one or more than one. In addition, "and/or" in the specification and claims means at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the preceding and succeeding objects.
The image processing method provided by the embodiment of the present application is described in detail below with reference to the accompanying drawings through specific embodiments and application scenarios thereof.
The image processing method provided by the embodiment of the application is applied to electronic equipment, and as shown in fig. 1, the method includes:
step 11: a first input is received from a user.
The first input may be an instruction manually input by a user or a voice instruction, which is not limited herein.
Step 12: in response to the first input, a target reference person image in the image to be processed is determined.
The image to be processed comprises any one of a photographed image, a preview image, or a video frame image. A photographed image refers to an image, such as a photograph, that has already been captured by a completed photographing operation.
Step 13: a second input by the user is received.
The second input may be an instruction manually input by a user or a voice instruction, which is not limited herein.
Step 14: in response to the second input, the to-be-processed person image in the image to be processed is determined.
There may be at least one to-be-processed person image. The to-be-processed image in which the target reference person image is located and the to-be-processed image in which the to-be-processed person image is located may be the same image or different images.
Step 15: makeup information of the target reference person is obtained according to the target reference person image.
The specific means for obtaining the makeup information may be conventional and is not described in detail here.
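As an illustration only, the sketch below shows one conventional-style way such makeup information could be gathered: detect faces with an OpenCV Haar cascade and summarise rough lip and cheek regions of the reference face by their mean colour. The region ratios and the file name are assumptions made for the example, not details from the patent.

```python
# A minimal sketch (an illustration, not the patent's algorithm): detect faces
# with an OpenCV Haar cascade and summarise rough lip / cheek regions of the
# reference face by their mean colour.
import cv2
import numpy as np

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def extract_makeup_info(image_bgr, face_box):
    """Return mean BGR colours of rough lip and cheek regions of one face."""
    x, y, w, h = face_box
    face = image_bgr[y:y + h, x:x + w]
    # Rough guesses: lips in the lower centre, a cheek patch at mid height.
    lips = face[int(0.70 * h):int(0.85 * h), int(0.30 * w):int(0.70 * w)]
    cheek = face[int(0.45 * h):int(0.60 * h), int(0.10 * w):int(0.30 * w)]
    return {
        "lip_color": lips.reshape(-1, 3).mean(axis=0),
        "cheek_color": cheek.reshape(-1, 3).mean(axis=0),
    }

image = cv2.imread("group_photo.jpg")          # hypothetical path to the group photo
assert image is not None, "replace with a real photo path"
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
if len(faces) > 0:
    makeup_info = extract_makeup_info(image, faces[0])   # reference person's face
```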
Step 16: the to-be-processed person image is processed according to the makeup information to obtain a processed person image.
The makeup of the person in the processed person image is similar to, or even identical to, the makeup of the person in the target reference person image.
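Continuing the same assumptions, a minimal sketch of this processing step is given below: shift the lip region of the to-be-processed face toward the reference lip colour, so the result is "similar" rather than an exact copy. This is a crude stand-in for a real makeup-transfer algorithm, not the patent's method.

```python
# A crude stand-in for real makeup transfer, continuing the assumptions above:
# blend the lip region of the to-be-processed face toward the reference lip
# colour extracted by extract_makeup_info.
import numpy as np

def apply_makeup_info(image_bgr, face_box, makeup_info, strength=0.6):
    x, y, w, h = face_box
    y0, y1 = y + int(0.70 * h), y + int(0.85 * h)   # same rough lip region as before
    x0, x1 = x + int(0.30 * w), x + int(0.70 * w)
    region = image_bgr[y0:y1, x0:x1].astype(np.float32)
    shift = makeup_info["lip_color"] - region.reshape(-1, 3).mean(axis=0)
    region += strength * shift                       # move toward the reference colour
    image_bgr[y0:y1, x0:x1] = np.clip(region, 0, 255).astype(np.uint8)
    return image_bgr
```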
Further, before receiving the first input of the user, the method further includes: in a case where a third input to the image to be processed is received, performing face detection on the image to be processed to obtain candidate person images. Correspondingly, the determining the target reference person image in the image to be processed in response to the first input comprises: determining the target reference person image from the candidate person images in response to the first input; and the determining, in response to the second input, the to-be-processed person image in the to-be-processed image comprises: determining the to-be-processed person image from the candidate person images in response to the second input.
In this way, the personalized requirements of the user can be met in a more user-friendly manner.
The third input may be an input performed through a first function key, and the first input and the second input may be preset click operations on the screen of the electronic device.
A specific example is as follows: as shown in fig. 4, after an input (corresponding to the third input described above) on the "makeup sharing" function key is received, face detection is performed; after the detection is completed, the "made-up person" (corresponding to the target reference person image described above) is selected from the two detected faces in response to the first input, as shown in fig. 5; and, as shown in fig. 6, the "person to be made up" (corresponding to the to-be-processed person image described above) is selected from the two faces in response to the second input.
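The first and second inputs can thus be reduced to a hit test against the detected candidate faces. The sketch below is a minimal illustration of that mapping; `faces` is the list of detection boxes from the earlier sketch, and the tap coordinates are made-up example values.

```python
# A minimal sketch of mapping a tap (the first or second input) to one of the
# candidate faces; `faces` holds (x, y, w, h) boxes from detectMultiScale.
def pick_face(faces, tap_x, tap_y):
    """Return the candidate face box containing the tap, or None."""
    for (x, y, w, h) in faces:
        if x <= tap_x <= x + w and y <= tap_y <= y + h:
            return (x, y, w, h)
    return None

reference_face = pick_face(faces, tap_x=320, tap_y=240)  # first input: made-up person
target_face = pick_face(faces, tap_x=820, tap_y=260)     # second input: person to be made up
```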
Further, after processing the to-be-processed person image according to the makeup information to obtain a processed person image, the method further includes: receiving a fourth input from the user; in response to the fourth input, saving the processed person image.
Specifically, the to-be-processed image including the target reference person image and the processed person image may be saved.
In this way, an image with uniform makeup across the whole picture can be saved for the user to use.
The image processing method provided in the embodiment of the present application is further described below, taking a captured group photo as an example of the image to be processed.
In view of the foregoing technical problems, an embodiment of the present application provides an image processing method that can be implemented as a method for supporting makeup sharing and migration on a multi-person group photo: the faces in the captured (group) photo are recognized, and after recognition the user marks the made-up person (i.e., the target reference person image) and the persons to be made up (i.e., the to-be-processed person images); the user then confirms the migration, and after confirmation the persons to be made up have the same makeup as the made-up person, so that the makeup across the whole picture is uniform. In this way, the embarrassment of a bare-faced person in the group photo and the visual discord of the group photo as a whole can be avoided. In other words, this scheme makes full use of the combination of face detection and makeup migration capabilities, and supports the user in sharing makeup by migration within the same picture.
Specifically, the scheme provided by the embodiment of the present application can be implemented as shown in fig. 2 and includes the following steps:
step 21: after the electronic equipment shoots the photo, the shot photo is displayed under the condition of receiving an opening instruction of a user; and displaying alternative editing keys under the condition of receiving an editing instruction of a user, as shown in fig. 3; after receiving the "makeup sharing" instruction (corresponding to the third input described above), makeup sharing detection is turned on.
That is, after the user takes a picture, the user opens the picture just taken and clicks "makeup sharing", that is, the makeup sharing test is started.
Step 22: after the makeup sharing detection is turned on, the faces in the current screen (i.e., the displayed photo) are first detected, as shown in fig. 4.
Step 23: after the detection is completed, options for selecting the made-up person and the person to be made up are provided, as shown in fig. 5 and fig. 6 (the detection result, for example how many faces were found, may also be displayed); a selection instruction from the user is received, and the made-up person and the person to be made up are determined (that is, the target reference person image is determined from the candidate person images in response to the first input, and the to-be-processed person image is determined from the candidate person images in response to the second input). Specifically, the face selected while an option is active is the face assigned to that option; for example, if the face on the left side of the figure is selected while the made-up person option is active, that face is the face of the made-up person (see fig. 5).
That is, the user is supported in selecting the face of the made-up person and the face of the person to be made up, and the makeup of the made-up person's face is migrated (copied) to the face of the person to be made up, as sketched below.
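A minimal sketch of this "copy" style of migration is given here, using OpenCV's seamlessClone to paste the reference lip patch onto the target face. The fixed region ratios are illustrative assumptions; a real implementation would align facial landmarks rather than raw detection boxes.

```python
# A minimal sketch of copying the reference lip patch onto the target face
# with Poisson blending (cv2.seamlessClone); region ratios are assumptions.
import cv2
import numpy as np

def copy_lip_patch(image_bgr, reference_box, target_box):
    def lip_rect(box):
        x, y, w, h = box
        return x + int(0.30 * w), y + int(0.70 * h), int(0.40 * w), int(0.15 * h)

    rx, ry, rw, rh = lip_rect(reference_box)
    dx, dy, dw, dh = lip_rect(target_box)
    patch = cv2.resize(image_bgr[ry:ry + rh, rx:rx + rw], (dw, dh))
    mask = 255 * np.ones(patch.shape[:2], dtype=np.uint8)
    center = (dx + dw // 2, dy + dh // 2)
    return cv2.seamlessClone(patch, image_bgr, mask, center, cv2.NORMAL_CLONE)
```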
Step 24: after the migration effect is confirmed, the result can be saved directly; the final image shows the effect of unified makeup, as shown in fig. 7.
Specifically, the photo may be one in which the face of the person to be made up has been processed according to the made-up person after an instruction from the user confirming the made-up person and the person to be made up is received; after a save instruction is received, the processed photo is saved (that is, the processed person image is saved in response to the fourth input described above).
Therefore, the scheme provided by the embodiment of the application can help a user transfer the makeup of other people in a group photo to the face of the user, or of anyone else who is not wearing makeup, so that the makeup across the whole shot is uniform. This saves the time cost of having several people apply makeup before the photo is taken, gives the user a good photographing experience, lets everyone in the captured photo, apart from those who do not need makeup, share unified makeup, and avoids the unflattering impression of an unmade-up user posing alongside made-up companions.
It should be noted that the solution provided by the embodiment of the present application can also be applied at photographing time, directly migrating and sharing the makeup of a made-up person to other faces in the picture while (or during) shooting; or it can be applied to a video media stream, migrating and sharing the makeup of a person in the video to other people's faces for a playful effect, which is not limited herein.
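For the preview or video case, the same transfer would simply be applied frame by frame. The following sketch assumes a transfer_makeup callable (for example, one of the earlier sketches wrapped to this signature); it is an illustration, not the patent's implementation.

```python
# A per-frame sketch for the preview / video case. transfer_makeup is a
# placeholder callable with an assumed signature.
import cv2

def process_stream(source, reference_face, target_face, makeup_info, transfer_makeup):
    cap = cv2.VideoCapture(source)      # 0 for a camera preview, or a video file path
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        frame = transfer_makeup(frame, reference_face, target_face, makeup_info)
        cv2.imshow("makeup sharing preview", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    cap.release()
    cv2.destroyAllWindows()
```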
In summary, the image processing method provided by the embodiment of the present application receives a first input from a user; determines a target reference person image in an image to be processed in response to the first input; receives a second input from the user; determines a to-be-processed person image in the image to be processed in response to the second input; obtains makeup information of the target reference person according to the target reference person image; and processes the to-be-processed person image according to the makeup information to obtain a processed person image. This realizes a method for supporting makeup sharing and migration on a multi-person group photo, making image processing more intelligent and the makeup across the whole picture more uniform, avoiding the embarrassment of a bare-faced person in a group photo and the visual discord of the group photo as a whole, improving the user experience, and thereby well solving the problem of poor user experience in existing image processing schemes.
It should be noted that, in the image processing method provided in the embodiment of the present application, the execution subject may be an image processing apparatus, or a control module in the image processing apparatus for executing the image processing method. The image processing apparatus provided in the embodiment of the present application is described below, taking the case where the image processing apparatus executes the image processing method as an example.
An embodiment of the present application further provides an image processing apparatus, as shown in fig. 8, including:
a first receiving module 81, configured to receive a first input of a user;
a first determination module 82, configured to determine a target reference person image in the image to be processed in response to the first input;
a second receiving module 83, configured to receive a second input of the user;
a second determining module 84, configured to determine, in response to the second input, a to-be-processed person image in the to-be-processed image;
a first obtaining module 85, configured to obtain makeup information of a target reference person according to the target reference person image;
and a first processing module 86, configured to process the to-be-processed person image according to the makeup information to obtain a processed person image.
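Purely as an illustration, the modules listed above could be grouped into a single class, with the detection, extraction, and transfer routines injected as callables (for example, the earlier sketches). The structure below is an assumption, not the patent's implementation.

```python
# Illustrative skeleton only: one possible grouping of the listed modules.
class ImageProcessingApparatus:
    def __init__(self, pick_face, extract_makeup, apply_makeup):
        self.pick_face = pick_face            # hit test from the selection sketch
        self.extract_makeup = extract_makeup  # e.g. extract_makeup_info above
        self.apply_makeup = apply_makeup      # e.g. apply_makeup_info above
        self.reference_face = None
        self.target_face = None

    def on_first_input(self, faces, tap):     # first receiving + first determining module
        self.reference_face = self.pick_face(faces, *tap)

    def on_second_input(self, faces, tap):    # second receiving + second determining module
        self.target_face = self.pick_face(faces, *tap)

    def process(self, image):                 # first acquisition + first processing module
        info = self.extract_makeup(image, self.reference_face)
        return self.apply_makeup(image, self.target_face, info)
```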
Further, the image processing apparatus further includes: a first detection module, configured to perform face detection on the image to be processed to obtain candidate person images, in a case where a third input to the image to be processed is received before the first input of the user is received. Correspondingly, the first determining module includes: a first determining sub-module, configured to determine the target reference person image from the candidate person images in response to the first input; and the second determining module includes: a second determining sub-module, configured to determine the to-be-processed person image from the candidate person images in response to the second input.
Wherein the image to be processed comprises: any one of a photographed image, a preview image, or a video frame image.
In the embodiment of the application, the to-be-processed image in which the target reference person image is located and the to-be-processed image in which the to-be-processed person image is located are the same image or different images.
Further, the image processing apparatus further includes: a third receiving module, configured to receive a fourth input of the user after the to-be-processed person image is processed according to the makeup information to obtain a processed person image; and a first saving module, configured to save the processed person image in response to the fourth input.
The image processing apparatus provided by the embodiment of the application receives a first input from a user; determines a target reference person image in an image to be processed in response to the first input; receives a second input from the user; determines a to-be-processed person image in the image to be processed in response to the second input; obtains makeup information of the target reference person according to the target reference person image; and processes the to-be-processed person image according to the makeup information to obtain a processed person image. This realizes a method for supporting makeup sharing and migration on a multi-person group photo, making image processing more intelligent and the makeup across the whole picture more uniform, avoiding the embarrassment of a bare-faced person in a group photo and the visual discord of the group photo as a whole, improving the user experience, and thereby well solving the problem of poor user experience in existing image processing schemes.
The image processing apparatus in the embodiment of the present application may be an apparatus, or may be a component, an integrated circuit, or a chip in a terminal. The apparatus may be a mobile electronic device or a non-mobile electronic device. By way of example, the mobile electronic device may be a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a vehicle-mounted electronic device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook, or a personal digital assistant (PDA), and the like; the non-mobile electronic device may be a server, a network attached storage (NAS), a personal computer (PC), a television (TV), a teller machine, or a self-service machine, and the like, which is not specifically limited in the embodiments of the present application.
The image processing apparatus in the embodiment of the present application may be an apparatus having an operating system. The operating system may be an Android operating system (Android), an iOS operating system, or other possible operating systems, which is not specifically limited in the embodiments of the present application.
The image processing apparatus provided in the embodiment of the present application can implement each process implemented by the method embodiments of fig. 1 to fig. 7, and is not described herein again to avoid repetition.
Optionally, as shown in fig. 9, an electronic device 90 is further provided in this embodiment of the present application, and includes a processor 91, a memory 92, and a program or an instruction stored in the memory 92 and executable on the processor 91, where the program or the instruction is executed by the processor 91 to implement each process of the foregoing embodiment of the image processing method, and can achieve the same technical effect, and no further description is provided here to avoid repetition.
It should be noted that the electronic device in the embodiment of the present application includes the mobile electronic device and the non-mobile electronic device described above.
Fig. 10 is a schematic diagram of a hardware structure of an electronic device implementing an embodiment of the present application.
The electronic device 100 includes, but is not limited to: a radio frequency unit 101, a network module 102, an audio output unit 103, an input unit 104, a sensor 105, a display unit 106, a user input unit 107, an interface unit 108, a memory 109, and a processor 1010.
Those skilled in the art will appreciate that the electronic device 100 may further comprise a power source (e.g., a battery) for supplying power to the various components, and the power source may be logically connected to the processor 1010 through a power management system, so as to implement functions such as managing charging, discharging, and power consumption through the power management system. The electronic device structure shown in fig. 10 does not constitute a limitation of the electronic device, and the electronic device may include more or fewer components than those shown, combine some components, or arrange the components differently, which is not described again here.
The processor 1010 is configured to receive a first input of a user through the user input unit 107; determine a target reference person image in an image to be processed in response to the first input; receive a second input of the user through the user input unit 107; determine a to-be-processed person image in the image to be processed in response to the second input; obtain makeup information of the target reference person according to the target reference person image; and process the to-be-processed person image according to the makeup information to obtain a processed person image.
According to the embodiment of the application, a first input from a user is received; a target reference person image in an image to be processed is determined in response to the first input; a second input from the user is received; a to-be-processed person image in the image to be processed is determined in response to the second input; makeup information of the target reference person is obtained according to the target reference person image; and the to-be-processed person image is processed according to the makeup information to obtain a processed person image. This realizes a method for supporting makeup sharing and migration on a multi-person group photo, making image processing more intelligent and the makeup across the whole picture more uniform, avoiding the embarrassment of a bare-faced person in a group photo and the visual discord of the group photo as a whole, improving the user experience, and thereby well solving the problem of poor user experience in existing image processing schemes.
Optionally, the processor 1010 is further configured to, before receiving the first input of the user, perform face detection on the to-be-processed image to obtain a candidate person image in a case that a third input to the to-be-processed image is received through the user input unit 107;
correspondingly, the processor 1010 is specifically configured to determine the target reference person image from the candidate person images in response to the first input; and determining the character image to be processed from the candidate character image in response to the second input.
Optionally, the image to be processed includes: any one of a photographed image, a preview image, or a video frame image.
Optionally, the to-be-processed image in which the target reference person image is located and the to-be-processed image in which the to-be-processed person image is located are the same image or different images.
Optionally, the processor 1010 is further configured to receive a fourth input from the user through the user input unit 107 after the to-be-processed person image is processed according to the makeup information to obtain a processed person image, and to save the processed person image in response to the fourth input.
The scheme provided by the embodiment of the application can help a user transfer the makeup of other people in a group photo to the face of the user, or of anyone else who is not wearing makeup, so that the makeup across the whole shot is uniform. This saves the time cost of having several people apply makeup before the photo is taken, gives the user a good photographing experience, lets everyone in the captured photo, apart from those who do not need makeup, share unified makeup, and avoids the unflattering impression of an unmade-up user posing alongside made-up companions.
It should be understood that, in the embodiment of the present application, the input Unit 104 may include a Graphics Processing Unit (GPU) 1041 and a microphone 1042, and the Graphics Processing Unit 1041 processes image data of a still picture or a video obtained by an image capturing device (such as a camera) in a video capturing mode or an image capturing mode. The display unit 106 may include a display panel 1061, and the display panel 1061 may be configured in the form of a liquid crystal display, an organic light emitting diode, or the like. The user input unit 107 includes a touch panel 1071 and other input devices 1072. The touch panel 1071 is also referred to as a touch screen. The touch panel 1071 may include two parts of a touch detection device and a touch controller. Other input devices 1072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described in detail herein. The memory 109 may be used to store software programs as well as various data including, but not limited to, application programs and an operating system. Processor 1010 may integrate an application processor that handles primarily operating systems, user interfaces, applications, etc. and a modem processor that handles primarily wireless communications. It will be appreciated that the modem processor described above may not be integrated into processor 1010.
The embodiment of the present application further provides a readable storage medium, where a program or an instruction is stored on the readable storage medium, and when the program or the instruction is executed by a processor, the program or the instruction implements each process of the embodiment of the image processing method, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
The processor is the processor in the electronic device described in the above embodiment. The readable storage medium includes a computer readable storage medium, such as a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and so on.
The embodiment of the present application further provides a chip, where the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to execute a program or an instruction to implement each process of the embodiment of the image processing method, and can achieve the same technical effect, and the details are not repeated here to avoid repetition.
It should be understood that the chip mentioned in the embodiments of the present application may also be referred to as a system-level chip, a system chip, a chip system, or a system-on-chip, etc.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element. Further, it should be noted that the scope of the methods and apparatus of the embodiments of the present application is not limited to performing the functions in the order illustrated or discussed, but may include performing the functions in a substantially simultaneous manner or in a reverse order based on the functions involved; for example, the methods described may be performed in an order different from that described, and various steps may be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present application may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present application.
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the invention is not limited to the precise embodiments described above, which are meant to be illustrative and not restrictive, and that various changes may be made therein by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (10)

1. An image processing method, comprising:
receiving a first input of a user;
determining a target reference person image in the image to be processed in response to the first input;
receiving a second input of the user;
determining a to-be-processed person image in the to-be-processed image in response to the second input;
obtaining makeup information of the target reference person according to the target reference person image;
and processing the to-be-processed person image according to the makeup information to obtain a processed person image.
2. The image processing method according to claim 1, further comprising, before receiving the first input of the user:
under the condition that a third input to the image to be processed is received, carrying out face detection on the image to be processed to obtain candidate person images;
the determining a target reference person image in the image to be processed in response to the first input comprises:
determining the target reference person image from the candidate person images in response to the first input;
the determining, in response to the second input, a to-be-processed person image in the to-be-processed image comprises:
and determining the to-be-processed person image from the candidate person images in response to the second input.
3. The image processing method according to claim 1, wherein the image to be processed comprises: any one of a photographed image, a preview image, or a video frame image.
4. The image processing method according to any one of claims 1 to 3, wherein the to-be-processed image in which the target reference person image is located and the to-be-processed image in which the to-be-processed person image is located are the same image or different images.
5. The image processing method according to claim 1, further comprising, after processing the person image to be processed based on the makeup information to obtain a processed person image:
receiving a fourth input from the user;
in response to the fourth input, saving the processed person image.
6. An image processing apparatus characterized by comprising:
the first receiving module is used for receiving a first input of a user;
a first determination module for determining a target reference person image in the image to be processed in response to the first input;
the second receiving module is used for receiving a second input of the user;
a second determining module, configured to determine, in response to the second input, a to-be-processed person image in the to-be-processed image;
the first acquisition module is used for acquiring the makeup information of the target reference person according to the target reference person image;
and the first processing module is used for processing the to-be-processed person image according to the makeup information to obtain a processed person image.
7. The image processing apparatus according to claim 6, further comprising:
the first detection module is used for carrying out face detection on the image to be processed under the condition that a third input to the image to be processed is received before a first input of a user is received, so as to obtain candidate person images;
the first determining module includes:
a first determination sub-module for determining the target reference person image from the candidate person images in response to the first input;
the second determining module includes:
and the second determining sub-module is used for determining the to-be-processed person image from the candidate person images in response to the second input.
8. The image processing apparatus according to claim 6, wherein the image to be processed includes: any one of a photographed image, a preview image, or a video frame image.
9. The image processing apparatus according to any one of claims 6 to 8, wherein the to-be-processed image in which the target reference person image is located and the to-be-processed image in which the to-be-processed person image is located are the same image or different images.
10. The image processing apparatus according to claim 6, further comprising:
the third receiving module is used for receiving a fourth input of the user after the to-be-processed person image is processed according to the makeup information to obtain a processed person image;
and the first saving module is used for responding to the fourth input and saving the processed character image.
CN202011606748.8A 2020-12-30 2020-12-30 Image processing method and device Pending CN112734661A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202011606748.8A CN112734661A (en) 2020-12-30 2020-12-30 Image processing method and device
PCT/CN2021/140738 WO2022143382A1 (en) 2020-12-30 2021-12-23 Image processing method and apparatus

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011606748.8A CN112734661A (en) 2020-12-30 2020-12-30 Image processing method and device

Publications (1)

Publication Number Publication Date
CN112734661A true CN112734661A (en) 2021-04-30

Family

ID=75610759

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011606748.8A Pending CN112734661A (en) 2020-12-30 2020-12-30 Image processing method and device

Country Status (2)

Country Link
CN (1) CN112734661A (en)
WO (1) WO2022143382A1 (en)

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10235730B1 (en) * 2013-05-20 2019-03-19 Visualmits, Llc Casino table games with interactive content
CN108509846B (en) * 2018-02-09 2022-02-11 腾讯科技(深圳)有限公司 Image processing method, image processing apparatus, computer device, storage medium, and computer program product
CN109712090A (en) * 2018-12-18 2019-05-03 维沃移动通信有限公司 A kind of image processing method, device and mobile terminal
CN111756995A (en) * 2020-06-17 2020-10-09 维沃移动通信有限公司 Image processing method and device
CN112734661A (en) * 2020-12-30 2021-04-30 维沃移动通信有限公司 Image processing method and device

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022143382A1 (en) * 2020-12-30 2022-07-07 维沃移动通信有限公司 Image processing method and apparatus
CN114143454A (en) * 2021-11-19 2022-03-04 维沃移动通信有限公司 Shooting method, shooting device, electronic equipment and readable storage medium
CN114143454B (en) * 2021-11-19 2023-11-03 维沃移动通信有限公司 Shooting method, shooting device, electronic equipment and readable storage medium

Also Published As

Publication number Publication date
WO2022143382A1 (en) 2022-07-07


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination