CN112669233A - Image processing method, image processing apparatus, electronic device, storage medium, and program product - Google Patents


Info

Publication number: CN112669233A
Application number: CN202011565300.6A
Authority: CN (China)
Prior art keywords: image, target, face, face image, processed
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Other languages: Chinese (zh)
Inventor: 张艺琨
Assignee (current and original): Beijing Dajia Internet Information Technology Co Ltd
Application filed by Beijing Dajia Internet Information Technology Co Ltd
Priority to CN202011565300.6A
Publication of CN112669233A


Abstract

The present disclosure relates to an image processing method, an image processing apparatus, an electronic device, a storage medium, and a program product. The method comprises: acquiring an image to be processed and candidate materials, wherein the image to be processed comprises a face image, and determining a target material among the candidate materials; identifying face attribute information of the face image, and adjusting original display parameters of the target material according to the face attribute information to obtain target display parameters; and processing the face image according to the target display parameters to obtain and display the processed face image. The method adapts the target display parameters to the face image in the image to be processed, applies targeted beautification to that face image using the target display parameters, adds a personalized makeup effect to it, and improves the beautification effect on the image to be processed.

Description

Image processing method, image processing apparatus, electronic device, storage medium, and program product
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to an image processing method, an image processing apparatus, an electronic device, a storage medium, and a program product.
Background
With the popularization of mobile terminals, more and more users prefer to use mobile terminals to capture images or videos. To make the captured image or video look more attractive, a user usually performs beautification processing on it through the beautification function of the mobile terminal.
In the related art, a makeup effect is added to a target object, such as a human face in an image or video, by selecting a makeup material, thereby beautifying the target object. Different target objects require different beautification processing. However, the beautification processing in the related art is uniform and undifferentiated, resulting in the technical problem of a poor beautification effect.
Disclosure of Invention
The present disclosure provides an image processing method, an image processing apparatus, an electronic device, a storage medium, and a program product, which at least solve the problem in the related art of a poor beautifying effect caused by a uniform, undifferentiated beautifying processing mode. The technical solution of the disclosure is as follows:
according to a first aspect of embodiments of the present disclosure, there is provided an image processing method, including:
acquiring an image to be processed and a candidate material, wherein the image to be processed comprises a face image, and the candidate material is used for processing the face image;
determining target materials in the candidate materials;
identifying face attribute information of the face image, and adjusting original display parameters of the target material according to the face attribute information to obtain target display parameters;
and processing the face image according to the target display parameters to obtain and display the processed face image.
In one embodiment, there are a plurality of face images; the adjusting the original display parameters of the target material according to the face attribute information to obtain target display parameters comprises:
determining a target face image according to the face attribute information of each face image; the target face image is a face image of which the display parameters need to be adjusted;
adjusting original display parameters of the target material according to the face attribute information of the target face image to obtain the target display parameters;
the processing the face image according to the target display parameters to obtain and display the processed face image includes:
and carrying out image processing on the target face image according to the target display parameters to obtain and display the processed target face image.
In one embodiment, the image to be processed further comprises other face images except the target face image; the processing the face image according to the target display parameters to obtain and display the processed face image, further comprising:
and processing the other face images according to the original display parameters to obtain and display the processed other face images.
In one embodiment, the method further comprises:
displaying an attribute scanning identifier of the target face image under the condition that the target face exists in the image to be processed;
and in the process of displaying the attribute scanning identifier, identifying the face attribute information of the target face image.
In one embodiment, the displaying the attribute scan identifier of the target face image includes:
displaying a transparent window at the position of the target human face image;
and displaying a scanning animation file in the transparent window, wherein the scanning animation file is used for representing the process of identifying the target face image to obtain the face attribute information and the process of processing the target face image.
In one embodiment, a switch control is arranged in a display interface of the image to be processed, and the switch control is used for controlling whether a display parameter adjusting function is started or not; the attribute scanning identifier for displaying the target face image comprises:
and responding to the opening instruction of the switch control, and displaying the attribute scanning identification of the target face image.
In one embodiment, after the processing the face image according to the target display parameter to obtain and display the processed face image, the method further includes:
and responding to a closing instruction of the switch control, processing the face image according to the original display parameters to obtain and display the processed face image.
In one embodiment, the method further comprises:
displaying prompt information in a display interface for displaying the image to be processed, wherein the prompt information is used for indicating the position of a switch control, and the switch control is used for controlling whether the display parameter adjustment function is enabled.
In one embodiment, the method further comprises:
and displaying the contrast effect of a first image and a second image, wherein the first image is used for displaying the face image processed by adopting the original display parameters, and the second image is used for displaying the face image processed by adopting the target display parameters.
In one embodiment, the adjusting the original display parameters of the target material according to the face attribute information includes:
adjusting the original display parameters according to at least one of gender type, makeup status information, facial attractiveness information, and skin condition information.
In one embodiment, the adjusting the original display parameters of the target material according to the face attribute information to obtain target display parameters includes:
acquiring a corresponding relation between pre-configured face attribute information and a display adjustment coefficient according to the identification of the target material;
determining a display adjustment coefficient corresponding to the face attribute information of the face image according to the corresponding relation;
and adjusting the original display parameters according to the display adjustment coefficient to obtain the target display parameters.
In one embodiment, before the adjusting the original display parameters of the target material according to the face attribute information to obtain the target display parameters, the method further includes:
acquiring a display parameter adjusting instruction, wherein the display parameter adjusting instruction carries a target coefficient;
the adjusting the original display parameters of the target material according to the face attribute information to obtain target display parameters comprises:
and adjusting the original display parameters according to the face attribute information and the target coefficient to obtain the target display parameters.
According to a second aspect of the embodiments of the present disclosure, there is provided an image processing apparatus, the apparatus including:
the image material display module is configured to execute acquisition of an image to be processed and a candidate material, wherein the image to be processed comprises a face image, and the candidate material is used for processing the face image;
a target material determination module configured to perform determining target materials among the candidate materials;
the display parameter adjusting module is configured to identify face attribute information of the face image, and adjust original display parameters of the target material according to the face attribute information to obtain target display parameters;
and the first image display module is configured to process the face image according to the target display parameters, and obtain and display the processed face image.
In one embodiment, there are a plurality of face images; the display parameter adjusting module is further configured to determine a target face image according to the face attribute information of each face image, the target face image being a face image whose display parameters need to be adjusted, and to adjust the original display parameters of the target material according to the face attribute information of the target face image to obtain the target display parameters;
the first image display module is further configured to perform image processing on the target face image according to the target display parameters, and obtain and display the processed target face image.
In one embodiment, the image to be processed further comprises other face images except the target face image; the first image display module is further configured to perform processing on the other face images according to the original display parameters, and obtain and display the processed other face images.
In one embodiment, the image processing apparatus further includes:
the scanning identification display module is configured to display the attribute scanning identification of the target face image under the condition that the target face exists in the image to be processed;
the face information recognition module is configured to recognize the face attribute information of the target face image in the process of displaying the attribute scanning identifier.
In one embodiment, the scan identifier display module is further configured to display a transparent window at the position of the target face image; and displaying a scanning animation file in the transparent window, wherein the scanning animation file is used for representing the process of identifying the target face image to obtain the face attribute information and the process of processing the target face image.
In one embodiment, a switch control is arranged in the display interface of the image to be processed, and the switch control is used for controlling whether the display parameter adjustment function is enabled; the scan identifier display module is further configured to display the attribute scanning identifier of the target face image in response to an opening instruction of the switch control.
In one embodiment, the image processing apparatus further includes:
and the second image display module is configured to, in response to a closing instruction of the switch control, process the face image according to the original display parameters, and obtain and display the processed face image.
In one embodiment, the image processing apparatus further includes:
the prompt information display module is configured to display prompt information in the display interface displaying the image to be processed, wherein the prompt information is used for indicating the position of a switch control, and the switch control is used for controlling whether the display parameter adjustment function is enabled.
In one embodiment, the image processing apparatus further includes:
and the contrast effect display module is configured to display a contrast effect of a first image and a second image, wherein the first image displays the face image processed with the original display parameters, and the second image displays the face image processed with the target display parameters.
In one embodiment, the display parameter adjustment module is further configured to adjust the original display parameters according to at least one of gender type, makeup status information, facial attractiveness information, and skin condition information.
In one embodiment, the display parameter adjustment module is further configured to execute obtaining a corresponding relationship between pre-configured face attribute information and a display adjustment coefficient according to the identifier of the target material; determining a display adjustment coefficient corresponding to the face attribute information of the face image according to the corresponding relation; and adjusting the original display parameters according to the display adjustment coefficient to obtain the target display parameters.
In one embodiment, the image processing apparatus further includes an adjustment instruction acquiring module configured to acquire a display parameter adjustment instruction, wherein the display parameter adjustment instruction carries a target coefficient;
the display parameter adjusting module is further configured to adjust the original display parameters according to the face attribute information and the target coefficients to obtain the target display parameters.
According to a third aspect of the embodiments of the present disclosure, there is provided an electronic apparatus including:
a processor;
a memory for storing the processor-executable instructions;
wherein the processor is configured to execute the instructions to implement the method described in any embodiment of the first aspect.
According to a fourth aspect of embodiments of the present disclosure, there is provided a computer-readable storage medium having stored therein instructions which, when executed by a processor of an electronic device, enable the electronic device to perform the method described in any one of the embodiments of the first aspect.
According to a fifth aspect of embodiments of the present disclosure, there is provided a computer program product comprising a computer program stored in a readable storage medium, from which at least one processor of an apparatus reads and executes the computer program, such that the apparatus performs the image processing method described in any one of the first aspect.
The technical solution provided by the embodiments of the present disclosure brings at least the following beneficial effects: an image to be processed and candidate materials are acquired, wherein the image to be processed includes a face image, and a target material is determined among the candidate materials; face attribute information of the face image is identified, and original display parameters of the target material are adjusted according to the face attribute information to obtain target display parameters; and the face image is processed according to the target display parameters, and the processed face image is obtained and displayed. This adapts the target display parameters to the face image in the image to be processed, applies targeted beautification to the face image using the target display parameters, adds a personalized makeup effect to it, and improves the beautification effect on the image to be processed.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the disclosure and are not to be construed as limiting the disclosure.
Fig. 1 is a diagram illustrating an application environment of an image processing method according to an exemplary embodiment.
FIG. 2a is a flow chart illustrating a method of image processing according to an exemplary embodiment.
FIG. 2b is a schematic diagram illustrating a display interface of an image to be processed according to an example embodiment.
FIG. 2c is a flow diagram illustrating a method of image processing according to an exemplary embodiment.
Fig. 3 is a flowchart illustrating step S230 according to an exemplary embodiment.
Fig. 4a is a flowchart illustrating step S310 according to an exemplary embodiment.
Fig. 4b to 4d are schematic diagrams illustrating a display interface of an image to be processed according to an exemplary embodiment.
FIG. 5 is a diagram illustrating a display interface of a to-be-processed image according to an exemplary embodiment.
Fig. 6 is a flowchart illustrating an image processing method according to another exemplary embodiment.
FIG. 7 is a flow diagram illustrating adjusting original display parameters according to an example embodiment.
FIG. 8 is a flowchart illustrating adjusting original display parameters according to an example embodiment.
FIG. 9 is a flow diagram illustrating an image processing method according to an exemplary embodiment.
Fig. 10 is a block diagram illustrating an image processing apparatus according to an exemplary embodiment.
Fig. 11 is an internal block diagram of an electronic device shown in accordance with an example embodiment.
Detailed Description
In order to make the technical solutions of the present disclosure better understood by those of ordinary skill in the art, the technical solutions in the embodiments of the present disclosure will be clearly and completely described below with reference to the accompanying drawings.
It should be noted that the terms "first," "second," and the like in the description and claims of the present disclosure and in the above-described drawings are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the disclosure described herein are capable of operation in sequences other than those illustrated or otherwise described herein. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
The image processing method provided by the present disclosure can be applied to the application environment shown in fig. 1. Wherein the terminal 110 interacts with the server 120 through the network. The terminal 110 is a client that runs an image processing application or an application having an image processing function. The terminal 110 includes a screen for human-computer interaction, a display interface for displaying images and cosmetic materials, operation controls such as switch controls in the display interface, and attribute scanning identifiers such as scanning animation files of images. The server 120 is used to store user behavior data, such as state information of the switch control generated by operating the switch control. The terminal 110 acquires the image to be processed in response to the shooting instruction, and the terminal 110 displays the image to be processed. Responding to an image processing instruction, the terminal acquires and displays candidate materials, wherein the image to be processed comprises a face image, and the candidate materials are used for processing the face image; determining target materials in the candidate materials; identifying face attribute information of the face image, and adjusting original display parameters of a target material according to the face attribute information to obtain target display parameters; and processing the face image according to the target display parameters to obtain and display the processed face image. The terminal 110 may be, but not limited to, various personal computers, notebook computers, smart phones, tablet computers, and portable wearable devices, and the server 120 may be implemented by an independent server or a server cluster formed by a plurality of servers.
Fig. 2a is a flowchart illustrating an image processing method according to an exemplary embodiment, and as shown in fig. 2a, the image processing method is used in the terminal 110, and includes the following steps.
In step S210, an image to be processed and candidate materials are acquired.
The image to be processed refers to an image that needs beautification; it may be a preview image, an image stored in the terminal 110, or an image downloaded over the network, and it includes a face image. A candidate material is a makeup template or an effect template for beautifying the image and is used for processing the face image. Candidate materials may be makeup templates such as vitality makeup, pure makeup, European-style makeup, and natural makeup, or effect templates such as fun, body-morphing, face-retouching, and beautification effects. Specifically, the terminal 110 acquires the image to be processed and previews it, so that the terminal 110 displays the image to be processed. As shown in fig. 2b, an image processing control is arranged in the display interface of the image to be processed; in response to a trigger operation on the image processing control, the terminal acquires the candidate materials and displays them in the display interface of the image to be processed.
In step S220, target materials are determined among the candidate materials.
Specifically, the candidate materials are displayed in the user interface for the user to view and select the target material. Some candidate materials have already been downloaded locally and can be used directly, while others have not been downloaded and must be downloaded before use. The user triggers a selection operation on a material in the interface, and the target material is determined among the candidate materials in response to the selection operation. The target material, that is, the material selected by the user, is used to beautify the image to be processed.
In step S230, the face attribute information of the face image is identified, and the original display parameters of the target material are adjusted according to the face attribute information, so as to obtain target display parameters.
In step S240, the face image is processed according to the target display parameters, and the processed face image is obtained and displayed.
The face attribute information is information related to, for example, the skin roughness, facial attractiveness, gender type, makeup intensity, and skin oiliness of the face image. Specifically, the image to be processed may be recognized by an underlying recognition model: for example, face recognition may be performed on the image to be processed by a face recognition model to obtain a face image from the image to be processed, and the face image may be further recognized by an attribute recognition model to obtain the face attribute information of the face image.
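The two-stage recognition described above (a face recognition model that locates face images, followed by an attribute recognition model that extracts face attribute information) might be sketched as follows. This is an illustrative outline only, not the patented implementation; `detect_faces`, `recognize_attributes`, and the `FaceAttributes` fields are hypothetical stand-ins for the underlying models.

```python
from dataclasses import dataclass

@dataclass
class FaceAttributes:
    # Fields chosen to mirror the attributes named in the disclosure:
    # gender type, makeup status, skin condition, facial attractiveness.
    gender: str
    makeup_intensity: float   # 0.0 (bare face) to 1.0 (heavy makeup)
    skin_oiliness: float
    attractiveness: float

def detect_faces(image):
    """Stand-in for the face recognition model: yields face regions.

    Here the "image" is assumed to already be a list of face regions,
    each represented as a dict of measured attributes.
    """
    return list(image)

def recognize_attributes(face):
    """Stand-in for the attribute recognition model."""
    return FaceAttributes(**face)

def analyze(image):
    """Run both stages: detect faces, then recognize their attributes."""
    return [recognize_attributes(face) for face in detect_faces(image)]
```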
The candidate materials have original display parameters. The image to be processed may include a plurality of face images, and different face images have different face attribute information. If the original display parameters were used, every face image would receive the same makeup effect; analysis shows that applying the original display parameters to the image to be processed yields an unsatisfactory beautification effect. Therefore, the original display parameters are adjusted according to the face attribute information of the face image: the target display parameters are obtained by adjusting the original display parameters of the target material according to the face attribute information of the face image. Specifically, after the face attribute information of the face image in the image to be processed is obtained, the original display parameters of the target material are adjusted using that face attribute information to obtain the target display parameters for the face image. Image processing is then performed on the face image using the target display parameters of the target material to add the corresponding makeup effect, and the terminal obtains and displays the processed face image.
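A minimal sketch of this parameter adjustment, assuming (as the disclosure suggests elsewhere) that the face attribute information maps to a display adjustment coefficient that scales the material's original display parameters; the specific rules and parameter names below are invented for illustration:

```python
def adjustment_coefficient(attrs):
    """Map face attribute information to a display adjustment coefficient.

    The thresholds and factors are made up for illustration; the patent
    only states that a pre-configured correspondence between attribute
    information and an adjustment coefficient is looked up per material.
    """
    coeff = 1.0
    if attrs.get("gender") == "male":
        coeff *= 0.5              # lighter beautification for male faces
    if attrs.get("makeup_intensity", 0.0) > 0.6:
        coeff *= 0.7              # avoid stacking makeup on heavy makeup
    return coeff

def target_display_params(original_params, attrs):
    """Scale each original display parameter by the per-face coefficient."""
    coeff = adjustment_coefficient(attrs)
    return {name: value * coeff for name, value in original_params.items()}
```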
It should be noted that the terminal has the underlying recognition model pre-configured locally, which saves the user from downloading it, saves time, and reduces operation cost. In addition, the data set used for training the underlying recognition model is obtained through data augmentation: for example, the sample images may be augmented with occlusion, lighting changes, and small-range rotation; the test set is augmented with data such as expressions, large angles, occlusion, and teenagers (14-25 years old); and the images in the data set cover various scenes (front-lit and back-lit, indoor and outdoor), various age groups, various skin colors, different expressions, different angles, different distances, different degrees of occlusion, multiple people in the same frame, and the like.
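The augmentation just mentioned (occlusion, lighting changes, and so on) might be sketched as below; the operations on a toy grayscale image are illustrative only and far simpler than a production training pipeline:

```python
import random

def jitter_brightness(img, factor):
    """Simulate lighting changes by scaling pixel intensities."""
    return [[min(255, int(p * factor)) for p in row] for row in img]

def occlude(img, top, left, height, width):
    """Simulate occlusion by zeroing a rectangular patch."""
    out = [row[:] for row in img]
    for r in range(top, top + height):
        for c in range(left, left + width):
            out[r][c] = 0
    return out

def augment(img, rng):
    """Apply one randomly chosen augmentation to a grayscale image."""
    ops = [
        lambda i: jitter_brightness(i, rng.uniform(0.7, 1.3)),
        lambda i: occlude(i, 0, 0, 1, 1),
    ]
    return rng.choice(ops)(img)
```

Small-range rotation is omitted here because it requires interpolation; a real pipeline would typically delegate that to an image library.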
In the image processing method, an image to be processed and a candidate material are obtained, the image to be processed comprises a face image, and a target material is determined in the candidate material; identifying face attribute information of the face image, and adjusting original display parameters of a target material according to the face attribute information to obtain target display parameters; and processing the face image according to the target display parameters to obtain and display the processed face image. The method and the device realize the adaptation of the target display parameters and the face image in the image to be processed, carry out targeted beautifying treatment on the face image in the image to be processed by utilizing the target display parameters, add a personalized makeup effect to the face image in the image to be processed, and improve the beautifying effect of the image to be processed.
In an exemplary embodiment, as shown in fig. 2c, there are a plurality of face images. Adjusting the original display parameters of the target material according to the face attribute information to obtain the target display parameters includes:
in step S232, a target face image is determined based on the face attribute information of each face image.
In step S234, the original display parameters of the target material are adjusted according to the face attribute information of the target face image, so as to obtain target display parameters.
Processing the face image according to the target display parameters to obtain and display the processed face image, wherein the processing comprises the following steps:
in step S236, the target face image is subjected to image processing according to the target display parameters, and the processed target face image is obtained and displayed.
The target face image is a face image whose display parameters need to be adjusted. Specifically, the face attribute information of each face image is obtained through the underlying recognition model, and the face attribute information includes at least one of gender type, skin type, facial attractiveness, makeup status, and skin oiliness. Different face attribute information calls for different image processing: for example, if a user's face image already shows heavy makeup, there is no need to process that face image with display parameters that add a heavy-makeup effect; if a user's gender type is male, the face image needs to be processed with display parameters of a lower beautification degree. The image to be processed includes a plurality of face images, different face images correspond to their respective face attribute information, and different face attribute information requires different display parameters, so some face images can be processed with the original display parameters while others should be processed with adjusted display parameters. Therefore, the target face image whose display parameters need to be adjusted is determined according to the face attribute information of each face image. After the target face image is determined, the original display parameters of the target material are adjusted according to the face attribute information of the target face image to obtain the corresponding target display parameters. Image processing is then performed on the target face image using the target display parameters of the target material to add the corresponding makeup effect, and the terminal obtains and displays the processed target face image.
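As a rough illustration of selecting the target face images among multiple faces, a predicate over the face attribute information could partition the faces; the rule below merely echoes the two examples given above (male gender, existing heavy makeup) and is not prescribed by the disclosure:

```python
def needs_adjustment(attrs):
    """Decide whether a face's display parameters should be adjusted.

    Illustrative rule only, mirroring the examples in the text: male
    faces and faces already wearing heavy makeup get adjusted parameters.
    """
    return attrs["gender"] == "male" or attrs["makeup_intensity"] > 0.6

def split_faces(faces):
    """Partition faces into target face images and other face images."""
    targets = [f for f in faces if needs_adjustment(f)]
    others = [f for f in faces if not needs_adjustment(f)]
    return targets, others
```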
In this embodiment, the target face image is determined from the plurality of face images in the image to be processed, the original display parameters of the target material are adjusted according to the face attribute information of the target face image to obtain the target display parameters, and the target face image is processed according to the target display parameters and then displayed. Differentiated image processing of the target face image is thereby achieved, and the display effect of the image to be processed is improved.
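The selection of target face images described above can be sketched as follows. This is a hypothetical sketch, not the patent's exact logic: the attribute names (`gender`, `makeup`) and the rule in `needs_adjustment` are assumptions for illustration only.

```python
# Hypothetical rule: a face needs adjusted display parameters e.g. when it
# is male or already wears heavy makeup (so makeup strength is reduced).
def needs_adjustment(attrs):
    return attrs.get("gender") == "male" or attrs.get("makeup") == "heavy"

def split_faces(faces):
    """Split detected faces into target faces (parameters to be adjusted)
    and other faces (original parameters kept)."""
    targets = [f for f in faces if needs_adjustment(f["attrs"])]
    others = [f for f in faces if not needs_adjustment(f["attrs"])]
    return targets, others
```

Each face here is a plain dictionary carrying the recognized attribute information; in a real implementation these would come from the underlying recognition model.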
In an exemplary embodiment, the image to be processed further includes other face images besides the target face image. Processing the face image according to the target display parameters to obtain and display the processed face image further comprises: processing the other face images according to the original display parameters to obtain and display the processed other face images.
Specifically, the image to be processed includes not only the target face image but also other face images. The display parameters used to process the target face image need to be adjusted, while those used to process the other face images do not. Therefore, the target face image is processed with the adjusted target display parameters, and the processed target face image is obtained and displayed; the other face images are processed with the original display parameters, and the processed other face images are obtained and displayed.
In this embodiment, different display parameters are used for the target face image and the other face images in the image to be processed: the target face image is processed with the target display parameters, and the other face images are processed with the original display parameters. A targeted beautifying effect is thus added to the target face image and to the other face images respectively, meeting the diversified image processing requirements of different users in the image to be processed.
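The differential processing described above can be sketched as follows. The parameter name `strength`, the 0.2 weakening factor, and the face dictionary layout are illustrative assumptions, not values from the patent.

```python
ORIGINAL_PARAMS = {"strength": 1.0}

def adjust_params(params, attrs):
    # Hypothetical rule: weaken the makeup effect for male faces.
    factor = 0.2 if attrs.get("gender") == "male" else 1.0
    return {"strength": params["strength"] * factor}

def process_faces(faces):
    """Target faces get adjusted (target) parameters; others keep originals."""
    processed = []
    for face in faces:
        if face["is_target"]:
            params = adjust_params(ORIGINAL_PARAMS, face["attrs"])
        else:
            params = ORIGINAL_PARAMS
        processed.append({"id": face["id"], "params": params})
    return processed
```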
In an exemplary embodiment, as shown in fig. 3, the method further comprises the steps of:
in step S310, in the case that the target face exists in the image to be processed, the attribute scan identifier of the target face image is displayed.
In step S320, in the process of displaying the attribute scan identifier, the face attribute information of the target face image is recognized.
The attribute scanning identifier is used for informing the user that the underlying recognition model is recognizing the face image, and it can be a static identifier such as a text prompt, a picture prompt, or a bubble prompt, or a dynamic identifier such as an animation file. In this embodiment, the relative position between the attribute scanning identifier and the face image is not limited; for example, the attribute scanning identifier may be located at any position around the face image, such as its left side, right side, upper side, or lower side. Specifically, face recognition is performed on the image to be processed; when a target face exists in the image to be processed, the target face image needs to be further recognized, which takes a period of time, and during this time the attribute scanning identifier may be displayed in the user interface to inform the user that the target face image is being recognized. Further, while the attribute scanning identifier is displayed, the underlying recognition model recognizes the target face image to obtain its face attribute information, such as gender type, skin type, color value, makeup, and skin oiliness.
In this embodiment, the attribute scanning identifier of the target face image in the image to be processed is displayed in the user interface to inform the user that the underlying model is recognizing the face attribute information of the target face image, which provides a visual-level reminder and improves the user experience.
In an exemplary embodiment, as shown in fig. 4a, in step S310, displaying the attribute scan identifier of the target face image may specifically be implemented by the following steps:
in step S410, at the position of the target face image, a transparent window is displayed.
In step S420, the scanning animation file is displayed in the transparent window.
The transparent window can be displayed in the user interface in the form of a pop-up window, a floating window, or the like. The shape of the transparent window may be circular, polygonal, triangular, annular, etc., and this embodiment does not limit the relative position of the transparent window and the face image; for example, as shown in fig. 4b, the transparent window may surround the face image. The scanning animation file represents the process of recognizing the target face image to obtain the face attribute information and the process of processing the target face image; for example, it can present a visual effect of scanning the face from top to bottom. Specifically, face recognition is performed on the image to be processed, and when a face is recognized in the image to be processed, the positions of the face images are obtained, so that the position of the target face image can be determined. The target face image is then further recognized, which takes a period of time; during this time a transparent window is displayed at the position of the target face image, and the scanning animation file is displayed in the transparent window to inform the user that the target face image is being recognized. Further, while the scanning animation file is displayed, the underlying recognition model recognizes the target face image to obtain its face attribute information, such as gender type, skin type, color value, makeup, and skin oiliness, and the original display parameters of the target material are adjusted according to the face attribute information to obtain the target display parameters, so that the target face image is processed with the target display parameters of the target material.
In this embodiment, the transparent window is displayed at the position of the target face image, and the scanning animation file is displayed in the transparent window. The vivid representation of the scanning animation file shows that the target face image is being recognized and processed, providing a visual-level reminder and improving the user experience.
In an exemplary embodiment, after displaying the transparent window, the method further comprises: and displaying the first prompt message in the process of displaying the scanned animation file.
The first prompt information is used for reminding the user that image processing is being performed on the target face image, and it may be presented in the user interface in a text form or a bubble form. Specifically, when the target face exists in the image to be processed, a transparent window is displayed at the position of the target face image, and the scanning animation file is displayed in the transparent window. Because the scanning animation file represents the process of performing image processing on the target face image, the first prompt information is displayed while the scanning animation file is displayed, to inform the user that the target face image is being processed.
In an exemplary embodiment, after displaying the first prompt information, the method further includes: after the scanning animation file has finished displaying, displaying second prompt information. The second prompt information is used for reminding the user that the image processing of the target face image is finished. Furthermore, after the image processing of the target face image is completed, the transparent window is no longer displayed in the user interface, and the user interface displays the processed target face image.
Illustratively, the face attribute information is the gender type, and the face image is adapted with a suitable gender-specific makeup according to the gender type. Taking a male face image in the image to be processed as the target face image, as shown in fig. 4c, the male face image is being recognized and processed, and the user is informed through the first prompt message "adapting male makeup" that a male makeup effect is being applied to the male face image in the image to be processed. As shown in fig. 4d, after the image processing of the male face image is completed, that is, after a suitable male makeup effect has been added to it, the user is reminded through the second prompt message "male makeup adaptation has taken effect" that the image processing of the face image is finished.
In an exemplary embodiment, after displaying the transparent window, the method further comprises: locking the operation controls in the display interface of the image to be processed while the scanning animation file is displayed.
Specifically, as shown in fig. 4c, a plurality of operation controls are displayed in the display interface of the image to be processed. While the scanning animation file is displayed, the operation controls in the display interface are locked to ensure that the image processing is executed without interference, so that the terminal does not respond to operations on those controls. In this embodiment, locking the operation controls in the display interface of the image to be processed ensures accurate processing of the target face image, thereby improving the beautifying effect of the face image.
In an exemplary embodiment, a switch control is arranged in the display interface of the image to be processed, and the switch control is used for controlling whether the display parameter adjustment function is enabled. As shown in fig. 5, a switch control for controlling "makeup fitting" is provided in the display interface of the image to be processed. If the switch control is turned on, the display parameter adjustment function is enabled, and makeup fitting is performed for the different face images; if the switch control is turned off, the display parameter adjustment function is not enabled, and each face image is processed with the original display parameters. In step S310, displaying the attribute scanning identifier of the target face image may specifically be implemented as follows: in response to a turn-on instruction of the switch control, displaying the attribute scanning identifier of the target face image.
Specifically, a switch control is arranged in the display interface of the image to be processed, through which the user can control whether the display parameter adjustment function is enabled; in response to a turn-on instruction of the switch control, the display interface of the image to be processed displays the attribute scanning identifier of the target face image. Further, to save the time taken to download the model, a switch-triggered recognition model may be pre-configured at the bottom layer of the terminal, and to save start-up time, the switch may be set to the on state by default. The switch-triggered recognition model detects the state of the switch control; if the switch control is detected to be in the on state, the display parameter adjustment function needs to be enabled to perform makeup fitting for the different face images. Therefore, the face recognition model is started to recognize the image to be processed, and when a face exists in the image to be processed, the attribute scanning identifier of the face image is displayed.
In the embodiment, the switch control is arranged in the display interface of the image to be processed, so that a user can control whether to start the display parameter adjusting function or not through the switch control, the flexibility of the display parameter adjusting function is improved, and different requirements of different users are met.
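The switch behavior described above can be sketched as follows. The class and method names are assumptions for illustration; the point is that the switch defaults to on, and the attribute scan identifier is shown only when the switch is on and a face was detected.

```python
class MakeupAdaptationSwitch:
    """Hypothetical model of the 'makeup fitting' switch control."""

    def __init__(self, default_on=True):
        # Defaulting to on saves the user an extra action before the
        # pre-configured recognition model can be triggered.
        self.on = default_on

    def toggle(self):
        self.on = not self.on

    def should_show_scan_identifier(self, face_detected):
        # The attribute scanning identifier is displayed only when the
        # function is enabled and a face exists in the image.
        return self.on and face_detected
```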
In an exemplary embodiment, after displaying the face image processed with the target display parameters of the target material, the method further includes: in response to a turn-off instruction of the switch control, processing the face image according to the original display parameters, and obtaining and displaying the processed face image.
The face image processed with the target display parameters of the target material can be recorded as the target image; if the beautifying effect of the target image does not meet the user's expectation, the user can trigger a turn-off operation of the switch control, that is, abandon the adjustment of the original display parameters. Specifically, after the target image is displayed in the user interface, the user may turn off the switch control; in response to the turn-off instruction, the face image is processed according to the original display parameters, and the processed face image is obtained and displayed. Further, the state of the switch control can be detected through the switch-triggered recognition model; if the switch control is detected to be in the off state, the user has abandoned the face image processed with the target display parameters, and the face image processed with the original display parameters of the target material is displayed.
In this embodiment, after the face image processed with the target display parameters of the target material is displayed, whether to display that image or the face image processed with the original display parameters is determined by detecting the state of the switch control, which further improves the flexibility of the display parameter adjustment function and provides face images with different beautifying effects for the user to choose from.
In an exemplary embodiment, after detecting the state of the switch control, the method further comprises: sending the state information of the switch control to the server to instruct the server to count the usage rate of the switch control.
The state information of the switch control includes an on state and an off state. Specifically, in order to fully understand the user demand for the display parameter adjustment function, the terminal may send the state information of the switch control to the server, and the server performs statistics on the state information to obtain the usage rate of the switch control. Further, the switch control may instead be off by default, so that the rate at which users turn it on can be obtained by collecting and counting the state information of the switch control.
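A minimal server-side sketch of the usage-rate statistic: the terminal reports the switch state, and the server computes the fraction of reports in which the switch was on. The `"on"`/`"off"` report format is an assumption.

```python
def switch_usage_rate(state_reports):
    """Fraction of reported states in which the switch control was on."""
    if not state_reports:
        return 0.0
    return sum(1 for s in state_reports if s == "on") / len(state_reports)
```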
In an exemplary embodiment, the method further comprises: and displaying the third prompt information in a display interface for displaying the image to be processed.
As shown in fig. 6, the third prompt information is used to remind the user of the position of the switch control, which controls whether the display parameter adjustment function is enabled. If the switch control is turned on, the display parameter adjustment function is enabled to perform makeup fitting for the different face images; if it is turned off, the function is not enabled, and each face image is processed with the original display parameters. Specifically, to reduce the operation difficulty of the application and save the user's operation cost, the third prompt information is displayed in the display interface that displays the image to be processed, to indicate the position of the switch control, inform the user that the application provides the display parameter adjustment function, and let the user decide whether to enable it according to actual requirements. For example, the third prompt information can be presented in the user interface in a text form or a bubble form. Illustratively, with male makeup weakening as a feature of the application, after a new version of the application is installed, a bubble pops up directly in the user interface, reminding the user that a male makeup adaptation function is provided and informing the user of the position of its switch control.
In an exemplary embodiment, before displaying the image to be processed and the candidate material, the method further comprises: and displaying the first example graph, the second example graph, prompt information of the first example graph and prompt information of the second example graph.
When the user first learns of the display parameter adjustment function as a new function, the user interface may display some explanatory information. Specifically, the first example graph, the second example graph, the prompt information of the first example graph, and the prompt information of the second example graph are arranged in the user interface. The first example graph is used for displaying an example face image processed with the original display parameters, and the second example graph is used for displaying an example face image processed with the target display parameters. The first and second example graphs may appear in the user interface at the same time, for example arranged left-right or top-bottom, or they may appear in chronological order. The prompt information of the first example graph explains its state, informing the user that the first example graph is an example face image processed with the original display parameters; the prompt information of the second example graph explains its state, informing the user that the second example graph is an example face image processed with the target display parameters.
In this embodiment, before displaying the image to be processed and the candidate material, the user is informed of displaying the parameter adjustment function by displaying the first example diagram, the second example diagram, the prompt information of the first example diagram and the prompt information of the second example diagram, so that the understanding cost of the user is reduced, and the usability of the parameter adjustment function is improved.
In an exemplary embodiment, the method further comprises: and displaying the contrast effect of the first image and the second image.
Specifically, a first image and a second image are displayed in the user interface. The first image is used for displaying the face image processed by adopting the original display parameters, and the second image is used for displaying the face image processed by adopting the target display parameters. The first image and the second image may appear in the user interface at the same time, for example, the first image and the second image may be arranged left and right or up and down. The first image and the second image may also appear in the user interface in a sequential order, and the contrast effect of the first image and the second image is displayed in these ways.
In this embodiment, by displaying the contrast effect of the first image and the second image, understanding of the display parameter adjustment function by the user is accelerated, the understanding cost of the user is reduced, and the utilization rate of the display parameter adjustment function is improved.
In an exemplary embodiment, adjusting the original display parameters of the target material according to the face attribute information of the face image in the image to be processed includes: adjusting the original display parameters according to at least one of the gender type, the makeup state information, the color value information, and the skin information.
Specifically, the gender type of the face image is recognized through a gender recognition model, the gender type including male and female. The makeup state information of the face image is recognized through a makeup state recognition model, the makeup state information including heavy makeup, light makeup, and the like. The color value information of the face image is recognized through a color value recognition model, the color value information including categories such as high color value, ordinary color value, and low color value. The skin information of the face image is recognized through a skin recognition model, the skin information including roughness, smoothness, and the like. The original display parameters of the target material are adjusted using at least one of the gender type, the makeup state information, the color value information, and the skin information, to obtain target display parameters matched with the face image.
In this embodiment, at least one of the gender type, the makeup state information, the color value information, and the skin information of the face image in the image to be processed is recognized, and the original display parameters of the target material are adjusted according to that information to obtain the target display parameters.
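One plausible way to combine several recognized attributes when adjusting the original display parameter is to give each attribute value a factor and multiply the factors together. This is an illustrative assumption; all factor values below are made up, not from the patent.

```python
# Hypothetical per-attribute adjustment factors (assumed values).
FACTORS = {
    "gender": {"male": 0.2, "female": 1.0},
    "makeup": {"heavy": 0.5, "light": 1.0},
    "skin": {"rough": 1.2, "smooth": 1.0},
}

def adjust_by_attributes(original_param, attrs):
    """Multiply the original display parameter by one factor per
    recognized attribute; unknown attributes contribute no change."""
    factor = 1.0
    for name, value in attrs.items():
        factor *= FACTORS.get(name, {}).get(value, 1.0)
    return original_param * factor
```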
In an exemplary embodiment, as shown in fig. 7, adjusting the original display parameters of the target material according to the face attribute information of the face image includes the following steps:
in step S710, a corresponding relationship between the pre-configured face attribute information and the display adjustment coefficient is obtained according to the identifier of the target material.
In step S720, a display adjustment coefficient corresponding to the face attribute information of the face image is determined according to the correspondence.
In step S730, the original display parameters are adjusted according to the display adjustment coefficients to obtain target display parameters.
After the terminal acquires the image to be processed, the image to be processed and the candidate materials are displayed; each candidate material has an identifier, and the target material is determined from the candidate materials in response to a material selection operation, so that the image to be processed and the identifier of the target material can be delivered to the bottom layer of the terminal. The terminal's bottom-layer SDK (Software Development Kit) is pre-configured with, for each material, a correspondence between face attribute information and display adjustment coefficients. For example, a pre-configured correspondence may specify that when the gender type is male, the color value information is high, the skin information is smooth, and the makeup state information is light makeup, the display adjustment coefficient is 0.5. If the recognized face attribute information is: gender type male, color value information high, skin information smooth, and makeup state information light makeup, then the display adjustment coefficient is 0.5.
Specifically, the correspondence between the pre-configured face attribute information and the display adjustment coefficients is obtained according to the identifier of the target material. Further, the bottom-layer SDK is configured with logic rules for image processing; the display adjustment coefficient corresponding to the face attribute information of the face image is then determined according to the pre-configured correspondence. The original display parameters of the target material are adjusted through the display adjustment coefficient to obtain the target display parameters of the target material. For example, after the display adjustment coefficient is determined by the configured logic rules, it is multiplied by the original display parameter, and the product is used as the target display parameter. Illustratively, for a certain target material, the bottom-layer SDK is configured with a display adjustment coefficient of 0.2 for male faces, and the original display parameter is a strength of 1. When the gender type of the face image is recognized as male, the target display parameter of the target material is then a strength of 0.2, so the makeup strength for the male face image is weakened through the configuration of the display adjustment coefficient. It can be seen that the adjustment of the original display parameters of the target material depends on the logic rules configured in the bottom-layer SDK.
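The lookup-and-multiply logic of steps S710 to S730 can be sketched as follows. The table contents, key shapes, and material identifier are illustrative assumptions standing in for the SDK's pre-configured correspondence.

```python
# Hypothetical pre-configured correspondence in the bottom-layer SDK:
# material identifier -> (attribute tuple -> display adjustment coefficient).
COEFFICIENT_TABLE = {
    "material_01": {
        ("male", "light"): 0.2,
        ("female", "heavy"): 0.5,
    },
}

def target_display_param(material_id, gender, makeup, original_param):
    """S710: fetch the table for this material; S720: pick the coefficient
    for the recognized attributes; S730: multiply into the original param."""
    table = COEFFICIENT_TABLE.get(material_id, {})
    coefficient = table.get((gender, makeup), 1.0)  # default: no change
    return coefficient * original_param
```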
Further, the display adjustment coefficient may correspond to a value range; before adjusting the original display parameters according to the display adjustment coefficient to obtain the target display parameters, the method further includes: acquiring a display parameter adjusting instruction, wherein the display parameter adjusting instruction carries a target coefficient; adjusting the original display parameters according to the display adjustment coefficient to obtain target display parameters, including: and adjusting the original display parameters according to the target coefficient and the value range of the display adjustment coefficient to obtain target display parameters.
Specifically, the value range includes an upper limit and a lower limit. After a display parameter adjustment instruction triggered by the user is received, the original display parameters may be adjusted according to the target coefficient set by the user, for example, by combining the user-set target coefficient with the value range of the display adjustment coefficient: if the target coefficient is within the value range, it is used for the adjustment; if the target coefficient is smaller than the lower limit, the lower limit is used; and if the target coefficient is larger than the upper limit, the upper limit is used.
In the embodiment, different materials have different display adjustment coefficients by pre-configuring the corresponding relation between the face attribute information and the display adjustment coefficients, so that the personalized and adaptive makeup effect is added to different face images.
In an exemplary embodiment, as shown in fig. 8, before adjusting the original display parameters of the target material according to the face attribute information to obtain the target display parameters, the method further includes:
in step S810, a display parameter adjustment instruction is obtained, where the display parameter adjustment instruction carries a target coefficient.
Adjusting original display parameters of the target material according to the face attribute information to obtain target display parameters, wherein the target display parameters comprise:
in step S820, the original display parameters are adjusted according to the face attribute information and the target coefficients to obtain target display parameters.
A display parameter adjustment control is provided in the display interface of the image to be processed; it can be brought up through operations such as clicking, and through it the user triggers a display parameter adjustment instruction. The display parameter adjustment instruction carries a target coefficient set by the user, which can be used to adjust the original display parameters. Specifically, in order to further meet the user's personalized image processing requirements, the display parameter adjustment instruction is obtained, and the original display parameters are adjusted by combining the target coefficient carried by the instruction with the recognized face attribute information, to obtain the target display parameters.
In the embodiment, the original display parameters are adjusted according to the target coefficients carried by the display parameter adjusting instruction and the face attribute information to obtain the target display parameters, so that the user requirements and the actual situation of the face image are both considered, the processed image not only can meet the user requirements, but also can be matched with the face attribute information of the face image, and meanwhile, the image quality and the user experience are improved.
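One plausible combination rule for step S820, stated as an assumption rather than the patent's fixed formula: multiply both the attribute-derived coefficient and the user-set target coefficient into the original display parameter, so that the result reflects the face attribute information and the user's preference at the same time.

```python
def combined_target_param(original_param, attribute_coeff, user_coeff):
    """Combine the coefficient derived from the face attribute information
    with the user-set target coefficient (hypothetical rule: multiply)."""
    return original_param * attribute_coeff * user_coeff
```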
Fig. 9 is a flowchart illustrating an image processing method according to an exemplary embodiment. As shown in fig. 9, the image processing method is used in the terminal 110 and includes the following steps:
in step S902, an image to be processed and candidate materials are acquired.
The image to be processed comprises a plurality of face images. The candidate materials are used for processing the face image. And a switch control is arranged in the display interface of the image to be processed and is used for controlling whether the display parameter adjusting function is started or not.
In step S904, target materials are determined among the candidate materials.
In step S906, in the display interface that displays the image to be processed, the prompt information is displayed.
The prompt information is used for reminding the position of the switch control, and the switch control is used for controlling whether the display parameter adjusting function is started or not.
In step S908, in response to the on instruction of the switch control, and in a case that the target face is identified to exist in the image to be processed, the attribute scanning identifier of the target face image is displayed.
In step S910, face attribute information of the face image is recognized.
Wherein the face attribute information includes at least one of gender type, makeup status information, color value information, and skin type information. Specifically, a transparent window is displayed at the position of the target human face image; and displaying a scanning animation file in the transparent window, wherein the scanning animation file is used for representing the process of identifying the target face image to obtain the face attribute information and the process of processing the target face image.
In step S912, a target face image is determined based on the face attribute information of each face image.
The target face image is a face image whose display parameters need to be adjusted; the image to be processed also comprises other face images in addition to the target face image.
In step S914, the original display parameters of the target material are adjusted according to the face attribute information of the target face image, so as to obtain target display parameters.
Specifically, the original display parameters are adjusted according to at least one of the gender type, the makeup state information, the color value information, and the skin type information. Alternatively, a correspondence between preconfigured face attribute information and display adjustment coefficients is acquired according to the identifier of the target material; a display adjustment coefficient corresponding to the face attribute information of the face image is determined according to the correspondence; and the original display parameters are adjusted according to the display adjustment coefficient to obtain the target display parameters. Alternatively, the original display parameters are adjusted according to the face attribute information and a target coefficient carried by a display parameter adjustment instruction to obtain the target display parameters.
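The lookup-based variant, a preconfigured correspondence table keyed by the material identifier, might look like the following sketch. The table contents, the material identifier `blush_01`, and the coefficient values are invented for illustration.

```python
# Hypothetical sketch of the lookup-based variant: each material identifier
# maps to a preconfigured table from face attribute values to display
# adjustment coefficients; the matching coefficient scales the original
# display parameters into the target display parameters.

COEFF_TABLES = {                      # preconfigured per material identifier
    "blush_01": {("gender", "male"): 0.5,
                 ("makeup", "heavy"): 0.7},
}

def target_display_params(material_id, original_params, face_attrs):
    table = COEFF_TABLES.get(material_id, {})
    coeff = 1.0
    for key, value in face_attrs.items():
        coeff *= table.get((key, value), 1.0)  # unmatched attributes leave 1.0
    return {name: p * coeff for name, p in original_params.items()}

out = target_display_params("blush_01", {"opacity": 0.8},
                            {"gender": "male", "makeup": "light"})
```

Here only the gender entry matches, so the blush opacity is halved while the unmatched makeup attribute leaves the coefficient untouched.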
In step S916, image processing is performed on the target face image according to the target display parameter, so as to obtain and display the processed target face image.
In step S918, the other face images are processed according to the original display parameters, so as to obtain and display the processed other face images.
In step S920, a contrast effect of the first image and the second image is displayed.
The first image is used for displaying the face image processed by adopting the original display parameters, and the second image is used for displaying the face image processed by adopting the target display parameters.
In step S922, in response to the closing instruction of the switch control, the target face image is processed according to the original display parameters, and the processed target face image is obtained and displayed.
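The per-face branching of steps S912-S918 above (process the target face image with the target display parameters, and the other face images with the original parameters) can be sketched as below; `needs_adjustment`, `adjust`, and `render` are hypothetical stand-ins for the embodiment's attribute check, parameter adjustment, and rendering.

```python
# Hypothetical sketch of steps S912-S918: the target face image is rendered
# with adjusted target parameters, every other face image with the
# material's original parameters.

def process_faces(faces, original_params, needs_adjustment, adjust, render):
    results = []
    for face in faces:
        if needs_adjustment(face["attrs"]):   # this is the target face image
            params = adjust(original_params, face["attrs"])
        else:                                 # other face images keep originals
            params = original_params
        results.append(render(face["image"], params))
    return results

faces = [{"image": "A", "attrs": {"makeup": "none"}},
         {"image": "B", "attrs": {"makeup": "heavy"}}]
out = process_faces(
    faces, {"opacity": 0.8},
    needs_adjustment=lambda attrs: attrs["makeup"] == "heavy",
    adjust=lambda p, attrs: {k: v * 0.5 for k, v in p.items()},
    render=lambda img, p: (img, p["opacity"]),
)
```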
In an exemplary embodiment, the present application provides an image processing method for use in a terminal 110, comprising the steps of:
In step S1002, third prompt information is displayed in the display interface that displays the image to be processed.
The third prompt information is used for indicating the position of a switch control, and the switch control is used for controlling whether the display parameter adjustment function is enabled.
In step S1004, the first example graph, the second example graph, the prompt information of the first example graph, and the prompt information of the second example graph are displayed.
The first example graph is used for displaying an example face image after image processing with the original display parameters, and the second example graph is used for displaying the example face image after image processing with the target display parameters. The prompt information of the first example graph is used for explaining the state of the first example graph, and the prompt information of the second example graph is used for explaining the state of the second example graph.
In step S1006, the image to be processed and the candidate material are displayed.
The display interface of the image to be processed is provided with a switch control, and the switch control is used for controlling whether to start a display parameter adjusting function.
In step S1008, in response to a selection operation on the material, a target material is determined from the candidate materials.
In step S1010, the state of the switch control is detected.
In step S1012, if it is detected that the switch control is in the on state and a human face exists in the image to be processed, a transparent window is displayed at the position of the human face image.
In step S1014, the scanning animation file is displayed in the transparent window.
The scanning animation file is used for representing the process of recognizing the face image to obtain the face attribute information and the process of processing the face image. The face attribute information includes at least one of gender type, makeup state information, color value information, and skin type information. Specifically, the gender type of the face image is identified through a gender identification model; the makeup state information is identified through a face state identification model; the color value information is identified through a color value identification model; and the skin information is identified through a skin identification model. A correspondence between preconfigured face attribute information and display adjustment coefficients is then acquired according to the identifier of the target material; a display adjustment coefficient corresponding to the face attribute information of the face image is determined according to the correspondence; and the original display parameters of the target material are adjusted according to the display adjustment coefficient to obtain the target display parameters of the target material.
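Gathering the four attributes with one dedicated recognition model each, as this step describes, could be organized as in the following sketch. The class name and the stand-in lambda models are assumptions; real models would be trained classifiers for gender, makeup state, color value, and skin.

```python
# Hypothetical sketch: one dedicated recognition model per face attribute,
# dispatched over the same face image to assemble the face attribute
# information dictionary.

class FaceAttributeRecognizer:
    def __init__(self, gender_model, makeup_model, color_model, skin_model):
        self.models = {"gender": gender_model, "makeup": makeup_model,
                       "color_value": color_model, "skin": skin_model}

    def recognize(self, face_image):
        # Each model is any callable taking the (cropped) face image.
        return {name: model(face_image) for name, model in self.models.items()}

# Stand-in models for illustration only; real ones would be trained classifiers.
rec = FaceAttributeRecognizer(lambda img: "female", lambda img: "light",
                              lambda img: 0.82, lambda img: "smooth")
attrs = rec.recognize(face_image=None)
```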
In step S1016, while the scanning animation file is displayed, the face attribute information of the face image is recognized, and first prompt information is displayed.
The first prompt information is used for indicating that image processing is being performed on the face image.
In step S1018, while the scanning animation file is displayed, the operation controls in the display interface of the image to be processed are locked.
In step S1020, after the scanning animation file has finished playing, second prompt information is displayed.
The second prompt information is used for indicating that the image processing of the face image has been completed.
In step S1022, the face image subjected to the image processing using the target display parameters of the target material is displayed.
In step S1024, if it is detected that the switch control is in the off state, the face image after image processing is performed by using the original display parameters of the target material is displayed.
In step S1026, the state information of the switch control is sent to the server to instruct the server to collect statistics on the usage rate of the switch control.
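Server-side aggregation of the reported switch states into a usage rate, as step S1026 intends, might be as simple as the following sketch (the class and method names are hypothetical):

```python
# Hypothetical sketch: the terminal reports the switch state, and the
# server aggregates reports into a usage rate for the display parameter
# adjustment function.
from collections import Counter

class UsageCounter:
    def __init__(self):
        self.counts = Counter()

    def report(self, state: str):
        """Record one reported switch state ("on" or "off")."""
        self.counts[state] += 1

    def usage_rate(self) -> float:
        """Fraction of reports in which the switch control was on."""
        total = sum(self.counts.values())
        return self.counts["on"] / total if total else 0.0

server = UsageCounter()
for state in ["on", "on", "off", "on"]:
    server.report(state)
```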
It should be understood that, although the steps in the above flowcharts are shown in an order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated otherwise herein, the steps are not strictly limited to the order shown and may be performed in other orders. Moreover, at least some of the steps in the above flowcharts may include multiple sub-steps or stages, which are not necessarily performed at the same time but may be performed at different times; these sub-steps or stages are likewise not necessarily performed in sequence, and may be performed in turn or alternately with other steps or with at least some of the sub-steps or stages of other steps.
Fig. 10 is a block diagram illustrating an image processing apparatus according to an exemplary embodiment. Referring to fig. 10, the apparatus includes an image material display module 1002, a target material determination module 1004, a display parameter adjustment module 1006, and a first image display module 1008.
An image material display module 1002 configured to perform acquiring an image to be processed including a face image and a candidate material for processing the face image;
a target material determination module 1004 configured to perform determining target materials among the candidate materials;
a display parameter adjustment module 1006, configured to perform face attribute information for identifying the face image, and adjust an original display parameter of the target material according to the face attribute information, so as to obtain a target display parameter;
and the first image display module 1008 is configured to perform processing on the face image according to the target display parameter, and obtain and display the processed face image.
In an exemplary embodiment, the number of the face images is multiple; the display parameter adjusting module is further configured to determine a target face image according to the face attribute information of each face image; the target face image is a face image of which the display parameters need to be adjusted; adjusting original display parameters of the target material according to the face attribute information of the target face image to obtain the target display parameters;
the first image display module is further configured to perform image processing on the target face image according to the target display parameters, and obtain and display the processed target face image.
In an exemplary embodiment, the image to be processed further includes other face images besides the target face image; the first image display module is further configured to perform processing on the other face images according to the original display parameters, and obtain and display the processed other face images.
In an exemplary embodiment, the image processing apparatus further includes: the scanning identification display module is configured to display the attribute scanning identification of the target face image under the condition that the target face exists in the image to be processed;
the face information recognition module is configured to recognize the face attribute information of the target face image in the process of displaying the attribute scanning identifier.
In an exemplary embodiment, the scanning identification display module is further configured to display a transparent window at a position of the target face image; and displaying a scanning animation file in the transparent window, wherein the scanning animation file is used for representing the process of identifying the target face image to obtain the face attribute information and the process of processing the target face image.
In an exemplary embodiment, a switch control is arranged in the display interface of the image to be processed, and the switch control is used for controlling whether to enable the display parameter adjustment function; the scanning identification display module is further configured to display the attribute scanning identifier of the target face image in response to an on instruction for the switch control.
In an exemplary embodiment, the image processing apparatus further includes: a second image display module configured to, in response to a close instruction for the switch control, process the face image according to the original display parameters, and obtain and display the processed face image.
In an exemplary embodiment, the image processing apparatus further includes: a prompt information display module configured to display prompt information in the display interface displaying the image to be processed; the prompt information is used for indicating the position of a switch control, and the switch control is used for controlling whether the display parameter adjustment function is enabled.
In an exemplary embodiment, the image processing apparatus further includes:
and the contrast effect display module is configured to execute the contrast effect of displaying a first image and a second image, wherein the first image is used for displaying the face image processed by adopting the original display parameters, and the second image is used for displaying the face image processed by adopting the target display parameters.
In an exemplary embodiment, the display parameter adjusting module is further configured to perform adjusting the original display parameters according to at least one of the gender type, the makeup status information, the color value information, and the skin condition information.
In an exemplary embodiment, the display parameter adjustment module is further configured to execute obtaining, according to the identifier of the target material, a correspondence between pre-configured face attribute information and a display adjustment coefficient; determining a display adjustment coefficient corresponding to the face attribute information of the face image according to the corresponding relation; and adjusting the original display parameters according to the display adjustment coefficient to obtain the target display parameters.
In an exemplary embodiment, the image processing apparatus further includes an adjustment instruction acquisition module configured to acquire a display parameter adjustment instruction, where the display parameter adjustment instruction carries a target coefficient;
the display parameter adjusting module is further configured to adjust the original display parameters according to the face attribute information and the target coefficients to obtain the target display parameters.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
FIG. 11 is a block diagram illustrating an electronic device 1100 for image processing in accordance with an exemplary embodiment. For example, the electronic device 1100 can be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a gaming console, a tablet device, a medical device, a fitness device, a personal digital assistant, and so forth.
Referring to fig. 11, electronic device 1100 may include one or more of the following components: a processing component 1102, a memory 1104, a power component 1106, a multimedia component 1108, an audio component 1110, an input/output (I/O) interface 1112, a sensor component 1114, and a communications component 1116.
The processing component 1102 generally controls the overall operation of the electronic device 1100, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing components 1102 may include one or more processors 1120 to execute instructions to perform all or a portion of the steps of the methods described above. Further, the processing component 1102 may include one or more modules that facilitate interaction between the processing component 1102 and other components. For example, the processing component 1102 may include a multimedia module to facilitate interaction between the multimedia component 1108 and the processing component 1102.
The memory 1104 is configured to store various types of data to support operations at the electronic device 1100. Examples of such data include instructions for any application or method operating on the electronic device 1100, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 1104 may be implemented by any type or combination of volatile or non-volatile storage devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
The power supply component 1106 provides power to the various components of the electronic device 1100. The power components 1106 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the electronic device 1100.
The multimedia component 1108 includes a screen that provides an output interface between the electronic device 1100 and a user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 1108 includes a front facing camera and/or a rear facing camera. The front camera and/or the rear camera may receive external multimedia data when the electronic device 1100 is in an operating mode, such as a shooting mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have a focal length and optical zoom capability.
The audio component 1110 is configured to output and/or input audio signals. For example, the audio component 1110 includes a Microphone (MIC) configured to receive external audio signals when the electronic device 1100 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may further be stored in the memory 1104 or transmitted via the communication component 1116. In some embodiments, the audio assembly 1110 further includes a speaker for outputting audio signals.
The I/O interface 1112 provides an interface between the processing component 1102 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor assembly 1114 includes one or more sensors for providing state assessments of various aspects of the electronic device 1100. For example, the sensor assembly 1114 may detect an open/closed state of the electronic device 1100 and the relative positioning of components, such as the display and keypad of the electronic device 1100. The sensor assembly 1114 may also detect a change in the position of the electronic device 1100 or one of its components, the presence or absence of user contact with the electronic device 1100, the orientation or acceleration/deceleration of the electronic device 1100, and a change in its temperature. The sensor assembly 1114 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact, and may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 1114 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 1116 is configured to facilitate wired or wireless communication between the electronic device 1100 and other devices. The electronic device 1100 may access a wireless network based on a communication standard, such as WiFi, a carrier network (such as 2G, 3G, 4G, or 5G), or a combination thereof. In an exemplary embodiment, the communication component 1116 receives broadcast signals or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 1116 also includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the electronic device 1100 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the above-described methods.
In an exemplary embodiment, a non-transitory computer-readable storage medium comprising instructions, such as the memory 1104 comprising instructions, executable by the processor 1120 of the electronic device 1100 to perform the above-described method is also provided. For example, the non-transitory computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
In an exemplary embodiment, there is also provided a computer program product comprising a computer program which, when executed by a processor, implements the image processing method in any of the above embodiments.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This disclosure is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (10)

1. An image processing method, comprising:
acquiring an image to be processed and a candidate material, wherein the image to be processed comprises a face image, and the candidate material is used for processing the face image;
determining target materials in the candidate materials;
identifying face attribute information of the face image, and adjusting original display parameters of the target material according to the face attribute information to obtain target display parameters;
and processing the face image according to the target display parameters to obtain and display the processed face image.
2. The image processing method according to claim 1, wherein the number of the face images is plural; the adjusting the original display parameters of the target material according to the face attribute information to obtain target display parameters comprises:
determining a target face image according to the face attribute information of each face image; the target face image is a face image of which the display parameters need to be adjusted;
adjusting original display parameters of the target material according to the face attribute information of the target face image to obtain the target display parameters;
the processing the face image according to the target display parameters to obtain and display the processed face image includes:
and carrying out image processing on the target face image according to the target display parameters to obtain and display the processed target face image.
3. The image processing method according to claim 2, wherein the image to be processed further includes a face image other than the target face image; the processing the face image according to the target display parameters to obtain and display the processed face image, further comprising:
and processing the other face images according to the original display parameters to obtain and display the processed other face images.
4. The image processing method according to claim 2, characterized in that the method further comprises:
displaying an attribute scanning identifier of the target face image under the condition that the target face exists in the image to be processed;
and in the process of displaying the attribute scanning identifier, identifying the face attribute information of the target face image.
5. The image processing method according to claim 4, wherein the displaying the attribute scan identifier of the target face image comprises:
displaying a transparent window at the position of the target human face image;
and displaying a scanning animation file in the transparent window, wherein the scanning animation file is used for representing the process of identifying the target face image to obtain the face attribute information and the process of processing the target face image.
6. The image processing method according to claim 4 or 5, wherein a switch control is arranged in the display interface of the image to be processed, and the switch control is used for controlling whether to start a display parameter adjustment function; the attribute scanning identifier for displaying the target face image comprises:
and responding to the opening instruction of the switch control, and displaying the attribute scanning identification of the target face image.
7. An image processing apparatus characterized by comprising:
the image material display module is configured to execute acquisition of an image to be processed and a candidate material, wherein the image to be processed comprises a face image, and the candidate material is used for processing the face image;
a target material determination module configured to perform determining target materials among the candidate materials;
the display parameter adjusting module is configured to identify face attribute information of the face image, and adjust original display parameters of the target material according to the face attribute information to obtain target display parameters;
and the first image display module is configured to process the face image according to the target display parameters, and obtain and display the processed face image.
8. An electronic device, comprising:
a processor;
a memory for storing the processor-executable instructions;
wherein the processor is configured to execute the instructions to implement the image processing method of any one of claims 1 to 6.
9. A computer-readable storage medium in which instructions, when executed by a processor of an electronic device, enable the electronic device to perform the image processing method of any one of claims 1 to 6.
10. A computer program product comprising a computer program, characterized in that the computer program realizes the image processing method of any one of claims 1 to 6 when executed by a processor.
CN202011565300.6A 2020-12-25 2020-12-25 Image processing method, image processing apparatus, electronic device, storage medium, and program product Pending CN112669233A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011565300.6A CN112669233A (en) 2020-12-25 2020-12-25 Image processing method, image processing apparatus, electronic device, storage medium, and program product

Publications (1)

Publication Number Publication Date
CN112669233A true CN112669233A (en) 2021-04-16

Family

ID=75409382

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011565300.6A Pending CN112669233A (en) 2020-12-25 2020-12-25 Image processing method, image processing apparatus, electronic device, storage medium, and program product

Country Status (1)

Country Link
CN (1) CN112669233A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113344837A (en) * 2021-06-28 2021-09-03 展讯通信(上海)有限公司 Face image processing method and device, computer readable storage medium and terminal
WO2023142915A1 (en) * 2022-01-29 2023-08-03 京东方科技集团股份有限公司 Image processing method, apparatus and device, and storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107832784A (en) * 2017-10-27 2018-03-23 维沃移动通信有限公司 A kind of method of image beautification and a kind of mobile terminal
CN108765264A (en) * 2018-05-21 2018-11-06 深圳市梦网科技发展有限公司 Image U.S. face method, apparatus, equipment and storage medium
CN111275650A (en) * 2020-02-25 2020-06-12 北京字节跳动网络技术有限公司 Beautifying processing method and device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Han Chengwei (韩程伟): "Mobile Phone Photography Guide" (《手机摄影攻略》), 30 June 2018, Zhejiang Photography Press (浙江摄影出版社) *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination