CN109242767B - Method for obtaining beauty parameters and terminal equipment - Google Patents
Classifications
- G06T3/04 — Context-preserving transformations, e.g. by using an importance map (G06T3/00 — Geometric image transformations in the plane of the image; G06T — Image data processing or generation, in general; G06 — Computing; calculating or counting; G — Physics)
- G06T2207/10004 — Still image; photographic image (G06T2207/10 — Image acquisition modality; G06T2207/00 — Indexing scheme for image analysis or image enhancement)
Abstract
The embodiments of the invention disclose a method for obtaining beauty parameters and a terminal device, relating to the field of communication technology and aiming to solve the prior-art problems that adjusting beauty parameters is cumbersome and time-consuming for the user. The method comprises: acquiring N pieces of expression information, where the N pieces of expression information respectively indicate the user's expressions when browsing N first images; determining M second images from the N first images according to the N pieces of expression information; and obtaining target beauty parameters according to the beauty parameters of at least one second image, where M and N are positive integers. The scheme applies in particular to scenarios in which beauty parameters are to be obtained.
Description
Technical Field
The embodiments of the invention relate to the field of communication technology, and in particular to a method for obtaining beauty parameters and a terminal device.
Background
As users' expectations for image quality rise and terminal technology continues to develop, photographing modes have become increasingly diverse; among them, the beauty (beautification) mode is the most frequently used.
At present, beauty modes in the prior art provide some adjustable beauty parameters so that users can tune them according to personal preference and obtain a satisfactory beauty effect.
However, a user may need to adjust the beauty parameters many times before obtaining a satisfactory effect, so the adjustment operation is cumbersome and time-consuming.
Disclosure of Invention
The embodiments of the invention provide a method for obtaining beauty parameters and a terminal device, so as to solve the prior-art problems that the user's operation of adjusting beauty parameters is cumbersome and time-consuming.
To solve the above technical problems, the invention is realized as follows:
in a first aspect, an embodiment of the present invention provides a method for obtaining a beauty parameter, where the method includes:
acquiring N pieces of expression information, wherein the N pieces of expression information are respectively used for indicating the expressions of a user when browsing N first images;
determining M second images from the N first images according to the N expression information;
acquiring target beauty parameters according to the beauty parameters of at least one second image;
wherein M, N is a positive integer.
In a second aspect, an embodiment of the present invention provides a terminal device, including: an acquisition module and a determination module;
the acquisition module is used for acquiring N pieces of expression information, wherein the N pieces of expression information are respectively used for indicating the expression of a user when browsing N first images;
The determining module is used for determining M second images from the N first images according to the N expression information acquired by the acquiring module;
the acquisition module is further used for acquiring target beauty parameters according to the beauty parameters of the at least one second image determined by the determination module;
wherein M, N is a positive integer.
In a third aspect, an embodiment of the present invention provides a terminal device, including a processor, a memory, and a computer program stored on the memory and executable on the processor, where the computer program, when executed by the processor, implements the steps of the method for obtaining beauty parameters of the first aspect.
In a fourth aspect, embodiments of the present invention provide a computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the method of obtaining a beauty parameter as in the first aspect.
In the embodiments of the invention, the terminal device acquires N pieces of expression information, each indicating the user's expression when browsing one of N first images; determines M second images from the N first images according to the N pieces of expression information; and obtains target beauty parameters according to the beauty parameters of at least one of the M second images, where M and N are positive integers. Because each second image is an image the user is satisfied with, the beauty parameters obtained from at least one second image are beauty parameters the user is satisfied with. The user can therefore obtain satisfactory target beauty parameters without repeatedly adjusting the beauty parameters on the photographing interface, which solves the prior-art problems that adjusting the beauty parameters is cumbersome and time-consuming.
Drawings
Fig. 1 is a schematic architecture diagram of an android operating system according to an embodiment of the present invention;
fig. 2 is a flowchart of a method for obtaining a beauty parameter according to an embodiment of the present invention;
FIG. 3 is a second flowchart of a method for obtaining a beauty parameter according to an embodiment of the present invention;
FIG. 4 is a third flowchart of a method for obtaining a beauty parameter according to an embodiment of the present invention;
fig. 5 is one of schematic structural diagrams of a terminal device according to an embodiment of the present invention;
fig. 6 is a second schematic structural diagram of a terminal device according to an embodiment of the present invention;
fig. 7 is a third schematic structural diagram of a terminal device according to an embodiment of the present invention;
fig. 8 is a hardware schematic diagram of a terminal device according to an embodiment of the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are some, but not all embodiments of the invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
The terms "first," "second," "third," and "fourth" and the like in the description and in the claims, are used for distinguishing between different objects and not necessarily for describing a particular sequential order of objects. For example, the first image, the second image, the third image, the fourth image, and the like are used to distinguish between different images, and are not used to describe a particular order of images.
In the embodiments of the invention, words such as "exemplary" or "such as" are used to mean serving as an example, instance, or illustration. Any embodiment or design described as "exemplary" or introduced with "such as" should not be construed as preferred or advantageous over other embodiments or designs. Rather, such words are intended to present related concepts in a concrete fashion.
In the description of the embodiments of the present invention, unless otherwise specified, the meaning of "a plurality of" means two or more, for example, the meaning of a plurality of processing units means two or more; the plurality of elements means two or more elements and the like.
The embodiments of the invention provide a method for obtaining beauty parameters: the terminal device acquires N pieces of expression information, each indicating the user's expression when browsing one of N first images; determines M second images from the N first images according to the N pieces of expression information; and obtains target beauty parameters according to the beauty parameters of at least one of the M second images, where M and N are positive integers. Because each second image is an image the user is satisfied with, the beauty parameters obtained from at least one second image are beauty parameters the user is satisfied with. The user can therefore obtain satisfactory target beauty parameters without repeatedly adjusting them on the photographing interface, which solves the prior-art problems that adjusting the beauty parameters is cumbersome and time-consuming.
The software environment to which the method for obtaining the beauty parameters provided by the embodiment of the invention is applied is described below by taking an android operating system as an example.
Fig. 1 is a schematic diagram of an architecture of a possible android operating system according to an embodiment of the present invention. In fig. 1, the architecture of the android operating system includes 4 layers, respectively: an application program layer, an application program framework layer, a system runtime layer and a kernel layer (specifically, a Linux kernel layer).
The application program layer comprises various application programs (including system application programs and third party application programs) in the android operating system.
The application framework layer is a framework of applications, and developers can develop some applications based on the application framework layer while adhering to the development principle of the framework of the applications.
The system runtime layer includes libraries (also referred to as system libraries) and android operating system runtime environments. The library mainly provides various resources required by the android operating system. The android operating system running environment is used for providing a software environment for the android operating system.
The kernel layer is an operating system layer of the android operating system, and belongs to the bottommost layer of the software hierarchy of the android operating system. The kernel layer provides core system services and a driver related to hardware for the android operating system based on a Linux kernel.
Taking the android operating system as an example, a developer may, based on the system architecture shown in fig. 1, develop a software program implementing the method for obtaining beauty parameters provided in the embodiments of the invention, so that the method can run on the android operating system shown in fig. 1. That is, the processor or the terminal device can implement the method by running the software program in the android operating system.
The terminal device in the embodiments of the invention may be a mobile terminal device or a non-mobile terminal device. The mobile terminal device may be a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a vehicle-mounted terminal, a wearable device, an ultra-mobile personal computer (UMPC), a netbook, a personal digital assistant (PDA), or the like; the non-mobile terminal device may be a personal computer (PC), a television (TV), a teller machine, a self-service machine, or the like; the embodiments of the invention are not particularly limited.
The execution subject of the method for obtaining beauty parameters provided by the embodiments of the invention may be the above terminal device (mobile or non-mobile), or a functional module and/or functional entity in the terminal device capable of implementing the method; this may be determined according to actual use requirements, and the embodiments of the invention are not limited thereto. The method is described below by taking the terminal device as the example execution subject.
Referring to fig. 2, an embodiment of the present invention provides a method for obtaining a beauty parameter, which may include the following steps 201 to 203.
Step 201, the terminal device acquires N pieces of expression information.

The N pieces of expression information respectively indicate the user's expression when browsing N first images, i.e., each piece of expression information indicates the user's expression when browsing one of the N first images. Each second image is an image meeting certain requirements of the user, and each expression optionally indicates the degree of the user's preference for the browsed image. N is a positive integer.
It should be noted that: the images in the embodiment of the invention are all images with a beautifying effect.
The N first images may be images in an album of the terminal device, or may be images in a chat interface (for example, a WeChat chat interface) in an application of the terminal device, or may be other images, which is not limited in the embodiment of the present invention.
Each time the terminal device displays a first image, it acquires one piece of expression information. Each piece of expression information indicates the user's expression when browsing one first image, i.e., the degree of the user's preference for the browsed image.
Illustratively, each time the user browses a first image, the user inputs a preference level for that image. For example, the terminal device may acquire expression words input by the user, such as "like", "dislike", or "very like"; it may acquire emoticons input by the user indicating like, dislike, very like, and so on; it may capture the user's expression image while the user browses each first image; or it may obtain the expression information in other ways, which the embodiments of the invention do not limit.
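The mapping from such inputs to a preference degree can be sketched as follows. This is a minimal illustration: the score values, the `PREFERENCE_SCORES` table, and the function name are assumptions, not part of the patent.

```python
# Hypothetical mapping from a user's preference input (expression word or
# emoticon) to a numeric preference degree. Scores are illustrative.
PREFERENCE_SCORES = {
    "very like": 1.0,
    "like": 0.7,
    "dislike": 0.1,
    "😍": 1.0,
    "👍": 0.7,
    "👎": 0.1,
}

def expression_info_from_input(user_input: str) -> float:
    """Return the preference degree indicated by the user's input.
    Unrecognized input falls back to a neutral 0.5."""
    return PREFERENCE_SCORES.get(user_input.strip().lower(), 0.5)
```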
This step may be implemented, for example, by the following steps 201a to 201b.
In step 201a, the terminal device collects N expression images, where the N expression images are images of expressions of the user when browsing the N first images.
The terminal equipment collects N expression images, and each expression image is an image of an expression when a user browses one of the N first images.
When the terminal device detects that the user is browsing a first image, it captures the user's expression image through the front-facing camera.
Step 201b, the terminal device analyzes each expression image to obtain one expression information, so as to obtain the N expression information.
Using a facial expression recognition technology, the terminal device analyzes each expression image to obtain one piece of expression information, thereby obtaining the N pieces of expression information. In the embodiments of the invention, the facial expression recognition technology may be an eigenface-based principal component analysis (PCA) method, the Facial Action Coding System (FACS), the facial animation parameters (FAP) defined in MPEG-4, a deep learning method, and so on; reference may be made to the related prior art, and the embodiments of the invention are not limited thereto.
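Steps 201a and 201b together can be sketched as a small pipeline. Because the patent leaves the concrete recognition technique open (PCA, FACS, deep learning, etc.), the capture and recognition functions are injected as parameters; all names here are illustrative assumptions.

```python
from typing import Callable, List

def collect_expression_info(
    first_images: List[str],
    capture_expression: Callable[[str], bytes],   # e.g. front-camera capture
    recognize_preference: Callable[[bytes], float],  # e.g. PCA/FACS/deep model
) -> List[float]:
    """For each browsed first image, capture the user's expression image
    and convert it into expression information (a preference degree)."""
    return [recognize_preference(capture_expression(img)) for img in first_images]
```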
Step 202, the terminal device determines M second images from the N first images according to the N pieces of expression information.

Optionally, the preference degree indicated by the expression information corresponding to each second image is within a first numerical range. M is a positive integer.
This step may be implemented, for example, by the following steps 202a to 202b.
Step 202a, the terminal device divides the N first images into at least one image set according to the N expression information.
The preference degrees indicated by the expression information corresponding to different image sets fall within different numerical ranges.
For example, the N first images may be divided by preference degree into two image sets (a "like" set and a "dislike" set), into three image sets (a "very like" set, a "like" set, and a "dislike" set), or into more image sets; the embodiments of the invention are not limited.
The first numerical range may be any one of the different numerical ranges; preferably it is the range corresponding to the highest preference. It is set according to the actual situation, and the embodiments of the invention are not limited.
The terminal device classifies the N first images according to the N pieces of expression information; the preference degree indicated by the expression information corresponding to each second image is within the first numerical range, and the M second images form one class. Specifically, the terminal device may classify the N pieces of expression information by the preference degrees they indicate, determine the M pieces of expression information whose preference degrees are within the first numerical range, and then determine the M second images, each corresponding to one of those M pieces of expression information.
Alternatively, the N first images may be the same as the M second images, i.e., all N first images are second images. If they differ, the images other than the M second images among the N first images may be classified into one or more classes, set according to the actual situation; the embodiments of the invention are not limited.
Step 202b, the terminal device determines each first image in the target image set in the at least one image set as the M second images.
The preference degree indicated by each piece of expression information corresponding to the target image set is within the first numerical range. The minimum value of the first numerical range is larger than any value in the other numerical ranges, i.e., the ranges corresponding to the other image sets in the at least one image set. In other words, the preference degree indicated by the expression information corresponding to the images in the target image set is the highest, so the M second images are the user's favorite beauty images.
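Steps 202a and 202b can be sketched as follows, under assumed numeric ranges for the preference degrees; the boundaries 0.5 and 0.8 and the set names are illustrative, not from the patent.

```python
# Partition the N first images into image sets by preference degree, then
# take every image in the set whose range has the highest minimum (the
# target image set) as the M second images. Range boundaries are assumed.
RANGES = {                       # set name -> (min inclusive, max inclusive)
    "very_like": (0.8, 1.0),     # the target image set (first numerical range)
    "like": (0.5, 0.8),
    "dislike": (0.0, 0.5),
}

def second_images(first_images, preference_degrees):
    sets = {name: [] for name in RANGES}
    for img, degree in zip(first_images, preference_degrees):
        for name, (lo, hi) in RANGES.items():
            if lo <= degree <= hi:   # first matching range wins at boundaries
                sets[name].append(img)
                break
    # Target set: its minimum exceeds every value of the other ranges.
    target = max(RANGES, key=lambda name: RANGES[name][0])
    return sets[target]
```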
Optionally, in combination with steps 201a to 201b and steps 202a to 202b, the terminal device may classify the N expression images using classification methods such as deep learning classifiers, linear classifiers, neural-network classifiers, support vector machines, or hidden Markov models; for the specific classification method, reference may be made to the related art, which is not repeated in the embodiments of the invention.
For example, a deep learning classification model may be used to classify the N expression images (denoted first expression images) to obtain M second expression images whose indicated preference degrees are within the first numerical range, where the deep learning classification model is generated from historical expression images, each historical expression image indicating a user's preference degree for a historical first image. The terminal device then obtains the M second images corresponding to the M second expression images. For the specific method of obtaining the deep learning classification model, reference may be made to the related prior art; the embodiments of the invention are not limited.
For example, a deep learning classification model (which requires a large amount of data) may be obtained as follows. While historical users (for example, no fewer than 1000 users) browse beauty images (first images), an expression image of each user is captured by the front camera as the user browses each first image (with the face toward the screen). Professional annotators label the expression indicated by each expression image as "like" or "dislike" (the preference degree is divided into these two classes); typically, expressions of no fewer than 1000 different users and no fewer than 30,000 expression images need to be annotated. These expression images and the corresponding beauty images (first images) are input into a deep learning base model, which learns a classification model mapping the expression indicated by an expression image to the liked beauty images (second images).
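As a toy stand-in for that deep learning classification model, the train-then-classify flow can be illustrated with a nearest-centroid classifier over hypothetical expression feature vectors labeled "like"/"dislike". A real system would train a deep model on the large annotated corpus described above; everything below is an assumption for illustration only.

```python
from statistics import mean

def train(labeled):
    """labeled: list of (feature_vector, "like" | "dislike") pairs from
    annotators. Returns one centroid per class."""
    centroids = {}
    for label in ("like", "dislike"):
        vecs = [v for v, lab in labeled if lab == label]
        centroids[label] = [mean(dim) for dim in zip(*vecs)]
    return centroids

def classify(centroids, vec):
    """Assign an expression feature vector to the nearest class centroid."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist2(centroids[label], vec))
```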
Step 203, the terminal device obtains the target beauty parameters according to the beauty parameters of at least one of the M second images.
This step may be implemented, for example, by the following steps 203a to 203b.
In step 203a, the terminal device obtains the beauty parameters of each second image in the at least one second image, so as to obtain at least one first beauty parameter.
Optionally, the terminal device may select at least one second image from the M second images according to a preset rule.
The rule may be: selecting at least one second image at random from the M second images; sorting the M second images and selecting those at particular sequence numbers (for example, the odd-numbered ones); or selecting according to another preset rule. The embodiments of the invention are not limited.
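The preset selection rules mentioned above can be sketched as follows; the rule names and the default of odd sequence numbers are illustrative assumptions.

```python
import random

def select_second_images(second_images, rule="odd", k=3):
    """Select at least one second image from the M second images
    according to a preset rule (names are hypothetical)."""
    if rule == "random":
        return random.sample(second_images, min(k, len(second_images)))
    if rule == "odd":   # images at odd sequence numbers (1st, 3rd, ...)
        return second_images[::2]
    raise ValueError(f"unknown rule: {rule}")
```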
The terminal equipment acquires the beauty parameters of each second image in the at least one second image to obtain at least one first beauty parameter. The method for obtaining the beauty parameters of each second image by the terminal device may refer to any method for obtaining the beauty parameters of the images in the prior art, and the embodiment of the invention is not limited.
For example, when a beauty image is shot, the beauty parameters corresponding to it may be stored in a network-shared database; when the terminal device needs the beauty parameters of a beauty image, it can retrieve them from that database.
Step 203b, the terminal device weights the at least one first beauty parameter to obtain the target beauty parameter.
The terminal device performs a weighting operation on the at least one first beauty parameter and takes the weighted result as the target beauty parameter. The weighting coefficients may be obtained according to a set rule, for example at random, or each may be the reciprocal of the number of first beauty parameters (in which case the weighted result is the average of the at least one first beauty parameter); the embodiments of the invention are not limited. The sum of the weighting coefficients is 1.
It should be noted that in the embodiments of the invention, "beauty parameters" refers to the full set of beauty parameters, not to a single kind. For example, if the beauty parameters include four degrees (there may of course be others), say skin smoothing, whitening, ruddiness, and fairness, then a first beauty parameter in the embodiments of the invention consists of these four degrees, and weighting the beauty parameters means weighting each of the four degrees separately.
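Step 203b's weighting operation can be sketched as follows. The default equal weighting (reciprocal of the count, i.e., the plain average) and the coefficients-sum-to-1 constraint follow the description above, while the dict representation and effect names are assumptions.

```python
def weight_beauty_params(params, weights=None):
    """params: list of first beauty parameters, each a dict mapping an
    effect name (e.g. skin smoothing, whitening) to its degree.
    weights: one coefficient per dict, summing to 1; defaults to equal
    weights, i.e. the plain average."""
    n = len(params)
    if weights is None:
        weights = [1.0 / n] * n           # reciprocal of the count
    assert abs(sum(weights) - 1.0) < 1e-9, "coefficients must sum to 1"
    target = {}
    for effect in params[0]:
        target[effect] = sum(w * p[effect] for w, p in zip(weights, params))
    return target
```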
Preferably, the target beauty parameters may be obtained from the beauty parameters of every one of the M second images, i.e., the beauty parameters of each second image are weighted to obtain the target beauty parameters; for example, the average of the beauty parameters of the M second images may be taken as the target beauty parameters.
The embodiments of the invention provide a method for obtaining beauty parameters: the terminal device acquires N pieces of expression information, each indicating the user's expression when browsing one of N first images; determines M second images from the N first images according to the N pieces of expression information; and obtains target beauty parameters according to the beauty parameters of at least one of the M second images, where M and N are positive integers. Because each second image is an image the user is satisfied with, the beauty parameters obtained from at least one second image are beauty parameters the user is satisfied with. The user can therefore obtain satisfactory target beauty parameters without repeatedly adjusting them on the photographing interface, which solves the prior-art problems that adjusting the beauty parameters is cumbersome and time-consuming.
As shown in fig. 3 in conjunction with fig. 2, the method for obtaining the beauty parameters according to the embodiment of the present invention may further include the following step 204 after step 203.
Step 204, the terminal device sets the target beauty parameters as the default beauty parameters in all shooting applications in the terminal device. A shooting application may be, for example, a camera application, a beauty show application, or a WeChat application; the embodiments of the invention are not limited.
In this way, whenever the user shoots with any shooting application, the beauty effect shown on the shooting preview interface already reflects beauty parameters the user is satisfied with, so satisfactory beauty parameters are obtained without multiple adjustments, reducing the number of adjustments and improving the user experience.
As shown in fig. 4 in conjunction with fig. 2, before step 201, the method for obtaining a beauty parameter according to the embodiment of the present invention may further include the following step 205; this step 201 may be specifically implemented by the following step 201 c; after step 203, the method for obtaining the beauty parameters according to the embodiment of the present invention may further include the following step 206.
Step 205, the terminal device receives an input of a user on a shooting preview interface.
The user's input on the shooting preview interface may be a click input, a slide input, or a combination of such inputs, determined according to actual conditions; this is not limited in the embodiment of the present invention.
For example, the input may be an input in which the user opens an album interface from the shooting preview interface and browses the N first images on the album interface; or an input in which the user browses, on the shooting preview interface, N first images recommended by the terminal device. The input may also be another input, which is not limited in the embodiments of the invention.
Step 201c: in response to the input, the terminal device acquires the N pieces of expression information.
For a specific description, reference may be made to the description of step 201, which is not repeated here.
Step 206: the terminal device adjusts the beauty parameters of the shooting preview interface to the target beauty parameters.
After the terminal device obtains the target beauty parameters, it adjusts the beauty parameters of the shooting preview interface to the target beauty parameters, so that the user can shoot with the target beauty parameters, whose beautifying effect satisfies the user.
Because the user's requirements on the beautifying effect may differ from one occasion to the next, acquiring the target beauty parameters before each shot ensures that the current target beauty parameters meet the user's current requirements.
As shown in fig. 5, an embodiment of the present invention provides a terminal device 120, where the terminal device 120 includes: an acquisition module 121 and a determination module 122;
the acquiring module 121 is configured to acquire N pieces of expression information, where the N pieces of expression information are respectively used to indicate expressions of a user when browsing N first images;
the determining module 122 is configured to determine M second images from the N first images according to the N expression information acquired by the acquiring module 121;
the obtaining module 121 is further configured to obtain a target beauty parameter according to the beauty parameter of the at least one second image determined by the determining module 122;
wherein M and N are positive integers.
Optionally, each expression is used for indicating the preference degree of the user on the browsed image, and the preference degree indicated by the expression information corresponding to each second image is in the first numerical range.
Optionally, the acquiring module 121 is specifically configured to acquire N expression images, where the N expression images are images of expressions when the user browses the N first images respectively; analyzing each expression image to obtain one expression information so as to obtain the N expression information.
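The analysis of each expression image into one piece of expression information can be sketched as a mapping from a classified expression label to a preference degree. The label set and the scores below are illustrative assumptions; the patent does not fix a concrete mapping (in practice the label would come from an expression classifier).

```python
# Hypothetical mapping from expression labels to preference degrees.
PREFERENCE_BY_EXPRESSION = {
    "laugh":   0.9,  # strong liking
    "smile":   0.7,
    "neutral": 0.5,
    "frown":   0.2,  # dislike
}

def expression_info(labels):
    """Map per-image expression labels (one per browsed first image)
    to N pieces of expression information, here preference degrees.
    Unknown labels fall back to a neutral 0.5."""
    return [PREFERENCE_BY_EXPRESSION.get(lbl, 0.5) for lbl in labels]
```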
Optionally, the determining module 122 is specifically configured to divide the N first images into at least one image set according to the N expression information acquired by the acquiring module 121, where preference degrees indicated by the respective expression information corresponding to different image sets are in different numerical ranges; determining each first image in a target image set in the at least one image set as the M second images, wherein the preference degree indicated by each expression information corresponding to the target image set is in the first numerical range; the minimum value of the first numerical range is larger than any value in other numerical ranges, and the other numerical ranges are numerical ranges corresponding to other image sets except the target image set in the at least one image set.
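The division into image sets by numerical range can be sketched as a simple bucketing step. Putting the highest range first makes bucket 0 play the role of the "target image set"; the concrete ranges are assumptions for illustration.

```python
def partition_by_preference(images, ranges):
    """Divide images into one image set per numerical range.
    `images` is a list of (image_id, preference) pairs; `ranges` is a
    list of (low, high) tuples, highest range first, so sets[0] holds
    the M second images of the target image set."""
    sets = [[] for _ in ranges]
    for image_id, pref in images:
        for i, (low, high) in enumerate(ranges):
            if low <= pref <= high:
                sets[i].append(image_id)
                break
    return sets
```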
Optionally, the obtaining module 121 is further specifically configured to obtain a beauty parameter of each second image in the at least one second image, so as to obtain at least one first beauty parameter; and weighting the at least one first beauty parameter to obtain the target beauty parameter.
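The weighting of the first beauty parameters into the target beauty parameter can be sketched as a normalized weighted average. The patent only requires "weighting", so the choice of weights (defaulting to equal) is an assumption.

```python
def weight_beauty_params(params_list, weights=None):
    """Combine the first beauty parameters (one dict per second image)
    into the target beauty parameters by a normalized weighted average."""
    if weights is None:
        weights = [1.0] * len(params_list)  # equal weights by default
    total = sum(weights)
    keys = params_list[0]
    return {k: sum(w * p[k] for w, p in zip(weights, params_list)) / total
            for k in keys}
```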
Optionally, in conjunction with fig. 5, as shown in fig. 6, the terminal device 120 further includes: a setting module 123; the setting module 123 is configured to set the target beauty parameter acquired by the acquiring module 121 as a default beauty parameter of the terminal device after the target beauty parameter is acquired.
Optionally, in conjunction with fig. 5, as shown in fig. 7, the terminal device 120 further includes: a receiving module 124 and an adjusting module 125; the receiving module 124 is configured to receive an input of a user on a shooting preview interface before the N pieces of expression information are acquired; the obtaining module 121 is further specifically configured to obtain the N expression information in response to the input received by the receiving module 124; the adjusting module 125 is configured to adjust the beauty parameters of the shooting preview interface to the target beauty parameters acquired by the acquiring module 121 after the target beauty parameters are acquired.
The terminal device provided in the embodiment of the present invention can implement each process shown in any one of fig. 2 to fig. 4 in the foregoing method embodiment, and in order to avoid repetition, details are not repeated here.
An embodiment of the present invention provides terminal equipment that acquires N pieces of expression information, each indicating the user's expression when browsing one of N first images; determines M second images from the N first images according to the N pieces of expression information; and acquires target beauty parameters according to the beauty parameters of at least one second image, where M and N are positive integers. In this scheme, each second image determined from the expression information is an image the user is satisfied with, so the target beauty parameters obtained from the beauty parameters of at least one of the M second images are beauty parameters the user is satisfied with. The user can therefore obtain satisfactory target beauty parameters without repeatedly adjusting beauty parameters on the photographing interface, which solves the prior-art problems that adjusting beauty parameters is cumbersome and time-consuming for the user.
Fig. 8 is a schematic hardware structure of a terminal device for implementing various embodiments of the present invention. As shown in fig. 8, the terminal device 100 includes, but is not limited to: a radio frequency unit 101, a network module 102, an audio output unit 103, an input unit 104, a sensor 105, a display unit 106, a user input unit 107, an interface unit 108, a memory 109, a processor 110, and a power supply 111. It will be appreciated by those skilled in the art that the terminal device structure shown in fig. 8 does not constitute a limitation of the terminal device; the terminal device may comprise more or fewer components than shown, combine certain components, or use a different arrangement of components. In the embodiment of the invention, the terminal device includes, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a vehicle-mounted terminal device, a wearable device, a pedometer, and the like.
The processor 110 is configured to obtain N pieces of expression information, where each piece of expression information indicates an expression of the user when browsing one of the N first images, and each expression indicates the user's preference degree for the browsed image; determine, according to the N pieces of expression information, M second images from the N first images, where the preference degree indicated by the expression information corresponding to each second image is in a first numerical range; and obtain target beauty parameters according to the beauty parameters of at least one second image in the M second images; wherein M and N are positive integers.
The terminal equipment provided by the embodiment of the present invention acquires N pieces of expression information, each indicating the user's expression when browsing one of N first images; determines M second images from the N first images according to the N pieces of expression information; and acquires target beauty parameters according to the beauty parameters of at least one second image, where M and N are positive integers. In this scheme, each second image determined from the expression information is an image the user is satisfied with, so the target beauty parameters obtained from the beauty parameters of at least one of the M second images are beauty parameters the user is satisfied with. The user can therefore obtain satisfactory target beauty parameters without repeatedly adjusting beauty parameters on the photographing interface, which solves the prior-art problems that adjusting beauty parameters is cumbersome and time-consuming for the user.
It should be understood that, in the embodiment of the present invention, the radio frequency unit 101 may be configured to receive and send signals during information transmission or a call; specifically, it receives downlink data from a base station and delivers it to the processor 110 for processing, and transmits uplink data to the base station. Typically, the radio frequency unit 101 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 101 may also communicate with networks and other devices through a wireless communication system.
The terminal device provides wireless broadband internet access to the user through the network module 102, such as helping the user to send and receive e-mail, browse web pages, access streaming media, etc.
The audio output unit 103 may convert audio data received by the radio frequency unit 101 or the network module 102 or stored in the memory 109 into an audio signal and output as sound. Also, the audio output unit 103 may also provide audio output (e.g., a call signal reception sound, a message reception sound, etc.) related to a specific function performed by the terminal device 100. The audio output unit 103 includes a speaker, a buzzer, a receiver, and the like.
The input unit 104 is used for receiving an audio or video signal. The input unit 104 may include a graphics processing unit (GPU) 1041 and a microphone 1042. The GPU 1041 processes image data of still pictures or videos obtained by an image capturing device (e.g., a camera) in a video capturing mode or an image capturing mode. The processed image frames may be displayed on the display unit 106. The image frames processed by the GPU 1041 may be stored in the memory 109 (or other storage medium) or transmitted via the radio frequency unit 101 or the network module 102. The microphone 1042 may receive sound and process it into audio data. In telephone call mode, the processed audio data may be converted into a format that can be transmitted to the mobile communication base station via the radio frequency unit 101 and output.
The terminal device 100 further comprises at least one sensor 105, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor includes an ambient light sensor and a proximity sensor: the ambient light sensor can adjust the brightness of the display panel 1061 according to the brightness of ambient light, and the proximity sensor can turn off the display panel 1061 and/or the backlight when the terminal device 100 moves to the ear. As one type of motion sensor, an accelerometer sensor can detect the magnitude of acceleration in all directions (generally three axes) and, when stationary, the magnitude and direction of gravity; it can be used for recognizing the posture of the terminal device (such as landscape/portrait switching, related games, and magnetometer posture calibration) and for vibration-recognition functions (such as a pedometer and tap detection). The sensor 105 may further include a fingerprint sensor, a pressure sensor, an iris sensor, a molecular sensor, a gyroscope, a barometer, a hygrometer, a thermometer, an infrared sensor, etc., which are not described herein.
The display unit 106 is used to display information input by the user or information provided to the user. The display unit 106 may include a display panel 1061, and the display panel 1061 may be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED), or the like.
The user input unit 107 is operable to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the terminal device. Specifically, the user input unit 107 includes a touch panel 1071 and other input devices 1072. The touch panel 1071, also referred to as a touch screen, may collect touch operations by the user on or near it (e.g., operations performed on or near the touch panel 1071 using a finger, a stylus, or any other suitable object or accessory). The touch panel 1071 may include two parts: a touch detection device and a touch controller. The touch detection device detects the user's touch position and the signal generated by the touch operation, and transmits the signal to the touch controller; the touch controller receives touch information from the touch detection device, converts it into touch point coordinates, sends the coordinates to the processor 110, and receives and executes commands sent by the processor 110. Further, the touch panel 1071 may be implemented in various types such as resistive, capacitive, infrared, and surface acoustic wave. In addition to the touch panel 1071, the user input unit 107 may include other input devices 1072, which may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, and are not described in detail herein.
Further, the touch panel 1071 may be overlaid on the display panel 1061, and when the touch panel 1071 detects a touch operation thereon or nearby, the touch operation is transmitted to the processor 110 to determine the type of touch event, and then the processor 110 provides a corresponding visual output on the display panel 1061 according to the type of touch event. Although in fig. 8, the touch panel 1071 and the display panel 1061 are two independent components for implementing the input and output functions of the terminal device, in some embodiments, the touch panel 1071 may be integrated with the display panel 1061 to implement the input and output functions of the terminal device, which is not limited herein.
The interface unit 108 is an interface for connecting an external device to the terminal apparatus 100. For example, the external devices may include a wired or wireless headset port, an external power (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 108 may be used to receive input (e.g., data information, power, etc.) from an external device and transmit the received input to one or more elements within the terminal apparatus 100, or may be used to transmit data between the terminal apparatus 100 and an external device.
The processor 110 is a control center of the terminal device, connects respective parts of the entire terminal device using various interfaces and lines, and performs various functions of the terminal device and processes data by running or executing software programs and/or modules stored in the memory 109 and calling data stored in the memory 109, thereby performing overall monitoring of the terminal device. Processor 110 may include one or more processing units; alternatively, the processor 110 may integrate an application processor that primarily handles operating systems, user interfaces, applications, etc., with a modem processor that primarily handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 110.
The terminal device 100 may further include a power source 111 (e.g., a battery) for supplying power to the respective components, and optionally, the power source 111 may be logically connected to the processor 110 through a power management system, so as to implement functions of managing charging, discharging, and power consumption management through the power management system.
In addition, the terminal device 100 includes some functional modules, which are not shown, and will not be described herein.
Optionally, the embodiment of the present invention further provides a terminal device, which may include the processor 110, the memory 109, and the computer program stored in the memory 109 and capable of running on the processor 110 as shown in fig. 8, where the computer program when executed by the processor 110 implements each process of the method for obtaining the beauty parameters shown in any one of fig. 2 to fig. 4 in the embodiment of the method, and the same technical effects can be achieved, so that repetition is avoided and no further description is given here.
An embodiment of the present invention further provides a computer-readable storage medium on which a computer program is stored. When executed by a processor, the computer program implements each process of the method for obtaining beauty parameters shown in any one of fig. 2 to fig. 4 in the above method embodiment and can achieve the same technical effects; to avoid repetition, details are not repeated here. The computer-readable storage medium may be, for example, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
From the above description of the embodiments, it will be clear to those skilled in the art that the method of the above embodiments may be implemented by software plus a necessary general hardware platform, or of course by hardware, although in many cases the former is the preferred implementation. Based on such understanding, the technical solution of the present invention, in essence or in the part contributing to the prior art, may be embodied in the form of a software product stored in a storage medium (e.g., ROM/RAM, magnetic disk, or optical disk) comprising instructions for causing a terminal device (which may be a mobile phone, a computer, a server, an air conditioner, a network device, etc.) to perform the method according to the embodiments of the present invention.
The embodiments of the present invention have been described above with reference to the accompanying drawings, but the present invention is not limited to the above-described embodiments, which are merely illustrative and not restrictive, and many forms may be made by those having ordinary skill in the art without departing from the spirit of the present invention and the scope of the claims, which are to be protected by the present invention.
Claims (9)
1. A method of obtaining a beauty parameter, the method comprising:
acquiring N pieces of expression information, wherein the N pieces of expression information are respectively used for indicating the expressions of a user when browsing N first images, each expression is used for indicating the preference degree of the user on the browsed first images, and the N first images are all images with a beautifying effect;
determining M second images from the N first images according to the N expression information, wherein the preference degree indicated by the expression information corresponding to each second image is in a first numerical range, and the first numerical range is the numerical range with the highest preference degree in different numerical ranges;
acquiring target beauty parameters according to the beauty parameters of at least one second image;
wherein M and N are positive integers.
2. The method of claim 1, wherein the obtaining N pieces of expression information includes:
collecting N expression images, wherein the N expression images are images of expressions of a user when browsing the N first images respectively;
and analyzing each expression image to obtain one piece of expression information so as to obtain the N pieces of expression information.
3. The method of claim 1, wherein determining M second images from the N first images according to the N expression information comprises:
dividing the N first images into at least one image set according to the N expression information, wherein the preference degrees indicated by the expression information corresponding to different image sets are in different numerical ranges;
determining each first image in a target image set in the at least one image set as the M second images, wherein the preference degree indicated by each expression information corresponding to the target image set is in the first numerical range;
the minimum value of the first numerical range is larger than any value in other numerical ranges, and the other numerical ranges are numerical ranges corresponding to other image sets except the target image set in the at least one image set.
4. The method of claim 1, wherein the obtaining the target beauty parameters from the beauty parameters of the at least one second image comprises:
obtaining the beauty parameters of each second image in the at least one second image to obtain at least one first beauty parameter;
and weighting the at least one first beauty parameter to obtain the target beauty parameter.
5. The method according to any one of claims 1 to 4, further comprising, after the obtaining the target beauty parameters:
and setting the target beauty parameters as default beauty parameters of the terminal equipment.
6. The method according to any one of claims 1 to 4, wherein before the acquiring N pieces of expression information, the method further includes:
receiving input of a user on a shooting preview interface;
the obtaining N expression information includes:
responding to the input, and acquiring the N expression information;
after the target beauty parameters are obtained, the method further comprises the following steps:
and adjusting the beauty parameters of the shooting preview interface to the target beauty parameters.
7. A terminal device, characterized in that the terminal device comprises: an acquisition module and a determination module;
The acquisition module is used for acquiring N pieces of expression information, wherein the N pieces of expression information are respectively used for indicating the expressions of a user when browsing N first images, each expression is used for indicating the preference degree of the user on the browsed first images, and the N first images are all images with a beautifying effect;
the determining module is configured to determine M second images from the N first images according to the N expression information acquired by the acquiring module, where a preference degree indicated by the expression information corresponding to each second image is in a first numerical range, and the first numerical range is a numerical range with highest preference degree in different numerical ranges;
the acquisition module is further used for acquiring target beauty parameters according to the beauty parameters of the at least one second image determined by the determination module;
wherein M and N are positive integers.
8. A terminal device, comprising a processor, a memory, and a computer program stored on the memory and executable on the processor, wherein the computer program, when executed by the processor, implements the steps of the method of obtaining a beauty parameter according to any one of claims 1 to 6.
9. A computer-readable storage medium, on which a computer program is stored, which computer program, when being executed by a processor, carries out the steps of the method of obtaining a beauty parameter according to any one of claims 1 to 6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811142699.XA CN109242767B (en) | 2018-09-28 | 2018-09-28 | Method for obtaining beauty parameters and terminal equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109242767A CN109242767A (en) | 2019-01-18 |
CN109242767B true CN109242767B (en) | 2023-05-05 |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |