CN106326849A - Beauty processing method and device - Google Patents
Beauty processing method and device
- Publication number
- CN106326849A CN106326849A CN201610683353.5A CN201610683353A CN106326849A CN 106326849 A CN106326849 A CN 106326849A CN 201610683353 A CN201610683353 A CN 201610683353A CN 106326849 A CN106326849 A CN 106326849A
- Authority
- CN
- China
- Prior art keywords
- face
- real-time image
- recognition result
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
Abstract
The invention provides a beauty processing method and device. The method comprises: collecting a real-time image; performing face recognition on the collected real-time image and determining a recognition result; controlling a beauty processing function to be in an on state when the recognition result indicates that the real-time image includes a face that needs beauty processing; and controlling the beauty processing function to be in an off state when the recognition result indicates that the real-time image does not include a face that needs beauty processing. With the method and device, a terminal can automatically control the state of the beauty processing function according to the recognition result obtained by performing face recognition on the real-time image, which improves the degree of intelligence. The method and device can also avoid the resource waste and image-quality impairment caused by unnecessarily turning on the beauty processing function, simplify user operations, and improve the user experience.
Description
Technical field
The present disclosure relates to the field of communications, and in particular to a beauty processing method and device.
Background
At present, beauty processing functions are more and more widely used. However, once a beauty processing function is in the on state or the off state, the terminal cannot change that state automatically unless the user modifies it manually.
When the beauty processing function is in the on state but is not currently needed, it occupies a large amount of central processing unit (CPU) and graphics processing unit (GPU) resources, which wastes resources and also impairs the image quality. When the beauty processing function is in the off state, the faces in the current real-time image cannot be beautified automatically, resulting in a poor user experience.
Summary of the invention
In view of this, the present disclosure provides a beauty processing method and device to overcome the deficiencies in the related art.
According to a first aspect of the embodiments of the present disclosure, a beauty processing method is provided, the method including:
collecting a real-time image;
performing face recognition on the collected real-time image, and determining a recognition result;
when the recognition result indicates that the real-time image includes a face that needs beauty processing, controlling a beauty processing function to be in an on state; and
when the recognition result indicates that the real-time image does not include a face that needs beauty processing, controlling the beauty processing function to be in an off state.
Optionally, determining the recognition result includes:
when the real-time image includes a face, determining that the recognition result is that the real-time image includes a face that needs beauty processing; and
when the real-time image does not include a face, determining that the recognition result is that the real-time image does not include a face that needs beauty processing.
Optionally, determining the recognition result includes:
when the real-time image includes a face and the percentage of the area of the real-time image occupied by the face exceeds a preset value, determining that the recognition result is that the real-time image includes a face that needs beauty processing; and
when the real-time image does not include a face, or the percentage of the area of the real-time image occupied by the face does not exceed the preset value, determining that the recognition result is that the real-time image does not include a face that needs beauty processing.
Optionally, determining the recognition result includes:
when the real-time image includes a face and the face is located in a target area, determining that the recognition result is that the real-time image includes a face that needs beauty processing; and
when the real-time image does not include a face, or the face in the real-time image is not located in the target area, determining that the recognition result is that the real-time image does not include a face that needs beauty processing;
wherein the target area includes a focus area and/or a user-specified area.
Optionally, the method further includes:
when the real-time image includes a plurality of faces, performing beauty processing, through the beauty processing function, on the faces located in the target area; and
outputting the real-time image after the beauty processing.
According to a second aspect of the embodiments of the present disclosure, a beauty processing device is provided, the device including:
an image collection module configured to collect a real-time image;
a face recognition module configured to perform face recognition on the collected real-time image and determine a recognition result;
a first control module configured to, when the recognition result indicates that the real-time image includes a face that needs beauty processing, control a beauty processing function to be in an on state; and
a second control module configured to, when the recognition result indicates that the real-time image does not include a face that needs beauty processing, control the beauty processing function to be in an off state.
Optionally, the face recognition module includes:
a first determining submodule configured to, when the real-time image includes a face, determine that the recognition result is that the real-time image includes a face that needs beauty processing; and
a second determining submodule configured to, when the real-time image does not include a face, determine that the recognition result is that the real-time image does not include a face that needs beauty processing.
Optionally, the face recognition module includes:
a third determining submodule configured to, when the real-time image includes a face and the percentage of the area of the real-time image occupied by the face exceeds a preset value, determine that the recognition result is that the real-time image includes a face that needs beauty processing; and
a fourth determining submodule configured to, when the real-time image does not include a face, or the percentage of the area of the real-time image occupied by the face does not exceed the preset value, determine that the recognition result is that the real-time image does not include a face that needs beauty processing.
Optionally, the face recognition module includes:
a fifth determining submodule configured to, when the real-time image includes a face and the face is located in a target area, determine that the recognition result is that the real-time image includes a face that needs beauty processing; and
a sixth determining submodule configured to, when the real-time image does not include a face, or the face in the real-time image is not located in the target area, determine that the recognition result is that the real-time image does not include a face that needs beauty processing;
wherein the target area includes a focus area and/or a user-specified area.
Optionally, the device further includes:
a processing module configured to, when the real-time image includes a plurality of faces, perform beauty processing, through the beauty processing function, on the faces located in the target area; and
an image output module configured to output the real-time image after the beauty processing.
According to a third aspect of the embodiments of the present disclosure, a beauty processing device is provided, including:
a processor; and
a memory for storing instructions executable by the processor;
wherein the processor is configured to:
collect a real-time image;
perform face recognition on the collected real-time image and determine a recognition result;
when the recognition result indicates that the real-time image includes a face that needs beauty processing, control a beauty processing function to be in an on state; and
when the recognition result indicates that the real-time image does not include a face that needs beauty processing, control the beauty processing function to be in an off state.
The technical solutions provided by the embodiments of the present disclosure may include the following beneficial effects.
In the embodiments of the present disclosure, the terminal can automatically control the state of the beauty processing function according to the recognition result obtained by performing face recognition on the real-time image, which improves the degree of intelligence of the terminal. In addition, the resource waste and image-quality impairment caused by unnecessarily turning on the beauty processing function can be avoided, user operations are simplified, and the user experience is improved.
In the embodiments of the present disclosure, when it is determined that the real-time image includes a face, or that the main subject of the real-time image is a face, it is determined that the recognition result is that the real-time image includes a face that needs beauty processing. Optionally, it may be determined that the main subject of the real-time image is a face when the percentage of the area of the real-time image occupied by the face exceeds a preset value, or when the face is located in a target area. Correspondingly, the case where the real-time image does not include a face that needs beauty processing can be determined in the same way. In the embodiments of the present disclosure, the terminal automatically controls the state of the beauty processing function according to the above recognition result, which is simple to implement, highly usable, and improves the degree of intelligence of the terminal.
In the embodiments of the present disclosure, when the beauty processing function is in the on state, the terminal can perform beauty processing on the faces in the currently collected real-time image through the beauty processing function and then output the real-time image. When the beauty processing function is in the off state, the terminal directly outputs the collected real-time image. The user no longer needs to switch the state of the beauty processing function manually, which simplifies user operations and makes it possible to output images satisfactory to the user while avoiding resource waste, improving the user experience.
In the embodiments of the present disclosure, the terminal can perform beauty processing, through the beauty processing function, only on the faces located in the target area, and then output the real-time image after the beauty processing. Through the above process, the terminal can perform beauty processing with the faces in the target area as the main subject, which speeds up the beauty processing and improves the degree of intelligence of the terminal and the user experience.
It should be understood that the above general description and the following detailed description are merely exemplary and explanatory, and do not limit the present disclosure.
Brief description of the drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the present disclosure.
Fig. 1 is a flowchart of a beauty processing method according to an exemplary embodiment;
Figs. 2A to 2D are schematic diagrams of beauty processing scenes according to an exemplary embodiment;
Figs. 3A and 3B are schematic diagrams of beauty processing scenes according to an exemplary embodiment;
Fig. 4 is a flowchart of another beauty processing method according to an exemplary embodiment;
Fig. 5 is a flowchart of another beauty processing method according to an exemplary embodiment;
Fig. 6 is a block diagram of a beauty processing device according to an exemplary embodiment;
Fig. 7 is a block diagram of another beauty processing device according to an exemplary embodiment;
Fig. 8 is a block diagram of another beauty processing device according to an exemplary embodiment;
Fig. 9 is a block diagram of another beauty processing device according to an exemplary embodiment;
Fig. 10 is a block diagram of another beauty processing device according to an exemplary embodiment;
Fig. 11 is a schematic structural diagram of a beauty processing device according to an exemplary embodiment.
Detailed description of the invention
Exemplary embodiments will be described in detail here, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, unless otherwise indicated, the same numbers in different drawings represent the same or similar elements. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatuses and methods consistent with some aspects of the present disclosure as detailed in the appended claims.
The terms used in the present disclosure are for the purpose of describing specific embodiments only, and are not intended to limit the present disclosure. The singular forms "a", "the" and "this" used in the present disclosure and the appended claims are also intended to include plural forms, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" used herein refers to and includes any or all possible combinations of one or more of the associated listed items.
It should be understood that although the terms first, second, third, etc. may be used in the present disclosure to describe various information, the information should not be limited to these terms. These terms are only used to distinguish information of the same type from each other. For example, without departing from the scope of the present disclosure, the first information may also be referred to as the second information, and similarly, the second information may also be referred to as the first information. Depending on the context, the word "if" as used herein may be interpreted as "when", "upon" or "in response to determining".
The beauty processing method provided by the embodiments of the present disclosure may be used in a terminal, for example, a smartphone, a tablet computer, or a personal digital assistant (PDA). As shown in Fig. 1, Fig. 1 illustrates a beauty processing method according to an exemplary embodiment, which includes the following steps.
In step 101, a real-time image is collected.
In this step, the terminal may collect the real-time image through a pre-installed image collection device, for example, a camera.
In step 102, face recognition is performed on the collected real-time image, and a recognition result is determined.
In this step, the terminal performs face recognition on the collected real-time image according to the related art. Optionally, facial feature parameters may be extracted from the real-time image through a pre-established facial feature model: if the facial feature parameters are extracted, it is determined that the real-time image includes a face; otherwise, it is determined that the real-time image does not include a face.
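As an illustrative sketch only (not the patented implementation), the decision in step 102 can be expressed as a function that takes a frame and a pluggable face detector; the function name, the detector signature, and the bounding-box representation are all assumptions introduced here:

```python
def determine_recognition_result(frame, detect_faces):
    """Simplest rule of step 102: the recognition result is positive
    when the frame contains at least one detected face.

    frame        -- any image representation the detector accepts
    detect_faces -- callable returning a list of (x, y, w, h) boxes;
                    a stand-in for the pre-established feature model
    """
    faces = detect_faces(frame)
    return len(faces) > 0

# Stub detector standing in for a real face-recognition model.
def fake_detector(frame):
    return [(10, 10, 40, 40)] if frame == "selfie" else []
```

In practice `detect_faces` would wrap whatever face-recognition method the terminal uses; the stub above only serves to make the decision logic concrete.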
In the embodiments of the present disclosure, the recognition result may be determined in any one of the following ways.
In the first way, the recognition result is determined directly according to whether the real-time image includes a face.
In this way, when determining in the above manner that the real-time image includes a face, the terminal may directly determine that the recognition result is that the real-time image includes a face that needs beauty processing. Conversely, if the real-time image does not include a face, the recognition result is determined to be that the real-time image does not include a face that needs beauty processing.
In the second way, the recognition result is determined according to whether the main subject of the real-time image is a face.
Based on the above manner, when the terminal determines that the real-time image includes a face, it may further determine whether the main subject of the real-time image is the face. Optionally, this may be determined by the area occupied by the face, or by whether the face is located in a target area.
Specifically, when the percentage of the area of the real-time image occupied by the face exceeds a preset value, it may be determined that the main subject of the real-time image is the face. Optionally, the number of pixels occupied by the face and the total number of pixels of the real-time image may be counted; the percentage of the total number accounted for by the pixels occupied by the face is the percentage of the area of the real-time image occupied by the face. Of course, if the real-time image includes a plurality of faces, the number of pixels occupied by the faces is the sum of the numbers of pixels occupied by all the faces.
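A minimal sketch of this pixel-counting rule, under the assumption that faces are given as axis-aligned bounding boxes (the patent does not fix a representation); overlapping boxes are merged so that no pixel is counted twice:

```python
def face_area_percentage(faces, img_w, img_h):
    """Percentage of the image covered by the union of the face boxes.

    faces -- list of (x, y, w, h) bounding boxes in pixel coordinates
    """
    covered = set()  # pixels occupied by at least one face
    for x, y, w, h in faces:
        for yy in range(max(0, y), min(img_h, y + h)):
            for xx in range(max(0, x), min(img_w, x + w)):
                covered.add((xx, yy))
    return 100.0 * len(covered) / (img_w * img_h)

def subject_is_face(faces, img_w, img_h, preset=20.0):
    """Main-subject test: face coverage must exceed the preset value."""
    return bool(faces) and face_area_percentage(faces, img_w, img_h) > preset
```

For example, a 50x50 face in a 100x100 frame covers 25% of the image, so the subject test passes with a preset of 20% and fails with a preset of 30%. The default preset value here is an arbitrary placeholder.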
In the embodiments of the present disclosure, when the real-time image includes a face and the percentage of the area of the real-time image occupied by the face exceeds the preset value, it is determined that the recognition result is that the real-time image includes a face that needs beauty processing. Otherwise, when the real-time image does not include a face, or the real-time image includes a face but the percentage of the area of the real-time image occupied by the face does not exceed the preset value, it is determined that the recognition result is that the real-time image does not include a face that needs beauty processing.
Alternatively, when the face is located in the target area, for example as shown in Fig. 2A, it may be determined that the main subject of the real-time image is the face. Optionally, the target area may be a focus area preset by the terminal, and/or an area specified by the user. The focus area and the user-specified area may not coincide, for example as shown in Fig. 2B, or may at least partly coincide, for example as shown in Fig. 2C. The target area may include only the focus area, only the user-specified area, or both.
Of course, if the real-time image includes a plurality of faces, it may be determined whether the faces are located in the target area. If at least one of the faces is located in the target area, for example as shown in Fig. 2D, it may likewise be determined that the main subject of the real-time image is the face.
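This location rule can be sketched with a simple rectangle test; treating "located in the target area" as bounding-box overlap is an assumption made here for illustration, since the patent does not define the containment test:

```python
def rects_overlap(a, b):
    """True when two (x, y, w, h) rectangles share any pixels."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def any_face_in_target(faces, target_areas):
    """At least one face overlaps the focus area and/or user area,
    matching the 'at least one face in the target area' rule."""
    return any(rects_overlap(f, t) for f in faces for t in target_areas)
```

Passing both the focus area and the user-specified area in `target_areas` covers the "focus area and/or user-specified area" variants with one test.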
In the embodiments of the present disclosure, when the real-time image includes a face and the face is located in the target area, it is determined that the recognition result is that the real-time image includes a face that needs beauty processing. Otherwise, when the real-time image does not include a face, or the real-time image includes a face but the face is not located in the target area, it is determined that the recognition result is that the real-time image does not include a face that needs beauty processing.
When the recognition result indicates that the real-time image includes a face that needs beauty processing, step 103 is performed; when the recognition result indicates that the real-time image does not include a face that needs beauty processing, step 104 is performed.
In step 103, the beauty processing function is controlled to be in the on state.
In this step, when determining that the beauty processing function is currently in the off state, the terminal may automatically control the beauty processing function to be in the on state according to the related art. Optionally, the virtual key corresponding to the beauty processing function may automatically be set to the on state. For example, as shown in Fig. 3A, the virtual slider icon slides to the right, corresponding to the beauty processing function being in the on state.
In step 104, the beauty processing function is controlled to be in the off state.
In this step, when determining that the beauty processing function is currently in the on state, the terminal may automatically control the beauty processing function to be in the off state according to the related art. Optionally, the virtual key corresponding to the beauty processing function may automatically be set to the off state. For example, as shown in Fig. 3B, the virtual slider icon slides to the left, corresponding to the beauty processing function being in the off state.
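Steps 103 and 104 together amount to a small state controller that only switches when the current state differs from the desired one; the class and member names below are illustrative assumptions, not names from the patent:

```python
class BeautySwitch:
    """Tracks the on/off state of the beauty processing function and
    switches it automatically from the recognition result."""

    def __init__(self):
        self.on = False       # initial state of the function
        self.switches = 0     # how many times the state actually changed

    def update(self, needs_beauty):
        """Apply steps 103/104: switch only when the state must change."""
        if needs_beauty != self.on:
            self.on = needs_beauty
            self.switches += 1  # e.g. move the virtual slider (Fig. 3A/3B)
        return self.on
```

Feeding the same recognition result twice changes nothing, so the virtual slider is never toggled redundantly.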
In the above embodiment, the terminal can automatically control the state of the beauty processing function according to the recognition result obtained by performing face recognition on the real-time image, which improves the degree of intelligence of the terminal. In addition, the resource waste and image-quality impairment caused by unnecessarily turning on the beauty processing function can be avoided, user operations are simplified, and the user experience is improved.
Of course, the terminal may perform beauty processing on all the faces through the beauty processing function in the on state and then output the real-time image after the beauty processing; or, when the beauty processing function is in the off state, directly output the real-time image. The user no longer needs to switch the beauty processing state manually, which simplifies user operations and makes it possible to output images satisfactory to the user while avoiding resource waste, improving the user experience.
Further, as shown in Fig. 4, Fig. 4 illustrates another beauty processing method on the basis of the embodiment shown in Fig. 1. The method further includes the following steps.
In step 105, when there are a plurality of faces, beauty processing is performed, through the beauty processing function, on the faces located in the target area.
In this step, when there are a plurality of faces, beauty parameters may be determined through the beauty processing function. The beauty parameters may be set by the user or preset by the terminal. Beauty processing is then performed, according to the beauty parameters, only on the faces located in the target area; that is, pixel parameter modification is performed, according to the beauty parameters, only on the pixels corresponding to the faces in the target area.
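As a hedged sketch of this selective pixel modification, the beauty parameter below is reduced to a plain brightness lift applied only inside face boxes that overlap the target area; a real implementation would apply smoothing or whitening, which the patent does not specify:

```python
def beautify_in_target(image, faces, target, lift=30):
    """Modify only the pixels of faces that overlap the target area.

    image  -- list of rows of grayscale values (0-255)
    faces  -- list of (x, y, w, h) face boxes
    target -- single (x, y, w, h) target area
    lift   -- stand-in beauty parameter (brightness increase)
    """
    tx, ty, tw, th = target
    h, w = len(image), len(image[0])
    for fx, fy, fw, fh in faces:
        hits = fx < tx + tw and tx < fx + fw and fy < ty + th and ty < fy + fh
        if not hits:
            continue  # faces outside the target area are left untouched
        for yy in range(max(0, fy), min(h, fy + fh)):
            for xx in range(max(0, fx), min(w, fx + fw)):
                image[yy][xx] = min(255, image[yy][xx] + lift)
    return image
```

Only pixels belonging to a face that overlaps the target area are modified; all other pixels, including faces outside the target area, pass through unchanged, which is what makes the selective processing faster than beautifying every face.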
In step 106, the real-time image after the beauty processing is output.
In this step, after performing beauty processing on the faces located in the target area, the terminal directly outputs the real-time image on the screen.
In the above embodiment, the terminal can perform beauty processing, through the beauty processing function, only on the faces located in the target area of the real-time image, and then output the real-time image after the beauty processing. Through the above process, the terminal can perform beauty processing with the faces in the target area as the main subject, which speeds up the beauty processing and improves the degree of intelligence of the terminal and the user experience.
The above beauty processing procedure is described below by taking a live-streaming application (App) as an example. Normally, the live App collects the real-time image through the currently active image collection device, and the beauty processing function is in the on state by default when the App is opened. When the image collection device is switched, for example from the front camera to the rear camera, the beauty processing function automatically switches to the off state. For the case where the image collection device is switched, the embodiments of the present disclosure provide another beauty processing method. As shown in Fig. 5, the method further includes the following steps.
In step 201, a real-time image is collected.
In this step, the live App automatically collects the real-time image through the image collection device after the switch, for example, the rear camera.
In step 202, face recognition is performed on the collected real-time image.
In step 203, when the real-time image includes a face and the face is located in the target area, it is determined that the recognition result is that the real-time image includes a face that needs beauty processing.
Optionally, the target area may be a user-specified area. In the embodiments of the present disclosure, when the main subject of the real-time image is a face, it may be determined that the recognition result is that the real-time image includes a face that needs beauty processing. Assuming there are a plurality of faces, at least one of the faces needs to be located in the target area.
In step 204, the beauty processing function is controlled to be in the on state.
In this step, the terminal automatically controls the beauty processing function to switch from the off state to the on state.
In step 205, when there are a plurality of faces, beauty processing is performed, through the beauty processing function, on the faces located in the target area.
In this step, the terminal may perform beauty processing, through the beauty processing function, only on the faces located in the user-specified area, i.e. the target area, among the plurality of faces.
In step 206, the real-time image after the beauty processing is output.
In step 207, when the real-time image does not include a face, or the real-time image includes a face but the face is not located in the target area, the beauty processing function is controlled to be in the off state.
In step 208, the real-time image is output.
When the beauty processing function is in the off state, the terminal can directly output the real-time image on the screen.
In above-described embodiment, after described U.S. face process function is in opening, described terminal is still according to relevant skill
Art gathers described real time imaging, and described real time imaging is carried out recognition of face.Once recognize in described real time imaging and do not wrap
Include face, determine that described recognition result is not include in described real time imaging needing to carry out the face that U.S. face processes, now control
Described U.S. face processes function and is switched to closed mode.Or recognize described real time imaging and include face, but described face is not
When being positioned at described target area, determine that described recognition result is not include in described real time imaging that needs are carried out at U.S. face equally
The face of reason, controls described U.S. face process function and is switched to closed mode.
Of course, in a live-streaming app, if the user does not switch the image capture device, face recognition is likewise performed on the captured real-time image. If the terminal recognizes that the real-time image does not include a face, or that the real-time image includes a face but the face is not located in the target region, the beauty processing function is likewise controlled to be in the off state.
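The switching logic above can be sketched as a small predicate over the recognition result. The names below (`RecognitionResult`, `update_beauty_state`) are illustrative only; the patent does not specify an implementation:

```python
from dataclasses import dataclass

@dataclass
class RecognitionResult:
    has_face: bool          # whether any face was detected in the frame
    in_target_region: bool  # whether a detected face lies in the target region

def update_beauty_state(result: RecognitionResult) -> bool:
    """Return True (beauty function on) only when the frame contains a
    face that needs beauty processing, i.e. a face in the target region."""
    return result.has_face and result.in_target_region

# The function switches off as soon as the face leaves the frame or the
# target region, and back on when it returns.
assert update_beauty_state(RecognitionResult(True, True)) is True
assert update_beauty_state(RecognitionResult(True, False)) is False
assert update_beauty_state(RecognitionResult(False, False)) is False
```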
The beauty processing method provided by the embodiments of the present disclosure can be applied in a live-streaming app. During live streaming, the user no longer needs to manually toggle the state of the beauty processing function; the terminal can automatically control the state of the beauty processing function according to the recognition result. This simplifies user operation, outputs images that satisfy the user while avoiding wasted resources, and improves the user experience.
Corresponding to the foregoing method embodiments, the present disclosure also provides device embodiments.
As shown in Figure 6, Figure 6 is a block diagram of a beauty processing device according to an exemplary embodiment of the present disclosure, including:
an image capture module 310, configured to capture a real-time image;
a face recognition module 320, configured to perform face recognition on the captured real-time image and determine a recognition result;
a first control module 330, configured to control a beauty processing function to be in an on state when the recognition result indicates that the real-time image includes a face that needs beauty processing;
a second control module 340, configured to control the beauty processing function to be in an off state when the recognition result indicates that the real-time image does not include a face that needs beauty processing.
As shown in Figure 7, Figure 7 is a block diagram of another beauty processing device according to an exemplary embodiment of the present disclosure. In this embodiment, on the basis of the embodiment shown in Figure 6, the face recognition module 320 includes:
a first determining submodule 321, configured to determine, when the real-time image includes a face, that the recognition result indicates that the real-time image includes a face that needs beauty processing;
a second determining submodule 322, configured to determine, when the real-time image does not include a face, that the recognition result indicates that the real-time image does not include a face that needs beauty processing.
As shown in Figure 8, Figure 8 is a block diagram of another beauty processing device according to an exemplary embodiment of the present disclosure. In this embodiment, on the basis of the embodiment shown in Figure 6, the face recognition module 320 includes:
a third determining submodule 323, configured to determine, when the real-time image includes a face and the percentage of the area of the real-time image occupied by the face exceeds a preset value, that the recognition result indicates that the real-time image includes a face that needs beauty processing;
a fourth determining submodule 324, configured to determine, when the real-time image does not include a face, or when the percentage of the area of the real-time image occupied by the face does not exceed the preset value, that the recognition result indicates that the real-time image does not include a face that needs beauty processing.
As shown in Figure 9, Figure 9 is a block diagram of another beauty processing device according to an exemplary embodiment of the present disclosure. In this embodiment, on the basis of the embodiment shown in Figure 6, the face recognition module 320 includes:
a fifth determining submodule 325, configured to determine, when the real-time image includes a face and the face is located in a target region, that the recognition result indicates that the real-time image includes a face that needs beauty processing;
a sixth determining submodule 326, configured to determine, when the real-time image does not include a face, or when the face in the real-time image is not located in the target region, that the recognition result indicates that the real-time image does not include a face that needs beauty processing;
wherein the target region includes a focus region and/or a user-specified region.
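One plausible membership test for this criterion, assuming axis-aligned `(x, y, w, h)` boxes and using the face centre as the "located in" criterion — the patent does not define how region membership is decided:

```python
def face_in_target_region(face_box, target_region):
    """Both arguments are (x, y, w, h) boxes; the face counts as located
    in the target region (focus region or user-specified region) when
    its centre point falls inside that region."""
    fx, fy, fw, fh = face_box
    cx, cy = fx + fw / 2, fy + fh / 2
    tx, ty, tw, th = target_region
    return tx <= cx <= tx + tw and ty <= cy <= ty + th

# Face centred at (150, 150) lies inside the region (100, 100, 200, 200).
assert face_in_target_region((100, 100, 100, 100), (100, 100, 200, 200)) is True
assert face_in_target_region((500, 500, 100, 100), (100, 100, 200, 200)) is False
```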
As shown in Figure 10, Figure 10 is a block diagram of another beauty processing device according to an exemplary embodiment of the present disclosure. In this embodiment, on the basis of the embodiment shown in Figure 9, the device further includes:
a processing module 350, configured to perform, when the real-time image includes multiple faces, beauty processing through the beauty processing function on the face located in the target region;
an image output module 360, configured to output the real-time image after beauty processing.
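For the multiple-face case, the selection step of the processing module can be sketched as follows; `select_faces_for_beauty` and the centre-in-region rule are hypothetical helpers, not part of the patent:

```python
def select_faces_for_beauty(faces, target_region):
    """From multiple detected faces (each an (x, y, w, h) box), keep only
    those whose centre lies inside target_region; only the selected faces
    would then be passed to the beauty filter."""
    tx, ty, tw, th = target_region
    selected = []
    for (x, y, w, h) in faces:
        cx, cy = x + w / 2, y + h / 2
        if tx <= cx <= tx + tw and ty <= cy <= ty + th:
            selected.append((x, y, w, h))
    return selected

faces = [(120, 120, 60, 60), (600, 80, 60, 60)]
# Only the first face's centre (150, 150) falls in the target region.
assert select_faces_for_beauty(faces, (100, 100, 200, 200)) == [(120, 120, 60, 60)]
```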
Since the device embodiments substantially correspond to the method embodiments, the relevant parts may refer to the description of the method embodiments. The device embodiments described above are merely illustrative. The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; that is, they may be located in one place, or they may be distributed over multiple network elements. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present disclosure. Those of ordinary skill in the art can understand and implement the solution without creative effort.
Accordingly, the present disclosure also provides a beauty processing device, including:
a processor; and
a memory for storing instructions executable by the processor;
wherein the processor is configured to:
capture a real-time image;
perform face recognition on the captured real-time image and determine a recognition result;
control a beauty processing function to be in an on state when the recognition result indicates that the real-time image includes a face that needs beauty processing; and
control the beauty processing function to be in an off state when the recognition result indicates that the real-time image does not include a face that needs beauty processing.
Figure 11 is a schematic structural diagram of a beauty processing device according to an exemplary embodiment. As shown in Figure 11, the beauty processing device 1100 may be a terminal such as a computer, a mobile phone, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, a fitness device, or a personal digital assistant.
Referring to Figure 11, the device 1100 may include one or more of the following components: a processing component 1101, a memory 1102, a power component 1103, a multimedia component 1104, an audio component 1105, an input/output (I/O) interface 1106, a sensor component 1107, and a communication component 1108.
The processing component 1101 generally controls the overall operation of the device 1100, such as operations associated with display, phone calls, data communication, camera operation, and recording. The processing component 1101 may include one or more processors 1109 to execute instructions to complete all or part of the steps of the above method. In addition, the processing component 1101 may include one or more modules to facilitate interaction between the processing component 1101 and other components. For example, the processing component 1101 may include a multimedia module to facilitate interaction between the multimedia component 1104 and the processing component 1101.
The memory 1102 is configured to store various types of data to support operation at the device 1100. Examples of such data include instructions for any application or method operated on the device 1100, contact data, phone book data, messages, pictures, videos, and so on. The memory 1102 may be implemented by any type of volatile or non-volatile storage device, or a combination thereof, such as static random-access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, a magnetic disk, or an optical disc.
The power component 1103 provides power to the various components of the device 1100. The power component 1103 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the device 1100.
The multimedia component 1104 includes a screen that provides an output interface between the device 1100 and the user. In some embodiments, the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensors may not only sense the boundary of a touch or swipe action, but also detect the duration and pressure associated with the touch or swipe action. In some embodiments, the multimedia component 1104 includes a front camera and/or a rear camera. When the device 1100 is in an operating mode, such as a photographing mode or a video mode, the front camera and/or the rear camera can receive external multimedia data. Each front camera and rear camera may be a fixed optical lens system or have focusing and optical zoom capability.
The audio component 1105 is configured to output and/or input audio signals. For example, the audio component 1105 includes a microphone (MIC), which is configured to receive external audio signals when the device 1100 is in an operating mode, such as a call mode, a recording mode, or a speech recognition mode. The received audio signal may be further stored in the memory 1102 or sent via the communication component 1108. In some embodiments, the audio component 1105 also includes a speaker for outputting audio signals.
The I/O interface 1106 provides an interface between the processing component 1101 and peripheral interface modules, which may be a keyboard, a click wheel, buttons, and the like. These buttons may include, but are not limited to, a home button, volume buttons, a start button, and a lock button.
The sensor component 1107 includes one or more sensors for providing status assessments of various aspects of the device 1100. For example, the sensor component 1107 can detect the on/off state of the device 1100 and the relative positioning of components, such as the display and keypad of the device 1100. The sensor component 1107 can also detect a change in position of the device 1100 or of a component of the device 1100, the presence or absence of user contact with the device 1100, the orientation or acceleration/deceleration of the device 1100, and a change in the temperature of the device 1100. The sensor component 1107 may include a proximity sensor, configured to detect the presence of nearby objects without any physical contact. The sensor component 1107 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor component 1107 may also include an accelerometer, a gyroscope, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 1108 is configured to facilitate wired or wireless communication between the device 1100 and other devices. The device 1100 can access a wireless network based on a communication standard, such as WiFi, 2G, or 3G, or a combination thereof. In one exemplary embodiment, the communication component 1108 receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel. In one exemplary embodiment, the communication component 1108 also includes a near-field communication (NFC) module to facilitate short-range communication. For example, the NFC module may be implemented based on radio-frequency identification (RFID) technology, Infrared Data Association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the device 1100 may be implemented by one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components, for performing the above method.
In an exemplary embodiment, there is also provided a non-transitory computer-readable storage medium including instructions, such as the memory 1102 including instructions, which are executable by the processor 1109 of the device 1100 to perform the above method. For example, the non-transitory computer-readable storage medium may be a ROM, a random-access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
When the instructions in the storage medium are executed by the processor, the device 1100 is enabled to perform a beauty processing method, including:
capturing a real-time image;
performing face recognition on the captured real-time image and determining a recognition result;
controlling a beauty processing function to be in an on state when the recognition result indicates that the real-time image includes a face that needs beauty processing; and
controlling the beauty processing function to be in an off state when the recognition result indicates that the real-time image does not include a face that needs beauty processing.
Other embodiments of the present disclosure will be readily apparent to those skilled in the art after considering the specification and practicing the invention disclosed herein. The present disclosure is intended to cover any variations, uses, or adaptations of the present disclosure that follow its general principles and include common knowledge or customary technical means in the art not disclosed herein. The specification and embodiments are to be considered exemplary only, with the true scope and spirit of the present disclosure being indicated by the following claims.
The above are merely preferred embodiments of the present disclosure and are not intended to limit the present disclosure. Any modification, equivalent replacement, improvement, and the like made within the spirit and principles of the present disclosure shall fall within the scope of protection of the present disclosure.
Claims (11)
1. A beauty processing method, characterized in that the method includes:
capturing a real-time image;
performing face recognition on the captured real-time image and determining a recognition result;
controlling a beauty processing function to be in an on state when the recognition result indicates that the real-time image includes a face that needs beauty processing; and
controlling the beauty processing function to be in an off state when the recognition result indicates that the real-time image does not include a face that needs beauty processing.
2. The method according to claim 1, characterized in that determining the recognition result includes:
when the real-time image includes a face, determining that the recognition result indicates that the real-time image includes a face that needs beauty processing;
when the real-time image does not include a face, determining that the recognition result indicates that the real-time image does not include a face that needs beauty processing.
3. The method according to claim 1, characterized in that determining the recognition result includes:
when the real-time image includes a face and the percentage of the area of the real-time image occupied by the face exceeds a preset value, determining that the recognition result indicates that the real-time image includes a face that needs beauty processing;
when the real-time image does not include a face, or the percentage of the area of the real-time image occupied by the face does not exceed the preset value, determining that the recognition result indicates that the real-time image does not include a face that needs beauty processing.
4. The method according to claim 1, characterized in that determining the recognition result includes:
when the real-time image includes a face and the face is located in a target region, determining that the recognition result indicates that the real-time image includes a face that needs beauty processing;
when the real-time image does not include a face, or the face in the real-time image is not located in the target region, determining that the recognition result indicates that the real-time image does not include a face that needs beauty processing;
wherein the target region includes a focus region and/or a user-specified region.
5. The method according to claim 4, characterized in that the method further includes:
when the real-time image includes multiple faces, performing beauty processing through the beauty processing function on the face located in the target region;
outputting the real-time image after beauty processing.
6. A beauty processing device, characterized in that the device includes:
an image capture module, configured to capture a real-time image;
a face recognition module, configured to perform face recognition on the captured real-time image and determine a recognition result;
a first control module, configured to control a beauty processing function to be in an on state when the recognition result indicates that the real-time image includes a face that needs beauty processing;
a second control module, configured to control the beauty processing function to be in an off state when the recognition result indicates that the real-time image does not include a face that needs beauty processing.
7. The device according to claim 6, characterized in that the face recognition module includes:
a first determining submodule, configured to determine, when the real-time image includes a face, that the recognition result indicates that the real-time image includes a face that needs beauty processing;
a second determining submodule, configured to determine, when the real-time image does not include a face, that the recognition result indicates that the real-time image does not include a face that needs beauty processing.
8. The device according to claim 6, characterized in that the face recognition module includes:
a third determining submodule, configured to determine, when the real-time image includes a face and the percentage of the area of the real-time image occupied by the face exceeds a preset value, that the recognition result indicates that the real-time image includes a face that needs beauty processing;
a fourth determining submodule, configured to determine, when the real-time image does not include a face, or when the percentage of the area of the real-time image occupied by the face does not exceed the preset value, that the recognition result indicates that the real-time image does not include a face that needs beauty processing.
9. The device according to claim 6, characterized in that the face recognition module includes:
a fifth determining submodule, configured to determine, when the real-time image includes a face and the face is located in a target region, that the recognition result indicates that the real-time image includes a face that needs beauty processing;
a sixth determining submodule, configured to determine, when the real-time image does not include a face, or when the face in the real-time image is not located in the target region, that the recognition result indicates that the real-time image does not include a face that needs beauty processing;
wherein the target region includes a focus region and/or a user-specified region.
10. The device according to claim 9, characterized in that the device further includes:
a processing module, configured to perform, when the real-time image includes multiple faces, beauty processing through the beauty processing function on the face located in the target region;
an image output module, configured to output the real-time image after beauty processing.
11. A beauty processing device, characterized by including:
a processor; and
a memory for storing instructions executable by the processor;
wherein the processor is configured to:
capture a real-time image;
perform face recognition on the captured real-time image and determine a recognition result;
control a beauty processing function to be in an on state when the recognition result indicates that the real-time image includes a face that needs beauty processing; and
control the beauty processing function to be in an off state when the recognition result indicates that the real-time image does not include a face that needs beauty processing.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610683353.5A CN106326849A (en) | 2016-08-17 | 2016-08-17 | Beauty processing method and device |
Publications (1)
Publication Number | Publication Date |
---|---|
CN106326849A true CN106326849A (en) | 2017-01-11 |
Family
ID=57743142
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610683353.5A Pending CN106326849A (en) | 2016-08-17 | 2016-08-17 | Beauty processing method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106326849A (en) |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103413270A (en) * | 2013-08-15 | 2013-11-27 | 北京小米科技有限责任公司 | Method and device for image processing and terminal device |
CN103841323A (en) * | 2014-02-20 | 2014-06-04 | 小米科技有限责任公司 | Shooting parameter allocation method and device and terminal device |
CN104318262A (en) * | 2014-09-12 | 2015-01-28 | 上海明穆电子科技有限公司 | Method and system for replacing skin through human face photos |
CN104732210A (en) * | 2015-03-17 | 2015-06-24 | 深圳超多维光电子有限公司 | Target human face tracking method and electronic equipment |
CN104902177A (en) * | 2015-05-26 | 2015-09-09 | 广东欧珀移动通信有限公司 | Intelligent photographing method and terminal |
CN105303523A (en) * | 2014-12-01 | 2016-02-03 | 维沃移动通信有限公司 | Image processing method and mobile terminal |
CN105721770A (en) * | 2016-01-20 | 2016-06-29 | 广东欧珀移动通信有限公司 | Shooting control method and shooting control device |
Cited By (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106851100B (en) * | 2017-01-20 | 2020-04-24 | 珠海市魅族科技有限公司 | Photo processing method and system |
CN106851100A (en) * | 2017-01-20 | 2017-06-13 | 珠海市魅族科技有限公司 | A kind of photo processing method and system |
CN107222675A (en) * | 2017-05-23 | 2017-09-29 | 维沃移动通信有限公司 | The photographic method and mobile terminal of a kind of mobile terminal |
CN107820017A (en) * | 2017-11-30 | 2018-03-20 | 广东欧珀移动通信有限公司 | Image capturing method, device, computer-readable recording medium and electronic equipment |
CN107993209A (en) * | 2017-11-30 | 2018-05-04 | 广东欧珀移动通信有限公司 | Image processing method, device, computer-readable recording medium and electronic equipment |
CN107820017B (en) * | 2017-11-30 | 2020-03-27 | Oppo广东移动通信有限公司 | Image shooting method and device, computer readable storage medium and electronic equipment |
CN108289172A (en) * | 2018-01-20 | 2018-07-17 | 深圳天珑无线科技有限公司 | Adjust the method, device and mobile terminal of shooting correlation function |
CN109561215A (en) * | 2018-12-13 | 2019-04-02 | 北京达佳互联信息技术有限公司 | Method, apparatus, terminal and the storage medium that U.S. face function is controlled |
CN110138957A (en) * | 2019-03-28 | 2019-08-16 | 西安易朴通讯技术有限公司 | Video record processing of taking pictures method and its processing system, electronic equipment |
CN111402154A (en) * | 2020-03-10 | 2020-07-10 | 北京字节跳动网络技术有限公司 | Image beautifying method and device, electronic equipment and computer readable storage medium |
CN115484386A (en) * | 2021-06-16 | 2022-12-16 | 荣耀终端有限公司 | Video shooting method and electronic equipment |
CN115484386B (en) * | 2021-06-16 | 2023-10-31 | 荣耀终端有限公司 | Video shooting method and electronic equipment |
CN113473013A (en) * | 2021-06-30 | 2021-10-01 | 展讯通信(天津)有限公司 | Display method and device for beautifying effect of image and terminal equipment |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106326849A (en) | Beauty processing method and device | |
CN106028143A (en) | Video live broadcasting method and device | |
CN104850327B (en) | The screenshot method and device of mobile terminal, electronic equipment | |
CN105117209B (en) | Using exchange method and device | |
CN105974804A (en) | Method and device for controlling equipment | |
CN106201686A (en) | Management method, device and the terminal of application | |
CN106385352A (en) | Device binding method and device | |
CN106227419A (en) | Screenshotss method and device | |
CN105246068B (en) | SIM card selection method and device | |
CN105897862A (en) | Method and apparatus for controlling intelligent device | |
CN104933419A (en) | Method and device for obtaining iris images and iris identification equipment | |
CN104219445A (en) | Method and device for adjusting shooting modes | |
CN105426079A (en) | Picture brightness adjustment method and apparatus | |
CN106453032B (en) | Information-pushing method and device, system | |
CN105138956A (en) | Face detection method and device | |
CN104407769A (en) | Picture processing method, device and equipment | |
CN105739840A (en) | Terminal and starting method of application programs in terminal | |
CN106292994A (en) | The control method of virtual reality device, device and virtual reality device | |
CN106155703A (en) | The display packing of emotional state and device | |
CN107132983A (en) | Split screen window operation method and device | |
CN106465160A (en) | Network function switching method and device | |
CN106406659A (en) | Double-open application establishing method and device | |
CN105975305A (en) | Operating system event processing method and device as well as terminal | |
CN106200682A (en) | The automatic follower method of luggage case and device, electronic equipment | |
CN105159181B (en) | The control method and device of smart machine |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
Application publication date: 20170111 |