CN106161962B - A kind of image processing method and terminal - Google Patents
- Publication number
- CN106161962B (application CN201610753658.9A)
- Authority
- CN
- China
- Prior art keywords
- image
- face
- facial image
- region
- shooting
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/57—Mechanical or electrical details of cameras or camera modules specially adapted for being embedded in other devices
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/22—Matching criteria, e.g. proximity measures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/77—Retouching; Inpainting; Scratch removal
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/80—Camera processing pipelines; Components thereof
- H04N23/815—Camera processing pipelines; Components thereof for controlling the resolution by using a single image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10004—Still image; Photographic image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30196—Human being; Person
- G06T2207/30201—Face
Abstract
An embodiment of the present invention provides an image processing method. The method includes: when a shutter trigger is detected, obtaining a first face image from the preview image; after the shutter is triggered, performing face detection on the captured image to obtain a second face image; judging, according to the second face image, whether the face detection succeeded; if not, determining the face region in the captured image according to the first face image; and performing beautification processing on the face region in the captured image. An embodiment of the present invention further provides a terminal. Through the embodiments of the present invention, the beautification effect can be improved even when the face in the captured image is blurred.
Description
Technical field
The present invention relates to the technical field of image processing, and in particular to an image processing method and terminal.
Background technology
With the rapid development of information technology, terminals (such as mobile phones and tablet computers) have become increasingly common, and photography, as an important application of terminals, has become a selling point promoted by major terminal manufacturers. At present, to achieve a better photographic effect, beautification technology is widely pursued in photography applications. Beautification technology depends on face detection technology; specifically, a face must first be detected (face detection needs to locate three positions, i.e., the two eyes and the lips, before the face can be accurately recognized), and the beautification operation is then performed on the detected face. Therefore, the following situation exists: a face can be detected in the preview image, but for some reason during shooting (such as hand shake), the captured image is blurred. In this case, the face cannot be detected, and therefore traditional beautification technology cannot take effect on the captured image, reducing the user experience.
Summary of the invention
Embodiments of the present invention provide an image processing method and terminal, which can improve the beautification effect even when the face in the captured image is blurred.
A first aspect of the embodiments of the present invention provides an image processing method, including:
when a shutter trigger is detected, obtaining a first face image from a preview image;
after the shutter is triggered, performing face detection on a captured image to obtain a second face image;
judging, according to the second face image, whether the face detection succeeded;
if not, determining a face region in the captured image according to the first face image;
performing beautification processing on the face region in the captured image.
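The five steps of the first aspect form a simple detect-then-fall-back pipeline. As a rough illustration only — the function names, the callable-injection style, and the return conventions are assumptions made for this sketch, not part of the patent — the control flow can be written as:

```python
def process_capture(preview, captured, detect_face, detection_ok,
                    locate_from_preview, beautify):
    """Order of the claimed steps; every behaviour is injected as a callable."""
    first_face = detect_face(preview)       # step 1: face image from the preview
    second_face = detect_face(captured)     # step 2: detection on the captured image
    if detection_ok(second_face):           # step 3: judge whether detection succeeded
        region = second_face
    else:                                   # step 4: fall back to the preview face
        region = locate_from_preview(first_face, captured)
    return beautify(captured, region)       # step 5: beautify the face region

# Simulate a capture where detection fails, so the preview face is used instead.
result = process_capture(
    "preview", "captured",
    detect_face=lambda img: "face-in-" + img,
    detection_ok=lambda face: face == "face-in-preview",
    locate_from_preview=lambda face, img: "region-from-" + face,
    beautify=lambda img, region: (img, region),
)
print(result)  # ('captured', 'region-from-face-in-preview')
```

Because the detectors are injected, the same skeleton covers both the success path (region taken from the capture directly) and the fallback path claimed here.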
A second aspect of the embodiments of the present invention provides a terminal, including:
an acquiring unit, configured to obtain a first face image from a preview image when a shutter trigger is detected;
a detection unit, configured to perform face detection on a captured image after the shutter is triggered, to obtain a second face image;
a judging unit, configured to judge, according to the second face image detected by the detection unit, whether the face detection succeeded;
a determination unit, configured to, if the judging result of the judging unit is no, determine a face region in the captured image according to the first face image obtained by the acquiring unit; and
a processing unit, configured to perform beautification processing on the face region in the captured image determined by the determination unit.
A third aspect of the embodiments of the present invention provides a terminal, including a processor and a memory, wherein the processor performs some or all of the steps of the image processing method described in the first aspect by calling code or instructions in the memory.
Implementing the embodiments of the present invention has the following beneficial effects:
Through the embodiments of the present invention, when a shutter trigger is detected, a first face image is obtained from the preview image; after the shutter is triggered, face detection is performed on the captured image to obtain a second face image; whether the face detection succeeded is judged according to the second face image; if not, the face region in the captured image is determined according to the first face image, and beautification processing is performed on that face region. Thus, when detection of the second face image fails, the first face image can be used to determine the face region in the captured image, and beautification processing can be performed on that region, achieving a better beautification effect.
Description of the drawings
To describe the technical solutions in the embodiments of the present invention more clearly, the accompanying drawings required for describing the embodiments are briefly introduced below. Apparently, the accompanying drawings in the following description show merely some embodiments of the present invention, and a person of ordinary skill in the art may derive other drawings from these accompanying drawings without creative effort.
Fig. 1 is a schematic flowchart of a first embodiment of an image processing method according to an embodiment of the present invention;
Fig. 2 is a schematic flowchart of a second embodiment of an image processing method according to an embodiment of the present invention;
Fig. 3a is a schematic structural diagram of a first embodiment of a terminal according to an embodiment of the present invention;
Fig. 3b is a schematic structural diagram of the determination unit of the terminal described in Fig. 3a according to an embodiment of the present invention;
Fig. 3c is a schematic structural diagram of the processing unit of the terminal described in Fig. 3a according to an embodiment of the present invention;
Fig. 4 is a schematic structural diagram of a second embodiment of a terminal according to an embodiment of the present invention.
Specific embodiment
Embodiments of the present invention provide an image processing method and terminal, which can improve the beautification effect even when the face in the captured image is blurred.
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings in the embodiments of the present invention. Apparently, the described embodiments are merely some rather than all of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
The terms "first", "second", "third", "fourth", and so on in the specification, claims, and accompanying drawings are used to distinguish different objects rather than to describe a particular order. In addition, the terms "including" and "having" and any variations thereof are intended to cover a non-exclusive inclusion. For example, a process, method, system, product, or device that includes a series of steps or units is not limited to the listed steps or units, but optionally further includes steps or units that are not listed, or optionally further includes other steps or units inherent to the process, method, product, or device.
"An embodiment" mentioned herein means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the present invention. The appearance of the phrase in various places in the specification does not necessarily refer to the same embodiment, nor does it refer to an independent or alternative embodiment mutually exclusive with other embodiments. A person skilled in the art understands, explicitly and implicitly, that the embodiments described herein may be combined with other embodiments.
The terminal described in the embodiments of the present invention may include a smartphone (such as an Android phone, an iOS phone, or a Windows Phone), a tablet computer, a palmtop computer, a laptop, a mobile internet device (MID, Mobile Internet Devices), or a wearable device. The above terminals are merely examples and are not exhaustive; the terminal includes but is not limited to the above.
Referring to Fig. 1, Fig. 1 is a schematic flowchart of a first embodiment of an image processing method according to an embodiment of the present invention. The image processing method described in this embodiment includes the following steps:
101. When a shutter trigger is detected, obtain a first face image from the preview image.
The terminal may trigger the shutter when receiving a shooting instruction. When the shutter is triggered, face detection may be performed on the preview image to obtain the first face image. Face detection technology belongs to the prior art and is not described in detail here. Of course, when the user opens the front camera to take a selfie, the preview image is displayed, and face detection may be performed on the preview image during the whole period from when the preview image is first displayed until the shooting instruction is received. The reason for obtaining the first face image from the preview image at the moment the shutter is triggered is that the captured image, obtained after the shutter is triggered, is close in time to the preview image at the moment of shutter triggering: within this short period, the position the face moves is very small, and the probability of face movement is low, so the face can essentially be regarded as unchanged.
Optionally, the first face image may include the positions of the two eyes, the lip position, the face-shape position, the facial contour, skin color information (for example, face color, spot positions), and so on. Of course, the first face image may also include the position of each pixel value.
102. After the shutter is triggered, perform face detection on the captured image to obtain a second face image.
After the shutter is triggered, the captured image, i.e., the image obtained after shooting, can be acquired, and face detection can be performed on the captured image to obtain the second face image. Since the preview image and the captured image are images at different moments, the ambient light and the position of the person may change between the two moments; therefore, the two images differ.
Optionally, while performing step 102, if face detection is performed on the captured image but no face is detected, then, since a face exists in the preview image, the image at the position in the captured image corresponding to the position of the first face image may be marked as the second face image.
103. Judge, according to the second face image, whether the face detection succeeded.
Optionally, step 103 may be implemented as follows:
extract the facial contour in the second face image and judge whether the facial contour is complete; if not, confirm that the face detection failed. The terminal may perform contour extraction on the second face image. When the extracted facial contour is complete, the face detection in step 103 may be considered successful; when the extracted facial contour is incomplete, the face detection is confirmed to have failed. For example, if the user shakes during shooting, only part of the face appears in the captured image, or the captured image is blurred, so that only part of the face is detected (the extracted facial contour is also partial), or even no face can be detected.
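One way to make the completeness judgment concrete is to require that the three positions named in the background (the two eyes and the lips) were all located. In this sketch the landmark names and the dictionary layout are assumed purely for illustration; a missing landmark is treated as an incomplete contour:

```python
# The three positions the background section says detection must locate.
REQUIRED_LANDMARKS = {"left_eye", "right_eye", "lips"}

def contour_complete(detected):
    """Treat detection as successful only if every required landmark was found.

    `detected` maps landmark name -> (x, y) or None; this layout is an
    illustrative assumption, not part of the patent.
    """
    return all(detected.get(name) is not None for name in REQUIRED_LANDMARKS)

# A blurred capture where the lips were not located fails the check.
print(contour_complete({"left_eye": (40, 52), "right_eye": (88, 50), "lips": None}))      # False
print(contour_complete({"left_eye": (40, 52), "right_eye": (88, 50), "lips": (64, 110)}))  # True
```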
Optionally, step 103 may also be implemented as follows:
judge, according to the second face image, whether ghosting exists in the captured image; if so, confirm that the face detection failed. The terminal may detect whether the blurred area of the captured image exceeds a preset area; if so, ghosting may be considered to exist in the captured image. The preset area may be defined according to the total area of the captured image, for example, 5% of the total area. Of course, the blurred area may be determined by an existing image blur detection method, which is not described in detail here.
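The blurred-area test can be approximated by counting locally flat pixels. In this sketch, a pixel whose 4-neighbour Laplacian magnitude falls below a threshold is counted as blurred, and ghosting is declared when the blurred area exceeds the 5% figure mentioned above; the Laplacian cue and the threshold value are illustrative assumptions, not the patent's method:

```python
def blurred_fraction(img, sharp_thresh=10):
    """Fraction of interior pixels whose 4-neighbour Laplacian magnitude is
    below `sharp_thresh`, i.e. locally flat/blurred. `img` is a 2-D list of
    grayscale values; metric and threshold are illustrative."""
    h, w = len(img), len(img[0])
    blurred = total = 0
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            lap = img[y-1][x] + img[y+1][x] + img[y][x-1] + img[y][x+1] - 4 * img[y][x]
            total += 1
            if abs(lap) < sharp_thresh:
                blurred += 1
    return blurred / total

def has_ghosting(img, area_ratio=0.05):
    # Declare ghosting when the blurred area exceeds 5% of the total, per the text.
    return blurred_fraction(img) > area_ratio

flat = [[128] * 8 for _ in range(8)]                                   # featureless: all blurred
checker = [[255 if (x + y) % 2 else 0 for x in range(8)] for y in range(8)]  # high detail
print(has_ghosting(flat))     # True
print(has_ghosting(checker))  # False
```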
Of course, if it is judged in step 103 according to the second face image that the face detection failed, step 104 is performed; if it succeeded, beautification is performed directly on the detected face.
104. If not, determine the face region in the captured image according to the first face image.
Optionally, if the face detection failed, the face region in the captured image may be determined according to the first face image.
Specifically, in step 104, determining the face region in the captured image according to the first face image may be implemented in the following manner:
41) using the first face image, determining a target region in the captured image with the highest similarity to the first face image;
42) determining the face region in the captured image according to the target region and the facial contour of the first face image.
The first face image may be compared with the captured image, and the structural similarity (structural similarity index, SSIM) may be used for the comparison, to determine the target region in the captured image with the highest similarity to the first face image. The SSIM is an index for measuring the similarity of two images. In this way, the target region in the captured image with the highest similarity to the first face image can be obtained. Then, some facial features of the target region (such as the positions of the two eyes, the lip position, the nose shape, and the eyebrow positions) can be mapped to their corresponding positions in the first face image, and the facial contour of the first face image can be extracted; of course, the corresponding positions of these facial features are contained within that facial contour. The face region can then be marked in the captured image according to the facial contour of the first face image, the corresponding positions of the above facial features, and the positions of those features in the target region. That is: first, M features are marked in the target region; then, the M features in the first face image corresponding to those M features are found; next, the facial contour of the first face image is extracted; finally, the face region in the captured image is determined according to the facial contour of the first face image, its M features, and the M features of the target region, where M is an integer greater than 0.
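A brute-force version of step 41 can be sketched as a sliding-window search scored with a simplified SSIM, computed globally over each window rather than with the usual Gaussian weighting. The constants follow the common (k·L)² choice for 8-bit images; everything else here is an illustrative assumption:

```python
def ssim(a, b, c1=6.5025, c2=58.5225):
    """Global SSIM between two equal-sized grayscale patches given as flat lists.
    c1/c2 are the usual (k*L)^2 constants for L=255; scoring the whole patch at
    once is a simplification of the windowed SSIM."""
    n = len(a)
    mu_a, mu_b = sum(a) / n, sum(b) / n
    var_a = sum((x - mu_a) ** 2 for x in a) / n
    var_b = sum((x - mu_b) ** 2 for x in b) / n
    cov = sum((x - mu_a) * (y - mu_b) for x, y in zip(a, b)) / n
    return ((2 * mu_a * mu_b + c1) * (2 * cov + c2)) / \
           ((mu_a ** 2 + mu_b ** 2 + c1) * (var_a + var_b + c2))

def best_match(image, patch_h, patch_w, face):
    """Slide a patch_h x patch_w window over `image` (2-D list) and return the
    top-left corner whose window scores the highest SSIM against `face`
    (row-major flat list) -- the 'target region' of step 41."""
    best, best_pos = -2.0, None
    for y in range(len(image) - patch_h + 1):
        for x in range(len(image[0]) - patch_w + 1):
            window = [image[y + dy][x + dx] for dy in range(patch_h) for dx in range(patch_w)]
            score = ssim(window, face)
            if score > best:
                best, best_pos = score, (y, x)
    return best_pos

# Embed a distinctive 2x2 face patch in a flat image; the search recovers it.
face = [10, 200, 200, 10]
img = [[50] * 6 for _ in range(6)]
img[3][2], img[3][3], img[4][2], img[4][3] = 10, 200, 200, 10
print(best_match(img, 2, 2, face))  # (3, 2)
```

A production implementation would instead use a windowed SSIM (or template matching) from an image library, but the ranking idea is the same.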
105. Perform beautification processing on the face region in the captured image.
The terminal may perform beautification processing on the face region in the captured image; the beautification processing may refer to an existing beautification algorithm, and the non-face region in the captured image may be left without beautification processing. Of course, image enhancement may also be performed on the non-face region (to make the background clearer), or background blurring may be applied (to highlight the face).
Optionally, the terminal may also perform beautification processing on the face region according to the first face image. For example, the beautification algorithm used for the first face image (or the parameters used in that algorithm) may be used to perform beautification processing on the face region. For another example, the first face image may be divided into N different regions, and the face region may be divided into N different regions in the same partitioning manner; each of the N regions in the first face image is compared with the corresponding one of the N regions in the face region. If M regions in the first face image are clearer than their corresponding regions in the face region, those M regions may replace the corresponding regions in the face region, where N is an integer greater than 1, and M is an integer greater than or equal to 0 and less than or equal to N.
Further, performing beautification processing on the face region in the captured image according to the first face image may include the following steps:
51) determining the unclear region in the face region;
52) replacing the unclear region with the region in the first face image corresponding to the unclear region, to obtain a new face region;
53) performing beautification processing on the new face region.
In step 51, the face region may be divided into multiple regions, image quality evaluation may be performed on each region to obtain multiple image quality evaluation values, and a threshold, i.e., a first threshold, may be set: a region whose value exceeds the first threshold may be considered clear, and a region whose value is less than or equal to the first threshold may be considered blurred. Image quality evaluation on each region may be performed using one or more image quality evaluation indices, such as average gray level, entropy, edge preservation, and mean square deviation. In this way, the multiple image quality evaluation values may each be compared with the first threshold, and the regions whose image quality evaluation values are below the first threshold are the unclear regions. In step 52, the unclear region may be replaced with the corresponding region in the first face image, so that the face region in the captured image is merged with the region from the first face image to obtain a new face region; beautification processing may then be performed on the new face region through step 53. Of course, the beautification processing in step 53 may refer to a beautification algorithm in the prior art. Further, the new face region after beautification processing may be smoothed.
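Steps 51 and 52 can be sketched using the mean-square-deviation index from the list above as the quality measure: tile the face region, score each tile by its variance, and pull the co-located tile from the preview face whenever the score is at or below the threshold. The tiling scheme, the even-division assumption, and the threshold value are illustrative choices, not the patent's:

```python
def variance(tile):
    # Mean-square-deviation quality index (higher = more detail/clearer).
    flat = [p for row in tile for p in row]
    mu = sum(flat) / len(flat)
    return sum((p - mu) ** 2 for p in flat) / len(flat)

def merge_unclear_tiles(face, preview, tile, threshold):
    """Split `face` (2-D list) into tile x tile blocks; any block whose variance
    is at or below `threshold` is replaced by the co-located block from
    `preview`, yielding the 'new face region'. Dimensions are assumed to divide
    evenly by `tile`."""
    merged = [row[:] for row in face]
    for ty in range(0, len(face), tile):
        for tx in range(0, len(face[0]), tile):
            block = [row[tx:tx + tile] for row in face[ty:ty + tile]]
            if variance(block) <= threshold:
                for dy in range(tile):
                    merged[ty + dy][tx:tx + tile] = preview[ty + dy][tx:tx + tile]
    return merged

# Left half of the captured face is flat (blurred); the preview supplies detail.
preview = [[(x + y) % 2 * 255 for x in range(4)] for y in range(4)]
face = [[100, 100] + row[2:] for row in preview]
merged = merge_unclear_tiles(face, preview, 2, 0)
print(merged == preview)  # True
```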
Optionally, in step 52, replacing the unclear region with the region in the first face image corresponding to the unclear region to obtain a new face region may also be implemented in the following manner:
obtain the region in the first face image corresponding to the unclear region, and compare the clarity of the unclear region with the clarity of its corresponding region in the first face image. If the clarity of the unclear region is lower than the clarity of its corresponding region in the first face image, replace the unclear region with that corresponding region; if the clarity of the unclear region is greater than or equal to the clarity of its corresponding region in the first face image, do not perform the replacement.
It should be noted that the embodiments of the present invention are mainly directed at photographing with a beautification camera. Specifically, for example, when the front camera is used for a selfie, if the user's hand shakes or the head shakes during shooting, ghosting occurs and the face detection fails; or, during shooting, a sudden change of ambient light may also cause the face detection to fail. In these cases, the above method provided by the embodiments of the present invention is used to perform beautification processing. Of course, the rear camera may also be used for selfies or shooting; that is, a camera rotation technique may be used, the front camera and the rear camera may be switched, or a person may be shot directly with the rear camera. When a face is detected, the above method provided by the embodiments of the present invention may also be used to perform beautification processing on the face.
Of course, the above face image is not limited to selfies or photographing people; the embodiments of the present invention may also apply beautification processing to other objects. For example, when a cup placed on a desk is recognized, beautification processing may be performed on it according to the embodiments of the present invention.
Through the embodiments of the present invention, when a shutter trigger is detected, the first face image is obtained from the preview image; after the shutter is triggered, face detection is performed on the captured image to obtain the second face image; whether the face detection succeeded is judged according to the second face image; if not, the face region in the captured image is determined according to the first face image, and beautification processing is performed on that face region. Thus, when detection of the second face image fails, the first face image can be used to determine the face region in the captured image, and beautification processing can be performed on that region, achieving a better beautification effect.
Consistent with the above, referring to Fig. 2, Fig. 2 is a schematic flowchart of a second embodiment of an image processing method according to an embodiment of the present invention. The image processing method described in this embodiment includes the following steps:
201. When a shutter trigger is detected, obtain a first face image from the preview image.
202. Judge whether the first face image is complete.
Optionally, if the first face image is judged to be incomplete, the subsequent steps are not performed. However, since the shutter has been triggered, the captured image can still be obtained; only, no beautification processing will be performed on the captured image.
Whether the first face image is complete may be judged, for example, by extracting the facial contour in the first face image and judging whether that facial contour is complete; if not, the first face image is confirmed to be incomplete. The terminal may perform contour extraction on the first face image: when the extracted facial contour is complete, the first face image may be considered complete; when the extracted facial contour is incomplete, the first face image is confirmed to be incomplete. For example, if the user shakes while taking a photo, only part of the face appears in the captured image, or the captured image is blurred and only part of the face is detected (the extracted facial contour is also partial). For another example, it may be judged, according to the first face image, whether ghosting exists in the captured image; if so, the first face image is confirmed to be incomplete. The terminal may detect whether the blurred area of the captured image exceeds a preset area; if so, ghosting may be considered to exist in the captured image. The preset area may be defined according to the total area of the captured image, for example, 5% of the total area. Of course, the blurred area may be determined by an existing image blur detection method, which is not described in detail here.
203. If so, after the shutter is triggered, perform face detection on the captured image to obtain a second face image.
204. Judge, according to the second face image, whether the face detection succeeded.
205. If not, determine the face region in the captured image according to the first face image.
206. Perform beautification processing on the face region in the captured image.
For the steps not described above, refer to the specific descriptions of the corresponding steps in the image processing method described in Fig. 1.
Optionally, if the first face image is incomplete, the image processing method adopted by the embodiments of the present invention cannot perform beautification processing on the captured image when the second face image is blurred. Therefore, in the embodiments of the present invention, when the face image in the preview image is already incomplete, beautification processing of the captured image is abandoned, which can save the terminal's power. In addition, when the face in the preview image is complete but the face in the captured image is incomplete, the face in the captured image can be determined according to the face in the preview image, and beautification processing can be performed on the face in the captured image, improving the beautification effect.
Consistent with the above, the virtual apparatus and physical apparatus for implementing the above image processing method are described in detail below:
Referring to Fig. 3a, Fig. 3a is a schematic structural diagram of a first embodiment of a terminal according to an embodiment of the present invention. The terminal described in this embodiment includes an acquiring unit 301, a detection unit 302, a judging unit 303, a determination unit 304, and a processing unit 305, specifically as follows:
the acquiring unit 301 is configured to obtain a first face image from the preview image when a shutter trigger is detected;
the detection unit 302 is configured to perform face detection on the captured image after the shutter is triggered, to obtain a second face image;
the judging unit 303 is configured to judge, according to the second face image detected by the detection unit 302, whether the face detection succeeded;
the determination unit 304 is configured to, if the judging result of the judging unit 303 is no, determine the face region in the captured image according to the first face image obtained by the acquiring unit 301;
the processing unit 305 is configured to perform beautification processing on the face region in the captured image determined by the determination unit 304.
Optionally, the judging unit 303 is specifically configured to:
extract a face contour in the second face image and judge whether the face contour is complete, and if not, confirm that the face detection fails;
or,
judge, according to the second face image, whether ghosting exists in the shot image, and if so, confirm that the face detection fails.
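The two failure conditions named above (incomplete contour, ghosting) can be sketched as predicates. Both implementations below are illustrative assumptions: the patent does not specify how contour completeness or ghosting is measured, and the 68-point landmark count is merely a common convention used here as a placeholder.

```python
def contour_complete(contour_points, expected_points=68):
    """Treat a detected face contour as complete when it yields the full set
    of landmark points (68 is a common landmark count, assumed here)."""
    return contour_points is not None and len(contour_points) >= expected_points

def detection_succeeded(contour_points, ghosting_detected):
    # Face detection fails if the contour is incomplete OR ghosting is present,
    # mirroring the two alternative failure tests of the judging unit.
    return contour_complete(contour_points) and not ghosting_detected
```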
Optionally, as shown in Fig. 3b, the determination unit 304 of the terminal described in Fig. 3a may include a first determining module 3041 and a second determining module 3042, as follows:
the first determining module 3041 is configured to determine, using the first face image, a target region in the shot image that has the highest similarity to the first face image; and
the second determining module 3042 is configured to determine the face region in the shot image according to the target region and the face contour of the first face image.
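One way to realize the first determining module's "highest-similarity target region" is an exhaustive template match of the preview face against the shot image. The patent does not fix a similarity measure, so the sum-of-squared-differences search below is an illustrative assumption (a real implementation would more likely use an optimized routine such as OpenCV's template matching).

```python
import numpy as np

def best_match_region(shot, template):
    """Return (row, col) of the top-left corner of the region in `shot` with
    the highest similarity to `template`, where similarity is measured as the
    lowest sum of squared differences (SSD)."""
    th, tw = template.shape
    sh, sw = shot.shape
    best, best_pos = None, (0, 0)
    for r in range(sh - th + 1):
        for c in range(sw - tw + 1):
            window = shot[r:r + th, c:c + tw].astype(float)
            ssd = np.sum((window - template.astype(float)) ** 2)
            if best is None or ssd < best:
                best, best_pos = ssd, (r, c)
    return best_pos
```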
Optionally, the processing unit 305 is specifically configured to:
perform facial beautification on the face region in the shot image according to the first face image.
Further optionally, as shown in Fig. 3c, the processing unit 305 of the terminal described in Fig. 3a or Fig. 3b may include a third determining module 3051, a replacement module 3052, and a processing module 3053, as follows:
the third determining module 3051 is configured to determine an unclear region in the face region;
the replacement module 3052 is configured to replace the unclear region with a region in the first face image corresponding to the unclear region, to obtain a new face region; and
the processing module 3053 is configured to perform facial beautification on the new face region.
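The unclear-region replacement described above can be sketched block-wise: score each block of the face region for sharpness and, where the score is low, copy the aligned block from the preview face image. The gradient-energy sharpness proxy, the block size, and the threshold below are all illustrative choices, not mandated by the patent.

```python
import numpy as np

def replace_unclear_blocks(face, preview_face, block=4, threshold=10.0):
    """Split the face region into blocks; where a block's gradient energy
    falls below `threshold` (treated as 'unclear'), copy the corresponding
    block from the preview face image, yielding the new face region."""
    out = face.astype(float).copy()
    h, w = face.shape
    for r in range(0, h - block + 1, block):
        for c in range(0, w - block + 1, block):
            patch = out[r:r + block, c:c + block]
            # Simple sharpness proxy: total energy of row and column differences.
            energy = (np.abs(np.diff(patch, axis=0)).sum()
                      + np.abs(np.diff(patch, axis=1)).sum())
            if energy < threshold:
                out[r:r + block, c:c + block] = preview_face[r:r + block, c:c + block]
    return out
```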
Optionally, the judging unit 303 is further specifically configured to: after the acquiring unit 301 acquires the first face image in the preview image, judge whether the first face image is complete; if a judging result of the judging unit 303 is affirmative, the detection unit 302 performs face detection on the shot image to obtain the second face image.
With the terminal described in the embodiments of the present invention, a first face image in a preview image can be acquired when a shutter trigger is detected; after the shutter is triggered, face detection is performed on a shot image to obtain a second face image; whether the face detection succeeds is judged according to the second face image; if not, the face region in the shot image is determined according to the first face image, and facial beautification is performed on that face region. Thus, when detection of the second face image fails, the first face image can be used to determine the face region in the shot image and to beautify it, achieving a better beautification effect.
Referring to Fig. 4, which is a schematic structural diagram of a second embodiment of a terminal provided by an embodiment of the present invention, the terminal described in this embodiment includes: at least one input device 1000; at least one output device 2000; at least one processor 3000, such as a CPU; and a memory 4000. The input device 1000, the output device 2000, the processor 3000, and the memory 4000 are connected by a bus 5000.
The input device 1000 may specifically be a touch panel, a physical button, or a mouse.
The output device 2000 may specifically be a display screen.
The memory 4000 may be a high-speed RAM memory or a non-volatile memory, such as a magnetic disk memory. The memory 4000 is configured to store a set of program codes, and the input device 1000, the output device 2000, and the processor 3000 are configured to call the program codes stored in the memory 4000 to perform the following operations:
The processor 3000 is configured to:
when a shutter trigger is detected, acquire a first face image in a preview image;
after the shutter is triggered, perform face detection on a shot image to obtain a second face image;
judge, according to the second face image, whether the face detection succeeds;
if not, determine a face region in the shot image according to the first face image; and
perform facial beautification on the face region in the shot image.
Optionally, the processor 3000 judging, according to the second face image, whether the face detection succeeds includes:
extracting a face contour in the second face image and judging whether the face contour is complete, and if not, confirming that the face detection fails;
or,
judging, according to the second face image, whether ghosting exists in the shot image, and if so, confirming that the face detection fails.
Optionally, the processor 3000 determining the face region in the shot image according to the first face image includes:
determining, using the first face image, a target region in the shot image that has the highest similarity to the first face image; and
determining the face region in the shot image according to the target region and the face contour of the first face image.
Optionally, the processor 3000 performing facial beautification on the face region in the shot image includes:
performing facial beautification on the face region in the shot image according to the first face image.
Optionally, the processor 3000 performing facial beautification on the face region in the shot image according to the first face image includes:
determining an unclear region in the face region;
replacing the unclear region with a region in the first face image corresponding to the unclear region, to obtain a new face region; and
performing facial beautification on the new face region.
Optionally, after acquiring the first face image in the preview image, the processor 3000 is further specifically configured to:
judge whether the first face image is complete; and if so, perform the step of performing face detection on the shot image to obtain the second face image.
An embodiment of the present invention further provides a computer storage medium, where the computer storage medium may store a program, and when the program is executed, some or all of the steps of any image processing method described in the foregoing method embodiments are performed.
Although the present invention has been described herein in conjunction with various embodiments, in the process of implementing the claimed invention, those skilled in the art can, by studying the drawings, the disclosure, and the appended claims, understand and effect other variations of the disclosed embodiments. In the claims, the word "comprising" does not exclude other components or steps, and "a" or "an" does not exclude the plural. A single processor or other unit may fulfill several functions recited in the claims. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage.
Those skilled in the art will understand that embodiments of the present invention may be provided as a method, an apparatus (device), or a computer program product. Therefore, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware. Moreover, the present invention may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to magnetic disk memories, CD-ROMs, and optical memories) containing computer-usable program code. The computer program may be stored in or distributed on a suitable medium, provided together with other hardware or as part of hardware, or distributed in other forms, such as via the Internet or other wired or wireless telecommunication systems.
The present invention is described with reference to flowcharts and/or block diagrams of the method, apparatus (device), and computer program product according to embodiments of the present invention. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks in the flowcharts and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded processor, or another programmable data processing device to produce a machine, such that the instructions executed by the processor of the computer or the other programmable data processing device produce an apparatus for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or another programmable data processing device to work in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction apparatus, where the instruction apparatus implements the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer or another programmable data processing device, such that a series of operational steps are performed on the computer or the other programmable device to produce computer-implemented processing, whereby the instructions executed on the computer or the other programmable device provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
Although the present invention has been described with reference to specific features and embodiments, it is apparent that various modifications and combinations can be made without departing from the spirit and scope of the present invention. Accordingly, the specification and drawings are merely exemplary illustrations of the present invention as defined by the appended claims, and are deemed to cover any and all modifications, variations, combinations, or equivalents within the scope of the present invention. Obviously, those skilled in the art can make various changes and modifications to the present invention without departing from its spirit and scope. Thus, if these modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalent technologies, the present invention is also intended to include them.
Claims (11)
1. An image processing method, comprising:
when a shutter trigger is detected, acquiring a first face image in a preview image;
after the shutter is triggered, performing face detection on a shot image to obtain a second face image;
judging, according to the second face image, whether the face detection succeeds;
if not, determining a face region in the shot image according to the first face image; and
performing facial beautification on the face region in the shot image;
wherein, after the acquiring of the first face image in the preview image, the method further comprises:
judging whether the first face image is complete, and if so, performing the step of performing face detection on the shot image to obtain the second face image.
2. The method according to claim 1, wherein the judging, according to the second face image, whether the face detection succeeds comprises:
extracting a face contour in the second face image and judging whether the face contour is complete, and if not, confirming that the face detection fails;
or,
judging, according to the second face image, whether ghosting exists in the shot image, and if so, confirming that the face detection fails.
3. The method according to claim 2, wherein the determining the face region in the shot image according to the first face image comprises:
determining, using the first face image, a target region in the shot image that has the highest similarity to the first face image; and
determining the face region in the shot image according to the target region and the face contour of the first face image.
4. The method according to any one of claims 1 to 3, wherein the performing facial beautification on the face region in the shot image comprises:
performing facial beautification on the face region in the shot image according to the first face image.
5. The method according to claim 4, wherein the performing facial beautification on the face region in the shot image according to the first face image comprises:
determining an unclear region in the face region;
replacing the unclear region with a region in the first face image corresponding to the unclear region, to obtain a new face region; and
performing facial beautification on the new face region.
6. A terminal, comprising:
an acquiring unit, configured to, when a shutter trigger is detected, acquire a first face image in a preview image;
a detection unit, configured to, after the shutter is triggered, perform face detection on a shot image to obtain a second face image;
a judging unit, configured to judge, according to the second face image obtained by the detection unit, whether the face detection succeeds;
a determination unit, configured to, if a judging result of the judging unit is negative, determine a face region in the shot image according to the first face image acquired by the acquiring unit; and
a processing unit, configured to perform facial beautification on the face region in the shot image determined by the determination unit;
wherein the judging unit is further specifically configured to:
after the acquiring unit acquires the first face image in the preview image, judge whether the first face image is complete; and if a judging result of the judging unit is affirmative, the detection unit performs face detection on the shot image to obtain the second face image.
7. The terminal according to claim 6, wherein the judging unit is specifically configured to:
extract a face contour in the second face image and judge whether the face contour is complete, and if not, confirm that the face detection fails;
or,
judge, according to the second face image, whether ghosting exists in the shot image, and if so, confirm that the face detection fails.
8. The terminal according to claim 7, wherein the determination unit comprises:
a first determining module, configured to determine, using the first face image, a target region in the shot image that has the highest similarity to the first face image; and
a second determining module, configured to determine the face region in the shot image according to the target region and the face contour of the first face image.
9. The terminal according to any one of claims 6 to 8, wherein the processing unit is specifically configured to:
perform facial beautification on the face region in the shot image according to the first face image.
10. The terminal according to claim 9, wherein the processing unit comprises:
a third determining module, configured to determine an unclear region in the face region;
a replacement module, configured to replace the unclear region with a region in the first face image corresponding to the unclear region, to obtain a new face region; and
a processing module, configured to perform facial beautification on the new face region.
11. A terminal, comprising:
a processor and a memory; wherein the processor calls codes or instructions in the memory to perform the method according to any one of claims 1 to 5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610753658.9A CN106161962B (en) | 2016-08-29 | 2016-08-29 | A kind of image processing method and terminal |
Publications (2)
Publication Number | Publication Date |
---|---|
CN106161962A CN106161962A (en) | 2016-11-23 |
CN106161962B true CN106161962B (en) | 2018-06-29 |
Family
ID=57344292
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610753658.9A Active CN106161962B (en) | 2016-08-29 | 2016-08-29 | A kind of image processing method and terminal |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106161962B (en) |
Families Citing this family (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2018227349A1 (en) * | 2017-06-12 | 2018-12-20 | 美的集团股份有限公司 | Control method, controller, intelligent mirror and computer readable storage medium |
CN107295252B (en) | 2017-06-16 | 2020-06-05 | Oppo广东移动通信有限公司 | Focusing area display method and device and terminal equipment |
CN107707815B (en) * | 2017-09-26 | 2019-10-15 | 北京金山安全软件有限公司 | Image processing method and device, electronic equipment and storage medium |
CN107909542A (en) * | 2017-11-30 | 2018-04-13 | 广东欧珀移动通信有限公司 | Image processing method, device, computer-readable recording medium and electronic equipment |
CN108009999A (en) * | 2017-11-30 | 2018-05-08 | 广东欧珀移动通信有限公司 | Image processing method, device, computer-readable recording medium and electronic equipment |
CN108513068B (en) * | 2018-03-30 | 2021-03-02 | Oppo广东移动通信有限公司 | Image selection method and device, storage medium and electronic equipment |
CN108734754B (en) * | 2018-05-28 | 2022-05-06 | 北京小米移动软件有限公司 | Image processing method and device |
CN110766606B (en) * | 2019-10-29 | 2023-09-26 | 维沃移动通信有限公司 | Image processing method and electronic equipment |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103533244A (en) * | 2013-10-21 | 2014-01-22 | 深圳市中兴移动通信有限公司 | Shooting device and automatic visual effect processing shooting method thereof |
CN103841324A (en) * | 2014-02-20 | 2014-06-04 | 小米科技有限责任公司 | Shooting processing method and device and terminal device |
CN103885706A (en) * | 2014-02-10 | 2014-06-25 | 广东欧珀移动通信有限公司 | Method and device for beautifying face images |
CN104159032A (en) * | 2014-08-20 | 2014-11-19 | 广东欧珀移动通信有限公司 | Method and device of adjusting facial beautification effect in camera photographing in real time |
CN104902177A (en) * | 2015-05-26 | 2015-09-09 | 广东欧珀移动通信有限公司 | Intelligent photographing method and terminal |
CN104967774A (en) * | 2015-06-05 | 2015-10-07 | 广东欧珀移动通信有限公司 | Dual-camera shooting control method and terminal |
CN105787888A (en) * | 2014-12-23 | 2016-07-20 | 联芯科技有限公司 | Human face image beautifying method |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP5547730B2 (en) * | 2008-07-30 | 2014-07-16 | デジタルオプティックス・コーポレイション・ヨーロッパ・リミテッド | Automatic facial and skin beautification using face detection |
CN104794462B (en) * | 2015-05-11 | 2018-05-22 | 成都野望数码科技有限公司 | A kind of character image processing method and processing device |
CN105872447A (en) * | 2016-05-26 | 2016-08-17 | 努比亚技术有限公司 | Video image processing device and method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | |
CP01 | Change in the name or title of a patent holder |
Address after: Changan town in Guangdong province Dongguan 523860 usha Beach Road No. 18 Patentee after: GUANGDONG OPPO MOBILE TELECOMMUNICATIONS Corp.,Ltd. Address before: Changan town in Guangdong province Dongguan 523860 usha Beach Road No. 18 Patentee before: GUANGDONG OPPO MOBILE TELECOMMUNICATIONS Corp.,Ltd. |
|
CP01 | Change in the name or title of a patent holder |