CN106161962A - A kind of image processing method and terminal - Google Patents
- Publication number: CN106161962A
- Application number: CN201610753658.9A
- Authority: CN (China)
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- H04N23/57—Mechanical or electrical details of cameras or camera modules specially adapted for being embedded in other devices
- G06F18/22—Matching criteria, e.g. proximity measures
- G06T5/77
- G06V40/161—Detection; Localisation; Normalisation
- G06V40/168—Feature extraction; Face representation
- H04N23/815—Camera processing pipelines; Components thereof for controlling the resolution by using a single image
- G06T2207/10004—Still image; Photographic image
- G06T2207/30196—Human being; Person
- G06T2207/30201—Face
Abstract
Embodiments of the present invention provide an image processing method. The method includes: when a shutter trigger is detected, acquiring a first face image from a preview image; after the shutter trigger ends, performing face detection on the captured image to obtain a second face image; judging, according to the second face image, whether the face detection succeeded; if not, determining the face region in the captured image according to the first face image; and performing beautification processing on the face region in the captured image. The embodiments of the present invention further provide a terminal. The embodiments of the present invention can improve the beautification effect when the face in the captured image is blurred.
Description
Technical field
The present invention relates to the technical field of image processing, and in particular to an image processing method and a terminal.
Background
With the rapid development of information technology, terminals (such as mobile phones and tablet computers) have become increasingly common, and photographing, as an important application of the terminal, has become a selling point promoted by every major terminal manufacturer. At present, to achieve a better photographing effect, beautification technology is widely adopted in photographing applications. Beautification technology depends on face detection technology: a face must first be detected (face detection needs to detect three positions, namely the two eyes and the lips, before a face can be accurately recognized), and the beautification operation is then performed on the detected face. The following situation can therefore arise: a face is detected in the preview image, but for some reason during shooting (for example, hand shake) the captured image is blurred; in this case no face can be detected, so conventional beautification technology cannot take effect on the captured image, which degrades the user experience.
Summary of the invention
Embodiments of the present invention provide an image processing method and a terminal, which can improve the beautification effect when the face in the captured image is blurred.
A first aspect of the embodiments of the present invention provides an image processing method, including:
when a shutter trigger is detected, acquiring a first face image from a preview image;
after the shutter trigger ends, performing face detection on the captured image to obtain a second face image;
judging, according to the second face image, whether the face detection succeeded;
if not, determining the face region in the captured image according to the first face image; and
performing beautification processing on the face region in the captured image.
A second aspect of the embodiments of the present invention provides a terminal, including:
an acquiring unit, configured to acquire a first face image from a preview image when a shutter trigger is detected;
a detecting unit, configured to perform face detection on the captured image after the shutter trigger ends, to obtain a second face image;
a judging unit, configured to judge, according to the second face image detected by the detecting unit, whether the face detection succeeded;
a determining unit, configured to determine, if the judgment result of the judging unit is no, the face region in the captured image according to the first face image acquired by the acquiring unit; and
a processing unit, configured to perform beautification processing on the face region, determined by the determining unit, in the captured image.
A third aspect of the embodiments of the present invention provides a terminal, including:
a processor and a memory, wherein the processor performs some or all of the steps of the image processing method described in the first aspect by calling code or instructions in the memory.
Implementing the embodiments of the present invention yields the following beneficial effects:
According to the embodiments of the present invention, when a shutter trigger is detected, a first face image is acquired from the preview image; after the shutter trigger ends, face detection is performed on the captured image to obtain a second face image; whether the face detection succeeded is judged according to the second face image; if not, the face region in the captured image is determined according to the first face image, and beautification processing is performed on that face region. Thus, when detection of the second face image fails, the first face image can be used to determine the face region in the captured image, and beautification processing can be performed on that region, achieving a better beautification effect.
Brief description of the drawings
To illustrate the technical solutions in the embodiments of the present invention more clearly, the drawings required for describing the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention, and a person of ordinary skill in the art may derive other drawings from them without creative effort.
Fig. 1 is a schematic flowchart of a first embodiment of an image processing method provided by an embodiment of the present invention;
Fig. 2 is a schematic flowchart of a second embodiment of an image processing method provided by an embodiment of the present invention;
Fig. 3a is a schematic structural diagram of a first embodiment of a terminal provided by an embodiment of the present invention;
Fig. 3b is a schematic structural diagram of the determining unit of the terminal described in Fig. 3a;
Fig. 3c is a schematic structural diagram of the processing unit of the terminal described in Fig. 3a;
Fig. 4 is a schematic structural diagram of a second embodiment of a terminal provided by an embodiment of the present invention.
Detailed description of the embodiments
Embodiments of the present invention provide an image processing method and a terminal, which can improve the beautification effect when the face in the captured image is blurred.
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the drawings in the embodiments of the present invention. Obviously, the described embodiments are only some, rather than all, of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
The terms "first", "second", "third" and "fourth" in the specification, claims and drawings are used to distinguish different objects rather than to describe a particular order. Moreover, the terms "include" and "have", and any variants thereof, are intended to cover non-exclusive inclusion. For example, a process, method, system, product or device containing a series of steps or units is not limited to the listed steps or units, but may optionally further include steps or units that are not listed, or may optionally further include other steps or units inherent to the process, method, product or device.
Reference herein to "an embodiment" means that a particular feature, structure or characteristic described in connection with the embodiment may be included in at least one embodiment of the present invention. The appearance of this phrase in various places in the specification does not necessarily refer to the same embodiment, nor to an independent or alternative embodiment mutually exclusive with other embodiments. Those skilled in the art understand, explicitly and implicitly, that the embodiments described herein may be combined with other embodiments.
The terminal described in the embodiments of the present invention may include a smartphone (such as an Android phone, an iOS phone or a Windows Phone), a tablet computer, a palmtop computer, a notebook computer, a mobile internet device (MID, Mobile Internet Devices) or a wearable device. The above terminals are merely examples, not an exhaustive list; the terminal includes but is not limited to the above.
Referring to Fig. 1, which is a schematic flowchart of a first embodiment of an image processing method provided by an embodiment of the present invention, the image processing method described in this embodiment includes the following steps.
101. When a shutter trigger is detected, acquire a first face image from a preview image.
When receiving a photographing instruction, the terminal triggers the shutter; when the shutter is triggered, face detection may be performed on the preview image to obtain the first face image. Face detection technology belongs to the prior art and is not described in detail here. Of course, when the user opens the front camera to take a selfie, the preview image is displayed, and face detection may be performed on the preview image throughout the period from when the preview image starts being displayed until the photographing instruction is received. The reason for acquiring the first face image from the preview image when the shutter is triggered is that the captured image, which is obtained after the shutter trigger ends, is closest to the preview image at the moment of triggering: in that short interval the face moves very little, and the probability that the face moves at all is low, so the face position can essentially be regarded as unchanged.
Optionally, the first face image may contain the positions of the two eyes, the lip position, the face-shape position, the face contour, and skin color information (for example, face color, speckle distribution, etc.); of course, the first face image may also record the position of each pixel value.
102. After the shutter trigger ends, perform face detection on the captured image to obtain a second face image.
After the shutter trigger ends, the captured image, i.e. the image obtained by shooting, is acquired, and face detection is performed on it to obtain the second face image. Since the preview image and the captured image are images at different moments, and the ambient light and the position of the person may change rapidly between those moments, the two images differ.
Optionally, in step 102, if face detection is performed on the captured image and no face is detected, then, since a face did appear in the preview image, the image at the position in the captured image corresponding to the position of the first face image may be marked as the second face image.
103. Judge, according to the second face image, whether the face detection succeeded.
Optionally, step 103 may be implemented as follows: extract the face contour in the second face image and judge whether the face contour is complete; if not, confirm that the face detection failed. The terminal may perform contour extraction on the second face image; when the extracted face contour is complete, the face detection in step 103 may be considered successful, and when the extracted face contour is incomplete, the face detection is confirmed to have failed. For example, if the user shakes during shooting, only part of the face may appear in the captured image, or the shake may blur the captured image so that only part of the face (and thus only part of the face contour) is detected, or no face can be detected at all.
Optionally, step 103 may also be implemented as follows: judge, according to the second face image, whether ghosting exists in the captured image; if so, confirm that the face detection failed. The terminal may detect whether the blurred area of the captured image exceeds a preset area; if so, ghosting may be considered to exist in the captured image. The preset area may be defined as a proportion of the total area of the captured image, for example 5% of the total area. Of course, the blurred area may be determined by an existing image blur detection method, which is not repeated here.
Of course, if step 103 judges from the second face image that the face detection failed, step 104 is performed; if it succeeded, beautification is performed directly on the detected face.
104. If not, determine the face region in the captured image according to the first face image.
Optionally, if the face detection failed, the face region in the captured image may be determined according to the first face image.
Specifically, determining the face region in the captured image according to the first face image in step 104 may be implemented as follows:
41) using the first face image, determine the target area in the captured image with the highest similarity to the first face image;
42) determine the face region in the captured image according to the target area and the face contour of the first face image.
The first face image may be compared with the captured image using the structural similarity index (SSIM) to determine the target area in the captured image with the highest similarity to the first face image, where SSIM is an index for measuring the similarity of two images. In this way, the target area in the captured image most similar to the first face image is obtained. Then, some of the facial features of the target area (such as the positions of the two eyes, the lips, the nose and the eyebrows) can be matched to their corresponding positions in the first face image, and the face contour of the first face image, which contains the corresponding positions of those facial features, can be extracted; the face region can then be marked in the captured image according to the face contour of the first face image, the corresponding positions of those facial features, and the positions of the facial features in the target area. That is: first, mark M features in the target area; then find the M corresponding features in the first face image; next, extract the face contour of the first face image; and finally determine the face region in the captured image according to the face contour of the first face image, its M features and the M features of the target area, where M is an integer greater than 0.
105. Perform beautification processing on the face region in the captured image.
The terminal may perform beautification processing on the face region in the captured image; for the beautification processing, reference may be made to existing beautification algorithms. The non-face region in the captured image may be left without beautification processing; of course, image enhancement may also be applied to the non-face region (so that the background becomes clearer), or background blurring may be applied (to highlight the face).
Optionally, the terminal may also perform beautification processing on the face region according to the first face image. For example, the beautification algorithm used for the first face image (or the parameters used in the algorithm) may be applied to the face region. As another example, the first face image may be divided into N different regions, and the face region may be divided into N different regions in the same partitioning manner; each of the N regions of the first face image is compared with the corresponding region of the face region. If M regions of the first face image are clearer than the corresponding regions of the face region, those M regions may replace the corresponding regions in the face region, where N is an integer greater than 1 and M is an integer greater than or equal to 0 and less than or equal to N.
Further, performing beautification processing on the face region in the captured image according to the first face image may include the following steps:
51) determine the unclear regions in the face region;
52) replace each unclear region with its corresponding region in the first face image to obtain a new face region;
53) perform beautification processing on the new face region.
In step 51, the face region may be divided into multiple regions, and image quality evaluation may be performed on each region to obtain multiple image quality evaluation values. A threshold, i.e. a first threshold, may be set: a region whose value exceeds the first threshold is considered clear, and a region whose value is less than or equal to the first threshold is considered unclear. Image quality evaluation may be performed on each region using one or more image quality evaluation indices, which may include gray-level mean, entropy, edge preservation, mean square deviation and so on. Each of the image quality evaluation values is then compared with the first threshold, and the regions whose values are below the first threshold are the unclear regions. In step 52, each unclear region may be replaced with the corresponding region in the first face image, so that the face region of the captured image is fused with regions of the first face image, yielding a new face region; beautification processing may then be performed on the new face region through step 53. Of course, the beautification processing in step 53 may refer to beautification algorithms in the prior art. Further, the new face region may be smoothed after the beautification processing.
Optionally, step 52 of replacing the unclear region with the corresponding region in the first face image to obtain a new face region may also be carried out as follows: obtain the region in the first face image corresponding to the unclear region, and compare the sharpness of the unclear region with the sharpness of its corresponding region in the first face image. If the sharpness of the unclear region is less than that of its corresponding region in the first face image, the unclear region is replaced with the corresponding region; if the sharpness of the unclear region is greater than or equal to that of its corresponding region in the first face image, no replacement is performed.
It should be noted that the embodiments of the present invention are mainly directed at photographing with a beautification camera. Specifically, for example, when taking a selfie with the front camera, if the user's hand shakes or their head moves during shooting, ghosting occurs and the face detection fails; or, during shooting, a sudden change in ambient light may also cause the face detection to fail. In such cases, the above method provided by the embodiments of the present invention can be used to perform beautification processing. Of course, the rear camera may also be used for selfies or for shooting: a rotating-camera technique may be used to switch between the front camera and the rear camera, or the rear camera may directly be used to photograph a person. When a face is detected, the above method provided by the embodiments of the present invention may likewise be used to perform beautification processing on the face.
Of course, the above face image is merely directed at selfies or at photographing a person; the embodiments of the present invention may also be applied to beautification processing of other objects. For example, for a cup placed on a desk, when the cup is recognized, beautification processing may be performed on it according to the embodiments of the present invention.
According to the embodiments of the present invention, when a shutter trigger is detected, a first face image is acquired from the preview image; after the shutter trigger ends, face detection is performed on the captured image to obtain a second face image; whether the face detection succeeded is judged according to the second face image; if not, the face region in the captured image is determined according to the first face image, and beautification processing is performed on that face region. Thus, when detection of the second face image fails, the first face image can be used to determine the face region in the captured image, and beautification processing can be performed on that region, achieving a better beautification effect.
Consistent with the above, referring to Fig. 2, which is a schematic flowchart of a second embodiment of an image processing method provided by an embodiment of the present invention, the image processing method described in this embodiment includes the following steps.
201. When a shutter trigger is detected, acquire a first face image from a preview image.
202. Judge whether the first face image is complete.
Optionally, if the first face image is judged to be incomplete, the subsequent steps are not performed. However, since the shutter has been triggered, the captured image is still obtained; it is simply not subjected to beautification processing.
Whether the first face image is complete may be judged as follows: extract the face contour in the first face image and judge whether the face contour is complete; if not, confirm that the face detection failed. The terminal may perform contour extraction on the first face image; when the extracted face contour is complete, the first face image may be considered complete, and when the extracted face contour is incomplete, the first face image is confirmed to be incomplete. For example, if the user shakes during photographing, only part of the face may appear in the captured image, or the shake may blur the captured image so that only part of the face (and thus only part of the face contour) is detected. As another example, whether ghosting exists may be judged according to the first face image; if so, the first face image is confirmed to be incomplete. The terminal may detect whether the blurred area of the image exceeds a preset area; if so, ghosting may be considered to exist. The preset area may be defined as a proportion of the total area of the image, for example 5% of the total area. Of course, the blurred area may be determined by an existing image blur detection method, which is not repeated here.
203. If so, after the shutter trigger ends, perform face detection on the captured image to obtain a second face image.
204. Judge, according to the second face image, whether the face detection succeeded.
205. If not, determine the face region in the captured image according to the first face image.
206. Perform beautification processing on the face region in the captured image.
For the steps not described above, reference may be made to the specific description of the corresponding steps of the image processing method described in Fig. 1.
Optionally, if the first face image is incomplete, then when the second face image is blurred, the image processing method used by the embodiments of the present invention cannot perform beautification processing on the captured image. Therefore, in the embodiments of the present invention, when the face image in the preview image is incomplete, beautification processing of the captured image is abandoned, which saves the terminal's power. In addition, when the face in the preview image is complete but the face in the captured image is incomplete, the face in the captured image can be determined according to the face in the preview image and beautification processing can be performed on it, improving the beautification effect.
Consistent with the above, a virtual apparatus and a physical apparatus for implementing the above image processing method are described below, as follows.
Referring to Fig. 3a, which is a schematic structural diagram of a first embodiment of a terminal provided by an embodiment of the present invention, the terminal described in this embodiment includes: an acquiring unit 301, a detecting unit 302, a judging unit 303, a determining unit 304 and a processing unit 305, as follows:
the acquiring unit 301 is configured to acquire a first face image from a preview image when a shutter trigger is detected;
the detecting unit 302 is configured to perform face detection on the captured image after the shutter trigger ends, to obtain a second face image;
the judging unit 303 is configured to judge, according to the second face image detected by the detecting unit 302, whether the face detection succeeded;
the determining unit 304 is configured to determine, if the judgment result of the judging unit 303 is no, the face region in the captured image according to the first face image acquired by the acquiring unit 301;
the processing unit 305 is configured to perform beautification processing on the face region, determined by the determining unit 304, in the captured image.
Optionally, the judging unit 303 is specifically configured to:
extract the face contour in the second face image and judge whether the face contour is complete, and if not, confirm that the face detection fails;
or,
judge, according to the second face image, whether ghosting exists in the captured image, and if so, confirm that the face detection fails.
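A face-contour completeness check of the kind the judging unit performs could, under one simple representation, look like the following sketch. The landmark names and the completeness rule are assumptions for illustration; the embodiment does not prescribe a particular contour representation.

```python
EXPECTED_POINTS = ("left_eye", "right_eye", "nose", "mouth", "chin")

def contour_complete(landmarks, width, height):
    """Return True only if every expected contour point was detected
    and lies inside the image bounds (a stand-in for the 'face contour
    is complete' judgment)."""
    for name in EXPECTED_POINTS:
        point = landmarks.get(name)
        if point is None:            # landmark missing: contour incomplete
            return False
        x, y = point
        if not (0 <= x < width and 0 <= y < height):
            return False             # landmark outside the frame
    return True
```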
Optionally, as shown in Fig. 3b, the determining unit 304 of the terminal described in Fig. 3a may include a first determining module 3041 and a second determining module 3042, as follows:
The first determining module 3041 is configured to use the first face image to determine the target region in the captured image having the highest similarity to the first face image.
The second determining module 3042 is configured to determine the face region in the captured image according to the target region and the face contour of the first face image.
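The "highest similarity" search performed by the first determining module can be illustrated with an exhaustive template match. The sum-of-absolute-differences cost below is one possible similarity measure; the patent does not name a specific one.

```python
def best_match(image, template):
    """Slide `template` over `image` (both 2D lists of grayscale values)
    and return the (top, left) corner of the lowest-cost, i.e. most
    similar, region, using sum of absolute differences."""
    ih, iw = len(image), len(image[0])
    th, tw = len(template), len(template[0])
    best_pos, best_cost = None, None
    for top in range(ih - th + 1):
        for left in range(iw - tw + 1):
            cost = sum(abs(image[top + r][left + c] - template[r][c])
                       for r in range(th) for c in range(tw))
            if best_cost is None or cost < best_cost:
                best_cost, best_pos = cost, (top, left)
    return best_pos
```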
Optionally, the processing unit 305 is specifically configured to:
perform beautification processing on the face region in the captured image according to the first face image.
Further optionally, as shown in Fig. 3c, the processing unit 305 of the terminal described in Fig. 3a or Fig. 3b may include a third determining module 3051, a replacing module 3052 and a processing module 3053, as follows:
The third determining module 3051 is configured to determine an unclear region in the face region.
The replacing module 3052 is configured to replace the unclear region with the corresponding region in the first face image, to obtain a new face region.
The processing module 3053 is configured to perform beautification processing on the new face region.
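The unclear-region replacement performed by modules 3051 and 3052 can be sketched as below, assuming the two face regions are the same size and already aligned. The band size, the gradient-based sharpness proxy, and the threshold are illustrative choices, not taken from the patent.

```python
def sharpness(rows):
    """Mean absolute horizontal gradient: a crude sharpness proxy
    (blurred bands have small gradients)."""
    total = count = 0
    for row in rows:
        for a, b in zip(row, row[1:]):
            total += abs(a - b)
            count += 1
    return total / count if count else 0.0

def replace_blurred(face, preview_face, band_height=2, threshold=1.0):
    """Split the face region into horizontal bands; any band whose
    sharpness falls below `threshold` is treated as 'unclear' and
    replaced by the corresponding band of the first (preview) face."""
    out = [row[:] for row in face]
    for top in range(0, len(face), band_height):
        if sharpness(face[top:top + band_height]) < threshold:
            out[top:top + band_height] = [
                row[:] for row in preview_face[top:top + band_height]]
    return out
```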
Optionally, the judging unit 303 is further specifically configured to: after the acquiring unit 301 acquires the first face image in the preview image, judge whether the first face image is complete; if the judgment result of the judging unit 303 is yes, the detecting unit 302 performs face detection on the captured image to obtain the second face image.
With the terminal described in the embodiment of the present invention, a first face image in a preview image can be acquired when a shutter trigger is detected; after the shutter trigger ends, face detection is performed on the captured image to obtain a second face image; whether the face detection succeeds is judged according to the second face image; if not, the face region in the captured image is determined according to the first face image, and beautification processing is performed on that face region. Thus, when detection of the second face image fails, the first face image can be used to determine the face region in the captured image, and beautification processing can be applied to that region, achieving a better beautification effect.
Referring to Fig. 4, which is a schematic structural diagram of a second embodiment of a terminal provided by an embodiment of the present invention, the terminal described in this embodiment comprises: at least one input device 1000; at least one output device 2000; at least one processor 3000, such as a CPU; and a memory 4000, wherein the input device 1000, the output device 2000, the processor 3000 and the memory 4000 are connected by a bus 5000.
The input device 1000 may specifically be a touch panel, a physical button or a mouse.
The output device 2000 may specifically be a display screen.
The memory 4000 may be a high-speed RAM memory, or may be a non-volatile memory, such as a magnetic disk memory. The memory 4000 is used to store a set of program code, and the input device 1000, the output device 2000 and the processor 3000 are used to call the program code stored in the memory 4000 to perform the following operations:
The processor 3000 is configured to:
when a shutter trigger is detected, acquire a first face image in a preview image;
after the shutter trigger ends, perform face detection on a captured image to obtain a second face image;
judge, according to the second face image, whether the face detection succeeds;
if not, determine the face region in the captured image according to the first face image;
perform beautification processing on the face region in the captured image.
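The embodiments leave the beautification algorithm itself unspecified; as one hedged illustration, a basic skin-smoothing pass might blend each pixel with its 3x3 neighborhood mean:

```python
def smooth_region(region, alpha=0.5):
    """Blend each grayscale pixel with the mean of its 3x3 neighborhood;
    `alpha` keeps part of the original detail. Purely illustrative of a
    'beautification processing' step, not the patented method."""
    h, w = len(region), len(region[0])
    out = []
    for r in range(h):
        row = []
        for c in range(w):
            vals = [region[rr][cc]
                    for rr in range(max(0, r - 1), min(h, r + 2))
                    for cc in range(max(0, c - 1), min(w, c + 2))]
            mean = sum(vals) / len(vals)
            row.append(alpha * region[r][c] + (1 - alpha) * mean)
        out.append(row)
    return out
```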
Optionally, the processor 3000 judging, according to the second face image, whether the face detection succeeds includes:
extracting the face contour in the second face image and judging whether the face contour is complete, and if not, confirming that the face detection fails;
or,
judging, according to the second face image, whether ghosting exists in the captured image, and if so, confirming that the face detection fails.
Optionally, the processor 3000 determining the face region in the captured image according to the first face image includes:
using the first face image to determine the target region in the captured image having the highest similarity to the first face image;
determining the face region in the captured image according to the target region and the face contour of the first face image.
Optionally, the processor 3000 performing beautification processing on the face region in the captured image includes:
performing beautification processing on the face region in the captured image according to the first face image.
Optionally, the processor 3000 performing beautification processing on the face region in the captured image according to the first face image includes:
determining an unclear region in the face region;
replacing the unclear region with the corresponding region in the first face image, to obtain a new face region;
performing beautification processing on the new face region.
Optionally, after acquiring the first face image in the preview image, the processor 3000 is further specifically configured to:
judge whether the first face image is complete; if so, perform the step of performing face detection on the captured image to obtain the second face image.
An embodiment of the present invention also provides a computer storage medium, wherein the computer storage medium may store a program which, when executed, performs some or all of the steps of any image processing method described in the above method embodiments.
Although the present invention has been described herein in combination with various embodiments, in the course of implementing the claimed invention, those skilled in the art can, by studying the drawings, the disclosure and the appended claims, understand and implement other variations of the disclosed embodiments. In the claims, the word "comprising" does not exclude other components or steps, and "a" or "an" does not exclude a plurality. A single processor or other unit may fulfill several functions recited in the claims. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage.
Those skilled in the art should understand that the embodiments of the present invention may be provided as a method, an apparatus (device) or a computer program product. Therefore, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Moreover, the present invention may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to magnetic disk memory, CD-ROM, optical memory, etc.) containing computer-usable program code. The computer program may be stored/distributed on a suitable medium, supplied together with or as part of other hardware, and may also adopt other distribution forms, such as via the Internet or other wired or wireless telecommunication systems.
The present invention is described with reference to flowcharts and/or block diagrams of the method, apparatus (device) and computer program product of the embodiments of the present invention. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks in the flowcharts and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded processor or other programmable data processing device to produce a machine, so that the instructions executed by the processor of the computer or other programmable data processing device produce an apparatus for realizing the functions specified in one or more flows of a flowchart and/or one or more blocks of a block diagram.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing device to work in a specific manner, so that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction apparatus, and the instruction apparatus realizes the functions specified in one or more flows of a flowchart and/or one or more blocks of a block diagram.
These computer program instructions may also be loaded onto a computer or other programmable data processing device, so that a series of operating steps are performed on the computer or other programmable device to produce a computer-implemented process, whereby the instructions executed on the computer or other programmable device provide steps for realizing the functions specified in one or more flows of a flowchart and/or one or more blocks of a block diagram.
Although the present invention has been described in combination with specific features and embodiments thereof, it is evident that various modifications and combinations can be made thereto without departing from the spirit and scope of the present invention. Accordingly, the specification and drawings are merely exemplary illustrations of the present invention as defined by the appended claims, and are deemed to cover any and all modifications, variations, combinations or equivalents within the scope of the present invention. Obviously, those skilled in the art can make various changes and modifications to the present invention without departing from its spirit and scope. Thus, if these modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalent technologies, the present invention is also intended to include them.
Claims (13)
1. An image processing method, characterized by comprising:
when a shutter trigger is detected, acquiring a first face image in a preview image;
after the shutter trigger ends, performing face detection on a captured image to obtain a second face image;
judging, according to the second face image, whether the face detection succeeds;
if not, determining the face region in the captured image according to the first face image;
performing beautification processing on the face region in the captured image.
2. The method according to claim 1, characterized in that the judging, according to the second face image, whether the face detection succeeds comprises:
extracting the face contour in the second face image and judging whether the face contour is complete, and if not, confirming that the face detection fails;
or,
judging, according to the second face image, whether ghosting exists in the captured image, and if so, confirming that the face detection fails.
3. The method according to claim 2, characterized in that the determining the face region in the captured image according to the first face image comprises:
using the first face image to determine the target region in the captured image having the highest similarity to the first face image;
determining the face region in the captured image according to the target region and the face contour of the first face image.
4. The method according to any one of claims 1 to 3, characterized in that the performing beautification processing on the face region in the captured image comprises:
performing beautification processing on the face region in the captured image according to the first face image.
5. The method according to claim 4, characterized in that the performing beautification processing on the face region in the captured image according to the first face image comprises:
determining an unclear region in the face region;
replacing the unclear region with the corresponding region in the first face image, to obtain a new face region;
performing beautification processing on the new face region.
6. The method according to any one of claims 1 to 3, characterized in that, after the acquiring the first face image in the preview image, the method further comprises:
judging whether the first face image is complete; if so, performing the step of performing face detection on the captured image to obtain the second face image.
7. A terminal, characterized by comprising:
an acquiring unit, configured to acquire a first face image in a preview image when a shutter trigger is detected;
a detecting unit, configured to perform face detection on a captured image after the shutter trigger ends, to obtain a second face image;
a judging unit, configured to judge, according to the second face image detected by the detecting unit, whether the face detection succeeds;
a determining unit, configured to, if the judgment result of the judging unit is no, determine the face region in the captured image according to the first face image acquired by the acquiring unit;
a processing unit, configured to perform beautification processing on the face region in the captured image determined by the determining unit.
8. The terminal according to claim 7, characterized in that the judging unit is specifically configured to:
extract the face contour in the second face image and judge whether the face contour is complete, and if not, confirm that the face detection fails;
or,
judge, according to the second face image, whether ghosting exists in the captured image, and if so, confirm that the face detection fails.
9. The terminal according to claim 8, characterized in that the determining unit comprises:
a first determining module, configured to use the first face image to determine the target region in the captured image having the highest similarity to the first face image;
a second determining module, configured to determine the face region in the captured image according to the target region and the face contour of the first face image.
10. The terminal according to any one of claims 7 to 9, characterized in that the processing unit is specifically configured to:
perform beautification processing on the face region in the captured image according to the first face image.
11. The terminal according to claim 10, characterized in that the processing unit comprises:
a third determining module, configured to determine an unclear region in the face region;
a replacing module, configured to replace the unclear region with the corresponding region in the first face image, to obtain a new face region;
a processing module, configured to perform beautification processing on the new face region.
12. The terminal according to any one of claims 7 to 9, characterized in that the judging unit is further specifically configured to: after the acquiring unit acquires the first face image in the preview image, judge whether the first face image is complete; if the judgment result of the judging unit is yes, the detecting unit performs face detection on the captured image to obtain the second face image.
13. A terminal, characterized by comprising:
a processor and a memory, wherein the processor calls code or instructions in the memory to perform the method according to any one of claims 1 to 6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610753658.9A CN106161962B (en) | 2016-08-29 | 2016-08-29 | A kind of image processing method and terminal |
Publications (2)
Publication Number | Publication Date |
---|---|
CN106161962A true CN106161962A (en) | 2016-11-23 |
CN106161962B CN106161962B (en) | 2018-06-29 |
Family
ID=57344292
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610753658.9A Active CN106161962B (en) | 2016-08-29 | 2016-08-29 | A kind of image processing method and terminal |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106161962B (en) |
Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20100026831A1 (en) * | 2008-07-30 | 2010-02-04 | Fotonation Ireland Limited | Automatic face and skin beautification using face detection |
CN103533244A (en) * | 2013-10-21 | 2014-01-22 | 深圳市中兴移动通信有限公司 | Shooting device and automatic visual effect processing shooting method thereof |
CN103841324A (en) * | 2014-02-20 | 2014-06-04 | 小米科技有限责任公司 | Shooting processing method and device and terminal device |
CN103885706A (en) * | 2014-02-10 | 2014-06-25 | 广东欧珀移动通信有限公司 | Method and device for beautifying face images |
CN104159032A (en) * | 2014-08-20 | 2014-11-19 | 广东欧珀移动通信有限公司 | Method and device of adjusting facial beautification effect in camera photographing in real time |
CN104794462A (en) * | 2015-05-11 | 2015-07-22 | 北京锤子数码科技有限公司 | Figure image processing method and device |
CN104902177A (en) * | 2015-05-26 | 2015-09-09 | 广东欧珀移动通信有限公司 | Intelligent photographing method and terminal |
CN104967774A (en) * | 2015-06-05 | 2015-10-07 | 广东欧珀移动通信有限公司 | Dual-camera shooting control method and terminal |
CN105787888A (en) * | 2014-12-23 | 2016-07-20 | 联芯科技有限公司 | Human face image beautifying method |
CN105872447A (en) * | 2016-05-26 | 2016-08-17 | 努比亚技术有限公司 | Video image processing device and method |
Cited By (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2018227349A1 (en) * | 2017-06-12 | 2018-12-20 | 美的集团股份有限公司 | Control method, controller, intelligent mirror and computer readable storage medium |
US11283987B2 (en) | 2017-06-16 | 2022-03-22 | Guangdong Oppo Mobile Telecommunications Corp., Ltd. | Focus region display method and apparatus, and storage medium |
CN107707815A (en) * | 2017-09-26 | 2018-02-16 | 北京金山安全软件有限公司 | Image processing method and device, electronic equipment and storage medium |
CN107707815B (en) * | 2017-09-26 | 2019-10-15 | 北京金山安全软件有限公司 | Image processing method and device, electronic equipment and storage medium |
CN107909542A (en) * | 2017-11-30 | 2018-04-13 | 广东欧珀移动通信有限公司 | Image processing method, device, computer-readable recording medium and electronic equipment |
CN108009999A (en) * | 2017-11-30 | 2018-05-08 | 广东欧珀移动通信有限公司 | Image processing method, device, computer-readable recording medium and electronic equipment |
CN108513068A (en) * | 2018-03-30 | 2018-09-07 | 广东欧珀移动通信有限公司 | Choosing method, device, storage medium and the electronic equipment of image |
CN108513068B (en) * | 2018-03-30 | 2021-03-02 | Oppo广东移动通信有限公司 | Image selection method and device, storage medium and electronic equipment |
CN108734754A (en) * | 2018-05-28 | 2018-11-02 | 北京小米移动软件有限公司 | Image processing method and device |
CN108734754B (en) * | 2018-05-28 | 2022-05-06 | 北京小米移动软件有限公司 | Image processing method and device |
CN110766606A (en) * | 2019-10-29 | 2020-02-07 | 维沃移动通信有限公司 | Image processing method and electronic equipment |
CN110766606B (en) * | 2019-10-29 | 2023-09-26 | 维沃移动通信有限公司 | Image processing method and electronic equipment |
Also Published As
Publication number | Publication date |
---|---|
CN106161962B (en) | 2018-06-29 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106161962A (en) | A kind of image processing method and terminal | |
CN105933589B (en) | A kind of image processing method and terminal | |
US11010967B2 (en) | Three dimensional content generating apparatus and three dimensional content generating method thereof | |
TWI686774B (en) | Human face live detection method and device | |
CN106331492A (en) | Image processing method and terminal | |
CN107527046B (en) | Unlocking control method and related product | |
CN107483834B (en) | Image processing method, continuous shooting method and device and related medium product | |
WO2016180224A1 (en) | Method and device for processing image of person | |
CN106257489A (en) | Expression recognition method and system | |
CN107590474B (en) | Unlocking control method and related product | |
CN107392958A (en) | A kind of method and device that object volume is determined based on binocular stereo camera | |
WO2016107638A1 (en) | An image face processing method and apparatus | |
CN111626163B (en) | Human face living body detection method and device and computer equipment | |
CN102713975B (en) | Image clearing system, image method for sorting and computer program | |
CN107844742A (en) | Facial image glasses minimizing technology, device and storage medium | |
CN110738078A (en) | face recognition method and terminal equipment | |
KR101641500B1 (en) | Fast Eye Detection Method Using Block Contrast and Symmetry in Mobile Device | |
CN112802081A (en) | Depth detection method and device, electronic equipment and storage medium | |
CN108833774A (en) | Camera control method, device and UAV system | |
CN113688820A (en) | Stroboscopic stripe information identification method and device and electronic equipment | |
CN107832598B (en) | Unlocking control method and related product | |
CN104182975A (en) | Photographing device and method capable of automatically filtering picture with poor effect | |
CN105893578A (en) | Method and device for selecting photos | |
CN113538315B (en) | Image processing method and device | |
CN114565531A (en) | Image restoration method, device, equipment and medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
CP01 | Change in the name or title of a patent holder |
Address after: Changan town in Guangdong province Dongguan 523860 usha Beach Road No. 18 Patentee after: GUANGDONG OPPO MOBILE TELECOMMUNICATIONS Corp.,Ltd. Address before: Changan town in Guangdong province Dongguan 523860 usha Beach Road No. 18 Patentee before: GUANGDONG OPPO MOBILE TELECOMMUNICATIONS Corp.,Ltd. |
|