CN107274355A - image processing method, device and mobile terminal - Google Patents
- Publication number
- Publication number: CN107274355A (application CN201710365381.7A)
- Authority
- CN
- China
- Prior art keywords
- face
- image
- processing
- strategy
- identity
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G06T5/77—
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
- G06V10/443—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by matching or filtering
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04M—TELEPHONIC COMMUNICATION
- H04M1/00—Substation equipment, e.g. for use by subscribers
- H04M1/72—Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
- H04M1/724—User interfaces specially adapted for cordless or mobile telephones
- H04M1/72403—User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality
- H04M1/7243—User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality with interactive means for internal management of messages
- H04M1/72439—User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality with interactive means for internal management of messages for image or video messaging
Abstract
The present invention discloses an image processing method, device and mobile terminal. The method comprises the following steps: recognizing the identity of a face in an image; obtaining the beautification processing strategy associated with the face according to the identity of the face and a preset association between faces and beautification processing strategies; and performing beautification processing on the face according to the beautification processing strategy. This eliminates the cumbersome flow in which the user manually sets a beautification processing strategy for each image to be processed, simplifies the beautification workflow, and improves operating efficiency and the intelligence level of the terminal. Moreover, when there are multiple faces in an image, a corresponding beautification processing strategy can be obtained for each face according to its identity, so that each face receives differentiated, targeted beautification. The final beautification effect thus matches the features of every face in the image, improving the processing result and the satisfaction of everyone in the image, and greatly enhancing the user experience.
Description
Technical field
The present invention relates to the technical field of image processing, and in particular to an image processing method, device and mobile terminal.
Background technology
As smartphones have become commonplace in daily life, the photographic capability of mobile phones has grown ever stronger. Taking a quick selfie and sharing it online is now fashionable, and the beautification function has become a favorite of beauty-conscious users. When taking photos with the beautification function, the user can configure beautification processing strategies as needed, such as eye enlargement and skin whitening, and the terminal then performs beautification on the captured photo according to the configured strategies.

However, the existing technical solutions require the user to set the beautification processing strategy manually. Whenever the shooting scene or the faces in the picture change, the user must again manually select and switch to a suitable beautification strategy to obtain a satisfactory result, so the operation is cumbersome and inefficient. Moreover, the facial features of different people vary. In a group photo, all the faces can only share the single beautification strategy set by the user, which leads to situations where only a few people are satisfied while the others are not, affecting user satisfaction and degrading the user experience.
Summary of the invention
The main object of the present invention is to provide an image processing method, device and mobile terminal, with the aim of simplifying the beautification workflow and improving operating efficiency and the intelligence level of the terminal.
To achieve these objectives, the present invention proposes an image processing method comprising the following steps:
recognizing the identity of a face in an image;
obtaining the beautification processing strategy associated with the face according to the identity of the face and a preset association between faces and beautification processing strategies;
performing beautification processing on the face according to the beautification processing strategy.
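The three claimed steps can be illustrated with a minimal sketch. All names below (`recognize_identity`, `STRATEGY_TABLE`, `apply_strategy`) are hypothetical, invented purely for illustration; the patent does not prescribe any particular API or data layout.

```python
# Minimal sketch of the claimed three-step method: recognize the identity
# of each face, look up the associated beautification strategy, apply it.
# All names and data here are illustrative, not from the patent.

# Preset association between face identities and beautification strategies.
STRATEGY_TABLE = {
    "alice": {"skin_whitening": 0.6, "eye_enlargement": 0.3},
    "bob":   {"acne_removal": 0.8, "face_slimming": 0.2},
}

def recognize_identity(face) -> str:
    # Placeholder for a face-recognition step; a real system would compare
    # features against pre-stored faces (see step S11 in the description).
    return face["label"]

def apply_strategy(face, strategy: dict) -> dict:
    # Placeholder beautification: record which operations were applied.
    return {"identity": face["label"], "applied": sorted(strategy)}

def beautify(image: list) -> list:
    results = []
    for face in image:                                   # step 1: identity
        identity = recognize_identity(face)
        strategy = STRATEGY_TABLE.get(identity, {})      # step 2: lookup
        results.append(apply_strategy(face, strategy))   # step 3: apply
    return results

photo = [{"label": "alice"}, {"label": "bob"}]
print(beautify(photo))
```

Note how a group photo is handled with no extra machinery: each face gets its own lookup, which is the differentiated per-face beautification the abstract emphasizes.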
Optionally, before the step of recognizing the identity of the face in the image, the method further comprises:
obtaining the image information and identity information of a face, together with a beautification processing strategy for the face;
associating the face with the beautification processing strategy, and storing the image information and identity information of the face as well as the beautification processing strategy.

Optionally, the step of obtaining the image information and identity information of a face together with a beautification processing strategy for the face comprises:
after beautification processing has been performed on a face in an image, obtaining the beautification processing strategy used in that processing and the image information of the face, and obtaining the identity information of the face input by the user.

Optionally, the step of obtaining the identity information of the face input by the user comprises:
prompting the user to input the identity information of the face;
obtaining the identity information input by the user.

Optionally, the step of obtaining the image information and identity information of a face together with a beautification processing strategy for the face comprises:
obtaining the image information of the face, and obtaining the identity information of the face input by the user;
receiving an externally imported beautification processing strategy as the beautification processing strategy for the face.

Optionally, the step of obtaining the image information of the face comprises: collecting the image information of the face through a camera, or obtaining the image information of the face from a specified image.

Optionally, the identity information comprises a name and/or a code.

Optionally, after the step of obtaining the beautification processing strategy used in that processing and the image information of the face, the method further comprises: sharing the beautification processing strategy.

Optionally, the step of sharing the beautification processing strategy comprises: publishing the beautification processing strategy on a network platform, uploading it to a cloud server, or sending it to other terminal devices.
Optionally, the step of recognizing the identity of the face in the image comprises:
comparing the features of the face in the image with those of a pre-stored face, and judging whether the similarity between the two reaches a threshold;
when the similarity between the two reaches the threshold, obtaining the identity information of the pre-stored face, thereby identifying the identity of the face in the image.

Optionally, after the step of obtaining the beautification processing strategy associated with the face, the method further comprises:
when there are at least two faces in the image, judging whether the gap between the attractiveness scores of the faces reaches a threshold;
when the gap between the attractiveness scores of the faces reaches the threshold, adjusting the beautification processing strategies to reduce the gap.

Optionally, the step of adjusting the beautification processing strategies to reduce the gap between the attractiveness scores of the faces comprises:
increasing the beautification parameter values of the strategy corresponding to the face with the lower attractiveness score.

Optionally, the step of adjusting the beautification processing strategies to reduce the gap between the attractiveness scores of the faces comprises:
decreasing the beautification parameter values of the strategy corresponding to the face with the higher attractiveness score.

Optionally, the beautification processing strategy comprises one of, or a combination of at least two of, skin whitening, acne and freckle removal, eye enlargement, lip color enhancement, face slimming, skin smoothing and nose enhancement.
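Since a strategy may be a single operation or a combination of several, one natural encoding (an assumption for illustration, not mandated by the patent) is a mapping from operation name to an intensity level:

```python
# One possible encoding of a beautification strategy: each entry maps an
# operation named in the claims to an intensity in [0, 1]; a combination
# is simply a mapping with several entries. Illustrative only.
ALLOWED_OPS = {
    "skin_whitening", "acne_freckle_removal", "eye_enlargement",
    "lip_color", "face_slimming", "skin_smoothing", "nose_enhancement",
}

def make_strategy(**ops: float) -> dict:
    """Build a strategy, validating operation names and clamping levels."""
    unknown = set(ops) - ALLOWED_OPS
    if unknown:
        raise ValueError(f"unknown operations: {unknown}")
    if not ops:
        raise ValueError("a strategy needs at least one operation")
    return {name: max(0.0, min(1.0, level)) for name, level in ops.items()}

combo = make_strategy(skin_whitening=0.7, eye_enlargement=0.4)
print(combo)
```

The clamping to [0, 1] matters later: the adjustment clauses above raise or lower these parameter values, and an encoding with a bounded range keeps those adjustments well defined.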
Optionally, the step of recognizing the identity of the face in the image comprises: when a photo is captured, recognizing the identity of the face in the photo.

Optionally, the step of recognizing the identity of the face in the image comprises: when a preview image is displayed on the shooting interface, recognizing the identity of the face in the preview image.

Optionally, the step of recognizing the identity of the face in the image comprises: when a beautification instruction for a picture is received, recognizing the identity of the face in the picture.

Optionally, before the step of recognizing the identity of the face in the image, the method further comprises: starting the camera program, obtaining an image containing a face, and starting beautification processing.
The embodiment of the present invention also proposes an image processing device, the device comprising:
a recognition module for recognizing the identity of a face in an image;
an acquisition module for obtaining the beautification processing strategy associated with the face according to the identity of the face and a preset association between faces and beautification processing strategies;
a processing module for performing beautification processing on the face according to the beautification processing strategy.

Optionally, the device further comprises an association module for:
obtaining the image information and identity information of a face, together with a beautification processing strategy for the face; associating the face with the beautification processing strategy; and storing the image information and identity information of the face as well as the beautification processing strategy.

Optionally, the association module is configured to:
after beautification processing has been performed on a face in an image, obtain the beautification processing strategy used in that processing and the image information of the face, and obtain the identity information of the face input by the user.

Optionally, the association module is configured to:
prompt the user to input the identity information of the face, and obtain the identity information input by the user.

Optionally, the association module is configured to:
obtain the image information of the face, and obtain the identity information of the face input by the user; receive an externally imported beautification processing strategy as the beautification processing strategy for the face.

Optionally, the association module is configured to:
collect the image information of the face through a camera, or obtain the image information of the face from a specified image.

Optionally, the device further comprises a sharing module for: after beautification processing has been performed on a face in an image, sharing the beautification processing strategy used in that processing.

Optionally, the sharing module is configured to: publish the beautification processing strategy on a network platform, upload it to a cloud server, or send it to other terminal devices.

Optionally, the recognition module is configured to: compare the features of the face in the image with those of a pre-stored face and judge whether the similarity between the two reaches a threshold; when the similarity reaches the threshold, obtain the identity information of the pre-stored face, thereby identifying the identity of the face in the image.

Optionally, the device further comprises an adjustment module for:
when there are at least two faces in the image, judging whether the gap between the attractiveness scores of the faces reaches a threshold; when the gap reaches the threshold, adjusting the beautification processing strategies to reduce the gap between the attractiveness scores of the faces.

Optionally, the adjustment module is configured to: increase the beautification parameter values of the strategy corresponding to the face with the lower attractiveness score.

Optionally, the adjustment module is configured to: decrease the beautification parameter values of the strategy corresponding to the face with the higher attractiveness score.

Optionally, the recognition module is configured to: when a photo is captured, recognize the identity of the face in the photo.

Optionally, the recognition module is configured to: when a preview image is displayed on the shooting interface, recognize the identity of the face in the preview image.

Optionally, the recognition module is configured to: when a beautification instruction for a picture is received, recognize the identity of the face in the picture.

Optionally, the device further comprises a starting module for: starting the camera program, obtaining an image containing a face, and starting beautification processing, thereby triggering the recognition module.
The embodiment of the present invention also proposes a mobile terminal, comprising:
a display;
one or more processors;
a memory;
and one or more application programs, wherein the one or more application programs are stored in the memory and configured to be executed by the one or more processors, the one or more application programs being configured to perform the aforementioned image processing method.
In the image processing method and device provided by the embodiment of the present invention, an association between faces and beautification processing strategies is preset. When beautification is performed on an image, the identity of the face in the image is identified first, and the beautification processing strategy associated with the face is then obtained according to that identity so that beautification proceeds automatically. This eliminates the cumbersome flow in which the user manually sets a beautification strategy for each image to be processed, simplifies the beautification workflow, and improves operating efficiency and the intelligence level of the terminal.

Further, when there are multiple faces in an image, a corresponding beautification processing strategy can be obtained for each face according to its identity, so that each face receives differentiated, targeted beautification. The final beautification effect thus matches the features of every face in the image, improving the processing result and the satisfaction of everyone in the image. This avoids the situation caused by conventional methods, in which a single beautification strategy applied to all faces in the same photo satisfies only a few people while leaving the others dissatisfied, and thereby greatly enhances the user experience.
Brief description of the drawings
Fig. 1 is a flow chart of the image processing method of the first embodiment of the present invention;
Fig. 2 is a detailed flow chart of setting the association between faces and beautification processing strategies in the embodiment of the present invention;
Fig. 3 is a module diagram of the image processing device of the second embodiment of the present invention;
Fig. 4 is a module diagram of the image processing device of the third embodiment of the present invention;
Fig. 5 is a module diagram of the image processing device of the fourth embodiment of the present invention;
Fig. 6 is a module diagram of the image processing device of the fifth embodiment of the present invention;
Fig. 7 is a module diagram of the mobile terminal used to implement the image processing method in the embodiment of the present invention.

The realization, functional characteristics and advantages of the present invention will be further described with reference to the drawings in conjunction with the embodiments.
Detailed description of the embodiments
It should be appreciated that the specific embodiments described herein merely illustrate the present invention and are not intended to limit it.

Those skilled in the art will appreciate that, unless expressly stated otherwise, the singular forms "a", "an", "the" and "said" used herein may also include the plural forms. It should be further understood that the word "comprising" used in the specification of the present invention refers to the presence of the stated features, integers, steps, operations, elements and/or components, but does not exclude the presence or addition of one or more other features, integers, steps, operations, elements, components and/or groups thereof. It should be understood that when an element is referred to as being "connected" or "coupled" to another element, it can be directly connected or coupled to the other element, or intervening elements may also be present. In addition, "connected" or "coupled" as used herein may include a wireless connection or wireless coupling. The phrase "and/or" as used herein includes all or any units and all combinations of one or more of the associated listed items.

Those skilled in the art will appreciate that, unless otherwise defined, all terms used herein (including technical and scientific terms) have the same meaning as commonly understood by those of ordinary skill in the art to which the present invention belongs. It should also be understood that terms such as those defined in general dictionaries should be understood to have meanings consistent with their meanings in the context of the prior art and, unless specifically defined as herein, will not be interpreted in an idealized or overly formal sense.
Those skilled in the art will appreciate that the terms "terminal" and "terminal device" as used herein include both devices with only a wireless signal receiver and no transmitting capability, and devices with receiving and transmitting hardware capable of two-way communication over a bidirectional communication link. Such devices may include: cellular or other communication devices, with or without a single-line or multi-line display; a PCS (Personal Communications Service) device, which may combine voice, data processing, fax and/or data communication capabilities; a PDA (Personal Digital Assistant), which may include a radio frequency receiver, a pager, Internet/intranet access, a web browser, a notepad, a calendar and/or a GPS (Global Positioning System) receiver; and a conventional laptop and/or palmtop computer or other device that has and/or includes a radio frequency receiver. The "terminal" or "terminal device" used herein may be portable, transportable, installed in a vehicle (aviation, maritime and/or land-based), or suited and/or configured to operate locally and/or in a distributed form at any location on the earth and/or in space. The "terminal" or "terminal device" used herein may also be a communication terminal, an Internet terminal, or a music/video playback terminal, for example a PDA, an MID (Mobile Internet Device) and/or a mobile phone with a music/video playback function, or a device such as a smart television or a set-top box.
Embodiment one
With reference to Fig. 1, the image processing method of the first embodiment of the present invention comprises the following steps:

S11: recognizing the identity of a face in an image.

In step S11, the terminal detects faces in the image through face recognition technology; when a face is detected, it is analyzed through face recognition technology to identify the identity of the face in the image. Further, when at least two faces are detected in the image, the identity of each face is identified separately.

Specifically, when performing identity recognition on a face, the terminal compares the features of the face in the image with those of a pre-stored face and judges whether the similarity between the two reaches a threshold; when the similarity reaches the threshold, the terminal obtains the identity information of the pre-stored face, thereby identifying the identity of the face in the image according to that identity information.

When there are multiple faces in the image, each face is compared with the pre-stored faces one by one. There may be one, two or more pre-stored faces. A pre-stored face here refers to pre-stored image information of a face; the image information may be a planar image, a stereoscopic image, and so on.
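The feature comparison described above can be sketched as a best-match check against pre-stored face features. The feature vectors, the cosine-similarity measure and the threshold value below are assumptions made for illustration; the patent does not specify how features are extracted or compared.

```python
import math

# Pre-stored faces: identity -> feature vector (illustrative values; a real
# system would store embeddings produced by a face-recognition model).
STORED_FACES = {
    "alice": [0.9, 0.1, 0.3],
    "bob":   [0.2, 0.8, 0.5],
}

def cosine_similarity(a, b) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def identify(features, threshold=0.95):
    """Return the identity of the best-matching pre-stored face, or None
    when no stored face reaches the similarity threshold."""
    best_id, best_sim = None, 0.0
    for identity, stored in STORED_FACES.items():
        sim = cosine_similarity(features, stored)
        if sim > best_sim:
            best_id, best_sim = identity, sim
    return best_id if best_sim >= threshold else None

print(identify([0.9, 0.1, 0.3]))   # exact match: "alice"
print(identify([0.0, 0.0, 1.0]))   # no stored face is similar enough: None
```

For multiple faces in one image, `identify` would simply be called once per detected face, matching the one-by-one comparison described above.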
In the embodiment of the present invention, the image includes a photo, a picture, a preview image, and so on.

Optionally, every time the terminal captures a photo, it immediately detects the faces in the photo, identifies their identities, and starts performing beautification on the photo, so that the user obtains the beautified photo in real time. Further, a beautification switch may be provided, so that the captured photo is beautified automatically only when the switch is turned on.

Optionally, the terminal starts the camera, generates a preview image from the image data collected by the camera and displays it on the shooting interface, and immediately detects the faces in the preview image, identifies their identities, and starts beautifying the preview image, so that the user can check the beautification effect in real time. Further, a beautification switch may be provided, so that the preview image is beautified automatically during shooting only when the switch is turned on.

Optionally, the user may at any time issue a beautification instruction for any locally stored picture (including a photo). When a beautification instruction for a picture is received, the terminal immediately detects the face in the picture, identifies its identity, and starts beautifying the picture.

Optionally, after the terminal starts the camera program and obtains an image containing a face, it immediately starts beautification automatically, detects the face in the image, identifies its identity, and starts beautifying the image.
S12: obtaining the beautification processing strategy associated with the face according to the identity of the face and a preset association between faces and beautification processing strategies.

In the embodiment of the present invention, the association between faces and beautification processing strategies is established in advance. When the terminal identifies the identity of a face in the image, it can match the beautification processing strategy associated with that face according to the identity. The beautification processing strategy includes one of, or a combination of at least two of, skin whitening, acne and freckle removal, eye enlargement, lip color enhancement, face slimming, skin smoothing and nose enhancement.

In the embodiment of the present invention, the preset association between faces and beautification processing strategies may be stored locally on the terminal or on a cloud server, wherein:

when the association is stored locally, the terminal queries the locally stored association between faces and beautification processing strategies to obtain the strategy associated with the face in the image;

when the association is stored on a cloud server, the terminal may send the identity information of the face in the image to the cloud server; the cloud server queries the association between the identity information and beautification processing strategies, obtains the strategy associated with the face in the image, and returns it to the terminal.

In the embodiment of the present invention, the association between faces and beautification processing strategies may be directly imported from outside the terminal or established locally by the terminal; the specific process by which the terminal establishes the association will be described in detail later.
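The local-versus-cloud storage options described above can be sketched as two interchangeable strategy stores behind one lookup interface. The class names and the in-memory stand-in for the cloud server are hypothetical illustrations; a real cloud store would involve a network round trip.

```python
# Sketch of the two storage options: the association between identities
# and strategies may live on the terminal or on a cloud server, but the
# terminal consumes both through the same lookup interface. Illustrative.

class LocalStrategyStore:
    """Association stored locally on the terminal."""
    def __init__(self, table: dict):
        self._table = table

    def lookup(self, identity: str):
        return self._table.get(identity)

class CloudStrategyStore:
    """Stands in for a cloud server; a real terminal would send the
    identity over the network and receive the strategy in the response."""
    def __init__(self, table: dict):
        self._table = table

    def lookup(self, identity: str):
        # Imagine a request/response round trip happening here.
        return self._table.get(identity)

ASSOCIATIONS = {"alice": {"skin_whitening": 0.6}}
local = LocalStrategyStore(ASSOCIATIONS)
cloud = CloudStrategyStore(ASSOCIATIONS)
print(local.lookup("alice"))
print(cloud.lookup("alice"))
```

Keeping the interface identical means the rest of the beautification pipeline does not need to know where the association is stored, which is presumably why the description treats the two options interchangeably.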
S13: performing beautification processing on the face according to the beautification processing strategy.

In step S13, the terminal performs beautification on the face according to the strategy matched in step S12, for example skin whitening, acne and freckle removal, eye enlargement, lip color enhancement, face slimming, skin smoothing, nose enhancement, and so on. The specific processing is the same as in the prior art and will not be described here.

Further, when there are at least two faces in the image, beautification is performed on each face according to its individually matched strategy. For example: when strategy A is matched according to the identity of face A, face A is beautified with strategy A; when strategy B is matched according to the identity of face B, face B is beautified with strategy B; when strategy C is matched according to the identity of face C, face C is beautified with strategy C.
Further, when the image contains at least two faces, it is determined whether the gap between the attractiveness scores of the faces reaches a threshold; when the gap reaches the threshold, the beautification processing strategies are adjusted to reduce the gap between the scores, thereby avoiding the awkwardness caused by an excessive difference in attractiveness in a group photo and maximizing the satisfaction of everyone in the photo.
Specifically, the beautification parameter values of the strategy corresponding to a face with a lower attractiveness score may be increased, and/or the beautification parameter values of the strategy corresponding to a face with a higher attractiveness score may be decreased, so as to balance the attractiveness scores of the faces and reduce the difference.
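The balancing rule above can be sketched as follows, assuming each face carries a scalar attractiveness score and a scalar beautification parameter value (both 0–100 here); the threshold and step size are illustrative assumptions, not values from the patent.

```python
def balance(faces, threshold=20, step=10):
    """faces: dict identity -> {"score": 0-100, "param": 0-100}.

    If the score gap reaches the threshold, raise the beautification
    parameter of lower-scored faces and lower that of higher-scored ones.
    """
    scores = [f["score"] for f in faces.values()]
    if max(scores) - min(scores) < threshold:
        return faces                      # gap below threshold: no change
    mean = sum(scores) / len(scores)
    for f in faces.values():
        if f["score"] < mean:
            f["param"] = min(100, f["param"] + step)   # beautify more
        elif f["score"] > mean:
            f["param"] = max(0, f["param"] - step)     # beautify less
    return faces

faces = {"A": {"score": 90, "param": 50}, "B": {"score": 60, "param": 50}}
balanced = balance(faces)
assert balanced["A"]["param"] == 40 and balanced["B"]["param"] == 60
```

Adjusting toward the mean rather than toward a fixed target keeps the rule symmetric: it works for any number of faces and leaves faces near the average untouched.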
The attractiveness score described in this embodiment of the present invention may be judged from one of parameters such as skin color, skin texture, facial plumpness, and the proportion and size of the facial features, or from an average over at least two of them. For example, the fairer the skin of the face, the higher the score, and vice versa; the smoother the skin, the higher the score, and vice versa; the closer the plumpness of the face to a target threshold, the higher the score, and vice versa; the more well-proportioned the facial features in ratio and size, the higher the score, and vice versa; and so on.
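The composite score described above can be sketched as a (optionally weighted) average of per-criterion values normalized to a common range. The criterion names and weights below are assumptions for illustration; the patent only lists skin color, skin texture, plumpness, and facial-feature proportion as candidate parameters.

```python
def attractiveness(metrics, weights=None):
    """Composite attractiveness score.

    metrics: dict criterion -> value in [0, 100].
    weights: optional dict criterion -> non-negative weight; defaults to
    equal weighting, i.e. a plain average.
    """
    if weights is None:
        weights = {k: 1.0 for k in metrics}
    total = sum(weights[k] for k in metrics)
    return sum(metrics[k] * weights[k] for k in metrics) / total

score = attractiveness({"skin_fairness": 80, "skin_smoothness": 60,
                        "plumpness_fit": 70, "feature_proportion": 90})
assert score == 75.0
```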
The beautification parameter values described in this embodiment of the present invention correspond to the criteria used to evaluate attractiveness. For example, to increase the beautification parameter values, one or at least two of the following may be applied: increasing skin fairness, increasing skin smoothness, moving facial plumpness toward the target threshold, and improving the proportionality of the facial features in ratio and size. Conversely, to decrease the beautification parameter values, one or at least two of the following may be applied: decreasing skin fairness, decreasing skin smoothness, moving facial plumpness away from the target threshold, and reducing the proportionality of the facial features in ratio and size.
In this embodiment of the present invention, before step S11, the terminal may establish the association relationship between faces and beautification processing strategies in the following manner:
S101: Obtaining the image information and identity information of a face and the beautification processing strategy for the face.
Optionally, after the user manually performs beautification processing on a face in an image, the terminal obtains the beautification processing strategy corresponding to this processing and the image information of the face, and obtains the identity information of the face entered by the user. For example, the terminal displays a text and/or image prompt near the face, prompting the user to enter the identity information of the face, and obtains the identity information when the user enters it.
Optionally, the terminal may also collect the image information of a face on site through the camera, or obtain the image information of a face from a specified image, then obtain the identity information of the face entered by the user, and receive an externally imported beautification processing strategy as the beautification processing strategy of the face. The externally imported beautification processing strategy may be a strategy shared by others; it may be downloaded from a network platform, a cloud server, or the like, or sent from another terminal device.
The identity information of the face includes the name and/or code of the face, and represents the identity of the face.
Furthermore, the image information and identity information of the face and the beautification processing strategy for the face may also be obtained in other manners, which is not limited in the present invention.
S102: Associating the face with the beautification processing strategy, and storing the image information and identity information of the face and the corresponding beautification processing strategy.
In step S102, the terminal establishes the association relationship between the face and the beautification processing strategy, and stores the image information and identity information of the face together with the corresponding beautification processing strategy locally and/or on a cloud server. When establishing the association relationship, an association table may be set up, in which each entry includes the identity information of a face and the corresponding beautification processing strategy, with a one-to-one correspondence between identity information and strategy.
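The association table of step S102 can be sketched as one entry per identity, mapping identity information to the face image reference and its beautification strategy. Field names below are illustrative:

```python
# Hypothetical in-memory association table: identity -> entry.
association_table = {}

def associate(identity, face_image, strategy):
    """Establish (or overwrite) the one-to-one association for an identity."""
    association_table[identity] = {"face_image": face_image,
                                   "strategy": strategy}

def lookup(identity):
    """Return the strategy associated with an identity, or None."""
    entry = association_table.get(identity)
    return entry["strategy"] if entry else None

associate("alice", "alice_face.png", {"skin_whitening": 0.7})
assert lookup("alice") == {"skin_whitening": 0.7}
assert lookup("bob") is None
```

The same table shape works whether the store is local or held on a cloud server keyed by the identity information the terminal sends.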
Further, in step S101, after the beautification processing strategy for the face is obtained, the user may also export the strategy to share it, for example by publishing it on a network platform, uploading it to a cloud server, or sending it to another terminal device.
With the image processing method of this embodiment of the present invention, the association relationship between faces and beautification processing strategies is preset; when performing beautification processing on an image, the identity of the face in the image is recognized first, and the beautification processing strategy associated with the face is then obtained automatically according to the identity to perform the beautification processing. This eliminates the cumbersome flow of the user manually setting a beautification processing strategy for each pending image, simplifies the operation of beautification processing, and improves operating efficiency and the intelligence of the terminal.
Meanwhile, when there are multiple faces in the image, the corresponding beautification processing strategy can be obtained separately according to the identity of each face, so that targeted, differentiated beautification is performed on each face. The final beautification effect thus matches the features of each face in the image, improving the processing result and the satisfaction of everyone in the image. This avoids the situation in the conventional method, where only one identical beautification strategy can be applied to all faces in the same photo, leaving only a few people satisfied and the others dissatisfied, and greatly improves the user experience.
Embodiment Two
Referring to FIG. 3, an image processing apparatus according to a second embodiment of the present invention is proposed. The apparatus includes an identification module 10, an acquisition module 20, and a processing module 30, wherein:
Identification module 10: configured to recognize the identity of a face in an image.
The identification module 10 detects faces in the image through face recognition technology; when a face is detected, it is analyzed through face recognition technology to identify the identity of the face in the image. Further, when at least two faces are detected in the image, the identity of each face is recognized separately.
Specifically, when performing identity recognition on a face, the identification module 10 compares the features of the face in the image with those of a prestored face and determines whether their similarity reaches a threshold; when the similarity reaches the threshold, the identity information of the prestored face is obtained, so that the identity of the face in the image is identified according to that identity information.
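The similarity-threshold matching above can be sketched as follows, assuming faces are represented by fixed-length feature vectors compared with cosine similarity against a prestored gallery; the vector representation and the 0.8 threshold are illustrative assumptions, since the patent does not fix the comparison method.

```python
import math

def cosine(a, b):
    """Cosine similarity of two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def identify(features, gallery, threshold=0.8):
    """gallery: dict identity -> prestored feature vector.

    Returns the best-matching identity whose similarity reaches the
    threshold, or None if no prestored face is similar enough.
    """
    best_id, best_sim = None, threshold
    for identity, stored in gallery.items():
        sim = cosine(features, stored)
        if sim >= best_sim:
            best_id, best_sim = identity, sim
    return best_id

gallery = {"alice": [1.0, 0.0, 0.0], "bob": [0.0, 1.0, 0.0]}
assert identify([0.9, 0.1, 0.0], gallery) == "alice"
assert identify([0.5, 0.5, 0.7], gallery) is None
```

Initializing `best_sim` to the threshold folds the threshold check and the best-match search into one pass over the gallery.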
When there are multiple faces in the image, the identification module 10 compares the faces one by one with the prestored faces. There may be one or more prestored faces. A prestored face here refers to prestored image information of a face, and the image information may be a planar image, a stereoscopic image, or the like.
In this embodiment of the present invention, the image includes a photo, a picture, a preview image, and the like.
Optionally, each time the terminal captures a photo, the identification module 10 immediately detects the faces in the photo and recognizes their identities, and beautification processing of the photo begins, so that the user can obtain the beautified photo in real time. Further, a beautification function switch may be provided, so that captured photos are automatically beautified only when the beautification function switch is on.
Optionally, the terminal starts the camera, generates a preview image from the image data collected by the camera, and displays it on the shooting interface; the identification module 10 then immediately detects the faces in the preview image and recognizes their identities, and beautification processing of the preview image begins, so that the user can check the beautification effect in real time. Further, a beautification function switch may be provided, so that the preview image is automatically beautified during shooting only when the beautification function switch is on.
Optionally, the user may at any time issue a beautification instruction for any locally stored picture (including a photo); when a beautification instruction for a picture is received, the identification module 10 immediately detects the faces in the picture and recognizes their identities, and beautification processing of the picture begins.
Optionally, the terminal further includes a starting module, configured to start a camera program to capture an image containing a face and to start beautification processing, thereby triggering the identification module 10, which detects the face in the image, recognizes its identity, and begins beautification processing of the image.
Acquisition module 20: configured to obtain the beautification processing strategy associated with the face according to the identity of the face and the preset association relationship between faces and beautification processing strategies.
In this embodiment of the present invention, the association relationship between faces and beautification processing strategies is established in advance; when the identification module 10 recognizes the identity of a face in the image, the acquisition module 20 can match the beautification processing strategy associated with the face according to its identity. The beautification processing strategy includes one or a combination of at least two of skin whitening, acne and freckle removal, eye enlargement, lip makeup, face slimming, skin smoothing, nose enhancement, and the like.
In this embodiment of the present invention, the preset association relationship between faces and beautification processing strategies may be stored locally on the terminal or on a cloud server, wherein: when the association relationship is stored locally on the terminal, the acquisition module 20 queries the locally stored association relationship between faces and beautification processing strategies, and obtains the beautification processing strategy associated with the face in the image; when the association relationship is stored on a cloud server, the acquisition module 20 may send the identity information of the face in the image to the cloud server, which queries the association relationship between face identity information and beautification processing strategies, obtains the strategy associated with the face in the image, and returns it to the acquisition module 20.
In this embodiment of the present invention, the association relationship between faces and beautification processing strategies is imported directly from an external source by the terminal; in other embodiments it may instead be established locally by the terminal, and the specific process by which the terminal establishes the association relationship between faces and beautification processing strategies will be described in detail in the embodiments below.
Processing module 30: configured to perform beautification processing on the face according to the beautification processing strategy.
Specifically, the processing module 30 performs beautification processing on the face according to the beautification processing strategy matched by the acquisition module 20, for example skin whitening, acne and freckle removal, eye enlargement, lip makeup, face slimming, skin smoothing, nose enhancement, and the like. The specific processing is the same as in the prior art and is not described here.
Further, when the image contains at least two faces, the processing module 30 performs beautification processing on each face according to its individually matched beautification processing strategy. For example, when strategy A is matched from the identity of face A, the processing module 30 applies strategy A to face A; when strategy B is matched from the identity of face B, it applies strategy B to face B; when strategy C is matched from the identity of face C, it applies strategy C to face C.
With the image processing apparatus of this embodiment of the present invention, the association relationship between faces and beautification processing strategies is preset; when performing beautification processing on an image, the identity of the face in the image is recognized first, and the beautification processing strategy associated with the face is then obtained automatically according to the identity to perform the beautification processing. This eliminates the cumbersome flow of the user manually setting a beautification processing strategy for each pending image, simplifies the operation of beautification processing, and improves operating efficiency and the intelligence of the terminal.
Meanwhile, when there are multiple faces in the image, the corresponding beautification processing strategy can be obtained separately according to the identity of each face, so that targeted, differentiated beautification is performed on each face. The final beautification effect thus matches the features of each face in the image, improving the processing result and the satisfaction of everyone in the image. This avoids the situation in the conventional method, where only one identical beautification strategy can be applied to all faces in the same photo, leaving only a few people satisfied and the others dissatisfied, and greatly improves the user experience.
Embodiment Three
Referring to FIG. 4, an image processing apparatus according to a third embodiment of the present invention is proposed. This embodiment adds an association module 40 on the basis of the second embodiment, and the association module 40 is configured to: obtain the image information and identity information of a face and the beautification processing strategy for the face; associate the face with the beautification processing strategy, that is, establish the association relationship between the face and the beautification processing strategy; and store the image information and identity information of the face and the corresponding beautification processing strategy, for example locally on the terminal and/or on a cloud server.
The association module 40 may obtain the image information and identity information of the face and the beautification processing strategy for the face in the following manners:
Optionally, after the user manually performs beautification processing on a face in an image, the association module 40 obtains the beautification processing strategy corresponding to this processing and the image information of the face, and obtains the identity information of the face entered by the user. For example, a text and/or image prompt is displayed near the face, prompting the user to enter the identity information of the face, and the identity information is obtained when the user enters it.
Optionally, the association module 40 may also collect the image information of a face on site through the camera, or obtain the image information of a face from a specified image, then obtain the identity information of the face entered by the user, and receive an externally imported beautification processing strategy as the beautification processing strategy of the face. The externally imported beautification processing strategy may be a strategy shared by others; it may be downloaded from a network platform, a cloud server, or the like, or sent from another terminal device.
The identity information of the face includes the name and/or code of the face, and represents the identity of the face.
Furthermore, the image information and identity information of the face and the beautification processing strategy for the face may also be obtained in other manners, which is not limited in the present invention.
In this embodiment of the present invention, when establishing the association relationship, the association module 40 may set up an association table, in which each entry includes the identity information of a face and the corresponding beautification processing strategy, with a one-to-one correspondence between identity information and strategy.
By providing the association module 40, this embodiment of the present invention allows the user to customize the correspondence between faces and beautification processing strategies, improving user satisfaction.
Embodiment Four
Referring to FIG. 5, an image processing apparatus according to a fourth embodiment of the present invention is proposed. This embodiment adds a sharing module 50 on the basis of the third embodiment, and the sharing module 50 is configured to: after beautification processing is performed on a face in an image, share the beautification processing strategy corresponding to this processing, for example by publishing it on a network platform, uploading it to a cloud server, or sending it to another terminal device. The export and sharing function of beautification processing strategies is thereby implemented.
Optionally, the association module 40 in this embodiment may also be omitted to form a new embodiment.
Embodiment Five
Referring to FIG. 6, an image processing apparatus according to a fifth embodiment of the present invention is proposed. This embodiment adds an adjustment module 60 on the basis of the fourth embodiment, and the adjustment module 60 is configured to: when the image contains at least two faces, determine whether the gap between the attractiveness scores of the faces reaches a threshold; when the gap reaches the threshold, adjust the beautification processing strategies to reduce the gap between the scores, thereby avoiding the awkwardness caused by an excessive difference in attractiveness in a group photo and maximizing the satisfaction of everyone in the photo.
Specifically, the adjustment module 60 may increase the beautification parameter values of the strategy corresponding to a face with a lower attractiveness score, and/or decrease the beautification parameter values of the strategy corresponding to a face with a higher attractiveness score, so as to balance the attractiveness scores of the faces and reduce the difference.
The attractiveness score described in this embodiment of the present invention may be judged from one of parameters such as skin color, skin texture, facial plumpness, and the proportion and size of the facial features, or from an average over at least two of them. For example, the fairer the skin of the face, the higher the score, and vice versa; the smoother the skin, the higher the score, and vice versa; the closer the plumpness of the face to a target threshold, the higher the score, and vice versa; the more well-proportioned the facial features in ratio and size, the higher the score, and vice versa; and so on.
Optionally, the association module 40 and/or the sharing module 50 in this embodiment may be omitted to form a new embodiment.
The beautification parameter values described in this embodiment of the present invention correspond to the criteria used to evaluate attractiveness. For example, to increase the beautification parameter values, one or at least two of the following may be applied: increasing skin fairness, increasing skin smoothness, moving facial plumpness toward the target threshold, and improving the proportionality of the facial features in ratio and size. Conversely, to decrease the beautification parameter values, one or at least two of the following may be applied: decreasing skin fairness, decreasing skin smoothness, moving facial plumpness away from the target threshold, and reducing the proportionality of the facial features in ratio and size.
The image processing method and apparatus of the embodiments of the present invention can be applied to mobile terminals such as mobile phones and tablets, to camera terminals such as cameras and video cameras, to terminals such as PCs and notebook computers, or to other terminal devices.
An embodiment of the present invention further provides a mobile terminal. As shown in FIG. 7, for convenience of description, only the parts related to this embodiment of the present invention are shown; for specific technical details that are not disclosed, refer to the method part of the embodiments of the present invention. The terminal may be any terminal device such as a mobile phone, a tablet computer, a PDA (Personal Digital Assistant), a POS (Point of Sales) terminal, or an in-vehicle computer. The following description takes a mobile phone as an example:
FIG. 7 is a block diagram of a partial structure of a mobile phone related to the mobile terminal provided by this embodiment of the present invention. Referring to FIG. 7, the mobile phone includes: a radio frequency (RF) circuit 310, a memory 320, an input unit 330, a display unit 340, a sensor 350, an audio circuit 360, a wireless fidelity (Wi-Fi) module 370, a processor 380, a power supply 390, and other components. Those skilled in the art will understand that the mobile phone structure shown in FIG. 7 does not constitute a limitation on the mobile phone, which may include more or fewer components than illustrated, combine some components, or use a different arrangement of components.
The components of the mobile phone are described in detail below with reference to FIG. 7:
The RF circuit 310 may be used for receiving and sending signals during the sending and receiving of information or during a call; in particular, after receiving downlink information from a base station, it passes the information to the processor 380 for processing, and it sends uplink data to the base station. Generally, the RF circuit 310 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier (LNA), a duplexer, and the like. In addition, the RF circuit 310 may also communicate with networks and other devices through wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to the Global System for Mobile communications (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Long Term Evolution (LTE), email, and the Short Messaging Service (SMS).
The memory 320 may be used to store software programs and modules; by running the software programs and modules stored in the memory 320, the processor 380 executes the various functional applications and data processing of the mobile phone. The memory 320 may mainly include a program storage area and a data storage area, where the program storage area may store an operating system, an application program required by at least one function (such as a sound playing function or an image playing function), and the like; the data storage area may store data created according to the use of the mobile phone (such as audio data and a phone book). In addition, the memory 320 may include high-speed random access memory, and may also include nonvolatile memory, such as at least one magnetic disk storage device, a flash memory device, or another nonvolatile solid-state storage device.
The input unit 330 may be used to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the mobile phone. Specifically, the input unit 330 may include a touch panel 331 and other input devices 332. The touch panel 331, also referred to as a touch screen, collects touch operations by the user on or near it (such as operations performed by the user on or near the touch panel 331 using a finger, a stylus, or any other suitable object or accessory), and drives the corresponding connected devices according to a preset program. Optionally, the touch panel 331 may include two parts: a touch detection device and a touch controller. The touch detection device detects the touch position of the user, detects the signal produced by the touch operation, and transmits the signal to the touch controller; the touch controller receives touch information from the touch detection device, converts it into touch point coordinates, and sends them to the processor 380, and can receive and execute commands sent by the processor 380. In addition, the touch panel 331 may be implemented as a resistive, capacitive, infrared, surface acoustic wave, or other type. Besides the touch panel 331, the input unit 330 may also include other input devices 332, which may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control keys and a power switch key), a trackball, a mouse, a joystick, and the like.
The display unit 340 may be used to display information entered by the user, information provided to the user, and the various menus of the mobile phone. The display unit 340 may include a display panel 341, which may optionally be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED) display, or the like. Further, the touch panel 331 may cover the display panel 341; after detecting a touch operation on or near it, the touch panel 331 transmits the operation to the processor 380 to determine the type of the touch event, and the processor 380 then provides a corresponding visual output on the display panel 341 according to the type of the touch event. Although in FIG. 7 the touch panel 331 and the display panel 341 realize the input and output functions of the mobile phone as two independent components, in some embodiments the touch panel 331 and the display panel 341 may be integrated to realize the input and output functions of the mobile phone.
The mobile phone may also include at least one sensor 350, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor may include an ambient light sensor and a proximity sensor; the ambient light sensor can adjust the brightness of the display panel 341 according to the ambient light, and the proximity sensor can turn off the display panel 341 and/or the backlight when the mobile phone is moved to the ear. As one kind of motion sensor, an accelerometer can detect the magnitude of acceleration in all directions (generally along three axes), and can detect the magnitude and direction of gravity when at rest; it can be used in applications that recognize the posture of the mobile phone (such as landscape/portrait switching, related games, and magnetometer pose calibration) and in vibration-recognition related functions (such as a pedometer and tapping). Other sensors that may be configured on the mobile phone, such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor, are not described here.
The audio circuit 360, a speaker 361, and a microphone 362 may provide an audio interface between the user and the mobile phone. The audio circuit 360 may convert received audio data into an electrical signal and transmit it to the speaker 361, which converts it into a sound signal for output; on the other hand, the microphone 362 converts a collected sound signal into an electrical signal, which the audio circuit 360 receives and converts into audio data; after the audio data is processed by the processor 380, it is sent through the RF circuit 310 to, for example, another mobile phone, or output to the memory 320 for further processing.
Wi-Fi is a short-range wireless transmission technology. Through the Wi-Fi module 370, the mobile phone can help the user send and receive email, browse web pages, access streaming media, and so on; it provides the user with wireless broadband Internet access. Although FIG. 7 shows the Wi-Fi module 370, it can be understood that it is not an essential component of the mobile phone and may be omitted as needed without changing the essence of the invention.
The processor 380 is the control center of the mobile phone; it connects all parts of the whole phone through various interfaces and lines, and executes the various functions and data processing of the phone by running or executing the software programs and/or modules stored in the memory 320 and calling the data stored in the memory 320, thereby monitoring the phone as a whole. Optionally, the processor 380 may include one or more processing units; preferably, the processor 380 may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, the user interface, applications, and the like, and the modem processor mainly handles wireless communication. It can be understood that the modem processor may also not be integrated into the processor 380.
The mobile phone also includes a power supply 390 (such as a battery) that powers all the components; preferably, the power supply may be logically connected to the processor 380 through a power management system, so as to implement functions such as charging, discharging, and power consumption management through the power management system.
Although not shown, the mobile phone may also include a camera, a Bluetooth module, and the like, which are not described here.
In this embodiment of the present invention, the processor 380 included in the terminal also has the following functions:
recognizing the identity of a face in an image;
obtaining, according to the identity of the face and the preset association relationship between faces and beautification processing strategies, the beautification processing strategy associated with the face; and
performing beautification processing on the face according to the beautification processing strategy.
The embodiments of the present invention disclose A1, an image processing method, comprising the following steps:
recognizing the identity of a face in an image;
obtaining, according to the identity of the face and the preset association relationship between faces and beautification processing strategies, the beautification processing strategy associated with the face; and
performing beautification processing on the face according to the beautification processing strategy.
A2. The image processing method according to A1, wherein before the step of recognizing the identity of the face in the image, the method further includes:
obtaining the image information and identity information of a face and the beautification processing strategy for the face; and
associating the face with the beautification processing strategy, and storing the image information and identity information of the face and the beautification processing strategy.
A3. The image processing method according to A2, wherein the step of obtaining the image information and identity information of the face and the beautification processing strategy for the face includes:
after beautification processing is performed on a face in an image, obtaining the beautification processing strategy corresponding to this processing and the image information of the face, and obtaining the identity information of the face entered by the user.
The step of A4, the image processing method as described in A3, identity information of the face of the acquisition user input
Including:
User is pointed out to input the identity information of the face;
Obtain the identity information of user's input.
A5, the image processing method according to A2, wherein the step of obtaining the image information and identity information of the face and the beautification strategy for the face comprises:
obtaining the image information of the face, and obtaining the identity information of the face entered by the user;
receiving an externally imported beautification strategy as the beautification strategy for the face.
A6, the image processing method according to A5, wherein the step of obtaining the image information of the face comprises: capturing the image information of the face through a camera, or obtaining the image information of the face from a specified image.
A7, the image processing method according to any one of A2-A6, wherein the identity information includes a name and/or a code.
A8, the image processing method according to A3 or A4, further comprising, after the step of obtaining the beautification strategy corresponding to this processing and the image information of the face: sharing the beautification strategy.
A9, the image processing method according to A8, wherein the step of sharing the beautification strategy comprises: publishing the beautification strategy on a network platform, uploading it to a cloud server, or sending it to other terminal devices.
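Since a beautification strategy is just a set of named parameters, importing one from outside (A5) and sharing one (A8-A9) amount to serializing and deserializing that parameter set. The sketch below uses JSON with a [0, 1] range check as an illustrative format; the patent does not prescribe any wire format, so both the format and the validation rule are assumptions.

```python
import json

def export_strategy(strategy):
    """Serialize a strategy so it can be published to a network platform,
    uploaded to a cloud server, or sent to another terminal device."""
    return json.dumps(strategy, sort_keys=True)

def import_strategy(payload):
    """Receive an externally imported strategy and validate its values."""
    strategy = json.loads(payload)
    if not all(0.0 <= v <= 1.0 for v in strategy.values()):
        raise ValueError("beautification parameters must lie in [0, 1]")
    return strategy

shared = export_strategy({"skin_whitening": 0.6, "face_slimming": 0.2})
restored = import_strategy(shared)
```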
A10, the image processing method according to any one of A1-A6, wherein the step of recognizing the identity of a face in an image comprises:
comparing the features of the face in the image with those of a prestored face, and judging whether the similarity between the two reaches a threshold;
when the similarity between the two reaches the threshold, obtaining the identity information of the prestored face, thereby identifying the identity of the face in the image.
A11, the image processing method according to any one of A1-A6, further comprising, after the step of obtaining the beautification strategy associated with the face:
when there are at least two faces in the image, judging whether the gap between the attractiveness scores of the faces reaches a threshold;
when the gap between the attractiveness scores of the faces reaches the threshold, adjusting the beautification strategies to reduce the gap.
A12, the image processing method according to A11, wherein the step of adjusting the beautification strategies to reduce the gap between the attractiveness scores of the faces comprises:
increasing the beautification parameter values of the strategy corresponding to the face with the lower attractiveness score.
A13, the image processing method according to A11, wherein the step of adjusting the beautification strategies to reduce the gap between the attractiveness scores of the faces comprises:
decreasing the beautification parameter values of the strategy corresponding to the face with the higher attractiveness score.
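The multi-face balancing of A11-A13 can be sketched as: measure the score gap, and if it reaches the threshold, scale up the parameters of the lowest-scoring face's strategy (the A12 variant). The scores, the threshold, the multiplicative boost and the 1.0 cap are all illustrative assumptions; the patent does not specify how scores are computed or by how much parameters change.

```python
def balance_strategies(faces, gap_threshold=2.0, boost=1.5):
    """faces: list of dicts with 'score' and 'strategy' (effect -> strength)."""
    scores = [f["score"] for f in faces]
    if len(faces) < 2 or max(scores) - min(scores) < gap_threshold:
        return faces  # gap below threshold: leave all strategies unchanged
    lowest = min(faces, key=lambda f: f["score"])
    # A12: raise the beautification parameters of the lower-scoring face.
    lowest["strategy"] = {
        name: min(1.0, value * boost)  # cap each parameter at 1.0
        for name, value in lowest["strategy"].items()
    }
    return faces

faces = balance_strategies([
    {"score": 9.0, "strategy": {"skin_whitening": 0.3}},
    {"score": 5.0, "strategy": {"skin_whitening": 0.4}},
])
```

The A13 variant would instead scale the highest-scoring face's parameters down by a factor below 1; both reduce the gap after processing.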
A14, the image processing method according to any one of A1-A6, wherein the beautification strategy includes one of, or a combination of at least two of: skin whitening, acne removal, freckle removal, eye enlargement, lip makeup, face slimming, skin smoothing and nose augmentation.
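A14 makes a strategy a combination drawn from eight named effects. One natural encoding, shown below as an illustrative design choice rather than anything the patent specifies, is a mapping from effect name to strength, validated against the fixed effect set.

```python
# The eight effects named in A14 (English renderings of the original terms).
EFFECTS = {
    "skin_whitening", "acne_removal", "freckle_removal", "eye_enlargement",
    "lip_makeup", "face_slimming", "skin_smoothing", "nose_augmentation",
}

def make_strategy(**params):
    """Build a strategy from any combination of the eight A14 effects."""
    unknown = set(params) - EFFECTS
    if unknown:
        raise ValueError(f"unknown effects: {sorted(unknown)}")
    if not params:
        raise ValueError("a strategy needs at least one effect")
    return dict(params)

strategy = make_strategy(skin_whitening=0.6, face_slimming=0.2)
```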
A15, the image processing method according to any one of A1-A6, wherein the step of recognizing the identity of a face in an image comprises: when a photo is captured by shooting, recognizing the identity of the face in the photo.
A16, the image processing method according to any one of A1-A6, wherein the step of recognizing the identity of a face in an image comprises: when the shooting interface displays a preview image, recognizing the identity of the face in the preview image.
A17, the image processing method according to any one of A1-A6, wherein the step of recognizing the identity of a face in an image comprises: when a beautification instruction for a picture is received, recognizing the identity of the face in the picture.
A18, the image processing method according to any one of A1-A6, further comprising, before the step of recognizing the identity of a face in an image: starting a camera application to obtain an image containing a face, and starting beautification processing.
B19, an image processing apparatus, comprising:
an identification module, configured to recognize the identity of a face in an image;
an acquisition module, configured to obtain, according to the identity of the face and a preset association between faces and beautification strategies, the beautification strategy associated with the face;
a processing module, configured to perform beautification processing on the face according to the beautification strategy.
B20, the image processing apparatus according to B19, further comprising an association module configured to:
obtain image information and identity information of a face, and a beautification strategy for the face; associate the face with the beautification strategy, and store the image information and identity information of the face together with the beautification strategy.
B21, the image processing apparatus according to B20, wherein the association module is configured to:
after beautification processing is performed on the face in an image, obtain the beautification strategy corresponding to this processing and the image information of the face, and obtain the identity information of the face entered by the user.
B22, the image processing apparatus according to B21, wherein the association module is configured to:
prompt the user to enter the identity information of the face, and obtain the identity information entered by the user.
B23, the image processing apparatus according to B20, wherein the association module is configured to:
obtain the image information of the face, and obtain the identity information of the face entered by the user; receive an externally imported beautification strategy as the beautification strategy for the face.
B24, the image processing apparatus according to B23, wherein the association module is configured to:
capture the image information of the face through a camera, or obtain the image information of the face from a specified image.
B25, the image processing apparatus according to any one of B20-B24, wherein the identity information includes a name and/or a code.
B26, the image processing apparatus according to B21 or B22, further comprising a sharing module configured to: after beautification processing is performed on the face in an image, share the beautification strategy corresponding to this processing.
B27, the image processing apparatus according to B26, wherein the sharing module is configured to: publish the beautification strategy on a network platform, upload it to a cloud server, or send it to other terminal devices.
B28, the image processing apparatus according to any one of B19-B24, wherein the identification module is configured to: compare the features of the face in the image with those of a prestored face, and judge whether the similarity between the two reaches a threshold; when the similarity between the two reaches the threshold, obtain the identity information of the prestored face, thereby identifying the identity of the face in the image.
B29, the image processing apparatus according to any one of B19-B24, further comprising an adjustment module configured to:
when there are at least two faces in the image, judge whether the gap between the attractiveness scores of the faces reaches a threshold; when the gap between the attractiveness scores of the faces reaches the threshold, adjust the beautification strategies to reduce the gap.
B30, the image processing apparatus according to B29, wherein the adjustment module is configured to: increase the beautification parameter values of the strategy corresponding to the face with the lower attractiveness score.
B31, the image processing apparatus according to B29, wherein the adjustment module is configured to: decrease the beautification parameter values of the strategy corresponding to the face with the higher attractiveness score.
B32, the image processing apparatus according to any one of B19-B24, wherein the beautification strategy includes one of, or a combination of at least two of: skin whitening, acne removal, freckle removal, eye enlargement, lip makeup, face slimming, skin smoothing and nose augmentation.
B33, the image processing apparatus according to any one of B19-B24, wherein the identification module is configured to: when a photo is captured by shooting, recognize the identity of the face in the photo.
B34, the image processing apparatus according to any one of B19-B24, wherein the identification module is configured to: when the shooting interface displays a preview image, recognize the identity of the face in the preview image.
B35, the image processing apparatus according to any one of B19-B24, wherein the identification module is configured to: when a beautification instruction for a picture is received, recognize the identity of the face in the picture.
B36, the image processing apparatus according to any one of B19-B24, further comprising a starting module configured to: start a camera application to obtain an image containing a face, and start beautification processing.
C37, a mobile terminal, comprising:
a display;
one or more processors;
a memory;
one or more application programs, wherein the one or more application programs are stored in the memory and configured to be executed by the one or more processors, the one or more application programs being configured to perform the method according to any one of A1 to A18.
It will be understood by those skilled in the art that the present invention covers devices for performing one or more of the operations described herein. These devices may be specially designed and manufactured for the required purposes, or may comprise known devices in a general-purpose computer. These devices have computer programs stored therein that are selectively activated or reconfigured. Such a computer program may be stored in a device-readable (e.g., computer-readable) medium or in any type of medium suitable for storing electronic instructions and coupled to a bus, the computer-readable medium including but not limited to any type of disk (including floppy disks, hard disks, optical disks, CD-ROMs and magneto-optical disks), ROM (Read-Only Memory), RAM (Random Access Memory), EPROM (Erasable Programmable Read-Only Memory), EEPROM (Electrically Erasable Programmable Read-Only Memory), flash memory, magnetic cards or optical cards. That is, a readable medium includes any medium that stores or transmits information in a form readable by a device (e.g., a computer).
Those skilled in the art will appreciate that each block of these structural diagrams and/or block diagrams and/or flow diagrams, and combinations of such blocks, can be implemented by computer program instructions. They will further appreciate that these computer program instructions can be provided to a processor of a general-purpose computer, a special-purpose computer or another programmable data processing apparatus for execution, so that the schemes specified in a block or blocks of the structural diagrams and/or block diagrams and/or flow diagrams disclosed by the present invention are carried out by the processor of the computer or other programmable data processing apparatus.
Those skilled in the art will appreciate that the various operations, methods, and the steps, measures and schemes in the flows discussed in the present invention can be alternated, modified, combined or deleted. Furthermore, other steps, measures and schemes in the various operations, methods and flows discussed in the present invention can also be alternated, modified, rearranged, decomposed, combined or deleted. Furthermore, steps, measures and schemes in the prior art corresponding to the various operations, methods and flows disclosed in the present invention can likewise be alternated, modified, rearranged, decomposed, combined or deleted.
The preferred embodiments of the present invention have been described above, which does not thereby limit the scope of rights of the present invention. Those skilled in the art can implement the present invention through various alternative schemes without departing from its scope and essence; for example, a feature of one embodiment may be used in another embodiment to obtain yet another embodiment. Any modifications, equivalent substitutions and improvements made within the technical concept of the present invention shall fall within its scope of rights.
The foregoing is only the preferred embodiments of the present invention and is not intended to limit its scope of protection; any equivalent structural or flow transformation made using the contents of the description and drawings of the present invention, whether applied directly or indirectly in other related technical fields, is likewise included within the scope of protection of the present invention.
Claims (10)
1. An image processing method, characterized by comprising the following steps:
recognizing the identity of a face in an image;
obtaining, according to the identity of the face and a preset association between faces and beautification strategies, the beautification strategy associated with the face;
performing beautification processing on the face according to the beautification strategy.
2. The image processing method according to claim 1, characterized by further comprising, before the step of recognizing the identity of a face in an image:
obtaining image information and identity information of a face, and a beautification strategy for the face;
associating the face with the beautification strategy, and storing the image information and identity information of the face together with the beautification strategy.
3. The image processing method according to claim 2, characterized in that the step of obtaining the image information and identity information of the face and the beautification strategy for the face comprises:
after beautification processing is performed on the face in an image, obtaining the beautification strategy corresponding to this processing and the image information of the face, and obtaining the identity information of the face entered by the user.
4. The image processing method according to claim 3, characterized in that the step of obtaining the identity information of the face entered by the user comprises:
prompting the user to enter the identity information of the face;
obtaining the identity information entered by the user.
5. The image processing method according to claim 2, characterized in that the step of obtaining the image information and identity information of the face and the beautification strategy for the face comprises:
obtaining the image information of the face, and obtaining the identity information of the face entered by the user;
receiving an externally imported beautification strategy as the beautification strategy for the face.
6. An image processing apparatus, characterized by comprising:
an identification module, configured to recognize the identity of a face in an image;
an acquisition module, configured to obtain, according to the identity of the face and a preset association between faces and beautification strategies, the beautification strategy associated with the face;
a processing module, configured to perform beautification processing on the face according to the beautification strategy.
7. The image processing apparatus according to claim 6, characterized in that the apparatus further comprises an association module configured to:
obtain image information and identity information of a face, and a beautification strategy for the face; associate the face with the beautification strategy, and store the image information and identity information of the face together with the beautification strategy.
8. The image processing apparatus according to claim 7, characterized in that the association module is configured to:
after beautification processing is performed on the face in an image, obtain the beautification strategy corresponding to this processing and the image information of the face, and obtain the identity information of the face entered by the user.
9. The image processing apparatus according to claim 8, characterized in that the association module is configured to:
prompt the user to enter the identity information of the face, and obtain the identity information entered by the user.
10. A mobile terminal, comprising:
a display;
one or more processors;
a memory;
one or more application programs, wherein the one or more application programs are stored in the memory and configured to be executed by the one or more processors, the one or more application programs being configured to perform the method according to any one of claims 1 to 5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710365381.7A CN107274355A (en) | 2017-05-22 | 2017-05-22 | image processing method, device and mobile terminal |
Publications (1)
Publication Number | Publication Date |
---|---|
CN107274355A true CN107274355A (en) | 2017-10-20 |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103413270A (en) * | 2013-08-15 | 2013-11-27 | 北京小米科技有限责任公司 | Method and device for image processing and terminal device |
CN105512615A (en) * | 2015-11-26 | 2016-04-20 | 小米科技有限责任公司 | Picture processing method and apparatus |
Cited By (22)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107995415A (en) * | 2017-11-09 | 2018-05-04 | 深圳市金立通信设备有限公司 | A kind of image processing method, terminal and computer-readable medium |
CN107959789A (en) * | 2017-11-10 | 2018-04-24 | 维沃移动通信有限公司 | A kind of image processing method and mobile terminal |
CN107959789B (en) * | 2017-11-10 | 2020-03-06 | 维沃移动通信有限公司 | Image processing method and mobile terminal |
CN107862654A (en) * | 2017-11-30 | 2018-03-30 | 广东欧珀移动通信有限公司 | Image processing method, device, computer-readable recording medium and electronic equipment |
CN108229389A (en) * | 2017-12-29 | 2018-06-29 | 努比亚技术有限公司 | Facial image processing method, apparatus and computer readable storage medium |
CN108182714A (en) * | 2018-01-02 | 2018-06-19 | 腾讯科技(深圳)有限公司 | Image processing method and device, storage medium |
CN108182714B (en) * | 2018-01-02 | 2023-09-15 | 腾讯科技(深圳)有限公司 | Image processing method and device and storage medium |
CN110020990B (en) * | 2018-01-10 | 2023-11-07 | 中兴通讯股份有限公司 | Global skin beautifying method, device and equipment of mobile terminal and storage medium |
CN110020990A (en) * | 2018-01-10 | 2019-07-16 | 中兴通讯股份有限公司 | A kind of global skin makeup method, apparatus, equipment and the storage medium of mobile terminal |
CN108447035A (en) * | 2018-03-21 | 2018-08-24 | 广东欧珀移动通信有限公司 | Image optimization method, electronic device and computer readable storage medium |
CN109166082A (en) * | 2018-08-22 | 2019-01-08 | Oppo广东移动通信有限公司 | Image processing method, device, electronic equipment and computer readable storage medium |
CN109903249A (en) * | 2019-01-29 | 2019-06-18 | 贾广袖 | U.S. face processing method, storage equipment and mobile terminal based on face identification |
CN110222567A (en) * | 2019-04-30 | 2019-09-10 | 维沃移动通信有限公司 | A kind of image processing method and equipment |
CN110335207A (en) * | 2019-06-04 | 2019-10-15 | 苏州浩哥文化传播有限公司 | A kind of intelligent imaging optimization method and its system based on images of a group of characters selection |
CN110335207B (en) * | 2019-06-04 | 2022-01-21 | 重庆七腾科技有限公司 | Intelligent image optimization method and system based on group image selection |
CN112150346A (en) * | 2019-06-28 | 2020-12-29 | 青岛海信移动通信技术股份有限公司 | Terminal and image processing method thereof |
CN111145082A (en) * | 2019-12-23 | 2020-05-12 | 五八有限公司 | Face image processing method and device, electronic equipment and storage medium |
CN111275650A (en) * | 2020-02-25 | 2020-06-12 | 北京字节跳动网络技术有限公司 | Beautifying processing method and device |
US11769286B2 (en) | 2020-02-25 | 2023-09-26 | Beijing Bytedance Network Technology Co., Ltd. | Beauty processing method, electronic device, and computer-readable storage medium |
CN111275650B (en) * | 2020-02-25 | 2023-10-17 | 抖音视界有限公司 | Beauty treatment method and device |
CN111402157A (en) * | 2020-03-12 | 2020-07-10 | 维沃移动通信有限公司 | Image processing method and electronic equipment |
CN111402157B (en) * | 2020-03-12 | 2024-04-09 | 维沃移动通信有限公司 | Image processing method and electronic equipment |
Legal Events
Date | Code | Title | Description
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20171020 |