CN107274354A - Image processing method, device and mobile terminal - Google Patents

Image processing method, device and mobile terminal

Info

Publication number
CN107274354A
CN107274354A (application CN201710365380.2A)
Authority
CN
China
Prior art keywords
processing
image
strategy
facial feature
feature information
Prior art date
Application number
CN201710365380.2A
Other languages
Chinese (zh)
Inventor
唐金成 (Tang Jincheng)
Original Assignee
奇酷互联网络科技(深圳)有限公司 (Qiku Internet Network Technology (Shenzhen) Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Qiku Internet Network Technology (Shenzhen) Co., Ltd. (奇酷互联网络科技(深圳)有限公司)
Priority to CN201710365380.2A
Publication of CN107274354A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/001 Image restoration
    • G06T 5/005 Retouching; Inpainting; Scratch removal
    • G PHYSICS
    • G06 COMPUTING; CALCULATING; COUNTING
    • G06K RECOGNITION OF DATA; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K 9/00 Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
    • G06K 9/00221 Acquiring or recognising human faces, facial parts, facial sketches, facial expressions
    • G06K 9/00228 Detection; Localisation; Normalisation
    • G06K 9/00241 Detection; Localisation; Normalisation using holistic features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING; COUNTING
    • G06K RECOGNITION OF DATA; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K 9/00 Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
    • G06K 9/00221 Acquiring or recognising human faces, facial parts, facial sketches, facial expressions
    • G06K 9/00268 Feature extraction; Face representation

Abstract

The present invention discloses an image processing method, device and mobile terminal. The method comprises the following steps: detecting at least one face in an image and obtaining the facial feature information of each face; matching a corresponding beautification strategy according to the facial feature information of each face; and performing beautification on each face according to the matched beautification strategy. A beautification strategy suited to a face is thus matched automatically from the facial feature information of that face, eliminating the cumbersome flow in which the user manually sets a beautification strategy for each image to be processed, and improving operating efficiency and the intelligence of the terminal. Moreover, when an image contains multiple faces, each face can be given its own differentiated beautification, so that the final effect suits the features of every face in the image and satisfies everyone pictured, substantially improving the processing effect of the image.

Description

Image processing method, device and mobile terminal

Technical field

The present invention relates to the technical field of image processing, and in particular to an image processing method, an image processing device, and a mobile terminal.

Background technology

As smartphones become ubiquitous in daily life, their photographic capability keeps improving. Casually taking and sharing photos online is now commonplace, and the beautification function has become a favourite of beauty-conscious users. When taking photos with the beautification function, the user can set a beautification strategy as required, such as eye enlargement or skin whitening, and the terminal then applies the set strategy to the captured photo.

However, existing solutions require the user to set the beautification strategy manually. Whenever the scene or the faces in the picture change, the user must reselect and switch to a suitable strategy before a satisfactory result can be obtained, so the operation is cumbersome and inefficient. Moreover, different people have different facial characteristics; in a group photo, all faces in the photo can only share the single strategy set by the user, so typically only a few people are satisfied while the others are not, which hurts user satisfaction and degrades the user experience.

Summary of the invention

The main object of the present invention is to provide an image processing method, device and mobile terminal, intended to simplify the beautification workflow and improve operating efficiency and the intelligence of the terminal.

To achieve these objectives, the present invention proposes an image processing method comprising the following steps:

detecting at least one face in an image, and obtaining the facial feature information of each face;

matching a corresponding beautification strategy according to the facial feature information of each face;

performing beautification on each face according to the matched beautification strategy.
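As a rough illustration of the three claimed steps, the following Python sketch matches a per-face strategy from feature information; `Face`, the feature keys and the strategy flags are hypothetical names introduced for this example only, not terms fixed by the patent.

```python
from dataclasses import dataclass, field

@dataclass
class Face:
    """One detected face plus its extracted feature information."""
    face_id: int
    features: dict = field(default_factory=dict)

def match_strategy(features):
    """Step 2: map facial feature information to a beautification strategy."""
    strategy = {}
    if features.get("gender") == "female":
        strategy["skin_whitening"] = True
    if features.get("eyes") == "small":
        strategy["eye_enlargement"] = True
    return strategy

def process_image(faces):
    """Steps 1-3 combined: each detected face receives its own matched
    strategy, so differentiated beautification is applied per face."""
    return {face.face_id: match_strategy(face.features) for face in faces}
```

For example, two faces with different feature information receive different strategies without any manual selection by the user.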

Optionally, the step of matching a corresponding beautification strategy according to the facial feature information of each face comprises:

recommending to the user, according to the facial feature information of each face, the beautification parameters of a corresponding celebrity face;

taking the beautification parameters selected by the user as the beautification strategy for that face.

Optionally, the step of recommending the beautification parameters of a corresponding celebrity face to the user comprises:

displaying, in a region adjacent to the face, a prompt asking the user whether to select the beautification parameters of the corresponding celebrity face.

Optionally, the step of matching a corresponding beautification strategy according to the facial feature information of each face comprises:

calling a beautification strategy database associated with an identification code of the terminal;

matching, according to the facial feature information of each face, a corresponding beautification strategy in the beautification strategy database.

Optionally, the identification code is a telephone number.

Optionally, the step of matching a corresponding beautification strategy according to the facial feature information of each face comprises:

calling a beautification strategy database associated with the current operating mode of the terminal;

matching, according to the facial feature information of each face, a corresponding beautification strategy in the beautification strategy database.

Optionally, the operating mode includes at least two of a child mode, a work mode and a householder mode.

Optionally, the step of matching a corresponding beautification strategy according to the facial feature information of each face comprises:

when the image contains at least two faces, judging whether the gap between the attractiveness scores of the faces reaches a threshold;

when the gap between the attractiveness scores reaches the threshold, matching corresponding beautification strategies according to how high or low each face's attractiveness score is, so as to reduce the gap between the scores.

Optionally, the step of matching corresponding beautification strategies according to how high or low each attractiveness score is comprises:

for each level by which a face's attractiveness score is lower, raising the beautification parameter value of the corresponding strategy by one level.

Optionally, the step of matching corresponding beautification strategies according to how high or low each attractiveness score is comprises:

matching a strategy with low beautification parameter values to a face with a high attractiveness score, and a strategy with high beautification parameter values to a face with a low attractiveness score.

Optionally, the beautification strategy database is stored in the cloud.

Optionally, the method comprises:

performing the following operations on each face in the image in sequence: obtaining its facial feature information, matching a beautification strategy, and performing beautification.

Optionally, the facial feature information includes one of, or a combination of at least two of, an age feature, a gender feature, a skin tone feature, a facial-features feature, a skin quality feature and a build feature.

Optionally, the beautification strategy includes one of, or a combination of at least two of, skin whitening, acne and spot removal, eye enlargement, lip makeup, face slimming, skin smoothing and nose enhancement.

Optionally, the facial feature information includes an age feature and a gender feature; when the age feature is 'young' and the gender feature is 'female', the corresponding beautification strategy at least includes skin whitening, skin smoothing and/or lip makeup.

Optionally, the facial feature information includes a facial-features feature; when the facial features are small eyes and/or a flat nose, the corresponding beautification strategy at least includes eye enlargement and/or nose enhancement.

Optionally, the facial feature information includes a gender feature and a skin tone feature; when the gender feature is 'female' and the skin tone feature is yellowish or dark, the corresponding beautification strategy at least includes skin whitening.

Optionally, the facial feature information includes a skin quality feature; when the skin quality feature indicates acne or spots, the corresponding beautification strategy at least includes acne and spot removal.

Optionally, the facial feature information includes a gender feature and a build feature; when the gender feature is 'female' and the build feature is 'plump', the corresponding beautification strategy at least includes face slimming.

Optionally, the step of detecting at least one face in the image comprises: when a photo is captured, detecting at least one face in the photo.

Optionally, the step of detecting at least one face in the image comprises: when a preview image is displayed on the shooting interface, detecting at least one face in the preview image.

Optionally, the step of detecting at least one face in the image comprises: when a beautification instruction for a picture is received, detecting at least one face in the picture.

Optionally, before the step of detecting at least one face in the image, the method further comprises: starting a camera program, obtaining an image containing a face, and starting beautification.

The present invention also proposes an image processing apparatus, the apparatus comprising:

a detection module for detecting at least one face in an image and obtaining the facial feature information of each face;

a matching module for matching a corresponding beautification strategy according to the facial feature information of each face;

a processing module for performing beautification on each face according to the matched beautification strategy.

Optionally, the matching module is configured to: recommend to the user, according to the facial feature information of each face, the beautification parameters of a corresponding celebrity face; and take the beautification parameters selected by the user as the beautification strategy for that face.

Optionally, the matching module is configured to: display, in a region adjacent to the face, a prompt asking the user whether to select the beautification parameters of the corresponding celebrity face.

Optionally, the matching module is configured to: call a beautification strategy database associated with an identification code of the terminal; and match, according to the facial feature information of each face, a corresponding beautification strategy in the beautification strategy database.

Optionally, the matching module is configured to:

call a beautification strategy database associated with the current operating mode of the terminal; and match, according to the facial feature information of each face, a corresponding beautification strategy in the beautification strategy database.

Optionally, the matching module is configured to:

when the image contains at least two faces, judge whether the gap between the attractiveness scores of the faces reaches a threshold;

when the gap between the attractiveness scores reaches the threshold, match corresponding beautification strategies according to how high or low each face's attractiveness score is, so as to reduce the gap between the scores.

Optionally, the matching module is configured to: for each level by which a face's attractiveness score is lower, raise the beautification parameter value of the corresponding strategy by one level.

Optionally, the matching module is configured to: match a strategy with low beautification parameter values to a face with a high attractiveness score, and a strategy with high beautification parameter values to a face with a low attractiveness score.

Optionally, the apparatus performs the following operations on each face in sequence: obtaining its facial feature information, matching a beautification strategy, and performing beautification.

Optionally, the facial feature information includes one of, or a combination of at least two of, an age feature, a gender feature, a skin tone feature, a facial-features feature, a skin quality feature and a build feature.

Optionally, the detection module is configured to: when a photo is captured, detect at least one face in the photo.

Optionally, the detection module is configured to: when a preview image is displayed on the shooting interface, detect at least one face in the preview image.

Optionally, the detection module is configured to: when a beautification instruction for a picture is received, detect at least one face in the picture.

Optionally, the apparatus further includes a starting module configured to: start a camera program, obtain an image containing a face, and start beautification, thereby triggering the detection module.

The present invention also proposes a mobile terminal, the mobile terminal comprising:

a touch-sensitive display;

one or more processors;

a memory;

one or more application programs, wherein the one or more application programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs being configured to perform the image processing method described above.

The image processing method provided by the embodiments of the present invention automatically matches, according to the facial feature information of a face in an image, a beautification strategy suited to that face, eliminating the cumbersome flow in which the user manually sets a beautification strategy for each image to be processed, simplifying the beautification workflow, and improving operating efficiency and the intelligence of the terminal.

Moreover, when an image contains multiple faces, each face can be given its own differentiated beautification according to its facial feature information, so that the final effect suits the features of every face in the image, improving the processing effect and everyone's satisfaction with the picture. This avoids the situation in conventional methods where a single beautification strategy applied to all faces in a photo satisfies only a few people while leaving the others dissatisfied, and thus greatly improves the user experience.

Brief description of the drawings

Fig. 1 is a flowchart of the image processing method of the first embodiment of the present invention;

Fig. 2 is a flowchart of the image processing method of the second embodiment of the present invention;

Fig. 3 is a module diagram of the image processing apparatus of the third embodiment of the present invention;

Fig. 4 is a module diagram of a mobile terminal used to implement the image processing method in an embodiment of the present invention.

The realization, functional characteristics and advantages of the object of the present invention will be further described with reference to the accompanying drawings in conjunction with the embodiments.

Detailed description of the embodiments

It should be appreciated that the specific embodiments described herein are merely illustrative of the present invention and are not intended to limit the present invention.

Those skilled in the art will appreciate that, unless expressly stated otherwise, the singular forms "a", "an", "said" and "the" used herein may also include the plural forms. It should be further understood that the word "comprising" used in this specification refers to the presence of the stated features, integers, steps, operations, elements and/or components, but does not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components and/or groups thereof. It should be understood that when an element is referred to as being "connected" or "coupled" to another element, it can be directly connected or coupled to the other element, or intervening elements may also be present. In addition, "connected" or "coupled" as used herein may include wirelessly connected or wirelessly coupled. The term "and/or" as used herein includes any and all combinations of one or more of the associated listed items.

Those skilled in the art will appreciate that, unless otherwise defined, all terms used herein (including technical and scientific terms) have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. It should also be understood that terms such as those defined in general dictionaries should be understood to have meanings consistent with their meaning in the context of the prior art and, unless specifically defined as herein, will not be interpreted in an idealized or overly formal sense.

Those skilled in the art will appreciate that the "terminal" and "terminal device" used herein include both devices with only a wireless signal receiver, which have no transmitting capability, and devices with receiving and transmitting hardware capable of two-way communication over a bidirectional communication link. Such devices may include: cellular or other communication devices, with or without a single-line or multi-line display; PCS (Personal Communications Service) devices, which may combine voice, data processing, fax and/or data communication capabilities; PDAs (Personal Digital Assistants), which may include a radio-frequency receiver, pager, Internet/intranet access, web browser, notepad, calendar and/or GPS (Global Positioning System) receiver; and conventional laptop and/or palmtop computers or other devices that have and/or include a radio-frequency receiver. The "terminal" and "terminal device" used herein may be portable, transportable, installed in a vehicle (air, sea and/or land), or suitable for and/or configured to run locally and/or in distributed form at any location on earth and/or in space. The "terminal" or "terminal device" used herein may also be a communication terminal, an Internet terminal, or a music/video playback terminal, for example a PDA, an MID (Mobile Internet Device) and/or a mobile phone with a music/video playback function, or a device such as a smart TV or set-top box.

Embodiment one

Referring to Fig. 1, the image processing method of the first embodiment of the present invention comprises the following steps:

S11: detect at least one face in an image, and obtain the facial feature information of each face.

In step S11, the terminal detects faces in the image by face recognition technology. When a face is detected, it analyses the face by face recognition technology to obtain the facial feature information of that face. Further, when at least two faces are detected in the image, the facial feature information of each face is obtained separately.

The facial feature information includes one of, or a combination of at least two of, features such as an age feature, a gender feature, a skin tone feature, a facial-features feature, a skin quality feature and a build feature, wherein: the age feature includes child, young, middle-aged, elderly, etc.; the gender feature includes male and female; the skin tone feature includes fair, yellowish, dark, etc.; the facial-features feature includes the size of the facial features (such as the eyes, nose and mouth), whether the nose is high or flat, facial proportions, etc.; the skin quality feature includes smoothness, whether there are acne and spots, whether there are wrinkles, etc.; the build feature includes plump, thin, average, etc.
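The feature categories above can be represented as a small schema. In the following sketch the category and value names are this example's own English renderings, not identifiers from the patent.

```python
# Hypothetical enumeration of the feature categories described above.
FEATURE_VALUES = {
    "age_group": {"child", "young", "middle_aged", "elderly"},
    "gender": {"male", "female"},
    "skin_tone": {"fair", "yellowish", "dark"},
    "facial_features": {"small_eyes", "large_eyes", "flat_nose", "high_nose"},
    "skin_quality": {"smooth", "acne", "spots", "wrinkles"},
    "build": {"plump", "thin", "average"},
}

def validate_features(features):
    """Check that every reported feature uses a known category and value;
    a category may carry one value or a combination of several values."""
    for key, value in features.items():
        allowed = FEATURE_VALUES.get(key)
        if allowed is None:
            return False
        values = value if isinstance(value, (list, set, tuple)) else [value]
        if not all(v in allowed for v in values):
            return False
    return True
```

A recogniser producing such a dictionary per face would feed directly into the strategy-matching step S12 below.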

In the embodiments of the present invention, the image includes a photo, a picture, a preview image, etc.

Optionally, whenever the terminal captures a photo, it immediately detects the faces in the photo and begins beautifying the photo, so that the user obtains the beautified photo in real time. Further, a beautification switch may be provided, and a captured photo is beautified automatically only when the switch is on.

Optionally, the terminal starts the camera, generates a preview image from the image data collected by the camera, displays it on the shooting interface, immediately detects the faces in the preview image and begins beautifying it, so that the user can check the beautification effect in real time. Further, a beautification switch may be provided, and the preview image is beautified automatically only when the switch is on.

Optionally, the user may at any time issue a beautification instruction for any locally stored picture (including a photo). Upon receiving a beautification instruction for a picture, the terminal immediately detects the faces in the picture and begins beautifying it.

Optionally, when the terminal starts the camera program and obtains an image containing a face, beautification starts automatically.

S12: match a corresponding beautification strategy according to the facial feature information of each face.

In step S12, a corresponding beautification strategy is matched for the face in the image according to the face's facial feature information. Further, when the image contains at least two faces, a corresponding beautification strategy is matched for each face according to that face's facial feature information.

The terminal may match a corresponding beautification strategy in the following ways:

Optionally, the terminal matches a corresponding beautification strategy for each face in the image according to the facial feature information of the face and a preset correspondence between facial feature information and beautification strategies. The preset correspondence may be stored locally on the terminal or on a cloud server, wherein:

when the correspondence is stored locally on the terminal, the terminal queries the locally stored correspondence between facial feature information and beautification strategies, and obtains the beautification strategy corresponding to the facial feature information of the face in the image;

when the correspondence is stored on a cloud server, the terminal may send the facial feature information of the face in the image to the cloud server; the cloud server queries the correspondence between facial feature information and beautification strategies, obtains the beautification strategy corresponding to the facial feature information of the face, and returns it to the terminal.
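The local-then-cloud lookup can be sketched as follows; the cloud query is simulated here by a second dictionary, and the tuple key format is an assumption of this example.

```python
def lookup_strategy(feature_key, local_db, cloud_db):
    """Resolve a beautification strategy for one face's feature information.
    The local correspondence is consulted first; otherwise the cloud
    correspondence (simulated by a dictionary) is queried."""
    if feature_key in local_db:
        return local_db[feature_key]
    # On a real terminal this branch would send the feature information
    # to the cloud server and wait for the matched strategy in reply.
    return cloud_db.get(feature_key, {})
```

An empty dictionary result would correspond to no preset strategy matching the face.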

Optionally, the terminal recommends to the user, according to the facial feature information of each face, the beautification parameters of a corresponding celebrity face, and then takes the beautification parameters selected by the user as the beautification strategy for that face. When recommending the beautification parameters, a prompt asking the user whether to select the beautification parameters of the corresponding celebrity face may be displayed near the face, for example in a region adjacent to the face (e.g. between two faces).

For example, the terminal may compare the face with prestored celebrity faces; when the similarity with a certain celebrity face reaches a threshold, the beautification parameters of that celebrity face are recommended to the user.
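The patent does not fix a similarity measure; the sketch below assumes cosine similarity over per-face feature vectors, and the 0.8 threshold and database layout are illustrative only.

```python
def recommend_celebrity_params(face_vec, celebrity_db, threshold=0.8):
    """Compare a face's feature vector against prestored celebrity faces
    and return the stored beautification parameters of the best match,
    but only when its similarity reaches the threshold."""
    def cosine(a, b):
        # Cosine similarity between two feature vectors (an assumption).
        num = sum(x * y for x, y in zip(a, b))
        den = (sum(x * x for x in a) ** 0.5) * (sum(y * y for y in b) ** 0.5)
        return num / den if den else 0.0

    best_name, best_sim = None, 0.0
    for name, (vec, params) in celebrity_db.items():
        sim = cosine(face_vec, vec)
        if sim > best_sim:
            best_name, best_sim = name, sim
    if best_name is not None and best_sim >= threshold:
        return celebrity_db[best_name][1]
    return None  # no celebrity face is similar enough; nothing is recommended
```

Returning `None` corresponds to showing no prompt, leaving the other matching paths to supply the strategy.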

Optionally, the terminal calls a beautification strategy database associated with an identification code of the terminal, and then matches, according to the facial feature information of each face, a corresponding beautification strategy in the beautification strategy database.

The beautification strategy database may contain a preset correspondence between facial feature information and beautification strategies; different identification codes correspond to different beautification strategy databases, i.e. to different correspondences between facial feature information and beautification strategies. For example, the identification code may be the telephone number of the terminal, so that when the user uses a different telephone number, a different beautification strategy is obtained. Other numbers may also be used as the identification code of the terminal as needed, which is not limited by the present invention.

Optionally, the terminal calls a beautification strategy database associated with the current operating mode of the terminal, and then matches, according to the facial feature information of each face, a corresponding beautification strategy in the beautification strategy database. The operating mode includes at least two of a child mode, a work mode and a householder mode.

The beautification strategy database may contain a preset correspondence between facial feature information and beautification strategies; different operating modes correspond to different beautification strategy databases, i.e. to different correspondences between facial feature information and beautification strategies. Thus, for the same face, different beautification strategies, and hence different beautification effects, are obtained under different operating modes.

Optionally, when the image contains at least two faces, the terminal judges whether the gap between the attractiveness scores of the faces reaches a threshold; when it does, corresponding beautification strategies are matched according to how high or low each face's attractiveness score is, so as to reduce the gap between the scores.

For example, for each level by which a face's attractiveness score is lower, the beautification parameter value of the corresponding strategy is raised by one level. As another example, a face with a high attractiveness score is matched with a strategy with low beautification parameter values, and a face with a low score with a strategy with high values. This avoids the embarrassment that a large attractiveness gap can bring in a group photo.
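The levelling rule above amounts to inverting the attractiveness score into a beautification intensity once the gap is large enough. The following sketch assumes integer scores and a 0-5 intensity scale, neither of which is fixed by the patent.

```python
def level_strategies(scores, threshold=2, max_level=5):
    """If the attractiveness scores of the faces differ by at least
    `threshold` levels, invert each score into a beautification intensity
    so the processed faces end up closer together; otherwise apply no
    extra levelling (intensity 0 for every face)."""
    if max(scores.values()) - min(scores.values()) < threshold:
        return {face: 0 for face in scores}
    # One level lower in score -> one level higher in intensity.
    return {face: max_level - score for face, score in scores.items()}
```

In a group photo, the face with the lowest score thus receives the strongest beautification, narrowing the gap.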

The aforementioned beautification strategy database may be stored on a cloud server or locally on the terminal.

The beautification strategy described in the embodiments of the present invention includes one of, or a combination of at least two of, beautification modes such as skin whitening, acne and spot removal, eye enlargement, lip makeup, face slimming, skin smoothing and nose enhancement. One kind of facial feature information may correspond to one beautification strategy, or to several.

Optionally, considering that young women generally want their faces fair and delicate and their lips full, when the facial feature information includes an age feature and a gender feature, and the age feature is 'young' and the gender feature is 'female', i.e. the face is that of a young woman, the matched beautification strategy at least includes skin whitening, skin smoothing and/or lip makeup; whether other beautification strategies are also needed is determined by combining the face's other facial feature information.

For men, children and middle-aged and elderly women, however, the strategy of skin whitening, skin smoothing and/or lip makeup should not be applied indiscriminately, but determined by combining the face's other facial feature information.

Optionally, considering that common aesthetic standards favour slightly larger eyes and a slightly higher nose, when the facial feature information includes a facial-features feature, and the facial features are small eyes and/or a flat nose, the matched beautification strategy at least includes eye enlargement and/or nose enhancement.

Further, combining the age feature and the gender feature, when the facial feature information shows that the face is that of a young woman, the beautification level for eye enlargement and/or nose enhancement is higher than for men, children and middle-aged and elderly women, where a higher beautification level means the eyes are enlarged more and the nose is made higher.

Optionally, considering that women of all ages almost universally dislike a yellowish or dark skin tone, when the facial feature information includes a gender feature and a skin tone feature, and the gender feature is 'female' and the skin tone feature is yellowish or dark, the matched beautification strategy at least includes skin whitening. For men it should not be applied across the board, but determined according to the specific situation.

For example, when the skin tone is yellowish or dark but still within a healthy range, the skin whitening strategy is not applied; when it is not within a healthy range, moderate skin whitening is performed.

As another example, combining the age feature: when the face is that of an elderly man, the skin whitening strategy is not applied; when the face is that of a middle-aged, young or child male, moderate skin whitening is performed, and the younger the age, the higher the degree of whitening applied.

Alternatively, considering that nobody wants spots or acne on their face, when the facial feature information includes a skin-quality feature, if the skin quality is characterized as having acne or spots, the matched beauty processing strategy at least includes acne and spot removal.

Alternatively, considering that women do not want their faces to look too plump, when the facial feature information includes a gender feature and a build feature, if the gender feature is female and the build feature is plump, the matched beauty processing strategy at least includes face slimming.

Further, combining the age feature, the face-slimming beauty level decreases in the order young women, middle-aged women, elderly women and female children; that is, the degree of face slimming is highest for young women and lowest for female children.

The foregoing illustrates how a corresponding beauty processing strategy is matched according to a face's facial feature information. Those skilled in the art will appreciate that, in the same manner, different beauty processing strategies can also be matched according to other combinations of facial feature information, which likewise fall within the scope of the present invention; the embodiments of the present invention will not enumerate or repeat them one by one.
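The matching rules described above amount to a lookup from feature combinations to strategy sets. The following Python sketch is purely illustrative: the feature names, feature values and strategy labels are assumptions chosen for the example, not identifiers from this disclosure.

```python
def match_strategies(features):
    """Map a face's feature dict to a set of beauty processing strategies,
    following the combination rules sketched in the text above."""
    strategies = set()
    # young woman: whitening, smoothing and lip makeup by default
    if features.get("age") == "young" and features.get("gender") == "female":
        strategies |= {"skin_whitening", "skin_smoothing", "lip_makeup"}
    # small eyes / flat nose: enlarge eyes, heighten nose
    if features.get("eyes") == "small":
        strategies.add("enlarge_eyes")
    if features.get("nose") == "flat":
        strategies.add("nose_heightening")
    # women with yellowish or darkish skin: whitening
    if features.get("gender") == "female" and features.get("skin_tone") in ("yellowish", "darkish"):
        strategies.add("skin_whitening")
    # acne or spots: acne and spot removal, regardless of other features
    if features.get("skin_quality") in ("acne", "spots"):
        strategies.add("acne_spot_removal")
    # plump female face: face slimming
    if features.get("gender") == "female" and features.get("build") == "plump":
        strategies.add("face_slimming")
    return strategies
```

The rules are ordered independently, so any subset of features can be present; absent features simply contribute no strategy, matching the text's "determined by further considering other feature information".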

S13: Perform beauty processing on each face according to the matched beauty processing strategy.

In step S13, the terminal performs beauty processing on the face according to the beauty processing strategy matched in step S12, for example skin whitening, acne and spot removal, eye enlargement, lip makeup, face slimming, skin smoothing, nose heightening, and so on. The specific processing is the same as in the prior art and is not described here.

Further, when there are at least two faces in the image, beauty processing is performed on each face according to its separately matched beauty processing strategy. For example, when beauty processing strategy A is matched according to face A's facial feature information, strategy A is used to process face A; when strategy B is matched according to face B's facial feature information, strategy B is used to process face B; and when strategy C is matched according to face C's facial feature information, strategy C is used to process face C.
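The per-face application in step S13 can be sketched as a simple loop. This is a minimal illustration under assumed interfaces; the `matcher` and `apply_strategy` callables stand in for the matching and processing operations, whose concrete form the disclosure leaves open.

```python
def beautify_image(image, faces, matcher, apply_strategy):
    """Apply a separately matched beauty strategy to each detected face.

    faces: list of dicts with the face's "features" and its "region" in the image.
    matcher: maps facial feature information to a beauty processing strategy.
    apply_strategy: applies one strategy to one face region of the image.
    """
    for face in faces:
        strategy = matcher(face["features"])          # per-face match (step S12)
        image = apply_strategy(image, face["region"], strategy)  # per-face processing (step S13)
    return image
```

Because the strategy is looked up inside the loop, faces A, B and C each receive their own strategy rather than a shared one.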

With the image processing method of this embodiment of the present invention, a beauty processing strategy suited to a face is automatically matched according to the face's facial feature information in the image, eliminating the cumbersome flow of the user manually setting a beauty processing strategy for the image to be processed, simplifying the beauty processing workflow, and improving operating efficiency and the intelligence of the terminal.

At the same time, when there are multiple faces in the image, differentiated beauty processing can be applied to each face according to its own facial feature information, so that the final result suits the characteristics of every face in the image and improves everyone's satisfaction with the processed image. This avoids the situation in conventional methods where the same beauty processing strategy must be applied to all faces in a photo, satisfying only a few people while leaving the others dissatisfied, and thus greatly improves the user experience.

Embodiment two

Referring to Fig. 2, an image processing method according to a second embodiment of the present invention is proposed, comprising the following steps:

S21: Take a photo.

S22: Detect whether there is a face in the photo. When there is a face in the photo, perform step S23; when there is no face in the photo, proceed directly to step S27.

S23: Obtain the number n of faces in the photo and the facial feature information t[n] of each face, and initialize the current face index i = 1, where n ≥ 1.

S24: Judge whether i ≤ n. When i ≤ n, there are still faces in the photo that have not undergone beauty processing, so enter step S25 and continue the beauty processing; when i > n, all faces in the photo have been processed, so enter step S27.

S25: Match the corresponding beauty processing strategy d[i] for the i-th face according to the facial feature information t[i] of the i-th face and the correspondence between facial feature information and beauty processing strategies.

S26: Perform beauty processing on the i-th face using beauty processing strategy d[i], and increment the face index i by 1. Return to step S24 to process the next face, until all faces in the photo have been processed.

S27: Output the photo.

In this embodiment, when there are at least two faces in the image, all faces in the image are handled one by one; that is, the following operations are performed on each face in the image in turn: obtain the facial feature information, match the beauty processing strategy, and perform beauty processing. When all faces have been processed, the image is finally output. Processing faces one by one in sequence can greatly increase processing speed, improve the real-time performance of image output, and enhance the user experience.
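The loop of steps S22 to S27 can be sketched as follows. The callables are illustrative placeholders for the face detection, feature extraction, strategy matching and beauty processing described above; their names and signatures are assumptions, not interfaces from this disclosure.

```python
def process_photo(photo, detect_faces, get_features, match_strategy, apply_beauty):
    """One-by-one beauty processing of a photo, mirroring steps S22 to S27."""
    faces = detect_faces(photo)                     # S22: detect faces
    for i in range(len(faces)):                     # S23/S24: iterate over the n faces
        t_i = get_features(faces[i])                # facial feature information t[i]
        d_i = match_strategy(t_i)                   # S25: match strategy d[i]
        photo = apply_beauty(photo, faces[i], d_i)  # S26: process the i-th face
    return photo                                    # S27: output the photo
```

When detection finds no face, the loop body never runs and the photo is output unchanged, which corresponds to the direct jump from S22 to S27.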

With the image processing method of this embodiment, whenever a photo is taken, differentiated beauty processing is applied to each face in the photo according to facial feature information such as its age feature, gender feature and facial-feature characteristics, so that the final result suits the characteristics of every face in the photo. Using the image processing method of this embodiment of the present invention, a group photo can achieve a result that satisfies everyone in it, avoiding the situation in conventional methods where only the same or a similar beauty processing strategy can be applied to everyone, satisfying only a few people while leaving the others dissatisfied. This substantially improves the quality of group photos and the user experience.

Embodiment three

Referring to Fig. 3, an image processing apparatus according to a third embodiment of the present invention is proposed. The apparatus comprises a detection module 10, a matching module 20 and a processing module 30, wherein:

Detection module 10: detects at least one face in the image, obtains the facial feature information of each face, and sends it to the matching module 20.

Specifically, the detection module 10 detects faces in the image by face recognition technology; when a face is detected, the face is analyzed by face recognition technology to obtain its facial feature information. Further, when at least two faces are detected in the image, the facial feature information of each face is obtained separately.

The facial feature information includes one or a combination of at least two of features such as an age feature, a gender feature, a skin-tone feature, facial-feature characteristics, a skin-quality feature and a build feature, where: the age feature includes child, young, middle-aged, elderly, etc.; the gender feature includes male and female; the skin-tone feature includes fair, yellowish, darkish, etc.; the facial-feature characteristics include the size of the features (e.g. eyes, nose and mouth), whether the nose bridge is high or flat, facial proportions, etc.; the skin-quality feature includes smoothness, whether there are acne or spots, whether there are wrinkles, etc.; and the build feature includes plump, thin, moderate, etc.
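One illustrative way to hold the feature categories just listed is a simple record type. The field names and value vocabularies below are assumptions chosen for the example; the disclosure itself does not prescribe a data layout.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class FacialFeatures:
    """Illustrative container for one face's facial feature information.

    Every field is optional, reflecting that the information may include
    one feature or a combination of several."""
    age: Optional[str] = None           # "child", "young", "middle_aged", "elderly"
    gender: Optional[str] = None        # "male", "female"
    skin_tone: Optional[str] = None     # "fair", "yellowish", "darkish"
    eyes: Optional[str] = None          # e.g. "small", "large"
    nose: Optional[str] = None          # e.g. "flat", "high"
    skin_quality: Optional[str] = None  # "smooth", "acne", "spots", "wrinkled"
    build: Optional[str] = None         # "plump", "thin", "moderate"
```

A detection module could fill only the fields it recognizes, and a matching module could then branch on whichever fields are not `None`.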

In the embodiments of the present invention, the image includes a photo, a picture, a preview image, etc.

Alternatively, when the terminal takes a photo, the detection module 10 immediately detects the faces in the photo and begins beauty processing of the photo, so that the user can obtain the beauty-processed photo in real time. Further, a beauty function switch may be provided, so that a newly taken photo is automatically beauty-processed only when the beauty function switch is turned on.

Alternatively, when the terminal starts the camera and a preview image generated from the image data collected by the camera is displayed on the shooting interface, the detection module 10 immediately detects the faces in the preview image and begins beauty processing of the preview image, so that the user can check the beauty effect in real time. Further, a beauty function switch may be provided, so that the preview image is automatically beauty-processed during shooting only when the beauty function switch is turned on.

Alternatively, the user may at any time issue a beauty instruction for any locally stored picture (including photos). When a beauty instruction for a picture is received, the detection module 10 immediately detects the faces in the picture and begins beauty processing of the picture.

Further, in some embodiments, the apparatus also includes a starting module, which is used to start a camera program to obtain an image containing a face and to start beauty processing, thereby triggering the detection module 10 to immediately detect the faces in the picture and begin its beauty processing.

Matching module 20: matches a corresponding beauty processing strategy for a face according to the face's facial feature information and the preset correspondence between facial feature information and beauty processing strategies, and sends the matched strategy to the processing module 30.

Specifically, the matching module 20 matches a corresponding beauty processing strategy for each face in the image according to that face's facial feature information. Further, when there are at least two faces in the image, the matching module 20 matches a corresponding strategy for each face according to that face's own facial feature information. The matching module 20 may match the corresponding beauty processing strategy in the following ways:

Alternatively, the matching module 20 matches a corresponding beauty processing strategy for each face in the image according to the face's facial feature information and the preset correspondence between facial feature information and beauty processing strategies. The preset correspondence may be stored locally on the terminal or on a cloud server, wherein:

when the correspondence is stored locally on the terminal, the matching module 20 queries the locally stored correspondence between facial feature information and beauty processing strategies to obtain the beauty processing strategy corresponding to the facial feature information of the face in the image;

when the correspondence is stored on a cloud server, the matching module 20 may send the facial feature information of the face in the image to the cloud server; the cloud server queries the correspondence between facial feature information and beauty processing strategies, obtains the beauty processing strategy corresponding to the facial feature information of the face in the image, and returns it to the terminal.

Alternatively, the matching module 20 recommends to the user the beauty processing parameters of a corresponding star face according to each face's facial feature information, and then uses the parameters selected by the user as the face's beauty processing strategy. When recommending beauty processing parameters, a prompt asking the user to select the beauty processing parameters of the corresponding star face may be displayed near the face, for example in the region adjacent to the face (i.e., between two faces).

For example, the matching module 20 may compare the face with pre-stored star faces; when the similarity to a certain star face reaches a threshold, the beauty processing parameters of that star face are recommended to the user.
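The threshold-based star-face recommendation might be sketched as below. This is a hedged illustration: the similarity function, the 0.8 threshold and the preset layout are all assumptions made for the example, not details from this disclosure.

```python
def recommend_star_preset(face_vec, star_presets, similarity, threshold=0.8):
    """Return (star_name, beauty_params) for the most similar stored star face,
    or None when no similarity reaches the threshold.

    star_presets: dict mapping star name to (feature_vector, beauty_params).
    similarity: callable scoring two feature vectors, higher meaning more similar.
    """
    best_name, best_score, best_params = None, threshold, None
    for name, (star_vec, params) in star_presets.items():
        score = similarity(face_vec, star_vec)
        if score >= best_score:                 # reaches threshold and beats prior best
            best_name, best_score, best_params = name, score, params
    return (best_name, best_params) if best_name else None
```

Returning `None` below the threshold leaves room for the other matching paths the text describes, such as the preset feature-to-strategy correspondence.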

Alternatively, the matching module 20 calls a beauty processing strategy database associated with the terminal's identification code, and then matches the corresponding beauty processing strategy in that database according to each face's facial feature information.

The beauty processing strategy database may contain a preset correspondence between facial feature information and beauty processing strategies. Different identification codes correspond to different beauty processing strategy databases, i.e., to different correspondences between facial feature information and beauty processing strategies. For example, if the identification code is the terminal's telephone number, different beauty processing strategies can be obtained when the user uses different telephone numbers. Other numbers may also be used as the terminal's identification code as needed; the present invention does not limit this.

Alternatively, the matching module 20 calls a beauty processing strategy database associated with the terminal's current operating mode, and then matches the corresponding beauty processing strategy in that database according to each face's facial feature information. The operating mode includes at least two of a child mode, a work mode and an owner mode.

The beauty processing strategy database may contain a preset correspondence between facial feature information and beauty processing strategies. Different operating modes correspond to different beauty processing strategy databases, i.e., to different correspondences between facial feature information and beauty processing strategies, so that for the same face, different beauty processing strategies, and thus different beauty effects, are obtained under different operating modes.

Alternatively, when there are at least two faces in the image, the matching module 20 judges whether the gap between the faces' attractiveness ("face value") scores reaches a threshold; when it does, the matching module 20 matches corresponding beauty processing strategies according to how high or low each face's score is, so as to reduce the gap between the faces' scores.

For example, for each grade that a face's score is lower, the beauty parameter values of the corresponding beauty processing strategy are raised by one grade. As another example, a face with a high score is matched with a beauty processing strategy with low parameter values, and a face with a low score with one with high parameter values. This avoids the embarrassment caused by a large gap in attractiveness in a group photo.
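The inverse mapping from attractiveness score to beauty level might look like the following sketch. The concrete base level, cap, per-grade step and gap threshold are illustrative assumptions; the disclosure only requires that lower-scored faces receive stronger processing when the gap is large.

```python
def balance_beauty_levels(face_scores, base_level=3, max_level=5, gap_threshold=2):
    """Assign a beauty level per face: uniform when scores are close, inversely
    proportional to the score when the gap reaches the threshold."""
    if max(face_scores) - min(face_scores) < gap_threshold:
        return [base_level] * len(face_scores)   # gap below threshold: no balancing
    top = max(face_scores)
    # one extra beauty level per point below the highest score, capped at max_level
    return [min(max_level, base_level + (top - s)) for s in face_scores]
```

Applying stronger levels only to the lower-scored faces narrows the final gap without over-processing the highest-scored face.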

The foregoing beauty processing strategy database may be stored on a cloud server or locally on the terminal.

The beauty processing strategies described in the embodiments of the present invention include one or a combination of at least two of beauty modes such as skin whitening, acne and spot removal, eye enlargement, lip makeup, face slimming, skin smoothing and nose heightening. One kind of facial feature information may correspond to one beauty processing strategy or to multiple beauty processing strategies.

Alternatively, considering that young women typically want their facial skin to be fair and delicate and their lips full and attractive, when the facial feature information includes an age feature and a gender feature, if the age feature is young and the gender feature is female, i.e., the face is that of a young woman, the beauty processing strategy matched by the matching module 20 at least includes skin whitening, skin smoothing and/or lip makeup; whether other beauty processing strategies are also needed is determined by further considering the face's other facial feature information.

For men, children and middle-aged or elderly women, however, the skin whitening, skin smoothing and/or lip makeup strategies should not be applied indiscriminately; instead, the face's other facial feature information should be considered in combination.

Alternatively, considering that common aesthetic standards favor somewhat larger eyes and a somewhat higher nose bridge, when the facial feature information includes facial-feature characteristics, if those characteristics are small eyes and/or a flat nose, the beauty processing strategy matched by the matching module 20 at least includes eye enlargement and/or nose heightening.

Further, combining the age feature and the gender feature, when the facial feature information indicates that the face is that of a young woman, the beauty level for eye enlargement and/or nose heightening is set higher than for men, children and middle-aged or elderly women, where a higher beauty level means the eyes are enlarged more and the nose bridge is raised more.

Alternatively, considering that women of almost all ages dislike yellowish or darkish skin, when the facial feature information includes a gender feature and a skin-tone feature, if the gender feature is female and the skin tone is yellowish or darkish, the beauty processing strategy matched by the matching module 20 at least includes skin whitening. For men, however, no blanket rule should be applied; the decision depends on the specific situation.

For example, if the skin tone is yellowish or darkish but still within a healthy range, the skin whitening strategy is not adopted; if it is outside the healthy range, skin whitening is applied moderately.

As another example, combining the age feature, when the face is that of an elderly man, the skin whitening strategy is not adopted; when the face is that of a middle-aged, young or child male, skin whitening is applied moderately, and further, the younger the age, the higher the degree of skin whitening adopted.

Alternatively, considering that nobody wants spots or acne on their face, when the facial feature information includes a skin-quality feature, if the skin quality is characterized as having acne or spots, the beauty processing strategy matched by the matching module 20 at least includes acne and spot removal.

Alternatively, considering that women do not want their faces to look too plump, when the facial feature information includes a gender feature and a build feature, if the gender feature is female and the build feature is plump, the beauty processing strategy matched by the matching module 20 at least includes face slimming.

Further, combining the age feature, the face-slimming beauty level decreases in the order young women, middle-aged women, elderly women and female children; that is, the degree of face slimming is highest for young women and lowest for female children.

The foregoing illustrates how a corresponding beauty processing strategy is matched according to a face's facial feature information. Those skilled in the art will appreciate that, in the same manner, different beauty processing strategies can also be matched according to other combinations of facial feature information, which likewise fall within the scope of the present invention; the embodiments of the present invention will not enumerate or repeat them one by one.

Processing module 30: performs beauty processing on the face according to the matched beauty processing strategy.

Specifically, the processing module 30 performs beauty processing on the face according to the beauty processing strategy matched by the matching module 20, for example skin whitening, acne and spot removal, eye enlargement, lip makeup, face slimming, skin smoothing, nose heightening, and so on. The specific processing is the same as in the prior art and is not described here.

Further, when there are at least two faces in the image, the processing module 30 performs beauty processing on each face according to its separately matched beauty processing strategy. For example, when beauty processing strategy A is matched according to face A's facial feature information, the processing module 30 uses strategy A to process face A; when strategy B is matched according to face B's facial feature information, the processing module 30 uses strategy B to process face B; and when strategy C is matched according to face C's facial feature information, the processing module 30 uses strategy C to process face C.

Further, when there are at least two faces in the image, the image processing apparatus performs the following operations on each face in turn: obtain the facial feature information, match the beauty processing strategy, and perform beauty processing. That is, the detection module 10 continuously detects the facial feature information of each face in the image and forwards each face's facial feature information to the matching module 20 as soon as it is obtained; the matching module 20 continuously matches beauty processing strategies according to the facial feature information sent by the detection module 10 and forwards each face's matched strategy to the processing module 30; and the processing module 30 continuously performs beauty processing on the corresponding face according to the strategy matched by the matching module 20. Keeping the processing flowing in this way increases processing speed, improves the real-time performance of image output, and enhances the user experience.
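The continuous hand-off from detection module 10 to matching module 20 to processing module 30 can be modeled with generator stages, so that each face flows into matching and processing as soon as it is detected rather than waiting for all faces. The stage functions below are illustrative placeholders under assumed interfaces.

```python
def feature_stage(faces, get_features):
    """Module 10: emit each face with its feature information as soon as detected."""
    for face in faces:
        yield face, get_features(face)

def matching_stage(stream, match_strategy):
    """Module 20: emit each face with its matched strategy as soon as received."""
    for face, features in stream:
        yield face, match_strategy(features)

def processing_stage(image, stream, apply_beauty):
    """Module 30: apply each strategy to its face as soon as it arrives."""
    for face, strategy in stream:
        image = apply_beauty(image, face, strategy)
    return image
```

Because generators are lazy, each face passes through all three stages before the next face is even detected, which matches the pipelined behavior the text describes.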

With the image processing apparatus of this embodiment of the present invention, a beauty processing strategy suited to a face is automatically matched according to the face's facial feature information in the image, eliminating the cumbersome flow of the user manually setting a beauty processing strategy for the image to be processed, simplifying the beauty processing workflow, and improving operating efficiency and the intelligence of the terminal.

At the same time, when there are multiple faces in the image, differentiated beauty processing can be applied to each face according to its own facial feature information, so that the final result suits the characteristics of every face in the image and improves everyone's satisfaction with the processed image. This avoids the situation in conventional methods where the same beauty processing strategy must be applied to all faces in a photo, satisfying only a few people while leaving the others dissatisfied, and thus greatly improves the user experience.

An embodiment of the present invention also provides a mobile terminal. As shown in Fig. 4, for convenience of description only the parts related to the embodiment of the present invention are shown; for specific technical details not disclosed, refer to the method part of the embodiments of the present invention. The terminal may be any terminal device such as a mobile phone, a tablet computer, a PDA (Personal Digital Assistant), a POS (Point of Sales) terminal or an in-vehicle computer. Taking a mobile phone as an example:

Fig. 4 is a block diagram of part of the structure of a mobile phone related to the terminal provided by an embodiment of the present invention. Referring to Fig. 4, the mobile phone includes a radio frequency (RF) circuit 310, a memory 320, an input unit 330, a display unit 340, a sensor 350, an audio circuit 360, a wireless fidelity (Wi-Fi) module 370, a processor 380, a power supply 390 and other parts. Those skilled in the art will understand that the mobile phone structure shown in Fig. 4 does not limit the mobile phone, which may include more or fewer parts than illustrated, combine certain parts, or arrange the parts differently.

The components of the mobile phone are described below with reference to Fig. 4:

The RF circuit 310 may be used to receive and send signals during messaging or a call; in particular, after receiving downlink information from a base station, it passes the information to the processor 380 for processing, and it sends uplink data to the base station. Generally, the RF circuit 310 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier (LNA), a duplexer, etc. In addition, the RF circuit 310 can communicate with networks and other devices by wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to Global System of Mobile communication (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Long Term Evolution (LTE), e-mail, Short Messaging Service (SMS), etc.

The memory 320 may be used to store software programs and modules. The processor 380 executes the various function applications and data processing of the mobile phone by running the software programs and modules stored in the memory 320. The memory 320 may mainly include a program storage area and a data storage area, where the program storage area may store the operating system and application programs required by at least one function (such as a sound playing function or an image playing function), and the data storage area may store data created according to the use of the mobile phone (such as audio data or a phone book). In addition, the memory 320 may include high-speed random access memory and may also include non-volatile memory, for example at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device.

The input unit 330 may be used to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the mobile phone. Specifically, the input unit 330 may include a touch panel 331 and other input devices 332. The touch panel 331, also called a touch screen, collects the user's touch operations on or near it (such as operations performed by the user on or near the touch panel 331 with a finger, a stylus or any other suitable object or accessory) and drives the corresponding connection device according to a preset program. Optionally, the touch panel 331 may include two parts: a touch detection device and a touch controller. The touch detection device detects the user's touch position, detects the signal produced by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into contact coordinates, sends them to the processor 380, and receives and executes commands sent by the processor 380. Furthermore, the touch panel 331 may be implemented in various types such as resistive, capacitive, infrared and surface acoustic wave. Besides the touch panel 331, the input unit 330 may also include other input devices 332. Specifically, the other input devices 332 may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control keys and a power switch key), a trackball, a mouse, a joystick, etc.

The display unit 340 may be used to display information input by the user, information provided to the user, and the various menus of the mobile phone. The display unit 340 may include a display panel 341, which may optionally be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED) display, etc. Further, the touch panel 331 may cover the display panel 341; when the touch panel 331 detects a touch operation on or near it, the operation is transmitted to the processor 380 to determine the type of the touch event, and the processor 380 then provides a corresponding visual output on the display panel 341 according to the type of the touch event. Although in Fig. 4 the touch panel 331 and the display panel 341 realize the input and output functions of the mobile phone as two independent components, in some embodiments the touch panel 331 and the display panel 341 may be integrated to realize the input and output functions of the mobile phone.

The mobile phone may also include at least one sensor 350, such as a light sensor, a motion sensor and other sensors. Specifically, the light sensor may include an ambient light sensor and a proximity sensor, where the ambient light sensor can adjust the brightness of the display panel 341 according to the brightness of the ambient light, and the proximity sensor can turn off the display panel 341 and/or the backlight when the mobile phone is moved to the ear. As one kind of motion sensor, an accelerometer can detect the magnitude of acceleration in all directions (generally three axes) and, when stationary, the magnitude and direction of gravity; it can be used in applications that recognize the phone's posture (such as landscape/portrait switching, related games and magnetometer posture calibration) and in vibration-recognition functions (such as a pedometer and tap detection). Other sensors such as a gyroscope, a barometer, a hygrometer, a thermometer and an infrared sensor may also be configured on the mobile phone and will not be described here.

The audio circuit 360, a loudspeaker 361 and a microphone 362 can provide an audio interface between the user and the mobile phone. The audio circuit 360 can transmit the electrical signal converted from received audio data to the loudspeaker 361, which converts it into a sound signal for output; conversely, the microphone 362 converts a collected sound signal into an electrical signal, which the audio circuit 360 receives and converts into audio data. After the audio data is processed by the processor 380, it is sent via the RF circuit 310 to, for example, another mobile phone, or output to the memory 320 for further processing.

WiFi belongs to short-range wireless transmission technology. Through the WiFi module 370, the mobile phone can help the user send and receive e-mail, browse web pages, access streaming media, and so on; it provides the user with wireless broadband Internet access. Although Fig. 4 shows the WiFi module 370, it can be understood that it is not an essential component of the mobile phone and can be omitted as needed without changing the essence of the invention.

The processor 380 is the control center of the mobile phone. Using various interfaces and lines to connect all parts of the whole phone, it performs the various functions of the phone and processes data by running or executing the software programs and/or modules stored in the memory 320 and calling the data stored in the memory 320, thereby monitoring the phone as a whole. Optionally, the processor 380 may include one or more processing units; preferably, the processor 380 may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, user interface, application programs, and so on, and the modem processor mainly handles wireless communication. It can be understood that the modem processor may also not be integrated into the processor 380.

The mobile phone also includes the power supply 390 (such as a battery) that powers all the components. Preferably, the power supply may be logically connected to the processor 380 through a power management system, so as to realize functions such as charging, discharging, and power-consumption management through the power management system.

Although not shown, the mobile phone may also include a camera, a Bluetooth module, and so on, which will not be repeated here.

In the embodiments of the present invention, the processor 380 included in the terminal also has the following functions:

detecting at least one face in an image, and obtaining the facial feature information of each face;

matching a corresponding beautification strategy according to the facial feature information of each face;

performing beautification processing on each face respectively according to the matched beautification strategy.
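
These three steps can be sketched as a minimal pipeline. Everything below is an illustrative assumption rather than the patent's actual implementation: the feature keys, the strategy names, and the `match_strategy`/`beautify` helpers are hypothetical, and face detection itself (e.g. via a cascade classifier) is stubbed out.

```python
# Hedged sketch of the claimed flow: per-face feature information is
# matched to a beautification strategy, one face at a time.

detected_faces = [  # hypothetical output of a face detector + analyser
    {"age": "young", "gender": "female", "skin_tone": "yellowish"},
    {"age": "young", "gender": "male", "skin_quality": "acne"},
]

def match_strategy(features):
    """Match one face's feature information to a beautification strategy."""
    strategy = []
    if features.get("gender") == "female" and features.get("skin_tone") in ("yellowish", "dark"):
        strategy.append("skin_whitening")
    if features.get("skin_quality") == "acne":
        strategy.append("acne_removal")
    if features.get("age") == "young" and features.get("gender") == "female":
        strategy.extend(["skin_smoothing", "lip_makeup"])
    return strategy

def beautify(faces):
    """Apply each face's matched strategy; here we only record the plan."""
    return [(index, match_strategy(face)) for index, face in enumerate(faces)]

plan = beautify(detected_faces)
```

Because each face carries its own strategy, the differentiated multi-face processing described later falls out of the same loop.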

Further, the step of matching a corresponding beautification strategy according to the facial feature information of each face includes:

recommending the beautification parameters of a corresponding celebrity face to the user according to the facial feature information of each face;

taking the beautification parameters selected by the user as the beautification strategy of the face.

Further, the step of recommending the beautification parameters of a corresponding celebrity face to the user includes:

displaying, in a region adjacent to the face, a prompt message prompting the user to select the beautification parameters of the corresponding celebrity face.

Further, the step of matching a corresponding beautification strategy according to the facial feature information of each face includes:

calling a beautification strategy database associated with the identification code of the terminal;

matching a corresponding beautification strategy in the beautification strategy database according to the facial feature information of each face.
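
One way this per-terminal lookup might be organised is sketched below. The database layout, the example telephone number, and the `lookup_strategy` helper are all assumptions made for illustration; the text only says the database is keyed by the terminal's identification code (later stated to possibly be a telephone number).

```python
# Hypothetical per-terminal beautification strategy databases, keyed by
# the terminal's identification code (here: a made-up telephone number).
STRATEGY_DATABASES = {
    "13800000000": {
        ("female", "yellowish"): ["skin_whitening"],
        ("male", "acne"): ["acne_removal"],
    },
}

def lookup_strategy(identification_code, feature_key):
    """Call the database associated with the identification code, then
    match a strategy against the face's feature information."""
    database = STRATEGY_DATABASES.get(identification_code, {})
    return database.get(feature_key, [])
```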

Further, the step of matching a corresponding beautification strategy according to the facial feature information of each face includes:

calling a beautification strategy database associated with the current operating mode of the terminal;

matching a corresponding beautification strategy in the beautification strategy database according to the facial feature information of each face.

Further, the step of matching a corresponding beautification strategy according to the facial feature information of each face includes:

when there are at least two faces in the image, judging whether the gap between the attractiveness scores of the faces reaches a threshold;

when the gap between the attractiveness scores of the faces reaches the threshold, matching corresponding beautification strategies according to how high or low each face's attractiveness score is, so as to reduce the gap between the attractiveness scores of the faces.
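
A sketch of this gap-reduction step under stated assumptions: scores on a 0-100 scale, a made-up threshold of 20, and a linear mapping from score deficit to beautification intensity. None of these constants come from the patent itself.

```python
GAP_THRESHOLD = 20  # hypothetical score gap that triggers equalisation

def assign_intensity(scores, threshold=GAP_THRESHOLD):
    """Return one beautification intensity (0-100) per face.

    If the attractiveness gap reaches the threshold, lower-scoring faces
    receive proportionally stronger beautification, narrowing the gap."""
    gap = max(scores) - min(scores)
    if gap < threshold:
        return [50] * len(scores)  # uniform default, no equalisation
    top = max(scores)
    return [min(100, 50 + (top - score)) for score in scores]
```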

Further, the step of matching corresponding beautification strategies according to how high or low each face's attractiveness score is includes:

for each grade by which a face's attractiveness score is lower, raising the beautification parameter value of the corresponding beautification strategy by one grade.

Further, the step of matching corresponding beautification strategies according to how high or low each face's attractiveness score is includes:

for a face with a high attractiveness score, matching a beautification strategy with low beautification parameter values; for a face with a low attractiveness score, matching a beautification strategy with high beautification parameter values.
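
The one-grade-down, one-grade-up rule of the two paragraphs above amounts to a simple inversion; the five-grade scale below is an assumption for illustration.

```python
GRADES = 5  # hypothetical scale: 1 (lowest) to 5 (highest)

def beautification_grade(attractiveness_grade, grades=GRADES):
    """Invert the grade: each one-grade drop in the attractiveness score
    raises the beautification parameter value by one grade."""
    return grades + 1 - attractiveness_grade
```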

Further, the following operations are performed on each face in the image in sequence: obtaining the facial feature information, matching the beautification strategy, and performing the beautification processing.

Further, the step of detecting at least one face in the image includes: when a photo is captured, detecting at least one face in the photo.

Further, the step of detecting at least one face in the image includes: when a preview image is displayed on the shooting interface, detecting at least one face in the preview image.

Further, the step of detecting at least one face in the image includes: when a beautification instruction for a picture is received, detecting at least one face in the picture.

Further, before the step of detecting at least one face in the image, the method also includes: starting the camera program to obtain an image containing a face, and starting the beautification processing.

The present invention also proposes a mobile terminal, the mobile terminal including: a touch-sensitive display; one or more processors; a memory; and one or more application programs, wherein the one or more application programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs being configured to perform an image processing method. The image processing method comprises the following steps: detecting at least one face in an image, and obtaining the facial feature information of each face; matching a corresponding beautification strategy according to the facial feature information of each face; and performing beautification processing on each face respectively according to the matched beautification strategy. The image processing method described in this embodiment is the image processing method involved in the above embodiments of the present invention and will not be repeated here.

The mobile terminal provided by the embodiment of the present invention performs the above image processing method by configuring one or more programs. The correspondence between facial feature information and beautification strategies is preset; according to this correspondence and the facial feature information of the faces in the image, a beautification strategy suited to each face is matched automatically for beautification processing. This eliminates the cumbersome flow of the user manually setting a beautification strategy for the image to be processed, simplifies the beautification workflow, and improves operating efficiency and the intelligence of the terminal.

Meanwhile, when there are multiple faces in an image, each face can also be given differentiated, targeted beautification according to its own facial feature information, so that the final beautification effect suits the features of every face in the image. This improves the processing effect of the image and the satisfaction of everyone in it, avoiding the situation where a conventional method can only apply the same beautification strategy to all the faces in a photo, leaving only a minority satisfied and the others dissatisfied, and thus greatly improving the user experience.

The image processing method and device of the embodiments of the present invention can be applied both to mobile terminals such as mobile phones, tablets, cameras, notebook computers, and portable terminals, and to fixed terminals such as desktop computers.

The embodiment of the invention discloses A1, an image processing method, including:

detecting at least one face in an image, and obtaining the facial feature information of each face;

matching a corresponding beautification strategy according to the facial feature information of each face;

performing beautification processing on each face respectively according to the matched beautification strategy.

A2, the image processing method as described in A1, wherein the step of matching a corresponding beautification strategy according to the facial feature information of each face includes:

recommending the beautification parameters of a corresponding celebrity face to the user according to the facial feature information of each face;

taking the beautification parameters selected by the user as the beautification strategy of the face.

A3, the image processing method as described in A2, wherein the step of recommending the beautification parameters of a corresponding celebrity face to the user includes:

displaying, in a region adjacent to the face, a prompt message prompting the user to select the beautification parameters of the corresponding celebrity face.

A4, the image processing method as described in A1, wherein the step of matching a corresponding beautification strategy according to the facial feature information of each face includes:

calling a beautification strategy database associated with the identification code of the terminal;

matching a corresponding beautification strategy in the beautification strategy database according to the facial feature information of each face.

A5, the image processing method as described in A4, wherein the identification code is a telephone number.

A6, the image processing method as described in A1, wherein the step of matching a corresponding beautification strategy according to the facial feature information of each face includes:

calling a beautification strategy database associated with the current operating mode of the terminal;

matching a corresponding beautification strategy in the beautification strategy database according to the facial feature information of each face.

A7, the image processing method as described in A6, wherein the operating mode includes at least two of a child mode, a work mode, and a parent mode.

A8, the image processing method as described in A1, wherein the step of matching a corresponding beautification strategy according to the facial feature information of each face includes:

when there are at least two faces in the image, judging whether the gap between the attractiveness scores of the faces reaches a threshold;

when the gap between the attractiveness scores of the faces reaches the threshold, matching corresponding beautification strategies according to how high or low each face's attractiveness score is, so as to reduce the gap between the attractiveness scores of the faces.

A9, the image processing method as described in A8, wherein the step of matching corresponding beautification strategies according to how high or low each face's attractiveness score is includes:

for each grade by which a face's attractiveness score is lower, raising the beautification parameter value of the corresponding beautification strategy by one grade.

A10, the image processing method as described in A8, wherein the step of matching corresponding beautification strategies according to how high or low each face's attractiveness score is includes:

for a face with a high attractiveness score, matching a beautification strategy with low beautification parameter values; for a face with a low attractiveness score, matching a beautification strategy with high beautification parameter values.

A11, the image processing method as described in any one of A4-A7, wherein the beautification strategy database is stored in the cloud.

A12, the image processing method as described in any one of A1-A10, wherein the method includes:

performing the following operations on each face in the image in sequence: obtaining the facial feature information, matching the beautification strategy, and performing the beautification processing.

A13, the image processing method as described in any one of A1-A10, wherein the facial feature information includes one of, or a combination of at least two of, an age feature, a gender feature, a skin color feature, a facial-feature characteristic, a skin quality feature, and a build (fat-or-thin) feature.

A14, the image processing method as described in any one of A1-A10, wherein the beautification strategy includes one of, or a combination of at least two of, skin whitening, acne and blemish removal, eye enlargement, lip makeup, face slimming, skin smoothing, and nose augmentation.

A15, the image processing method as described in any one of A1-A10, wherein the facial feature information includes an age feature and a gender feature; when the age feature is young and the gender feature is female, the corresponding beautification strategy at least includes skin whitening, skin smoothing, and/or lip makeup.

A16, the image processing method as described in any one of A1-A10, wherein the facial feature information includes a facial-feature characteristic; when the facial features are small eyes and/or a flat nose, the corresponding beautification strategy at least includes eye enlargement and/or nose augmentation.

A17, the image processing method as described in any one of A1-A10, wherein the facial feature information includes a gender feature and a skin color feature; when the gender feature is female and the skin color feature is yellowish or dark, the corresponding beautification strategy at least includes skin whitening.

A18, the image processing method as described in any one of A1-A10, wherein the facial feature information includes a skin quality feature; when the skin quality feature is having acne or having spots, the corresponding beautification strategy at least includes acne and blemish removal.

A19, the image processing method as described in any one of A1-A10, wherein the facial feature information includes a gender feature and a build feature; when the gender feature is female and the build feature is plump, the corresponding beautification strategy at least includes face slimming.
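
The feature-to-strategy mappings of A15-A19 naturally form a rule table. The predicate and operation encodings below are illustrative assumptions, not the patent's data model.

```python
# Rule table for A15-A19: (predicate over feature info, operations).
RULES = [
    (lambda f: f.get("age") == "young" and f.get("gender") == "female",
     ["skin_whitening", "skin_smoothing", "lip_makeup"]),            # A15
    (lambda f: f.get("eyes") == "small" or f.get("nose") == "flat",
     ["eye_enlargement", "nose_augmentation"]),                      # A16
    (lambda f: f.get("gender") == "female"
               and f.get("skin_tone") in ("yellowish", "dark"),
     ["skin_whitening"]),                                            # A17
    (lambda f: f.get("skin_quality") in ("acne", "spots"),
     ["acne_and_blemish_removal"]),                                  # A18
    (lambda f: f.get("gender") == "female" and f.get("build") == "plump",
     ["face_slimming"]),                                             # A19
]

def matched_operations(features):
    """Collect the operations of every rule the face's features satisfy,
    deduplicating operations suggested by more than one rule."""
    operations = []
    for predicate, actions in RULES:
        if predicate(features):
            operations.extend(a for a in actions if a not in operations)
    return operations
```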

A20, the image processing method as described in any one of A1-A10, wherein the step of detecting at least one face in the image includes: when a photo is captured, detecting at least one face in the photo.

A21, the image processing method as described in any one of A1-A10, wherein the step of detecting at least one face in the image includes: when a preview image is displayed on the shooting interface, detecting at least one face in the preview image.

A22, the image processing method as described in any one of A1-A10, wherein the step of detecting at least one face in the image includes: when a beautification instruction for a picture is received, detecting at least one face in the picture.

A23, the image processing method as described in any one of A1-A10, wherein before the step of detecting at least one face in the image, the method also includes: starting the camera program to obtain an image containing a face, and starting the beautification processing.
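
The three detection triggers of A20-A22 amount to a small dispatch over capture events; the event encoding here is an assumption made for illustration.

```python
def detection_source(event):
    """Map a trigger event to the image that face detection should scan:
    a captured photo (A20), a live preview frame (A21), or a stored
    picture targeted by a beautification instruction (A22)."""
    if event["type"] == "photo_captured":
        return event["photo"]
    if event["type"] == "preview_frame":
        return event["frame"]
    if event["type"] == "beautify_command":
        return event["picture"]
    raise ValueError("no face detection for event: " + str(event["type"]))
```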

The embodiment of the invention also discloses B24, an image processing apparatus, including:

a detection module, configured to detect at least one face in an image and obtain the facial feature information of each face;

a matching module, configured to match a corresponding beautification strategy according to the facial feature information of each face;

a processing module, configured to perform beautification processing on each face according to the matched beautification strategy.

B25, the image processing apparatus as described in B24, wherein the matching module is configured to: recommend the beautification parameters of a corresponding celebrity face to the user according to the facial feature information of each face; and take the beautification parameters selected by the user as the beautification strategy of the face.

B26, the image processing apparatus as described in B25, wherein the matching module is configured to: display, in a region adjacent to the face, a prompt message prompting the user to select the beautification parameters of the corresponding celebrity face.

B27, the image processing apparatus as described in B24, wherein the matching module is configured to: call a beautification strategy database associated with the identification code of the terminal; and match a corresponding beautification strategy in the beautification strategy database according to the facial feature information of each face.

B28, the image processing apparatus as described in B27, wherein the identification code is a telephone number.

B29, the image processing apparatus as described in B24, wherein the matching module is configured to:

call a beautification strategy database associated with the current operating mode of the terminal; and match a corresponding beautification strategy in the beautification strategy database according to the facial feature information of each face.

B30, the image processing apparatus as described in B29, wherein the operating mode includes at least two of a child mode, a work mode, and a parent mode.

B31, the image processing apparatus as described in B24, wherein the matching module is configured to:

when there are at least two faces in the image, judge whether the gap between the attractiveness scores of the faces reaches a threshold;

when the gap between the attractiveness scores of the faces reaches the threshold, match corresponding beautification strategies according to how high or low each face's attractiveness score is, so as to reduce the gap between the attractiveness scores of the faces.

B32, the image processing apparatus as described in B31, wherein the matching module is configured to: for each grade by which a face's attractiveness score is lower, raise the beautification parameter value of the corresponding beautification strategy by one grade.

B33, the image processing apparatus as described in B31, wherein the matching module is configured to: for a face with a high attractiveness score, match a beautification strategy with low beautification parameter values; and for a face with a low attractiveness score, match a beautification strategy with high beautification parameter values.

B34, the image processing apparatus as described in any one of B27-B30, wherein the beautification strategy database is stored in the cloud.

B35, the image processing apparatus as described in any one of B24-B33, wherein the apparatus performs the following operations on each face in sequence: obtaining the facial feature information, matching the beautification strategy, and performing the beautification processing.

B36, the image processing apparatus as described in any one of B24-B33, wherein the facial feature information includes one of, or a combination of at least two of, an age feature, a gender feature, a skin color feature, a facial-feature characteristic, a skin quality feature, and a build (fat-or-thin) feature.

B37, the image processing apparatus as described in any one of B24-B33, wherein the beautification strategy includes one of, or a combination of at least two of, skin whitening, acne and blemish removal, eye enlargement, lip makeup, face slimming, skin smoothing, and nose augmentation.

B38, the image processing apparatus as described in any one of B24-B33, wherein the facial feature information includes an age feature and a gender feature; when the age feature is young and the gender feature is female, the corresponding beautification strategy at least includes skin whitening, skin smoothing, and/or lip makeup.

B39, the image processing apparatus as described in any one of B24-B33, wherein the facial feature information includes a facial-feature characteristic; when the facial features are small eyes and/or a flat nose, the corresponding beautification strategy at least includes eye enlargement and/or nose augmentation.

B40, the image processing apparatus as described in any one of B24-B33, wherein the facial feature information includes a gender feature and a skin color feature; when the gender feature is female and the skin color feature is yellowish or dark, the corresponding beautification strategy at least includes skin whitening.

B41, the image processing apparatus as described in any one of B24-B33, wherein the facial feature information includes a skin quality feature; when the skin quality feature is having acne or having spots, the corresponding beautification strategy at least includes acne and blemish removal.

B42, the image processing apparatus as described in any one of B24-B33, wherein the facial feature information includes a gender feature and a build feature; when the gender feature is female and the build feature is plump, the corresponding beautification strategy at least includes face slimming.

B43, the image processing apparatus as described in any one of B24-B33, wherein the detection module is configured to: when a photo is captured, detect at least one face in the photo.

B44, the image processing apparatus as described in any one of B24-B33, wherein the detection module is configured to: when a preview image is displayed on the shooting interface, detect at least one face in the preview image.

B45, the image processing apparatus as described in any one of B24-B33, wherein the detection module is configured to: when a beautification instruction for a picture is received, detect at least one face in the picture.

B46, the image processing apparatus as described in any one of B24-B33, wherein the apparatus also includes a starting module, the starting module being configured to: start the camera program to obtain an image containing a face, and start the beautification processing, so as to trigger the detection module.

The embodiment of the invention also discloses C47, a mobile terminal, including:

a touch-sensitive display;

one or more processors;

a memory;

one or more application programs, wherein the one or more application programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs being configured to perform the method described in any one of A1 to A23.

It will be understood by those skilled in the art that the present invention includes devices for performing one or more of the operations described herein. These devices may be specially designed and manufactured for the required purposes, or may include known devices in general-purpose computers. These devices have computer programs stored in them that are selectively activated or reconfigured. Such computer programs may be stored in a device-readable (for example, computer-readable) medium or in any type of medium suitable for storing electronic instructions and respectively coupled to a bus. The computer-readable media include, but are not limited to, any type of disk (including floppy disks, hard disks, optical discs, CD-ROMs, and magneto-optical disks), ROM (Read-Only Memory), RAM (Random Access Memory), EPROM (Erasable Programmable Read-Only Memory), EEPROM (Electrically Erasable Programmable Read-Only Memory), flash memory, magnetic cards, or optical cards. That is, a readable medium includes any medium that stores or transmits information in a form readable by a device (for example, a computer).

Those skilled in the art will appreciate that each frame in these structure charts and/or block diagrams and/or flow graphs, and combinations of frames therein, can be realized with computer program instructions. They will also appreciate that these computer program instructions can be supplied to the processor of a general-purpose computer, a special-purpose computer, or another programmable data processing method, so that the schemes specified in a frame or frames of the structure charts and/or block diagrams and/or flow graphs disclosed by the present invention are executed through the processor of the computer or other programmable data processing method.

Those skilled in the art will appreciate that the various operations, methods, and the steps, measures, and schemes in the flows discussed in the present invention can be alternated, changed, combined, or deleted. Further, other steps, measures, and schemes in the various operations, methods, and flows discussed in the present invention can also be alternated, changed, rearranged, decomposed, combined, or deleted. Further, prior-art steps, measures, and schemes corresponding to the various operations, methods, and flows disclosed in the present invention can also be alternated, changed, rearranged, decomposed, combined, or deleted.

The foregoing is only the preferred embodiments of the present invention and is not intended to limit the scope of the invention. Every equivalent structure or equivalent flow transformation made by using the description and drawings of the invention, or direct or indirect use in other related technical fields, is likewise included within the protection scope of the present invention.

Claims (10)

1. An image processing method, comprising:
detecting at least one face in an image, and obtaining the facial feature information of each face;
matching a corresponding beautification strategy according to the facial feature information of each face;
performing beautification processing on each face respectively according to the matched beautification strategy.
2. The image processing method according to claim 1, characterised in that the step of matching a corresponding beautification strategy according to the facial feature information of each face includes:
recommending the beautification parameters of a corresponding celebrity face to the user according to the facial feature information of each face;
taking the beautification parameters selected by the user as the beautification strategy of the face.
3. The image processing method according to claim 2, characterised in that the step of recommending the beautification parameters of a corresponding celebrity face to the user includes:
displaying, in a region adjacent to the face, a prompt message prompting the user to select the beautification parameters of the corresponding celebrity face.
4. The image processing method according to claim 1, characterised in that the step of matching a corresponding beautification strategy according to the facial feature information of each face includes:
calling a beautification strategy database associated with the identification code of the terminal;
matching a corresponding beautification strategy in the beautification strategy database according to the facial feature information of each face.
5. The image processing method according to claim 4, characterised in that the identification code is a telephone number.
6. An image processing apparatus, characterised by comprising:
a detection module, configured to detect at least one face in an image and obtain the facial feature information of each face;
a matching module, configured to match a corresponding beautification strategy according to the facial feature information of each face;
a processing module, configured to perform beautification processing on each face according to the matched beautification strategy.
7. The image processing apparatus according to claim 6, characterised in that the matching module is configured to: recommend the beautification parameters of a corresponding celebrity face to the user according to the facial feature information of each face; and take the beautification parameters selected by the user as the beautification strategy of the face.
8. The image processing apparatus according to claim 7, characterised in that the matching module is configured to: display, in a region adjacent to the face, a prompt message prompting the user to select the beautification parameters of the corresponding celebrity face.
9. The image processing apparatus according to claim 6, characterised in that the matching module is configured to: call a beautification strategy database associated with the identification code of the terminal; and match a corresponding beautification strategy in the beautification strategy database according to the facial feature information of each face.
10. A mobile terminal, comprising:
a touch-sensitive display;
one or more processors;
a memory;
one or more application programs, wherein the one or more application programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs being configured to perform the method according to any one of claims 1 to 5.
CN201710365380.2A 2017-05-22 2017-05-22 image processing method, device and mobile terminal CN107274354A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710365380.2A CN107274354A (en) 2017-05-22 2017-05-22 image processing method, device and mobile terminal


Publications (1)

Publication Number Publication Date
CN107274354A true CN107274354A (en) 2017-10-20

Family

ID=60065245

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710365380.2A CN107274354A (en) 2017-05-22 2017-05-22 image processing method, device and mobile terminal

Country Status (1)

Country Link
CN (1) CN107274354A (en)

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107578372A (en) * 2017-10-31 2018-01-12 广东欧珀移动通信有限公司 Image processing method, device, computer-readable recording medium and electronic equipment
CN107770446A (en) * 2017-10-31 2018-03-06 广东欧珀移动通信有限公司 Image processing method, device, computer-readable recording medium and electronic equipment
CN107808137A (en) * 2017-10-31 2018-03-16 广东欧珀移动通信有限公司 Image processing method, device, electronic equipment and computer-readable recording medium
CN107832784A (en) * 2017-10-27 2018-03-23 维沃移动通信有限公司 A kind of method of image beautification and a kind of mobile terminal
CN107862658A (en) * 2017-10-31 2018-03-30 广东欧珀移动通信有限公司 Image processing method, device, computer-readable recording medium and electronic equipment
CN107862654A (en) * 2017-11-30 2018-03-30 广东欧珀移动通信有限公司 Image processing method, device, computer-readable recording medium and electronic equipment
CN107911609A (en) * 2017-11-30 2018-04-13 广东欧珀移动通信有限公司 Image processing method, device, computer-readable recording medium and electronic equipment
CN107948506A (en) * 2017-11-22 2018-04-20 珠海格力电器股份有限公司 A kind of image processing method, device and electronic equipment
CN108401109A (en) * 2018-03-18 2018-08-14 广东欧珀移动通信有限公司 Image acquiring method, device, storage medium and electronic equipment
CN109685741A (en) * 2018-12-28 2019-04-26 北京旷视科技有限公司 A kind of image processing method, device and computer storage medium
CN110222567A (en) * 2019-04-30 2019-09-10 维沃移动通信有限公司 A kind of image processing method and equipment
CN111488778A (en) * 2019-05-29 2020-08-04 北京京东尚科信息技术有限公司 Image processing method and apparatus, computer system, and readable storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103605975A (en) * 2013-11-28 2014-02-26 小米科技有限责任公司 Image processing method and device and terminal device
CN105512615A (en) * 2015-11-26 2016-04-20 小米科技有限责任公司 Picture processing method and apparatus
CN105530435A (en) * 2016-02-01 2016-04-27 深圳市金立通信设备有限公司 Shooting method and mobile terminal
CN106210545A (en) * 2016-08-22 2016-12-07 北京金山安全软件有限公司 A kind of video capture method, device and electronic equipment

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107832784A (en) * 2017-10-27 2018-03-23 维沃移动通信有限公司 A kind of method of image beautification and a kind of mobile terminal
CN107770446B (en) * 2017-10-31 2020-03-27 Oppo广东移动通信有限公司 Image processing method, image processing device, computer-readable storage medium and electronic equipment
CN107770446A (en) * 2017-10-31 2018-03-06 广东欧珀移动通信有限公司 Image processing method, device, computer-readable recording medium and electronic equipment
CN107808137A (en) * 2017-10-31 2018-03-16 广东欧珀移动通信有限公司 Image processing method, device, electronic equipment and computer-readable recording medium
CN107862658A (en) * 2017-10-31 2018-03-30 广东欧珀移动通信有限公司 Image processing method, device, computer-readable recording medium and electronic equipment
CN107578372A (en) * 2017-10-31 2018-01-12 广东欧珀移动通信有限公司 Image processing method, device, computer-readable recording medium and electronic equipment
CN107862658B (en) * 2017-10-31 2020-09-22 Oppo广东移动通信有限公司 Image processing method, image processing device, computer-readable storage medium and electronic equipment
CN107948506A (en) * 2017-11-22 2018-04-20 珠海格力电器股份有限公司 A kind of image processing method, device and electronic equipment
WO2019100766A1 (en) * 2017-11-22 2019-05-31 格力电器(武汉)有限公司 Image processing method and apparatus, electronic device and storage medium
CN107862654A (en) * 2017-11-30 2018-03-30 广东欧珀移动通信有限公司 Image processing method, device, computer-readable recording medium and electronic equipment
CN107911609A (en) * 2017-11-30 2018-04-13 广东欧珀移动通信有限公司 Image processing method, device, computer-readable recording medium and electronic equipment
CN108401109A (en) * 2018-03-18 2018-08-14 广东欧珀移动通信有限公司 Image acquiring method, device, storage medium and electronic equipment
CN108401109B (en) * 2018-03-18 2020-08-04 Oppo广东移动通信有限公司 Image acquisition method and device, storage medium and electronic equipment
CN109685741A (en) * 2018-12-28 2019-04-26 北京旷视科技有限公司 A kind of image processing method, device and computer storage medium
CN109685741B (en) * 2018-12-28 2020-12-11 北京旷视科技有限公司 Image processing method and device and computer storage medium
CN110222567A (en) * 2019-04-30 2019-09-10 维沃移动通信有限公司 A kind of image processing method and equipment
CN111488778A (en) * 2019-05-29 2020-08-04 北京京东尚科信息技术有限公司 Image processing method and apparatus, computer system, and readable storage medium

Similar Documents

Publication Publication Date Title
JP6309540B2 (en) Image processing method, image processing device, terminal device, program, and recording medium
JP6272342B2 (en) Image processing method, image processing device, terminal device, program, and recording medium
CN104982041B (en) For controlling the portable terminal and its method of hearing aid
US10169639B2 (en) Method for fingerprint template update and terminal device
US9992641B2 (en) Electronic device, server, and method for outputting voice
US9779527B2 (en) Method, terminal device and storage medium for processing image
US9392186B2 (en) Mobile terminal and control method thereof capturing first and second images with first and second flashes
CN107844781A (en) Face character recognition methods and device, electronic equipment and storage medium
CN105096241A (en) Face image beautifying device and method
CN107193455B (en) Information processing method and mobile terminal
WO2020156269A1 (en) Display method for electronic device having flexible screen and electronic device
CN103283210A (en) Mobile device and method for proximity detection verification
CN104917881A (en) Multi-mode mobile terminal and implementation method thereof
CN102057656A (en) Developing a notification framework for electronic device events
US20110157009A1 (en) Display device and control method thereof
CN107645611A (en) A kind of method of payment and mobile terminal
CN103473494A (en) Application running method, device and terminal device
CN107977152A (en) A kind of picture sharing method, terminal and storage medium based on dual-screen mobile terminal
CN105306741B (en) A kind of antitheft mobile phone and implementation method with fingerprint identification function
CN104966086B (en) Live body discrimination method and device
WO2019052418A1 (en) Facial recognition method and related product
CN105306815A (en) Shooting mode switching device, method and mobile terminal
CN107613550B (en) Unlocking control method and related product
CN107767839B (en) Brightness adjusting method and related product
CN105430600B (en) A kind of data transmission method and the terminal of data transmission

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination