CN107203978A - Image processing method and mobile terminal - Google Patents
- Publication number
- CN107203978A CN107203978A CN201710371510.3A CN201710371510A CN107203978A CN 107203978 A CN107203978 A CN 107203978A CN 201710371510 A CN201710371510 A CN 201710371510A CN 107203978 A CN107203978 A CN 107203978A
- Authority
- CN
- China
- Prior art keywords
- image
- characteristic area
- information
- image processing
- area
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
- G06V40/171—Local features and components; Facial parts ; Occluding parts, e.g. glasses; Geometrical relationships
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/77—Retouching; Inpainting; Scratch removal
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/80—Camera processing pipelines; Components thereof
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10004—Still image; Photographic image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30196—Human being; Person
- G06T2207/30201—Face
Abstract
The present invention provides an image processing method and a mobile terminal. The method includes: performing face recognition on a first preview image captured by a camera; if a first face image is recognized, obtaining a first identity of the first face image; determining, according to a pre-stored correspondence between identities and feature region information, the first feature region information corresponding to the first identity; and performing image processing on the preview image based on the first feature region information. The feature region information includes: the feature regions to be processed, the position information and area information of each feature region, and the image processing data of each feature region; the image processing data includes an image processing type and image processing parameters. By mapping different face images to different identities, so that different identities correspond to different feature region information, the present invention avoids mechanical, uniform beautification of face images and meets different users' individual demands for beauty.
Description
Technical field
The present invention relates to the technical field of image processing, and in particular to an image processing method and a mobile terminal.
Background art
At present, when a mobile terminal user takes pictures with beautification, the moles on the face are generally detected by turning on a mole-removal function switch; the positions of the moles are then marked, and the moles are processed according to the user's demand.
In the prior art, however, either all moles are retained or all recognized moles are removed. But users' views on moles differ; removing every mole is not always best. The positions of some moles represent the user's characteristic features, which the user wishes to retain so as to keep the image authentic, while removing other moles makes the skin look cleaner and clearer.
Moreover, the position of a mole on the face can also affect its detection. For example, a mole beside an eyebrow or at the corner of the mouth may not be accurately recognized by software. Thus there are moles that the user wants removed but which cannot be recognized, or which are removed incompletely for technical reasons. Existing beautification photography therefore cannot meet users' individual demands for beauty.
Summary of the invention
Embodiments of the present invention provide an image processing method and a mobile terminal, to solve the problems that existing beautification photography cannot recognize all of the feature information on a face image and applies mechanical, uniform beautification processing.
In a first aspect, an embodiment of the present invention provides an image processing method, including:
performing face recognition on a first preview image captured by a camera;
if a first face image is recognized, obtaining a first identity of the first face image;
determining, according to a pre-stored correspondence between identities and feature region information, the first feature region information corresponding to the first identity;
performing image processing on the preview image based on the first feature region information;
wherein the feature region information includes: the feature regions to be processed, the position information and area information of each feature region, and the image processing data of each feature region, the image processing data including an image processing type and image processing parameters.
In a second aspect, an embodiment of the present invention provides a mobile terminal, including:
a first face recognition module, configured to perform face recognition on a first preview image captured by a camera;
a first acquisition module, configured to obtain, when a first face image is recognized, a first identity of the first face image;
an information determination module, configured to determine, according to a pre-stored correspondence between identities and feature region information, the first feature region information corresponding to the first identity;
an image processing module, configured to perform image processing on the preview image based on the first feature region information;
wherein the feature region information includes: the feature regions to be processed, the position information and area information of each feature region, and the image processing data of each feature region, the image processing data including an image processing type and image processing parameters.
In a third aspect, an embodiment of the present invention provides a mobile terminal, including: a processor, a memory, and an image processing program stored in the memory and executable on the processor, the image processing program, when executed by the processor, implementing the image processing method provided by the first aspect of the embodiments of the present invention.
In a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium storing an image processing program which, when executed by a processor, implements the steps of the image processing method provided by the first aspect of the embodiments of the present invention.
In the above scheme of the embodiments of the present invention, face recognition is performed on a first preview image captured by the camera; the first identity of the first face image is obtained; from the correspondence between identities and feature region information, the first feature region information corresponding to that identity is determined; and image processing is performed on the preview image according to the first feature region information. In this way, different face images correspond to different identities, and different identities correspond to different feature region information, which avoids mechanical, uniform beautification of face images, satisfies different users' different beautification requirements for face images, and meets each user's individual demand for beauty.
Brief description of the drawings
To describe the technical solutions of the embodiments of the present invention more clearly, the accompanying drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention; those of ordinary skill in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic flowchart of the image processing method provided by one embodiment of the present invention;
Fig. 2 is a detailed flowchart of step 104 in Fig. 1;
Fig. 3 is a schematic flowchart of the image processing method provided by another embodiment of the present invention;
Fig. 4 is a detailed flowchart of step 204 in Fig. 3;
Fig. 5 is a detailed flowchart of step 204-11 in Fig. 4;
Fig. 6 is a schematic flowchart of the image processing method provided by a further embodiment of the present invention;
Fig. 7 is a schematic structural diagram of the mobile terminal provided by one embodiment of the present invention;
Fig. 8 is a schematic structural diagram of the mobile terminal provided by another embodiment of the present invention;
Fig. 9 is a schematic structural diagram of the mobile terminal provided by a further embodiment of the present invention;
Fig. 10 is a schematic structural diagram of the mobile terminal provided by yet another embodiment of the present invention.
Embodiment
The technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the scope of protection of the present invention.
Fig. 1 is a schematic flowchart of the image processing method provided by one embodiment of the present invention; the implementation of the method is described below with reference to the figure.
As shown in Fig. 1, the image processing method includes the following steps:
Step 101: perform face recognition on the first preview image captured by the camera.
Here, face recognition technology can be used to detect whether a face image is present in the first preview image captured by the camera of the mobile terminal.
Step 102: if a first face image is recognized, obtain the first identity of the first face image.
It should be noted that different face images correspond to different identities, so different face images are distinguished by their different identities.
Specifically, the feature information of the face image is extracted and saved, and an identity is assigned to this feature information to characterize the user's identity.
Step 103: determine, according to the pre-stored correspondence between identities and feature region information, the first feature region information corresponding to the first identity.
The feature region information includes: the feature regions to be processed, the position information and area information of each feature region, and the image processing data of each feature region; the image processing data includes an image processing type and image processing parameters.
Here, a feature region specifically refers to the image region on the face image where at least one of a mole, a blemish, or acne is located.
It should be noted that different identities correspond to different feature region information. Which of the feature regions are designated as regions to be processed can be set by the user according to the user's own needs, and the image processing data of each feature region can likewise be set by the user according to the user's own aesthetic preferences, so as to meet different users' individual demands for beauty.
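The identity-to-feature-region correspondence of steps 102 and 103 can be sketched as a simple in-memory lookup table. This is an illustrative sketch, not the patent's implementation; all names (`FeatureRegion`, `save_feature_info`, and so on) are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class FeatureRegion:
    position: tuple        # (x, y) of the region on the face image
    area: int              # region size in pixels
    processing_type: str   # "remove", "fade", or "keep"
    fade_level: int = 0    # fading level value, used when type is "fade"

@dataclass
class FeatureRegionInfo:
    regions: list = field(default_factory=list)

# pre-stored correspondence: identity ID -> feature region information
_store = {}

def save_feature_info(identity_id, info):
    _store[identity_id] = info

def lookup_feature_info(identity_id):
    # step 103: returns None when no correspondence is stored for this face
    return _store.get(identity_id)
```

Because the lookup is keyed by identity rather than by image, two users photographed by the same terminal can carry entirely different processing schemes.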
Step 104: perform image processing on the preview image based on the first feature region information.
Preferably, as shown in Fig. 2, step 104 may specifically include:
Step 1041: obtain the feature regions to be processed in the first feature region information, together with their position information, area information, and the first image processing data.
Step 1042: obtain the image processing type and image processing parameters in the first image processing data.
Specifically, the image processing type includes removal processing and/or fading processing; the image processing parameters include a fading level value.
Step 1043: based on the position information and area information of each feature region, perform image processing on the feature regions to be processed in the first feature region information according to the image processing type and image processing parameters in the first image processing data.
Here, the preview image is processed using the image processing data in the first feature region information. Since the image processing data of each feature region is set by the user according to the user's own aesthetic preferences, the image finally obtained after processing is an image that meets the user's individual requirements.
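A minimal illustration of the fading processing described above, assuming a grayscale image stored as a list of rows: each pixel of a feature region is blended toward the mean brightness of the surrounding skin, with the fading level value acting as the blend weight. The function name and this per-pixel blending scheme are assumptions of the sketch, not the patent's algorithm:

```python
def process_region(image, region_pixels, surround_pixels, fade_level):
    """Blend each pixel of a feature region toward the mean brightness of
    the surrounding skin; fade_level is a percentage (100 = full removal,
    0 = region kept unchanged)."""
    if not surround_pixels:
        return
    # estimate the local skin brightness from a ring of surrounding pixels
    target = sum(image[y][x] for x, y in surround_pixels) / len(surround_pixels)
    t = fade_level / 100.0
    for x, y in region_pixels:
        image[y][x] = round(image[y][x] * (1 - t) + target * t)
```

With a fading level of 100 this degenerates into removal, which is why a single parameterized routine can serve both processing types.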
In the embodiment of the present invention, face recognition is performed on the first preview image captured by the camera; the first identity of the first face image and the correspondence between identities and feature region information are obtained; the first feature region information corresponding to the identity is obtained; and image processing is performed on the preview image according to the first feature region information. In this way, different face images correspond to different identities, and different identities correspond to different feature region information, which avoids mechanical, uniform beautification of face images, satisfies different users' different beautification requirements for face images, and meets each user's individual demand for beauty.
Fig. 3 is a schematic flowchart of the image processing method provided by another embodiment of the present invention; the implementation of the method is described below with reference to the figure.
It should be noted that this method is performed before the step in Fig. 1 of performing face recognition on the first preview image captured by the camera. As shown in Fig. 3, the image processing method includes the following steps:
Step 201: perform face recognition on a second preview image captured by the camera.
Here, face recognition technology can be used to detect whether a face image is present in the second preview image captured by the camera of the mobile terminal.
Step 202: if a second face image is recognized, determine whether feature region information of the second face image is already stored.
If not, perform step 203; if so, that is, if the terminal has stored the feature region information of the second face image, perform image processing on the second preview image according to that feature region information.
Step 203: if the feature region information of the second face image is not stored, assign a second identity to the second face image.
Specifically, the feature information of the face image is extracted and saved, and an identity is assigned to the face image to characterize the user's identity. For example, an identity ID face[i] is assigned, where i is a natural number. Here, face[i] indicates whose face has been saved. For example, for user A, after the feature information of the face image is extracted, user A is assigned the identification number face[1]; a face image appearing in a later picture can then be recognized as user A by comparing its feature information.
Different face images correspond to different identities, so different face images are distinguished by their different identities.
Step 204: obtain the second feature region information of the second face image.
Specifically, as shown in Fig. 4, step 204 may specifically include:
Step 204-1: perform feature detection on the second face image to obtain at least one feature region to be processed.
Specifically, a feature region includes the image region where at least one of a mole, a blemish, or acne is located.
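The feature detection of step 204-1 could, for instance, be approximated by finding connected components of dark pixels on a grayscale face crop; each component's centroid and pixel count then correspond to a region's position and area information. This is a naive illustrative stand-in, not a production mole/blemish detector:

```python
from collections import deque

def detect_dark_regions(image, threshold):
    """Find 4-connected components of pixels darker than `threshold`
    (candidate moles/spots); return a (centroid, area) pair per component."""
    h, w = len(image), len(image[0])
    seen = [[False] * w for _ in range(h)]
    regions = []
    for sy in range(h):
        for sx in range(w):
            if seen[sy][sx] or image[sy][sx] >= threshold:
                continue
            # flood-fill one dark component starting at (sx, sy)
            queue = deque([(sx, sy)])
            seen[sy][sx] = True
            pixels = []
            while queue:
                x, y = queue.popleft()
                pixels.append((x, y))
                for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                    if 0 <= nx < w and 0 <= ny < h and not seen[ny][nx] \
                            and image[ny][nx] < threshold:
                        seen[ny][nx] = True
                        queue.append((nx, ny))
            cx = sum(p[0] for p in pixels) / len(pixels)
            cy = sum(p[1] for p in pixels) / len(pixels)
            regions.append(((cx, cy), len(pixels)))
    return regions
```

As the background section notes, such brightness-based detection is exactly what fails near eyebrows or the corner of the mouth, which is why the method supplements it with manual identification below.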
Step 204-2: mark the at least one feature region to be processed, and display mark information on each feature region.
Here, the terminal automatically marks the detected feature regions to be processed and displays them on the face image, making it convenient for the user to check the positions of the feature regions on the face image. Having software intelligently identify the feature regions also makes it convenient for the user to subsequently cancel the marks on particular feature regions; that is, the user can independently choose which feature regions to unmark, and no image processing is performed on a feature region whose mark has been cancelled.
Step 204-3: detect the user's touch operations on the preview image.
Step 204-4: if a first touch operation is detected, obtain the first operation position of the first touch operation.
Preferably, the first touch operation is a sliding operation, used to circle a feature region that the user has identified, so that the feature region can be marked.
Step 204-5: perform feature detection on the image region within a preset range around the first operation position.
Step 204-6: if a feature region is detected, mark the detected feature region and display mark information.
The mark information is used to indicate that the feature region is an image region to be processed.
Here, steps 204-3 to 204-6 use the user's first touch operation to manually identify the feature regions that the terminal failed to identify automatically in step 204-2; they are a further supplement to and refinement of step 204-2.
It should be noted that the mark information in step 204-2 is obtained through the terminal's automatic identification, whereas the mark information in step 204-6 is obtained through the user's first touch operation, i.e. manual identification.
Step 204-7: if a second touch operation on the at least one feature region to be processed is detected, obtain the operation position of the second touch operation.
Here, preferably, the second touch operation is a tap operation, through which the user's instruction to cancel image processing on a feature region is recognized.
Step 204-8: if an instruction to cancel image processing on the feature region corresponding to the operation position of the second touch operation is received, designate that feature region as a non-feature region;
Step 204-9: remove the mark information on the feature region corresponding to the operation position of the second touch operation.
Here, steps 204-2 to 204-6 and steps 204-7 to 204-9 are performed in parallel.
It should be noted that by performing steps 204-2 to 204-9, the feature regions the user wishes to keep are retained, according to the user's own understanding of beauty, so as to preserve the authenticity of the image, while the feature regions the user wants removed are removed, making the skin look cleaner and clearer. Feature regions that software alone might not accurately identify can also be detected and displayed with marks, which is intuitive and facilitates human-computer interaction.
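The tap-to-cancel behavior of steps 204-7 to 204-9 can be sketched as a hit test of the tap position against the marked regions; the `hit_radius` value and the dictionary layout are assumptions of this sketch:

```python
def handle_second_touch(marked_regions, tap_position, hit_radius=20):
    """A tap on a marked feature region cancels its mark, so the region
    is treated as a non-feature region and excluded from processing.
    Returns the region that was hit, or None if the tap missed."""
    tx, ty = tap_position
    for region in marked_regions:
        cx, cy = region["position"]
        # squared-distance hit test against the region centre
        if (tx - cx) ** 2 + (ty - cy) ** 2 <= hit_radius ** 2:
            region["marked"] = False   # cancelled: no image processing
            return region
    return None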
Step 204-10: calculate the position information and area information of each of the at least one feature region;
Step 204-11: obtain the second image processing data of the second face image;
Specifically, as shown in Fig. 5, step 204-11 may specifically include:
Step 204-111: obtain the image processing type and image processing parameters input by the user.
Specifically, the image processing type includes removal processing and/or fading processing; the image processing parameters include a fading level value.
Here, removal processing specifically refers to removing moles, blemishes, or acne from the face image; fading processing specifically refers to lightening moles, blemishes, or acne in the face image.
Here, the image processing parameters may also include: a skin tone index, a skin-smoothing index, a whitening index, a face-slimming index, an eye-enlarging index, a facial contouring index, and the like.
Step 204-112: designate the image processing type and image processing parameters input by the user as the second image processing data.
Step 204-12: designate the at least one feature region, all of the position information and area information, and the second image processing data as the second feature region information.
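Step 204-12 can be illustrated as assembling one record from the regions that are still marked plus the user's processing data; the field names here are hypothetical:

```python
def build_feature_region_info(detected_regions, processing_type, fade_level=0):
    """Bundle the still-marked regions, their positions and areas, and the
    user's image processing data into one feature-region-information record;
    regions the user unmarked (step 204-8) are dropped."""
    return {
        "regions": [{"position": r["position"], "area": r["area"]}
                    for r in detected_regions if r.get("marked", True)],
        "processing": {"type": processing_type, "fade_level": fade_level},
    }
```

The resulting record is what gets associated with the second identity in step 205 below.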
Step 205: establish the correspondence between the second identity and the second feature region information.
Here, since the second identity represents the second face image, establishing the correspondence between the second identity and the second feature region information amounts to obtaining the second feature region information associated with the second face image. That is, different face images correspond to different feature region information, and the image processing data in the feature region information is used to perform image processing on the preview image containing the face image, yielding an image that meets the user's individual requirements.
In the image processing method provided by this embodiment of the present invention, when the terminal has not stored the feature region information of the face image recognized in the preview image, an identity is assigned to the face image; the feature regions of the face image are identified automatically by the terminal, supplemented by manual identification; the feature regions are marked and the corresponding touch operations are handled; the feature region information of the face image is thereby obtained; and finally the correspondence between the identity and the feature region information of the face image is established. In this way, different face images are assigned different identities, and the feature region information corresponding to each face image is obtained and stored, so that the system can later obtain the feature region information corresponding to a face image simply by recognizing it, and thus perform image processing on the preview image containing the face image, meeting the user's individual requirements for image processing.
Fig. 6 is a schematic flowchart of the image processing method provided by a further embodiment of the present invention; the implementation of the method is described below with reference to the figure. As shown in Fig. 6, the image processing method includes the following steps:
Step 301: detect that the photographing function of the mobile terminal has been turned on;
Step 302: perform face recognition on the preview image and determine whether a face image is present;
If so, perform step 303; otherwise, continue performing step 302.
Step 303: determine whether identity ID recognition has already been performed;
If not, perform step 304; if so, perform step 309.
It should be noted that, in general, the terminal collects information from the face image of the preview image to confirm whether the face image information has already been acquired.
If it has been acquired, proceed to step 309; if not, proceed to step 304 to acquire the face image information.
Face image ID recognition serves to provide different beautification schemes for different people's demands. The feature information of the face is saved, the facial features are labelled, and an identity ID face[i] is assigned, where i is a natural number; here, face[i] indicates which user's face image has been saved. For example, after user A's facial features are extracted, user A is assigned the identification number face[1]; a face image later appearing in the photographing preview interface can then be recognized as user A by comparing its facial features. That is, face[i] is matched against the existing identity IDs.
Step 304: recognize the moles in the face image with a software recognition algorithm;
Here, software scans the face for moles, records the coordinate position of each recognized mole in the face image, and prompts on the screen to show that these moles have been recognized. However, because of the limitations of software recognition algorithms, not all moles can be recognized completely and accurately, so the method proceeds to step 305.
Step 305: identify moles manually;
Here, specifically, the user's touch operations on the preview image are detected; after a first touch operation (such as a sliding operation) is detected, the first operation position of the first touch operation is obtained; feature detection is performed on the image region within a preset range around the first operation position; and after a feature region is detected, it is marked. This is the process by which the user manually identifies moles.
Here, the positions of moles are identified manually by the user and marked. Through human-computer interaction, the user supplements the moles that the software recognition algorithm failed to recognize; combining software with manual interaction allows the coordinate positions of all moles to be completely recorded. The method then proceeds to step 306.
Step 306: manually screening moles.
Here, after the terminal has recorded all the moles on the facial image, the user is guided to mark the moles to be retained, and the information of these moles is saved. For example, when a second touch operation (such as a click operation) on the image region of at least one mole on the facial image is detected, the operation position of the second touch operation is obtained; upon receiving an instruction to cancel the removal or fading of the mole corresponding to that operation position, the mole corresponding to the operation position is determined to be a non-characteristic area. In other words, the terminal obtains the position information of the moles the user wants to retain. At this point, the terminal holds both the position information of all the moles on the user's face and the position information of the moles to be removed.
Step 307: saving the information of the facial image and the moles.
Here, the portrait ID recognized in step 302 is mapped to the mole information collected in steps 304 and 305, and the mapping is saved in the camera software of the terminal.
Step 308: collecting and saving the beautification processing information corresponding to the user identity.
It should be noted that the beautification processing information is image processing information, including an image processing type and image processing parameters.
Here, the image processing type includes removal processing and/or fading processing, and the image processing parameters include a fading grade value.
Specifically, the image processing parameters may further include a skin tone index, a skin smoothing index, a whitening index, a face slimming index, an eye enlargement index, a facial contouring index, and the like.
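As a concrete illustration of what step 308 might store, the structure below groups a processing type with its parameters under an identity ID. The field names and the face[1] key are illustrative assumptions, not mandated by the embodiment.

```python
# Illustrative container for the beautification processing information of
# step 308: an image processing type ("remove" or "fade") plus numeric
# image processing parameters, stored per user identity ID.
from dataclasses import dataclass, field

@dataclass
class BeautyInfo:
    processing_type: str              # "remove" or "fade"
    fade_level: int = 0               # fading grade value
    extras: dict = field(default_factory=dict)  # whitening index, etc.

# Mapping from identity ID to that user's saved beautification scheme.
profiles = {
    "face[1]": BeautyInfo("fade", fade_level=3,
                          extras={"whitening": 2, "slim_face": 1}),
}
```

Step 310 then only needs a dictionary lookup on the identity ID to retrieve the whole scheme.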
Step 309: reading the user identity ID.
Here, based on the information collected in steps 303 and 308 above, the identity ID face[i] of the user is found.
Step 310: invoking the beautification processing information corresponding to the user identity ID.
Specifically, a beautification scheme is generated from the information collected in steps 303 through 308 above, and the beautification scheme is sent to the mole-removal engine of the software.
Step 311: detecting whether there is a photographing instruction. If so, step 312 is performed; otherwise, step 311 continues to be performed.
Step 312: performing image processing on the facial image according to the beautification processing information corresponding to the user identity ID.
Step 313: completing the beautifying mole removal and obtaining the photographed image.
In the embodiment of the present invention, by obtaining the user identity ID and the correspondence between the user identity ID associated with the facial image and the beautification processing information, the beautification processing information corresponding to the facial image is obtained, and image processing is performed on the facial image according to that beautification processing information to obtain the photographed image. In this way, through different user identity IDs, different facial images correspond to different beautification processing information, so the individualized beautification demands of different users can be met.
An embodiment of the present invention further provides a computer-readable storage medium on which an image processing program (instructions) is stored; when executed by a processor, the program (instructions) implements the following steps:
performing face recognition on a first preview image collected by a camera;
if a first facial image is recognized, obtaining a first identity of the first facial image;
determining, according to a prestored correspondence between identities and characteristic area information, first characteristic area information corresponding to the first identity;
performing image processing on the preview image based on the first characteristic area information;
wherein the characteristic area information includes: a characteristic area to be processed, position information and area information of the characteristic area, and image processing information of the characteristic area, the image processing information including an image processing type and image processing parameters.
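Under one plausible reading, the prestored correspondence in the steps above is simply a lookup table keyed by identity. The sketch below uses an assumed dictionary layout (the ID strings and keys such as "pos" and "fade_level" are invented for illustration).

```python
# Assumed in-memory layout of the prestored identity -> characteristic
# area information correspondence; processing proceeds only for known IDs.
feature_store = {
    "ID-001": {
        "areas": [{"pos": (120, 85), "area": 9,
                   "type": "remove", "fade_level": 0}],
    },
}

def lookup_feature_info(identity_id):
    """Return the stored characteristic area information for an identity,
    or None when the identity has no stored correspondence."""
    return feature_store.get(identity_id)
```

A None result corresponds to the branch in which a new identity must first be allocated and its characteristic area information collected.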
Optionally, the following steps may also be implemented when the program (instructions) is executed by the processor:
before the step of performing face recognition on the first preview image collected by the camera, the method further includes:
performing face recognition on a second preview image collected by the camera;
if a second facial image is recognized, determining whether characteristic area information of the second facial image is stored;
if the characteristic area information of the second facial image is not stored, allocating a second identity to the second facial image;
obtaining second characteristic area information of the second facial image;
establishing a correspondence between the second identity and the second characteristic area information.
The step of obtaining the second characteristic area information of the second facial image includes:
performing feature detection on the second facial image to obtain at least one characteristic area to be processed;
calculating position information and area information of each characteristic area in the at least one characteristic area;
obtaining second image processing information of the second facial image;
determining the at least one characteristic area, all the position information, the area information, and the second image processing information as the second characteristic area information;
wherein the characteristic area includes an image region where at least one of a mole, a spot, and acne is located.
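The position and area calculation above could, under the assumption that feature detection yields a pixel set per region, be as simple as a bounding box plus a pixel count:

```python
# Derive position information (bounding box) and area information
# (pixel count) for one detected characteristic area. The pixel-set
# input format is an assumption about the upstream detector.
def summarize_region(pixels):
    ys = [y for y, _ in pixels]
    xs = [x for _, x in pixels]
    return {
        "position": (min(ys), min(xs), max(ys), max(xs)),  # y0, x0, y1, x1
        "area": len(pixels),                               # pixels covered
    }
```

The resulting dictionaries, one per characteristic area, together with the image processing information form the second characteristic area information.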
After the step of performing feature detection on the second facial image to obtain the at least one characteristic area to be processed, the method further includes:
marking the at least one characteristic area to be processed, and displaying label information on each characteristic area;
detecting a touch operation of the user on the preview image;
if a first touch operation is detected, obtaining a first operation position of the first touch operation;
performing feature detection on the image region within a preset range around the first operation position;
if a characteristic area is detected, marking the detected characteristic area and displaying label information;
wherein the label information is used to indicate that the characteristic area is an image region to be processed.
After the step of detecting a touch operation of the user on the preview image, the method further includes:
if a second touch operation on the at least one characteristic area to be processed is detected, obtaining an operation position of the second touch operation;
if an instruction to cancel image processing of the characteristic area corresponding to the operation position of the second touch operation is received, determining the characteristic area corresponding to the operation position of the second touch operation as a non-characteristic area;
removing the label information on the characteristic area corresponding to the operation position of the second touch operation.
The step of obtaining the second image processing information of the second facial image includes:
obtaining an image processing type and image processing parameters input by the user;
determining the image processing type and image processing parameters input by the user as the second image processing information;
wherein the image processing type includes removal processing and/or fading processing, and the image processing parameters include a fading grade value.
The step of performing image processing on the preview image based on the first characteristic area information includes:
obtaining, from the first characteristic area information, the characteristic area to be processed, the position information and area information of the characteristic area, and first image processing information;
obtaining the image processing type and image processing parameters in the first image processing information;
performing image processing on the characteristic area to be processed in the first characteristic area information based on the position information and area information of the characteristic area, according to the image processing type and image processing parameters in the first image processing information.
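A minimal sketch of that final processing step, assuming a grayscale image and a known surrounding skin value: "remove" replaces each characteristic pixel with the skin value, while "fade" blends toward it with a weight derived from the fading grade value (the 0-10 scale is an assumption of this sketch).

```python
# Apply removal or fading to the pixels of one characteristic area,
# in place. fade_level follows an assumed 0-10 scale.
def process_region(gray, pixels, skin_value, processing_type, fade_level=0):
    if processing_type == "remove":
        weight = 1.0                        # replace outright
    else:                                   # "fade"
        weight = min(fade_level, 10) / 10.0
    for y, x in pixels:
        gray[y][x] = round(gray[y][x] * (1 - weight) + skin_value * weight)
    return gray
```

A production implementation would instead sample the skin value from the neighbourhood of each area and use inpainting rather than a flat blend, but the type/parameter dispatch is the same.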
Computer-readable media include permanent and non-permanent, removable and non-removable media, and information storage may be implemented by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile disc (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory media, such as modulated data signals and carrier waves.
Fig. 7 is a structural schematic diagram of a mobile terminal provided by an embodiment of the present invention. As shown in Fig. 7, the mobile terminal 400 includes:
a first face recognition module 401, configured to perform face recognition on a first preview image collected by a camera;
a first acquisition module 402, configured to obtain a first identity of a first facial image when the first facial image is recognized;
an information determination module 403, configured to determine, according to a prestored correspondence between identities and characteristic area information, first characteristic area information corresponding to the first identity;
an image processing module 404, configured to perform image processing on the preview image based on the first characteristic area information;
wherein the characteristic area information includes: a characteristic area to be processed, position information and area information of the characteristic area, and image processing information of the characteristic area, the image processing information including an image processing type and image processing parameters.
Specifically, Fig. 8 is a structural schematic diagram of a mobile terminal provided by another embodiment of the present invention. The mobile terminal 400 of this embodiment further includes:
a second face recognition module 405, configured to perform face recognition on a second preview image collected by the camera;
a judgment module 406, configured to determine, when a second facial image is recognized, whether characteristic area information of the second facial image is stored;
an identity allocation module 407, configured to allocate a second identity to the second facial image when the characteristic area information of the second facial image is not stored;
a second acquisition module 408, configured to obtain second characteristic area information of the second facial image;
a relation establishment module 409, configured to establish a correspondence between the second identity and the second characteristic area information.
Specifically, the second acquisition module 408 may include:
a first detection submodule 408-1, configured to perform feature detection on the second facial image to obtain at least one characteristic area to be processed;
a calculation submodule 408-2, configured to calculate position information and area information of each characteristic area in the at least one characteristic area;
a first acquisition submodule 408-3, configured to obtain second image processing information of the second facial image;
an information determination submodule 408-4, configured to determine the at least one characteristic area, all the position information, the area information, and the second image processing information as the second characteristic area information;
wherein the characteristic area includes an image region where at least one of a mole, a spot, and acne is located.
Specifically, the second acquisition module 408 may further include:
a first feature marking submodule 408-5, configured to, after feature detection is performed on the second facial image and the at least one characteristic area to be processed is obtained, mark the at least one characteristic area to be processed and display label information on each characteristic area;
a second detection submodule 408-6, configured to detect a touch operation of the user on the preview image;
a second acquisition submodule 408-7, configured to obtain a first operation position of a first touch operation when the first touch operation is detected;
a second feature marking submodule 408-8, configured to mark a detected characteristic area and display label information when the characteristic area is detected;
wherein the label information is used to indicate that the characteristic area is an image region to be processed.
Specifically, the second acquisition module 408 may further include:
a third acquisition submodule 408-9, configured to obtain an operation position of a second touch operation when, after the touch operation of the user on the preview image is detected, the second touch operation on the at least one characteristic area to be processed is detected;
a non-characteristic-area determination submodule 408-10, configured to determine the characteristic area corresponding to the operation position of the second touch operation as a non-characteristic area when an instruction to cancel image processing of the characteristic area corresponding to the operation position of the second touch operation is received;
a label information removal submodule 408-11, configured to remove the label information on the characteristic area corresponding to the operation position of the second touch operation.
Specifically, the first acquisition submodule 408-3 may include:
an acquisition unit 408-31, configured to obtain an image processing type and image processing parameters input by the user;
an information determination unit 408-32, configured to determine the image processing type and image processing parameters input by the user as the second image processing information;
wherein the image processing type includes removal processing and/or fading processing, and the image processing parameters include a fading grade value.
Specifically, the image processing module 404 may include:
a fourth acquisition submodule 4041, configured to obtain, from the first characteristic area information, the characteristic area to be processed, the position information and area information of the characteristic area, and first image processing information;
a fifth acquisition submodule 4042, configured to obtain the image processing type and image processing parameters in the first image processing information;
an image processing submodule 4043, configured to perform image processing on the characteristic area to be processed in the first characteristic area information based on the position information and area information of the characteristic area, according to the image processing type and image processing parameters in the first image processing information.
The mobile terminal provided by the embodiment of the present invention performs face recognition on the first preview image collected by the camera, obtains the first identity of the first facial image and the correspondence between identities and characteristic area information, obtains the first characteristic area information corresponding to the identity, and performs image processing on the preview image according to the first characteristic area information. In this way, different facial images correspond to different identities, so different identities correspond to different characteristic area information; the problem of mechanically uniform beautification of facial images is avoided, the differing beautification requirements of different users for facial images are met, and individualized beautification for different users is realized.
An embodiment of the present invention further provides a mobile terminal, including a processor and a memory, the memory storing an image processing program executable on the processor. When the image processing program is executed by the processor, each process of the above image processing method embodiment is implemented and the same technical effect can be achieved; to avoid repetition, the details are not described again here.
Fig. 9 is a structural schematic diagram of a mobile terminal provided by a further embodiment of the present invention. The mobile terminal 500 shown in Fig. 9 includes:
at least one processor 501, a memory 502, at least one network interface 504, and a user interface 503. The components of the mobile terminal 500 are coupled together through a bus system 505. It can be understood that the bus system 505 is used to implement connection and communication between these components. In addition to a data bus, the bus system 505 includes a power bus, a control bus, and a status signal bus. For clarity of explanation, however, the various buses are all labeled as the bus system 505 in Fig. 9.
The user interface 503 may include a display, a keyboard, or a pointing device (for example, a mouse, a trackball, a touch pad, or a touch screen).
It can be understood that the memory 502 in the embodiment of the present invention may be a volatile memory or a non-volatile memory, or may include both volatile and non-volatile memories. The non-volatile memory may be a read-only memory (Read-Only Memory, ROM), a programmable read-only memory (Programmable ROM, PROM), an erasable programmable read-only memory (Erasable PROM, EPROM), an electrically erasable programmable read-only memory (Electrically EPROM, EEPROM), or a flash memory. The volatile memory may be a random access memory (Random Access Memory, RAM), which is used as an external cache. By way of exemplary but not restrictive description, many forms of RAM are available, such as static random access memory (Static RAM, SRAM), dynamic random access memory (Dynamic RAM, DRAM), synchronous dynamic random access memory (Synchronous DRAM, SDRAM), double data rate synchronous dynamic random access memory (Double Data Rate SDRAM, DDR SDRAM), enhanced synchronous dynamic random access memory (Enhanced SDRAM, ESDRAM), synchlink dynamic random access memory (Synchlink DRAM, SLDRAM), and direct Rambus random access memory (Direct Rambus RAM, DRRAM). The memory 502 of the systems and methods described herein is intended to include, without being limited to, these and any other suitable types of memory.
In some embodiments, the memory 502 stores the following elements: executable modules or data structures, or a subset thereof, or a superset thereof: an operating system 5021 and application programs 5022.
The operating system 5021 contains various system programs, such as a framework layer, a core library layer, and a driver layer, for implementing various basic services and processing hardware-based tasks. The application programs 5022 contain various application programs, such as a media player (Media Player) and a browser (Browser), for implementing various application services. A program implementing the method of the embodiment of the present invention may be contained in the application programs 5022.
In the embodiment of the present invention, the mobile terminal 500 further includes an image processing program that is stored in the memory 502 and executable on the processor 501; specifically, it may be an image processing program in the application programs 5022. When executed by the processor 501, the image processing program implements the following steps: performing face recognition on a first preview image collected by a camera; if a first facial image is recognized, obtaining a first identity of the first facial image; determining, according to a prestored correspondence between identities and characteristic area information, first characteristic area information corresponding to the first identity; performing image processing on the preview image based on the first characteristic area information; wherein the characteristic area information includes: a characteristic area to be processed, position information and area information of the characteristic area, and image processing information of the characteristic area, the image processing information including an image processing type and image processing parameters.
It should be noted here that the characteristic area to be processed, and the position information, area information, and image processing information of the characteristic area, may be stored in the memory 502, and the processor 501 may call the characteristic area to be processed and the position information, area information, and image processing information of the characteristic area from the memory 502.
Optionally, the following steps may also be implemented when the image processing program is executed by the processor 501: before face recognition is performed on the first preview image collected by the camera, performing face recognition on a second preview image collected by the camera; if a second facial image is recognized, determining whether characteristic area information of the second facial image is stored; if the characteristic area information of the second facial image is not stored, allocating a second identity to the second facial image; obtaining second characteristic area information of the second facial image; and establishing a correspondence between the second identity and the second characteristic area information.
Optionally, the following steps may also be implemented when the image processing program is executed by the processor 501: performing feature detection on the second facial image to obtain at least one characteristic area to be processed; calculating position information and area information of each characteristic area in the at least one characteristic area; obtaining second image processing information of the second facial image; and determining the at least one characteristic area, all the position information, the area information, and the second image processing information as the second characteristic area information; wherein the characteristic area includes an image region where at least one of a mole, a spot, and acne is located.
Optionally, the following steps may also be implemented when the image processing program is executed by the processor 501: after feature detection is performed on the second facial image and at least one characteristic area to be processed is obtained, marking the at least one characteristic area to be processed and displaying label information on each characteristic area; detecting a touch operation of the user on the preview image; if a first touch operation is detected, obtaining a first operation position of the first touch operation; performing feature detection on the image region within a preset range around the first operation position; and if a characteristic area is detected, marking the detected characteristic area and displaying label information; wherein the label information is used to indicate that the characteristic area is an image region to be processed.
It should be noted here that the first touch operation and the label information may be stored in the memory 502, and the processor 501 may call the first touch operation and the label information from the memory 502.
Optionally, the following steps may also be implemented when the image processing program is executed by the processor 501: after the touch operation of the user on the preview image is detected, if a second touch operation on the at least one characteristic area to be processed is detected, obtaining an operation position of the second touch operation; if an instruction to cancel image processing of the characteristic area corresponding to the operation position of the second touch operation is received, determining the characteristic area corresponding to the operation position of the second touch operation as a non-characteristic area; and removing the label information on the characteristic area corresponding to the operation position of the second touch operation.
It should be noted here that the second touch operation and the instruction to cancel image processing may be stored in the memory 502, and the processor 501 may call the second touch operation and the instruction to cancel image processing from the memory 502.
Optionally, the following steps may also be implemented when the image processing program is executed by the processor 501: obtaining an image processing type and image processing parameters input by the user; and determining the image processing type and image processing parameters input by the user as the second image processing information; wherein the image processing type includes removal processing and/or fading processing, and the image processing parameters include a fading grade value.
It should be noted here that the image processing type and the image processing parameters may be stored in the memory 502, and the processor 501 may call the image processing type and the image processing parameters from the memory 502.
The mobile terminal of the present invention may be, for example, a mobile phone, a tablet computer, a personal digital assistant (Personal Digital Assistant, PDA), an in-vehicle computer, or another mobile terminal.
The mobile terminal 500 can implement each process implemented by the mobile terminal in the foregoing embodiments; to avoid repetition, the details are not described again here.
In the mobile terminal 500 of the embodiment of the present invention, the image processing program implements the following steps when executed by the processor 501: performing face recognition on a first preview image collected by a camera; if a first facial image is recognized, obtaining a first identity of the first facial image; determining, according to a prestored correspondence between identities and characteristic area information, first characteristic area information corresponding to the first identity; and performing image processing on the preview image based on the first characteristic area information; wherein the characteristic area information includes: a characteristic area to be processed, position information and area information of the characteristic area, and image processing information of the characteristic area, the image processing information including an image processing type and image processing parameters. In this way, different facial images correspond to different identities, so different identities correspond to different characteristic area information; the problem of mechanically uniform beautification of facial images is avoided, the differing beautification requirements of different users for facial images are met, and individualized beautification for different users is realized.
The methods disclosed in the above embodiments of the present invention may be applied to, or implemented by, the processor 501. The processor 501 may be an integrated circuit chip with signal processing capability. In an implementation process, each step of the above methods may be completed by an integrated logic circuit of hardware in the processor 501 or by instructions in the form of software. The processor 501 may be a general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field Programmable Gate Array, FPGA) or another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, and can implement or execute each method, step, and logic diagram disclosed in the embodiments of the present invention. The general-purpose processor may be a microprocessor, or may be any conventional processor or the like. The steps of the methods disclosed in the embodiments of the present invention may be directly embodied as being executed and completed by a hardware decoding processor, or executed and completed by a combination of hardware and software modules in a decoding processor. The software module may be located in a computer-readable storage medium that is mature in the art, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory or an electrically erasable programmable memory, or a register. The computer-readable storage medium is located in the memory 502; the processor 501 reads the information in the memory 502 and completes the steps of the above methods in combination with its hardware. Specifically, an image processing program is stored on the computer-readable storage medium, and each step of the above image processing method embodiment is implemented when the image processing program is executed by the processor 501.
It can be understood that the embodiments described herein may be implemented with hardware, software, firmware, middleware, microcode, or a combination thereof. For a hardware implementation, the processing unit may be implemented in one or more application-specific integrated circuits (Application Specific Integrated Circuits, ASIC), digital signal processors (Digital Signal Processing, DSP), digital signal processing devices (DSP Device, DSPD), programmable logic devices (Programmable Logic Device, PLD), field-programmable gate arrays (Field-Programmable Gate Array, FPGA), general-purpose processors, controllers, microcontrollers, microprocessors, other electronic units for performing the functions described herein, or a combination thereof.
For a software implementation, the techniques described herein may be implemented by modules (such as processes and functions) that perform the functions described herein. Software code may be stored in a memory and executed by a processor. The memory may be implemented within the processor or external to the processor.
Fig. 10 is a structural schematic diagram of a mobile terminal provided by yet another embodiment of the present invention. The mobile terminal 600 shown in Fig. 10 includes:
a radio frequency (Radio Frequency, RF) circuit 610, a memory 620, an input unit 630, a display unit 640, a processor 660, an audio circuit 670, a WiFi (Wireless Fidelity) module 680, and a power supply 690.
The input unit 630 may be used to receive numeric or character information input by the user and to generate signal input related to the user settings and function control of the mobile terminal 600. Specifically, in the embodiment of the present invention, the input unit 630 may include a touch panel 631. The touch panel 631, also referred to as a touch screen, collects touch operations of the user on or near it (for example, operations performed by the user on the touch panel 631 with a finger, a stylus, or any other suitable object or accessory) and drives a corresponding connection device according to a preset program. Optionally, the touch panel 631 may include two parts: a touch detection device and a touch controller. The touch detection device detects the touch orientation of the user, detects the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into contact coordinates, sends them to the processor 660, and receives and executes commands sent by the processor 660. In addition, the touch panel 631 may be implemented in multiple types, such as resistive, capacitive, infrared, and surface acoustic wave. Besides the touch panel 631, the input unit 630 may further include other input devices 632, which may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control keys and a power switch key), a trackball, a mouse, and a joystick.
The display unit 640 may be used to display information input by the user or provided to the user, as well as the various menu interfaces of the mobile terminal 600. The display unit 640 may include a display panel 641; optionally, the display panel 641 may be configured in the form of an LCD, an organic light-emitting diode (Organic Light-Emitting Diode, OLED), or the like.
It should be noted that the touch panel 631 may cover the display panel 641 to form a touch display screen. When the touch display screen detects a touch operation on or near it, the operation is transmitted to the processor 660 to determine the type of the touch event, and the processor 660 then provides corresponding visual output on the touch display screen according to the type of the touch event.
The touch display screen includes an application interface display area and a common control display area. The arrangement of these two display areas is not limited; they may be arranged one above the other, side by side, or in any other arrangement that distinguishes the two areas. The application interface display area may be used to display the interface of an application. Each interface may contain interface elements such as icons of at least one application and/or widget desktop controls. The application interface display area may also be an empty interface containing no content. The common control display area is used to display frequently used controls, for example application icons such as a settings button, interface numbers, a scroll bar, and a phone book icon.
The processor 660 is the control center of the mobile terminal 600. It connects the various parts of the entire mobile phone through various interfaces and lines, and performs the various functions of the mobile terminal 600 and processes data by running or executing software programs and/or modules stored in the first memory 621 and calling data stored in the second memory 622, thereby monitoring the mobile terminal 600 as a whole. Optionally, the processor 660 may include one or more processing units.
In this embodiment of the present invention, the mobile terminal 600 further includes an image processing program that is stored in the first memory 621 and executable on the processor 660, and data that is stored in the second memory 622 and can be called by the processor 660. Specifically, when executed by the processor 660, the image processing program implements the following steps: performing face recognition on a first preview image collected by the camera; if a first facial image is recognized, obtaining a first identity of the first facial image; determining, according to a pre-stored correspondence between identities and feature region information, first feature region information corresponding to the first identity; and performing image processing on the preview image based on the first feature region information. The feature region information includes: a feature region to be processed, position information and area information of the feature region, and image processing information of the feature region; the image processing information includes an image processing type and an image processing parameter.
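The lookup flow above (recognized identity → stored feature region information → processing to apply) can be sketched in Python. This is a minimal illustration under assumed names; `FeatureRegion`, `REGION_STORE`, and the type strings `"remove"`/`"fade"` are hypothetical and not defined by the patent.

```python
from dataclasses import dataclass

@dataclass
class FeatureRegion:
    position: tuple   # (x, y) position of the region in the image
    area: int         # region size in pixels
    process_type: str # hypothetical type names: "remove" or "fade"
    fade_level: int = 0  # fading level value, used when process_type == "fade"

# Hypothetical pre-stored correspondence: identity -> feature region information.
REGION_STORE = {
    "id_001": [
        FeatureRegion((120, 80), 25, "remove"),
        FeatureRegion((200, 150), 40, "fade", fade_level=3),
    ],
}

def process_preview(identity, store=REGION_STORE):
    """Look up the feature region information for a recognized identity and
    return the (position, processing type) pairs that would be applied."""
    regions = store.get(identity)
    if regions is None:
        return []  # no stored information for this face
    return [(r.position, r.process_type) for r in regions]
```

An unrecognized identity simply yields no per-region processing, which matches the enrollment branch described next.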
It should be noted that the feature region to be processed, and the position information, area information, and image processing information of the feature region, may be stored in the second memory 622, and the processor 660 may call them from the second memory 622.
Optionally, when executed by the processor 660, the image processing program may further implement the following steps: before performing face recognition on the first preview image collected by the camera, performing face recognition on a second preview image collected by the camera; if a second facial image is recognized, determining whether feature region information of the second facial image is stored; if the feature region information of the second facial image is not stored, allocating a second identity to the second facial image; obtaining second feature region information of the second facial image; and establishing a correspondence between the second identity and the second feature region information.
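The enrollment branch above (allocate a new identity only when none is stored) can be sketched as follows. The function and key names are hypothetical assumptions for illustration.

```python
import itertools

_id_counter = itertools.count(1)  # hypothetical source of fresh identities

def register_if_new(face_key, store):
    """If no identity is stored for this facial image, allocate a new one
    and record it; otherwise return the existing identity."""
    if face_key in store:
        return store[face_key]
    identity = f"id_{next(_id_counter):03d}"
    store[face_key] = identity
    return identity
```

Calling it twice for the same face returns the same identity, so each face is enrolled only once.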
Optionally, when executed by the processor 660, the image processing program may further implement the following steps: performing feature detection on the second facial image to obtain at least one feature region to be processed; calculating position information and area information of each of the at least one feature region; obtaining second image processing information of the second facial image; and determining the at least one feature region, all the position information, the area information, and the second image processing information as the second feature region information. The feature region includes an image region where at least one of a mole, a spot, or acne is located.
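The position and area calculation for a detected region can be sketched as below, assuming each region is already available as a set of pixel coordinates (the representation and the centroid-based position are assumptions; the patent does not fix how position is defined).

```python
def region_stats(pixels):
    """Given the pixel coordinates of one detected region (e.g. a mole or
    spot), return its position (centroid) and area (pixel count)."""
    n = len(pixels)
    cx = sum(x for x, _ in pixels) / n
    cy = sum(y for _, y in pixels) / n
    return (cx, cy), n
```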
Optionally, when executed by the processor 660, the image processing program may further implement the following steps: after performing feature detection on the second facial image to obtain at least one feature region to be processed, marking the at least one feature region to be processed and displaying mark information on each feature region; detecting a touch operation of the user on the preview image; if a first touch operation is detected, obtaining a first operation position of the first touch operation; performing feature detection on an image region within a preset range around the first operation position; and if a feature region is detected, marking the detected feature region and displaying mark information. The mark information is used to indicate that the feature region is an image region to be processed.
It should be noted here that the first touch operation and the mark information may be stored in the second memory 622, and the processor 660 may call the first touch operation and the mark information from the second memory 622.
Optionally, when executed by the processor 660, the image processing program may further implement the following steps: after detecting the touch operation of the user on the preview image, if a second touch operation on the at least one feature region to be processed is detected, obtaining the operation position of the second touch operation; if an instruction to cancel image processing for the feature region corresponding to the operation position of the second touch operation is received, determining the feature region corresponding to the operation position of the second touch operation as a non-feature region; and removing the mark information on the feature region corresponding to the operation position of the second touch operation.
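Matching the second touch operation's position to a marked feature region is a hit test, which might be sketched as below. The distance threshold and the circular hit area are assumptions for illustration.

```python
def hit_region(touch, region_positions, radius=20):
    """Return the index of the marked region whose position is within
    `radius` pixels of the touch position, or None if no region is hit."""
    tx, ty = touch
    for i, (rx, ry) in enumerate(region_positions):
        if (tx - rx) ** 2 + (ty - ry) ** 2 <= radius ** 2:
            return i
    return None
```

On a hit, the region could then be reclassified as a non-feature region and its mark removed.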
It should be noted here that the second touch operation may be stored in the second memory 622 and the instruction to cancel image processing may be stored in the first memory 621; the processor 660 may call the second touch operation from the second memory 622 and the instruction to cancel image processing from the first memory 621.
Optionally, when executed by the processor 660, the image processing program may further implement the following steps: obtaining an image processing type and an image processing parameter input by the user; and determining the image processing type and the image processing parameter input by the user as the second image processing information. The image processing type includes removal processing and/or fading processing, and the image processing parameter includes a fading level value.
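One plausible reading of fading with a level value is a per-pixel blend between the original pixel and a blemish-free (smoothed) estimate. This is a sketch under that assumption; the patent does not specify the blending formula or the level scale.

```python
def fade_pixel(original, smoothed, level, max_level=5):
    """Blend a blemish pixel toward its smoothed (blemish-free) value.
    level = 0 leaves the pixel unchanged; level = max_level removes it."""
    alpha = level / max_level
    return round((1 - alpha) * original + alpha * smoothed)
```

Under this reading, removal processing is simply fading at the maximum level.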
It should be noted here that the image processing type and the image processing parameter may be stored in the second memory 622, and the processor 660 may call them from the second memory 622.
In the mobile terminal 600 provided in this embodiment of the present invention, when the image processing program is executed by the processor 660, the following steps are implemented: performing face recognition on the first preview image collected by the camera; if the first facial image is recognized, obtaining the first identity of the first facial image; determining, according to the pre-stored correspondence between identities and feature region information, the first feature region information corresponding to the first identity; and performing image processing on the preview image based on the first feature region information. The feature region information includes the feature region to be processed, the position information and area information of the feature region, and the image processing information of the feature region; the image processing information includes an image processing type and an image processing parameter. In this way, different facial images correspond to different identities, and different identities correspond to different feature region information. This avoids the problem of mechanical, one-size-fits-all beautification of facial images, satisfies different users' different beautification requirements for facial images, and meets different users' individual demands for beauty.
The mobile terminal of the present invention may be, for example, a mobile phone, a tablet computer, a personal digital assistant (Personal Digital Assistant, PDA), an in-vehicle computer, or another mobile terminal.
The mobile terminal 600 can implement each process implemented by the mobile terminal in the foregoing embodiments. To avoid repetition, details are not described here again.
A person of ordinary skill in the art may be aware that the units and algorithm steps of the examples described with reference to the embodiments disclosed herein can be implemented by electronic hardware or by a combination of computer software and electronic hardware. Whether these functions are performed by hardware or software depends on the particular application and design constraints of the technical solution. A skilled person may use different methods to implement the described functions for each particular application, but such implementation shall not be considered beyond the scope of the present invention.
It may be clearly understood by a person skilled in the art that, for convenience and brevity of description, for the specific working processes of the systems, apparatuses, and units described above, reference may be made to the corresponding processes in the foregoing method embodiments; details are not described here again.
In the embodiments provided in this application, it should be understood that the disclosed apparatus and method may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative. For example, the division into units is merely a division by logical function; there may be other divisions in actual implementation. For example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed. In addition, the displayed or discussed mutual couplings, direct couplings, or communication connections may be implemented through some interfaces, and the indirect couplings or communication connections between apparatuses or units may be electrical, mechanical, or in other forms.
The units described as separate components may or may not be physically separate, and the components displayed as units may or may not be physical units; they may be located in one place or distributed over multiple network elements. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
If the functions are implemented in the form of software functional units and sold or used as independent products, they may be stored in a computer-readable storage medium. Based on this understanding, the technical solutions of the present invention essentially, or the part contributing to the prior art, or part of the technical solutions, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or some of the steps of the methods described in the embodiments of the present invention. The foregoing storage medium includes any medium that can store program code, such as a USB flash drive, a removable hard disk, a ROM, a RAM, a magnetic disk, or an optical disc.
A person of ordinary skill in the art may understand that all or some of the processes of the methods in the foregoing embodiments may be implemented by a computer program controlling relevant hardware. The program may be stored in a computer-readable storage medium, and when executed, may include the processes of the embodiments of the foregoing methods. The storage medium may be a magnetic disk, an optical disc, a read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), or the like.
The terms "first", "second", and the like in the description and claims of this specification are used to distinguish between similar objects, and are not necessarily used to describe a specific order or sequence. It should be understood that the data used in this way are interchangeable in appropriate circumstances, so that the embodiments of the present invention described herein can be implemented in orders other than those illustrated or described herein. In addition, the terms "include" and "have" and any variations thereof are intended to cover non-exclusive inclusion. For example, a process, method, system, product, or device that contains a series of steps or units is not necessarily limited to those steps or units expressly listed, but may include other steps or units not expressly listed or inherent to such a process, method, product, or device.
The foregoing describes preferred embodiments of the present invention. It should be noted that a person of ordinary skill in the art may make several improvements and modifications without departing from the principles of the present invention, and these improvements and modifications shall also fall within the protection scope of the present invention.
Claims (16)
1. An image processing method, applied to a mobile terminal including a camera, comprising:
performing face recognition on a first preview image collected by the camera;
if a first facial image is recognized, obtaining a first identity of the first facial image;
determining, according to a pre-stored correspondence between identities and feature region information, first feature region information corresponding to the first identity;
performing image processing on the preview image based on the first feature region information;
wherein the feature region information includes: a feature region to be processed, position information and area information of the feature region, and image processing information of the feature region, the image processing information including an image processing type and an image processing parameter.
2. The method according to claim 1, wherein before the step of performing face recognition on the first preview image collected by the camera, the method further comprises:
performing face recognition on a second preview image collected by the camera;
if a second facial image is recognized, determining whether feature region information of the second facial image is stored;
if the feature region information of the second facial image is not stored, allocating a second identity to the second facial image;
obtaining second feature region information of the second facial image;
establishing a correspondence between the second identity and the second feature region information.
3. The method according to claim 2, wherein the step of obtaining the second feature region information of the second facial image comprises:
performing feature detection on the second facial image to obtain at least one feature region to be processed;
calculating position information and area information of each of the at least one feature region;
obtaining second image processing information of the second facial image;
determining the at least one feature region, all the position information, the area information, and the second image processing information as the second feature region information;
wherein the feature region includes an image region where at least one of a mole, a spot, or acne is located.
4. The method according to claim 3, wherein after the step of performing feature detection on the second facial image to obtain at least one feature region to be processed, the method further comprises:
marking the at least one feature region to be processed, and displaying mark information on each feature region;
detecting a touch operation of a user on the preview image;
if a first touch operation is detected, obtaining a first operation position of the first touch operation;
performing feature detection on an image region within a preset range around the first operation position;
if a feature region is detected, marking the detected feature region and displaying mark information;
wherein the mark information is used to indicate that the feature region is an image region to be processed.
5. The method according to claim 4, wherein after the step of detecting the touch operation of the user on the preview image, the method further comprises:
if a second touch operation on the at least one feature region to be processed is detected, obtaining an operation position of the second touch operation;
if an instruction to cancel image processing for the feature region corresponding to the operation position of the second touch operation is received, determining the feature region corresponding to the operation position of the second touch operation as a non-feature region;
removing the mark information on the feature region corresponding to the operation position of the second touch operation.
6. The method according to claim 3, wherein the step of obtaining the second image processing information of the second facial image comprises:
obtaining an image processing type and an image processing parameter input by a user;
determining the image processing type and the image processing parameter input by the user as the second image processing information;
wherein the image processing type includes removal processing and/or fading processing, and the image processing parameter includes a fading level value.
7. The method according to claim 1, wherein the step of performing image processing on the preview image based on the first feature region information comprises:
obtaining the feature region to be processed in the first feature region information, and the position information, area information, and first image processing information of the feature region;
obtaining the image processing type and the image processing parameter in the first image processing information;
performing image processing on the feature region to be processed in the first feature region information based on the position information and area information of the feature region, according to the image processing type and the image processing parameter in the first image processing information.
8. A mobile terminal, comprising:
a first face recognition module, configured to perform face recognition on a first preview image collected by a camera;
a first acquisition module, configured to obtain, when a first facial image is recognized, a first identity of the first facial image;
an information determination module, configured to determine, according to a pre-stored correspondence between identities and feature region information, first feature region information corresponding to the first identity;
an image processing module, configured to perform image processing on the preview image based on the first feature region information;
wherein the feature region information includes: a feature region to be processed, position information and area information of the feature region, and image processing information of the feature region, the image processing information including an image processing type and an image processing parameter.
9. The mobile terminal according to claim 8, further comprising:
a second face recognition module, configured to perform face recognition on a second preview image collected by the camera;
a judgment module, configured to determine, when a second facial image is recognized, whether feature region information of the second facial image is stored;
an identity allocation module, configured to allocate a second identity to the second facial image when the feature region information of the second facial image is not stored;
a second acquisition module, configured to obtain second feature region information of the second facial image;
a relation establishment module, configured to establish a correspondence between the second identity and the second feature region information.
10. The mobile terminal according to claim 9, wherein the second acquisition module comprises:
a first detection submodule, configured to perform feature detection on the second facial image to obtain at least one feature region to be processed;
a calculation submodule, configured to calculate position information and area information of each of the at least one feature region;
a first acquisition submodule, configured to obtain second image processing information of the second facial image;
an information determination submodule, configured to determine the at least one feature region, all the position information, the area information, and the second image processing information as the second feature region information;
wherein the feature region includes an image region where at least one of a mole, a spot, or acne is located.
11. The mobile terminal according to claim 10, wherein the second acquisition module further comprises:
a first feature marking submodule, configured to, after feature detection is performed on the second facial image to obtain the at least one feature region to be processed, mark the at least one feature region to be processed and display mark information on each feature region;
a second detection submodule, configured to detect a touch operation of a user on the preview image;
a second acquisition submodule, configured to obtain, when a first touch operation is detected, a first operation position of the first touch operation;
a second feature marking submodule, configured to mark, when a feature region is detected, the detected feature region and display mark information;
wherein the mark information is used to indicate that the feature region is an image region to be processed.
12. The mobile terminal according to claim 11, wherein the second acquisition module further comprises:
a third acquisition submodule, configured to obtain, after the touch operation of the user on the preview image is detected and when a second touch operation on the at least one feature region to be processed is detected, an operation position of the second touch operation;
a non-feature region determination submodule, configured to determine, when an instruction to cancel image processing for the feature region corresponding to the operation position of the second touch operation is received, the feature region corresponding to the operation position of the second touch operation as a non-feature region;
a mark information removal submodule, configured to remove the mark information on the feature region corresponding to the operation position of the second touch operation.
13. The mobile terminal according to claim 10, wherein the first acquisition submodule comprises:
an acquisition unit, configured to obtain an image processing type and an image processing parameter input by a user;
an information determination unit, configured to determine the image processing type and the image processing parameter input by the user as the second image processing information;
wherein the image processing type includes removal processing and/or fading processing, and the image processing parameter includes a fading level value.
14. The mobile terminal according to claim 8, wherein the image processing module comprises:
a fourth acquisition submodule, configured to obtain the feature region to be processed in the first feature region information, and the position information, area information, and first image processing information of the feature region;
a fifth acquisition submodule, configured to obtain the image processing type and the image processing parameter in the first image processing information;
an image processing submodule, configured to perform image processing on the feature region to be processed in the first feature region information based on the position information and area information of the feature region, according to the image processing type and the image processing parameter in the first image processing information.
15. A mobile terminal, comprising: a processor, a memory, and an image processing program stored in the memory and executable on the processor, wherein when the image processing program is executed by the processor, the steps of the image processing method according to any one of claims 1 to 7 are implemented.
16. A computer-readable storage medium, storing an image processing program, wherein when the image processing program is executed by a processor, the steps of the image processing method according to any one of claims 1 to 7 are implemented.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710371510.3A CN107203978A (en) | 2017-05-24 | 2017-05-24 | A kind of image processing method and mobile terminal |
Publications (1)
Publication Number | Publication Date |
---|---|
CN107203978A true CN107203978A (en) | 2017-09-26 |
Family
ID=59906138
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710371510.3A Pending CN107203978A (en) | 2017-05-24 | 2017-05-24 | A kind of image processing method and mobile terminal |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107203978A (en) |
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107578371A (en) * | 2017-09-29 | 2018-01-12 | 北京金山安全软件有限公司 | Image processing method and device, electronic equipment and medium |
CN107864333A (en) * | 2017-11-08 | 2018-03-30 | 广东欧珀移动通信有限公司 | Image processing method, device, terminal and storage medium |
CN107945135A (en) * | 2017-11-30 | 2018-04-20 | 广东欧珀移动通信有限公司 | Image processing method, device, storage medium and electronic equipment |
CN108076290A (en) * | 2017-12-20 | 2018-05-25 | 维沃移动通信有限公司 | A kind of image processing method and mobile terminal |
CN108897996A (en) * | 2018-06-05 | 2018-11-27 | 北京市商汤科技开发有限公司 | Identification information correlating method and device, electronic equipment and storage medium |
CN108898649A (en) * | 2018-06-15 | 2018-11-27 | Oppo广东移动通信有限公司 | Image processing method and device |
WO2019071550A1 (en) * | 2017-10-13 | 2019-04-18 | 深圳传音通讯有限公司 | Image processing method, mobile terminal, and computer-readable storage medium |
CN111246093A (en) * | 2020-01-16 | 2020-06-05 | Oppo广东移动通信有限公司 | Image processing method, image processing device, storage medium and electronic equipment |
CN114267058A (en) * | 2020-09-16 | 2022-04-01 | 腾讯科技(深圳)有限公司 | Face recognition method and device, computer equipment and storage medium |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103927718A (en) * | 2014-04-04 | 2014-07-16 | 北京金山网络科技有限公司 | Picture processing method and device |
CN104715236A (en) * | 2015-03-06 | 2015-06-17 | 广东欧珀移动通信有限公司 | Face beautifying photographing method and device |
CN104992402A (en) * | 2015-07-02 | 2015-10-21 | 广东欧珀移动通信有限公司 | Facial beautification processing method and device |
CN105741231A (en) * | 2016-02-02 | 2016-07-06 | 深圳中博网络技术有限公司 | Skin beautifying processing method and device of image |
CN105825486A (en) * | 2016-04-05 | 2016-08-03 | 北京小米移动软件有限公司 | Beautifying processing method and apparatus |
Cited By (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107578371A (en) * | 2017-09-29 | 2018-01-12 | 北京金山安全软件有限公司 | Image processing method and device, electronic equipment and medium |
WO2019071550A1 (en) * | 2017-10-13 | 2019-04-18 | 深圳传音通讯有限公司 | Image processing method, mobile terminal, and computer-readable storage medium |
CN107864333A (en) * | 2017-11-08 | 2018-03-30 | 广东欧珀移动通信有限公司 | Image processing method, device, terminal and storage medium |
CN107945135B (en) * | 2017-11-30 | 2021-03-02 | Oppo广东移动通信有限公司 | Image processing method, image processing apparatus, storage medium, and electronic device |
CN107945135A (en) * | 2017-11-30 | 2018-04-20 | 广东欧珀移动通信有限公司 | Image processing method, device, storage medium and electronic equipment |
CN108076290A (en) * | 2017-12-20 | 2018-05-25 | 维沃移动通信有限公司 | A kind of image processing method and mobile terminal |
CN108076290B (en) * | 2017-12-20 | 2021-01-22 | 维沃移动通信有限公司 | Image processing method and mobile terminal |
CN108897996A (en) * | 2018-06-05 | 2018-11-27 | 北京市商汤科技开发有限公司 | Identification information correlating method and device, electronic equipment and storage medium |
CN108897996B (en) * | 2018-06-05 | 2022-05-10 | 北京市商汤科技开发有限公司 | Identification information association method and device, electronic equipment and storage medium |
CN108898649A (en) * | 2018-06-15 | 2018-11-27 | Oppo广东移动通信有限公司 | Image processing method and device |
CN111246093A (en) * | 2020-01-16 | 2020-06-05 | Oppo广东移动通信有限公司 | Image processing method, image processing device, storage medium and electronic equipment |
CN111246093B (en) * | 2020-01-16 | 2021-07-20 | Oppo广东移动通信有限公司 | Image processing method, image processing device, storage medium and electronic equipment |
CN114267058A (en) * | 2020-09-16 | 2022-04-01 | 腾讯科技(深圳)有限公司 | Face recognition method and device, computer equipment and storage medium |
Similar Documents
Publication | Title |
---|---|
CN107203978A (en) | An image processing method and mobile terminal |
CN107172296A (en) | An image capturing method and mobile terminal |
CN106651955A (en) | Method and device for locating an object in a picture |
CN107678641A (en) | A method for entering a target display interface, and mobile terminal |
CN107295195A (en) | A fingerprint recognition method and mobile terminal |
CN107423409A (en) | An image processing method, image processing apparatus and electronic equipment |
CN107155064B (en) | A shooting method and mobile terminal |
CN104574599A (en) | Authentication method and device, and intelligent door lock |
CN107730077A (en) | Node task data display method, device, storage medium and computer equipment |
CN107277481A (en) | An image processing method and mobile terminal |
CN110134459A (en) | Application starting method and related product |
CN107172346A (en) | A blurring method and mobile terminal |
CN107506111A (en) | An encryption and decryption method for terminal applications, and a terminal |
CN107404577A (en) | An image processing method, mobile terminal and computer-readable storage medium |
CN106557755A (en) | Fingerprint template acquisition method and device |
CN107257440A (en) | A method, device and storage medium for detecting video tracking shooting |
CN106096043B (en) | A photographing method and mobile terminal |
CN106973222A (en) | A digital zoom control method and mobile terminal |
CN110099219A (en) | Panorama shooting method and related product |
CN107194968A (en) | Image recognition and tracking method, device, intelligent terminal and readable storage medium |
CN107622478A (en) | An image processing method, mobile terminal and computer-readable storage medium |
CN107644335A (en) | A payment method, mobile terminal and server |
CN107025421A (en) | Fingerprint recognition method and device |
CN107704190A (en) | Gesture recognition method, device, terminal and storage medium |
CN107678646A (en) | Payment control method, device, computer device and computer-readable storage medium |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20170926 |