CN107862266A - Image processing method and related product - Google Patents
- Publication number
- CN107862266A CN107862266A CN201711035259.XA CN201711035259A CN107862266A CN 107862266 A CN107862266 A CN 107862266A CN 201711035259 A CN201711035259 A CN 201711035259A CN 107862266 A CN107862266 A CN 107862266A
- Authority
- CN
- China
- Prior art keywords
- face
- training model
- preset
- feature set
- face training
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N20/00—Machine learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
Abstract
Embodiments of the present application disclose an image processing method and a related product. The method includes: obtaining a first face training model, where the first face training model corresponds to a trusted application (TA), the TA is used to store a first face feature set and a preset face template, and the first face feature set is obtained by performing feature extraction on the preset face template with the first face training model; upgrading the first face training model to obtain a second face training model; and performing feature extraction on the preset face template according to the second face training model to obtain a second face feature set, and storing the second face feature set in the TA, the second face feature set being used for feature comparison during face recognition. With the embodiments of the present application, feature extraction can be performed on the face template with the upgraded face training model to obtain a face feature set, which helps guarantee the face recognition success rate after the face training model is upgraded.
Description
Technical field
This application relates to the technical field of mobile terminals, and in particular to an image processing method and a related product.
Background art
With the wide popularization and application of mobile terminals (mobile phones, tablet computers, etc.), the applications supported by mobile terminals grow ever more numerous and their functions ever more powerful. Mobile terminals are developing in diversified and personalized directions and have become indispensable electronic appliances in users' lives.
At present, face recognition is increasingly favored by mobile terminal manufacturers, and artificial intelligence is gradually taking hold in the field of face recognition. However, after an existing face training model is upgraded, the face recognition success rate may drop. How to improve the face recognition success rate after a face training model is upgraded is a problem urgently awaiting a solution.
Summary of the invention
Embodiments of the present application provide an image processing method and a related product, which can improve the face recognition success rate after a face training model is upgraded.
In a first aspect, an embodiment of the present application provides a mobile terminal, including an application processor (AP) and a memory connected to the AP, wherein:
the memory is configured to store a first face training model and a trusted application (TA), the TA being used to store a first face feature set and a preset face template, and the first face feature set being obtained by performing feature extraction on the preset face template with the first face training model; and
the AP is configured to obtain the first face training model; upgrade the first face training model to obtain a second face training model; and perform feature extraction on the preset face template according to the second face training model to obtain a second face feature set and store the second face feature set in the TA, the second face feature set being used for feature comparison during face recognition.
In a second aspect, an embodiment of the present application provides an image processing method, applied to a mobile terminal including an application processor (AP) and a memory connected to the AP, the method including:
storing, by the memory, a first face training model and a trusted application (TA), the TA being used to store a first face feature set and a preset face template, and the first face feature set being obtained by performing feature extraction on the preset face template with the first face training model;
obtaining, by the AP, the first face training model; upgrading the first face training model to obtain a second face training model; and performing feature extraction on the preset face template according to the second face training model to obtain a second face feature set, and storing the second face feature set in the TA, the second face feature set being used for feature comparison during face recognition.
In a third aspect, an embodiment of the present application provides an image processing method, including:
obtaining a first face training model, the first face training model corresponding to a trusted application (TA), the TA being used to store a first face feature set and a preset face template, and the first face feature set being obtained by performing feature extraction on the preset face template with the first face training model;
upgrading the first face training model to obtain a second face training model; and
performing feature extraction on the preset face template according to the second face training model to obtain a second face feature set, and storing the second face feature set in the TA, the second face feature set being used for feature comparison during face recognition.
In a fourth aspect, an embodiment of the present application provides an image processing apparatus, including:
a first obtaining unit, configured to obtain a first face training model, the first face training model corresponding to a trusted application (TA), the TA being used to store a first face feature set and a preset face template, and the first face feature set being obtained by performing feature extraction on the preset face template with the first face training model;
an upgrade unit, configured to upgrade the first face training model to obtain a second face training model; and
a first extraction unit, configured to perform feature extraction on the preset face template according to the second face training model to obtain a second face feature set, and to store the second face feature set in the TA, the second face feature set being used for feature comparison during face recognition.
In a fifth aspect, an embodiment of the present application provides a mobile terminal, including: an application processor (AP) and a memory; and one or more programs stored in the memory and configured to be executed by the AP, the programs including instructions for some or all of the steps described in the third aspect.
In a sixth aspect, an embodiment of the present application provides a computer-readable storage medium, where the computer-readable storage medium is used to store a computer program, and the computer program causes a computer to execute instructions for some or all of the steps described in the third aspect of the embodiments of the present application.
In a seventh aspect, an embodiment of the present application provides a computer program product, where the computer program product includes a non-transitory computer-readable storage medium storing a computer program, and the computer program is operable to cause a computer to perform some or all of the steps described in the third aspect of the embodiments of the present application. The computer program product may be a software installation package.
Implementing the embodiments of the present application yields the following beneficial effects:
As can be seen, with the image processing method and related product described in the embodiments of the present application, a first face training model is obtained, the first face training model corresponding to a trusted application (TA), the TA being used to store a first face feature set and a preset face template, and the first face feature set being obtained by performing feature extraction on the preset face template with the first face training model; the first face training model is upgraded to obtain a second face training model; and feature extraction is performed on the preset face template according to the second face training model to obtain a second face feature set, which is stored in the TA and used for feature comparison during face recognition. Thus, after the face training model is upgraded, the upgraded face training model can be used to perform feature extraction on the face template to obtain a face feature set, and such a face feature set is more conducive to guaranteeing the face recognition success rate.
Brief description of the drawings
To describe the technical solutions in the embodiments of the present application or in the prior art more clearly, the accompanying drawings required in the description of the embodiments or the prior art are briefly introduced below. Apparently, the drawings in the following description are only some embodiments of the present application, and those of ordinary skill in the art can derive other drawings from these drawings without creative effort.
Figure 1A is an architecture diagram of an example mobile terminal provided by an embodiment of the present application;
Figure 1B is a schematic structural diagram of a mobile terminal provided by an embodiment of the present application;
Fig. 1C is a flow diagram of an image processing method disclosed in an embodiment of the present application;
Fig. 2 is a flow diagram of another image processing method disclosed in an embodiment of the present application;
Fig. 3 is another schematic structural diagram of a mobile terminal provided by an embodiment of the present application;
Fig. 4A is a schematic structural diagram of an image processing apparatus provided by an embodiment of the present application;
Fig. 4B is a schematic structural diagram of the upgrade unit of the image processing apparatus depicted in Fig. 4A;
Fig. 4C is another schematic structural diagram of the image processing apparatus depicted in Fig. 4A;
Fig. 4D is another schematic structural diagram of the image processing apparatus depicted in Fig. 4A;
Fig. 4E is another schematic structural diagram of the image processing apparatus depicted in Fig. 4A;
Fig. 5 is a schematic structural diagram of another mobile terminal disclosed in an embodiment of the present application.
Detailed description of the embodiments
To help those skilled in the art better understand the solutions of the present application, the technical solutions in the embodiments of the present application are described clearly and completely below with reference to the accompanying drawings of the embodiments. Apparently, the described embodiments are only some rather than all of the embodiments of the present application. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present application without creative effort shall fall within the protection scope of the present application.
The terms "first," "second," and the like in the description, claims, and accompanying drawings of this application are used to distinguish different objects rather than to describe a particular order. In addition, the terms "include" and "have" and any variations thereof are intended to cover a non-exclusive inclusion. For example, a process, method, system, product, or device that contains a series of steps or units is not limited to the listed steps or units, but optionally also includes steps or units that are not listed, or optionally also includes other steps or units inherent to the process, method, product, or device.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the present application. The appearances of this phrase in various places in the description do not necessarily all refer to the same embodiment, nor to a separate or alternative embodiment mutually exclusive of other embodiments. Those skilled in the art will explicitly and implicitly understand that the embodiments described herein may be combined with other embodiments.
The mobile terminal involved in the embodiments of the present application may include various handheld devices with wireless communication functions, in-vehicle devices, wearable devices, computing devices or other processing devices connected to a wireless modem, as well as various forms of user equipment (UE), mobile stations (MS), terminal devices, and so on. For convenience of description, the devices mentioned above are collectively referred to as mobile terminals. In addition, the mobile terminal in the embodiments of the present application is provided with an Android operating system (Android OS) and is also provided with a trusted execution environment (TEE); the trusted application (TA) corresponding to the trusted execution environment is implemented based on the TEE. A first face training model may be stored in the mobile terminal in advance, and the first face training model may be at least one of the following classifiers: a support vector machine (SVM) classifier, a genetic-algorithm classifier, a neural-network-algorithm classifier, or a cascade classifier (e.g., genetic algorithm + SVM). The TA is used to store a first face feature set and a preset face template; the first face feature set is obtained by performing feature extraction on the preset face template with the first face training model, and the preset face template may be saved in the TA in advance. The first face feature set may be at least one of the following: a feature point set or a feature contour set. For example, in a concrete application, a face image may be obtained by a camera; then feature extraction is performed on the face image with the first face training model to obtain a feature set, and feature comparison is carried out between this feature set and the first face feature set.
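As a concrete illustration of this flow, the sketch below models feature extraction as a fixed linear projection and feature comparison as a cosine-similarity check. The projection matrix, image sizes, and the 0.9 threshold are all invented for illustration; the patent does not specify the extraction algorithm or the comparison metric.

```python
import numpy as np

def extract_features(image, model):
    """Hypothetical extractor: 'model' is a projection matrix applied to the
    flattened image, a stand-in for the unspecified SVM/neural-net features."""
    return model @ image.ravel()

def compare(features_a, features_b, threshold=0.9):
    """Cosine-similarity comparison between a live feature set and the
    stored template feature set; returns (score, matched)."""
    sim = float(features_a @ features_b /
                (np.linalg.norm(features_a) * np.linalg.norm(features_b) + 1e-12))
    return sim, sim >= threshold

rng = np.random.default_rng(0)
model_v1 = rng.standard_normal((128, 64 * 64))      # "first face training model"
template = rng.random((64, 64))                     # preset face template
stored_set = extract_features(template, model_v1)   # first face feature set (kept in the TA)

# A captured face image (here, the template plus small noise) is compared
# against the stored feature set.
live_image = template + 0.01 * rng.standard_normal((64, 64))
sim, matched = compare(extract_features(live_image, model_v1), stored_set)
```

Because the live image differs from the template only by small noise, the similarity stays close to 1 and the comparison succeeds.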
The embodiments of the present application are described in detail below. In an example mobile terminal 1000 as shown in Figure 1A, the face recognition device of the mobile terminal 1000 may be a camera module 21. The camera module may be a single camera, for example a visible-light camera or an infrared camera. Alternatively, the camera module may be a dual camera: one camera may be a visible-light camera and the other an infrared camera, or both may be visible-light cameras. The camera module 21 may be a front camera or a rear camera.
Refer to Figure 1B, which is a schematic structural diagram of a mobile terminal 100. The mobile terminal 100 includes an application processor (AP) 110, a memory 160, and a face recognition device 120, where the AP 110 is connected to the memory 160 and the face recognition device 120 through a bus 150.
The mobile terminal described in Figures 1A-1B can be used to implement the following functions:
The memory 160 is configured to store a first face training model and a trusted application (TA), the TA being used to store a first face feature set and a preset face template, and the first face feature set being obtained by performing feature extraction on the preset face template with the first face training model.
The AP 110 is configured to obtain the first face training model; upgrade the first face training model to obtain a second face training model; and perform feature extraction on the preset face template according to the second face training model to obtain a second face feature set and store the second face feature set in the TA, the second face feature set being used for feature comparison during face recognition.
In a possible example, in terms of upgrading the first face training model, the AP 110 is specifically configured to:
obtain user habit parameters; and
upgrade the first face training model according to the user habit parameters.
In a possible example, the AP 110 is further specifically configured to:
obtain the face unlock records of a specified time period;
analyze the face unlock records to obtain a face recognition evaluation value; and
when the face recognition evaluation value is lower than a preset evaluation threshold, perform the step of upgrading the first face training model.
In a possible example, the AP 110 is further specifically configured to:
upon receiving an upgrade command sent by a server, perform the step of upgrading the first face training model.
In a possible example, the AP 110 is further specifically configured to:
obtain a target face image;
perform feature extraction on the target face image according to the second face training model to obtain a third face feature set;
compare the second face feature set with the third face feature set; and
when the comparison between the second face feature set and the third face feature set succeeds, perform an unlock operation.
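The unlock decision in this example can be sketched as follows. The Euclidean-distance comparison and the 0.85 threshold are assumptions, since the patent says only that the two feature sets are "compared."

```python
import numpy as np

def try_unlock(second_feature_set, third_feature_set, threshold=0.85):
    """Compare the stored (second) feature set with the live (third) one;
    a Euclidean distance below `threshold` counts as a successful match."""
    dist = float(np.linalg.norm(np.asarray(second_feature_set, dtype=float)
                                - np.asarray(third_feature_set, dtype=float)))
    return dist <= threshold

stored = [0.20, 0.40, 0.90]     # second face feature set (from the TA)
live_ok = [0.25, 0.38, 0.88]    # third feature set from a matching face
live_bad = [0.90, 0.10, 0.20]   # third feature set from a non-matching face

matched = try_unlock(stored, live_ok)       # True -> perform unlock operation
mismatched = try_unlock(stored, live_bad)   # False -> remain locked
```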
The mobile terminal described in Figures 1A-1B above can be used to perform an image processing method as described below, specifically as follows:
The memory 160 stores a first face training model and a trusted application (TA), the TA being used to store a first face feature set and a preset face template, and the first face feature set being obtained by performing feature extraction on the preset face template with the first face training model.
The AP 110 obtains the first face training model; upgrades the first face training model to obtain a second face training model; and performs feature extraction on the preset face template according to the second face training model to obtain a second face feature set, which is stored in the TA and used for feature comparison during face recognition.
As can be seen, with the image processing method described in the embodiments of the present application, a first face training model is obtained, the first face training model corresponding to a trusted application (TA), the TA being used to store a first face feature set and a preset face template, and the first face feature set being obtained by performing feature extraction on the preset face template with the first face training model; the first face training model is upgraded to obtain a second face training model; and feature extraction is performed on the preset face template according to the second face training model to obtain a second face feature set, which is stored in the TA and used for feature comparison during face recognition. Thus, after the face training model is upgraded, the upgraded face training model can be used to perform feature extraction on the face template to obtain a face feature set, and such a face feature set is more conducive to guaranteeing the face recognition success rate.
Based on the mobile terminal described in Figures 1A-1B, refer to Fig. 1C, a flow diagram of an embodiment of an image processing method provided by an embodiment of the present application. The image processing method described in this embodiment may include the following steps:
101. Obtain a first face training model, where the first face training model corresponds to a trusted application (TA), the TA is used to store a first face feature set and a preset face template, and the first face feature set is obtained by performing feature extraction on the preset face template with the first face training model.
Here, the first face training model may be stored in the mobile terminal and is used to perform feature extraction on the face images collected by the camera, while the trusted application (TA) is used to store the first face feature set and the preset face template. The preset face template may be saved in the terminal before the embodiments of the present application are implemented. The first face feature set may be obtained by performing feature extraction on the preset face template with the first face training model, and may be at least one of the following: a feature point set or a feature contour set.
102. Upgrade the first face training model to obtain a second face training model.
Here, as noted above, the larger the volume of sample data, the higher the precision of the resulting face training model. Therefore, the first face training model in the terminal is upgraded to obtain the second face training model; after the upgrade, the second face training model possesses higher face recognition precision and can extract more features. However, if only the first face training model itself is upgraded, subsequent feature extraction on collected face images will be performed by the second face training model, which, relative to the first face training model, collects more accurate feature points. Comparing the feature points collected in this way against the old first face feature set would, on the contrary, reduce the face recognition rate.
For example, suppose the first face training model is A, the first face feature set is a, and the second face training model obtained by upgrading A is B. A face image is collected and feature extraction is performed on it with B, yielding a second face feature set C2 whose comparison value with a is k2. If the first face training model A had not been upgraded, feature extraction on the face image with A would yield a third face feature set C1 whose comparison value with a is k1. Then k2 < k1.
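The effect described by this example can be reproduced numerically: a stored feature set extracted by model A compares well against features that A extracts from the same face, but poorly against features extracted by a different model B. Both "models" below are random projections, a stand-in for the unspecified classifiers; the sizes and seed are invented.

```python
import numpy as np

rng = np.random.default_rng(1)
image = rng.random((32, 32)).ravel()            # a collected face image

model_a = rng.standard_normal((16, 32 * 32))    # first face training model A
model_b = rng.standard_normal((16, 32 * 32))    # upgraded model B (independent here)

def cos(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

a = model_a @ image     # first face feature set a, extracted by A and stored
c1 = model_a @ image    # third feature set C1: live features via the old model A
c2 = model_b @ image    # second feature set C2: live features via the new model B

k1 = cos(c1, a)   # same model on both sides: comparison value is high
k2 = cos(c2, a)   # new model vs. old stored set: comparison value degrades
```

This is why step 103 re-extracts the stored feature set with the upgraded model rather than keeping the old one.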
Optionally, in step 102, upgrading the first face training model may include the following steps:
21. obtain user habit parameters;
22. upgrade the first face training model according to the user habit parameters.
Here, the user habit parameters may include, but are not limited to: the face angle range, the distance range between the face and the camera, facial expression, face region, and so on. In a specific implementation, after each face unlock, a mapping relation between the user habit parameters and the training data may be recorded, where the training data can be understood as each face unlock operation. Preset mapping relations between user habit parameters and training data can thus be formed; the training data corresponding to the user habit parameters can then be determined according to these mapping relations, and the first face training model is upgraded with this portion of the training data. In this way, the resulting second training model incorporates the user's personal characteristics, meets individual requirements, and also improves the face recognition success rate.
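A minimal sketch of this habit-based selection, with the parameter names and value ranges invented for illustration: each unlock operation logs the habit parameters alongside its training data, and the records falling within the user's habitual range are later selected for the model upgrade.

```python
habit_log = []  # mapping between user habit parameters and training data

def record_unlock(face_angle_deg, distance_cm, expression, sample):
    """Log one face unlock operation together with its habit parameters."""
    habit_log.append({"angle": face_angle_deg, "distance": distance_cm,
                      "expression": expression, "sample": sample})

def training_data_for(max_angle, max_distance):
    """Select the training samples matching the user's habitual range;
    these would feed the upgrade of the first face training model."""
    return [entry["sample"] for entry in habit_log
            if abs(entry["angle"]) <= max_angle
            and entry["distance"] <= max_distance]

record_unlock(5, 30, "neutral", "img_001")
record_unlock(40, 80, "smile", "img_002")     # outside the habitual range
record_unlock(-8, 25, "neutral", "img_003")

subset = training_data_for(max_angle=15, max_distance=40)
```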
Optionally, between step 101 and step 102, the method may further include the following steps:
obtaining the face unlock records of a specified time period; analyzing the face unlock records to obtain a face recognition evaluation value; and, when the face recognition evaluation value is lower than a preset evaluation threshold, performing the step of upgrading the first face training model.
Here, the specified time period may be set by the user or may be a system default. Each time the mobile terminal performs a face unlock operation, a face unlock record is produced; the face unlock records of the specified time period can therefore be obtained and analyzed. At least one of the following can be analyzed: the face unlock success rate, the face unlock power consumption, and the face unlock time—concretely, for example, the average success rate of face unlocks within a given brightness range. Accordingly, the face recognition evaluation value may include at least one of the following dimensions: face unlock success rate, face unlock power consumption, and face unlock time. For example, the average success rate of face unlocks in the specified time period may serve as the face recognition evaluation value. As another example, suppose the face unlock success rate corresponds to a weight a1, the face unlock power consumption to a weight a2, and the face unlock time to a weight a3; the average face unlock success rate is b1, the average face unlock power consumption is b2, and the average face unlock time is b3, with a1 + a2 + a3 = 1. Then a1*b1 + a2*b2 + a3*b3 may serve as the face recognition evaluation value. The preset evaluation threshold may be set by the user or be a system default. When the face recognition evaluation value is lower than the preset evaluation threshold, the step of upgrading the first face training model is performed; that is, a value below the preset evaluation threshold indicates that the robustness of the first face training model is relatively low and a further upgrade is needed.
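The weighted evaluation value a1*b1 + a2*b2 + a3*b3 can be computed as below. The weights and the 0.95 threshold are illustrative (the patent leaves them to the user or to system defaults), and the inputs are assumed normalized to [0, 1] with larger meaning better in every dimension.

```python
def face_recognition_score(success_rate, power_score, time_score,
                           w_success=0.6, w_power=0.2, w_time=0.2):
    """Evaluation value a1*b1 + a2*b2 + a3*b3 with a1 + a2 + a3 = 1.
    power_score and time_score are assumed pre-normalized so that a
    larger value indicates lower power draw / a shorter unlock time."""
    assert abs(w_success + w_power + w_time - 1.0) < 1e-9
    return w_success * success_rate + w_power * power_score + w_time * time_score

score = face_recognition_score(0.9, 0.7, 0.8)
needs_upgrade = score < 0.95   # "preset evaluation threshold", illustrative
```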
Optionally, between step 101 and step 102, the method may further include the following step:
upon receiving an upgrade command sent by a server, performing the step of upgrading the first face training model.
Here, the mobile terminal may receive an upgrade command sent by a server and then upgrade the first face training model to obtain the second face training model. The first face training model is typically realized with a complicated algorithm, and the more samples there are, the higher the precision of the resulting training model. The server can therefore collect the training data of different mobile terminals, integrate that training data to obtain an upgrade package, and send an upgrade command carrying the upgrade package to the mobile terminal. After receiving the upgrade command, the mobile terminal can upgrade the first face training model using the upgrade package.
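A sketch of this server-triggered path on the terminal side. The command format, the package contents, and the way the package is applied are all invented for illustration; the patent does not define the message layout.

```python
def handle_command(command, model):
    """Apply the upgrade package carried by an upgrade command; any other
    command type leaves the model unchanged. A 'model' is modeled here as
    a plain dict with a version and weights."""
    if command.get("type") != "upgrade":
        return model
    package = command["package"]  # upgrade package built server-side
    return {**model, "version": package["version"],
            "weights": package["weights"]}

model_v1 = {"version": 1, "weights": [0.10, 0.20]}   # first face training model
cmd = {"type": "upgrade",
       "package": {"version": 2, "weights": [0.12, 0.18]}}

model_v2 = handle_command(cmd, model_v1)   # second face training model
```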
103. Perform feature extraction on the preset face template according to the second face training model to obtain a second face feature set, and store the second face feature set in the TA, the second face feature set being used for feature comparison during face recognition.
Here, the mobile terminal can further perform feature extraction on the preset face template according to the second face training model to obtain the second face feature set and store it in the TA, which is equivalent to also upgrading the originally stored feature set. Moreover, since the second face feature set is used for feature comparison during face recognition, the face recognition success rate can be effectively guaranteed.
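Step 103 amounts to re-running the preset face template kept in the TA through the upgraded model and overwriting the stored feature set. Modeling the TA store as a dictionary and the model as a projection (both assumptions, as before):

```python
import numpy as np

rng = np.random.default_rng(2)
template = rng.random((16, 16))                 # preset face template
model_v2 = rng.standard_normal((8, 16 * 16))    # second face training model

# Trusted-application store: holds the template and the current feature set.
ta = {"template": template, "feature_set": None}

# Re-extract with the upgraded model and overwrite the stored set,
# so future comparisons use features from the same model version.
ta["feature_set"] = model_v2 @ ta["template"].ravel()   # second face feature set
```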
As can be seen, with the image processing method described in the embodiments of the present application, a first face training model is obtained, the first face training model corresponding to a trusted application (TA), the TA being used to store a first face feature set and a preset face template, and the first face feature set being obtained by performing feature extraction on the preset face template with the first face training model; the first face training model is upgraded to obtain a second face training model; and feature extraction is performed on the preset face template according to the second face training model to obtain a second face feature set, which is stored in the TA and used for feature comparison during face recognition. Thus, after the face training model is upgraded, the upgraded face training model can be used to perform feature extraction on the face template to obtain a face feature set, and such a face feature set is more conducive to guaranteeing the face recognition success rate.
Consistent with the above, refer to Fig. 2, a flow diagram of an embodiment of an image processing method provided by an embodiment of the present application. The image processing method described in this embodiment may include the following steps:
201. Obtain a first face training model, where the first face training model corresponds to a trusted application TA, the TA is used to store a first face feature set and a preset face template, and the first face feature set is obtained by performing feature extraction on the preset face template with the first face training model.
202. Upgrade the first face training model to obtain a second face training model.
203. Perform feature extraction on the preset face template according to the second face training model to obtain a second face feature set, and store the second face feature set in the TA, where the second face feature set is used for feature comparison during face recognition.
For detailed descriptions of steps 201 to 203, refer to the corresponding steps of the image processing method described with reference to Fig. 1C; details are not repeated here.
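The flow of steps 201 to 203 can be sketched as follows. This is a minimal illustration only: the `TrustedApp` class and `extract_features` function are hypothetical stand-ins for the mobile terminal's actual trusted execution environment interfaces, which the present application does not specify.

```python
def extract_features(model_version, template):
    """Stand-in for model-based feature extraction: here we simply tag
    each template value with the version of the model that produced it."""
    return [(model_version, v) for v in template]

class TrustedApp:
    """Toy stand-in for the TA that holds the preset face template
    and the face feature set derived from it."""
    def __init__(self, template, model_version):
        self.template = template
        self.feature_set = extract_features(model_version, template)

def upgrade_and_refresh(ta, new_model_version):
    # Step 202: upgrade the training model (represented by its version).
    # Step 203: re-extract features from the preset template and store
    # the second feature set in the TA, replacing the first.
    ta.feature_set = extract_features(new_model_version, ta.template)
    return ta.feature_set

ta = TrustedApp(template=[0.1, 0.5, 0.9], model_version=1)
second_set = upgrade_and_refresh(ta, new_model_version=2)
print(second_set)  # features now tagged with model version 2
```

The key point mirrored here is that the template persists across the upgrade while the stored feature set is regenerated, so later comparisons use features produced by the same model that processes new face images.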
204. Obtain a target face image.
Here, the target face image may be obtained by focusing on a face and shooting it. The target face image may be an image containing a face region; that is, in this embodiment of the present application, one part of the target face image is the face region image and the other part is the background image.
Before step 204, the following steps may be included:
A1. Obtain a target environment parameter;
A2. Determine a target shooting parameter corresponding to the target environment parameter.
Step 204 of obtaining the target face image may then be implemented as follows: shoot the face according to the target shooting parameter to obtain the target face image.
The target environment parameter may be detected by an environment sensor used to detect environment parameters. The environment sensor may be at least one of the following: a breathing detection sensor, an ambient light sensor, an electromagnetic detection sensor, an ambient color temperature sensor, a positioning sensor, a temperature sensor, a humidity sensor, and so on. The environment parameter may be at least one of the following: a breathing parameter, ambient brightness, ambient color temperature, an ambient magnetic field interference coefficient, a weather condition, the number of ambient light sources, a geographical location, and so on. The breathing parameter may be at least one of the following: breathing frequency, breathing rate, breathing sound, a breathing curve, and so on.
Further, a correspondence between shooting parameters and environment parameters may be prestored in the mobile terminal, and the target shooting parameter corresponding to the target environment parameter is then determined according to this correspondence. The shooting parameters may include but are not limited to: focal length, exposure time, aperture size, exposure mode, sensitivity (ISO), white balance parameters, and so on. In this way, an image optimal for the current environment can be obtained.
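The correspondence lookup of steps A1 and A2 can be sketched as follows; the brightness thresholds, environment categories, and parameter values are assumptions for illustration only, not values specified by the present application.

```python
# Hypothetical prestored correspondence between environment categories
# and groups of shooting parameters (ISO, exposure time, aperture).
SHOOT_PARAMS_BY_ENV = {
    "low_light": {"iso": 800, "exposure_ms": 60, "aperture": 1.8},
    "indoor":    {"iso": 400, "exposure_ms": 30, "aperture": 2.0},
    "daylight":  {"iso": 100, "exposure_ms": 8,  "aperture": 2.4},
}

def select_shooting_params(ambient_brightness_lux):
    # Step A1 would read ambient brightness from the ambient light sensor;
    # step A2 maps it to a prestored shooting parameter group.
    if ambient_brightness_lux < 50:
        env = "low_light"
    elif ambient_brightness_lux < 500:
        env = "indoor"
    else:
        env = "daylight"
    return SHOOT_PARAMS_BY_ENV[env]

print(select_shooting_params(30))    # low-light parameter group
print(select_shooting_params(1000))  # daylight parameter group
```

A real implementation would likely key the table on several environment parameters at once (brightness, color temperature, and so on) rather than a single brightness value.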
Optionally, step 204 of obtaining the target face image may include the following steps:
B1. Shoot the face according to a preset shooting parameter set to obtain N face images, where the preset shooting parameter set includes N groups of shooting parameters, the N face images correspond one-to-one with the N groups of shooting parameters, and N is an integer greater than 1;
B2. Perform image quality evaluation on the N face images to obtain N image quality evaluation values;
B3. Select, from the N image quality evaluation values, the face image corresponding to the largest image quality evaluation value as the target face image.
The shooting parameters may include but are not limited to: focal length, exposure time, aperture size, exposure mode, sensitivity (ISO), white balance parameters, and so on. The preset shooting parameter set is saved in memory in advance and may include N groups of shooting parameters, where N is an integer greater than 1. In this way, each group of shooting parameters in the preset set can be used to shoot the face, obtaining N face images; image quality evaluation is performed on the N face images to obtain N image quality evaluation values, and the face image corresponding to the largest image quality evaluation value is selected as the target face image. Shooting with different parameters makes it possible to screen out the face image best suited to the environment, which helps improve the accuracy of locating the face region image within the target face image.
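Steps B1 to B3 can be sketched as follows, with `capture` and `quality` as hypothetical stand-ins for the real shooting routine and image quality evaluation; the ISO-based scoring rule is purely illustrative.

```python
def capture(params):
    """Pretend capture: returns a fake 'image' record carrying the
    shooting parameters that produced it."""
    return {"params": params}

def quality(image):
    """Pretend quality metric: for illustration, penalize higher ISO
    (noisier images score lower)."""
    return 100 - image["params"]["iso"] / 10

def best_face_image(param_groups):
    images = [capture(p) for p in param_groups]   # B1: N shots
    scores = [quality(img) for img in images]     # B2: N evaluation values
    return images[scores.index(max(scores))]      # B3: keep the best

groups = [{"iso": 800}, {"iso": 100}, {"iso": 400}]  # N = 3 parameter groups
best = best_face_image(groups)
print(best["params"])  # the group that produced the top-scoring image
```

The one-to-one pairing between parameter groups and images, and the argmax over evaluation values, are the essential structure of steps B1 to B3.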
Step B2 of performing image quality evaluation on the N face images may be implemented as follows: perform image quality evaluation on each of the N face images using at least one image quality evaluation index, thereby obtaining N image quality evaluation values.
Specifically, multiple image quality evaluation indices may be used when evaluating a face image, and each index also corresponds to a weight. When each image quality evaluation index evaluates the face image, one evaluation result is obtained, and the weighted sum of these results gives the final image quality evaluation value. The image quality evaluation indices may include but are not limited to: mean, standard deviation, entropy, sharpness, signal-to-noise ratio, and so on.
It should be noted that evaluating image quality with a single evaluation index has certain limitations, so multiple image quality evaluation indices may be used instead. Of course, more indices are not necessarily better: the more indices used, the higher the computational complexity of the evaluation process, and the evaluation effect does not necessarily improve. Therefore, when higher evaluation accuracy is required, image quality may be evaluated using 2 to 10 image quality evaluation indices. The number of indices and which indices to choose depend on the specific implementation, and the choice should also take the scene into account: the indices selected for image quality evaluation in a dark environment may differ from those selected in a bright environment.
Optionally, when high evaluation accuracy is not required, a single image quality evaluation index may be used. For example, entropy may be used to evaluate the pending image: the larger the entropy, the better the image quality; conversely, the smaller the entropy, the worse the image quality.
Optionally, when higher evaluation accuracy is required, multiple image quality evaluation indices may be used to evaluate the image. When multiple indices are used, a weight can be set for each index, yielding multiple image quality evaluation values; the final image quality evaluation value is then obtained from these values and their corresponding weights. For example, suppose three image quality evaluation indices A, B, and C have weights a1, a2, and a3 respectively, and evaluating a certain image with A, B, and C yields image quality evaluation values b1, b2, and b3 respectively; then the final image quality evaluation value = a1×b1 + a2×b2 + a3×b3. In general, the larger the image quality evaluation value, the better the image quality.
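The weighted combination above can be written out directly. The index weights and per-index scores below are illustrative values, not parameters taken from the present application.

```python
def weighted_quality(weights, scores):
    """Final image quality evaluation value: the weight-score dot
    product a1*b1 + a2*b2 + a3*b3 described in the text."""
    assert len(weights) == len(scores)
    return sum(w * s for w, s in zip(weights, scores))

# Example with three indices A, B, C:
a = [0.5, 0.25, 0.25]   # weights a1, a2, a3
b = [80, 90, 70]        # per-index scores b1, b2, b3
print(weighted_quality(a, b))  # 0.5*80 + 0.25*90 + 0.25*70 = 80.0
```

Weights would typically be chosen to sum to 1 so that evaluation values from different images remain on a comparable scale.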
205. Perform feature extraction on the target face image according to the second face training model to obtain a third face feature set.
206. Compare the second face feature set with the third face feature set.
The second face feature set may include at least one of the following: a feature point set and a contour set. The third face feature set may likewise include at least one of the following: a feature point set and a contour set.
Optionally, the second face feature set includes a first feature point set and a first contour set, and the third face feature set includes a second feature point set and a second contour set. Comparing the second face feature set with the third face feature set may include the following steps:
C1. Match the first contour set with the second contour set, and match the first feature point set with the second feature point set;
C2. Confirm that the comparison succeeds when both the matching between the first contour set and the second contour set and the matching between the first feature point set and the second feature point set succeed; confirm that the comparison fails when the matching between the first contour set and the second contour set fails, or the matching between the first feature point set and the second feature point set fails.
The feature point sets may be obtained using algorithms such as the Harris corner detection algorithm, the scale-invariant feature transform (SIFT), the SUSAN corner detection algorithm, and so on; details are not repeated here. The contour sets may be obtained using algorithms such as the Hough transform, Haar, Canny, and so on.
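The two-stage decision of steps C1 and C2 can be sketched as follows; the set-overlap matcher below is a toy stand-in for real contour and feature point matchers such as those listed above, and the 0.8 threshold is an assumption.

```python
def sets_match(a, b, threshold=0.8):
    """Toy matcher: the fraction of shared elements between the two
    sets must reach the threshold for the match to succeed."""
    if not a or not b:
        return False
    overlap = len(set(a) & set(b)) / max(len(set(a)), len(set(b)))
    return overlap >= threshold

def faces_match(contours1, contours2, points1, points2):
    # C1: match the contour sets and the feature point sets independently.
    contours_ok = sets_match(contours1, contours2)
    points_ok = sets_match(points1, points2)
    # C2: the overall comparison succeeds only if both succeed.
    return contours_ok and points_ok

print(faces_match([1, 2, 3, 4, 5], [1, 2, 3, 4, 5],
                  ["p1", "p2"], ["p1", "p2"]))   # True: both match
print(faces_match([1, 2, 3, 4, 5], [6, 7, 8],
                  ["p1", "p2"], ["p1", "p2"]))   # False: contours fail
```

Requiring both sub-matches to succeed (a logical AND) makes the comparison stricter than either cue alone, which is the point of combining contours with feature points.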
Optionally, before comparing the second face feature set with the third face feature set, the following step may also be included:
perform image enhancement processing on the third face feature set.
Comparing the second face feature set with the third face feature set may then be implemented as follows:
match the third face feature set after image enhancement processing with the second face feature set.
The image enhancement processing may include but is not limited to: image denoising (for example, wavelet-transform denoising), image restoration (for example, Wiener filtering), and night-vision enhancement algorithms (for example, histogram equalization, gray stretching, and so on). After image enhancement processing is performed on the third face feature set, the characteristics of the feature points can be strengthened (or amplified) to some extent.
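As one concrete illustration of the enhancement step, gray stretching linearly expands a low-contrast value range to the full 0–255 range; this is only a minimal sketch, and a real pipeline might use histogram equalization or wavelet denoising as mentioned above instead.

```python
def gray_stretch(pixels, lo=0, hi=255):
    """Linear contrast stretch: map the observed [min, max] value
    range of the input onto the target [lo, hi] range."""
    p_min, p_max = min(pixels), max(pixels)
    if p_max == p_min:
        return [lo] * len(pixels)  # flat input: nothing to stretch
    scale = (hi - lo) / (p_max - p_min)
    return [round(lo + (p - p_min) * scale) for p in pixels]

dim = [100, 110, 120, 130]   # a low-contrast region
print(gray_stretch(dim))     # [0, 85, 170, 255]
```

Stretching before matching makes weak feature responses more pronounced, which is the "strengthening" effect the text describes.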
207. Perform an unlock operation when the comparison between the second face feature set and the third face feature set succeeds.
Here, when the comparison between the second face feature set and the third face feature set succeeds, an unlock operation is performed. For example, the comparison process may be as follows: when the comparison value between the second face feature set and the third face feature set is greater than a preset threshold, the unlock operation is performed; when the comparison value is less than or equal to the preset threshold, a face image is collected anew. The preset threshold may be set by the user, or defaulted by the system. The unlock operation may correspond to at least one of the following situations: when the mobile terminal is in a screen-off state, the unlock operation may light the screen and enter the home page of the mobile terminal, or a specified page; when the mobile terminal is in a screen-on state, the unlock operation may enter the home page of the mobile terminal, or a specified page; on the unlock page of a certain application of the mobile terminal, the unlock operation may complete the unlocking and enter the page after unlocking, for example, when the mobile terminal is on a payment page, the unlock operation may complete the payment. The specified page may be at least one of the following: the page of some application, or a page specified by the user.
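The threshold decision of step 207 can be sketched as follows; the threshold value and comparison scores are illustrative assumptions, and the returned action labels stand in for the screen-lighting and page-entry behaviors described above.

```python
def decide(comparison_value, preset_threshold=0.85):
    """Step 207's decision: unlock when the comparison value between
    the second and third face feature sets exceeds the preset
    threshold; otherwise collect a new face image and retry."""
    if comparison_value > preset_threshold:
        return "unlock"      # e.g. light screen, enter home/specified page
    return "recapture"       # comparison value too low: re-collect

print(decide(0.92))  # unlock
print(decide(0.60))  # recapture
```

Note that a comparison value exactly equal to the threshold leads to recapture, matching the "less than or equal to" branch in the text.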
It can be seen that, in the image processing method described in this embodiment of the present application, a first face training model is obtained, where the first face training model corresponds to a trusted application TA, the TA is used to store a first face feature set and a preset face template, and the first face feature set is obtained by performing feature extraction on the preset face template with the first face training model; the first face training model is upgraded to obtain a second face training model; feature extraction is performed on the preset face template according to the second face training model to obtain a second face feature set, which is stored in the TA and used for feature comparison during face recognition; a target face image is obtained; feature extraction is performed on the target face image according to the second face training model to obtain a third face feature set; the second face feature set is compared with the third face feature set; and an unlock operation is performed when the comparison between the second face feature set and the third face feature set succeeds. Thus, after the face training model is upgraded, the upgraded model can be used to extract features from the face template, and the resulting feature set is more conducive to ensuring the face recognition success rate.
Referring to Fig. 3, Fig. 3 shows a mobile terminal provided by an embodiment of the present application, including an application processor (AP) and a memory, as well as one or more programs stored in the memory and configured to be executed by the AP, where the programs include instructions for performing the following steps:
obtaining a first face training model, where the first face training model corresponds to a trusted application TA, the TA is used to store a first face feature set and a preset face template, and the first face feature set is obtained by performing feature extraction on the preset face template with the first face training model;
upgrading the first face training model to obtain a second face training model;
performing feature extraction on the preset face template according to the second face training model to obtain a second face feature set, and storing the second face feature set in the TA, where the second face feature set is used for feature comparison during face recognition.
In a possible example, in terms of upgrading the first face training model, the programs include instructions for performing the following steps:
obtaining a user habit parameter;
upgrading the first face training model according to the user habit parameter.
In a possible example, the programs further include instructions for performing the following steps:
obtaining the face unlock record of a specified time period;
analyzing the face unlock record to obtain a face recognition evaluation value;
performing the step of upgrading the first face training model when the face recognition evaluation value is below a preset evaluation threshold.
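The upgrade trigger in this example can be sketched as follows, under the assumption that the unlock record is a list of per-attempt success flags and that the face recognition evaluation value is simply the success rate over the specified time period; the present application leaves both choices open, so these are illustrative.

```python
def recognition_score(unlock_record):
    """One possible face recognition evaluation value: the fraction of
    successful unlock attempts in the record (True = success)."""
    if not unlock_record:
        return 1.0  # no attempts recorded: no evidence of degradation
    return sum(unlock_record) / len(unlock_record)

def should_upgrade(unlock_record, preset_threshold=0.9):
    """Trigger the model upgrade when the evaluation value falls
    below the preset evaluation threshold."""
    return recognition_score(unlock_record) < preset_threshold

week = [True, True, False, True, False, True, True, True]  # 6/8 = 0.75
print(should_upgrade(week))          # True: below 0.9, upgrade the model
print(should_upgrade([True] * 10))   # False: recognition is healthy
```

The design intuition is that a sagging unlock success rate signals that the stored feature set no longer matches the user well, which is exactly when regenerating it with an upgraded model pays off.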
In a possible example, the programs further include instructions for performing the following step:
receiving an upgrade command sent by a server, and performing the step of upgrading the first face training model.
In a possible example, the programs further include instructions for performing the following steps:
obtaining a target face image;
performing feature extraction on the target face image according to the second face training model to obtain a third face feature set;
comparing the second face feature set with the third face feature set;
performing an unlock operation when the comparison between the second face feature set and the third face feature set succeeds.
The following is an apparatus for implementing the above image processing method, as follows:
Referring to Fig. 4A, Fig. 4A is a schematic structural diagram of an image processing apparatus provided by this embodiment. The image processing apparatus includes a first obtaining unit 401, an upgrade unit 402, and a first extraction unit 403, where:
the first obtaining unit 401 is configured to obtain a first face training model, where the first face training model corresponds to a trusted application TA, the TA is used to store a first face feature set and a preset face template, and the first face feature set is obtained by performing feature extraction on the preset face template with the first face training model;
the upgrade unit 402 is configured to upgrade the first face training model to obtain a second face training model;
the first extraction unit 403 is configured to perform feature extraction on the preset face template according to the second face training model to obtain a second face feature set, and store the second face feature set in the TA, where the second face feature set is used for feature comparison during face recognition.
Optionally, as shown in Fig. 4B, Fig. 4B shows the detailed structure of the upgrade unit 402 of the image processing apparatus described with reference to Fig. 4A. The upgrade unit 402 may include an obtaining module 4021 and an upgrade module 4022, as follows:
the obtaining module 4021 is configured to obtain a user habit parameter;
the upgrade module 4022 is configured to upgrade the first face training model according to the user habit parameter.
Optionally, as shown in Fig. 4C, Fig. 4C shows a variant structure of the image processing apparatus described with reference to Fig. 4A; compared with Fig. 4A, it may further include a second obtaining unit 404 and an evaluation unit 405, as follows:
the second obtaining unit 404 is configured to obtain the face unlock record of a specified time period;
the evaluation unit 405 is configured to analyze the face unlock record to obtain a face recognition evaluation value, and the upgrade unit 402 performs the step of upgrading the first face training model when the face recognition evaluation value is below a preset evaluation threshold.
Optionally, as shown in Fig. 4D, Fig. 4D shows a variant structure of the image processing apparatus described with reference to Fig. 4A; compared with Fig. 4A, it may further include a receiving unit 406, as follows:
the receiving unit 406 is configured to receive an upgrade command sent by a server, after which the upgrade unit 402 performs the step of upgrading the first face training model.
Optionally, as shown in Fig. 4E, Fig. 4E shows a variant structure of the image processing apparatus described with reference to Fig. 4A; compared with Fig. 4A, it may further include:
a third obtaining unit 407, configured to obtain a target face image;
a second extraction unit 408, configured to perform feature extraction on the target face image according to the second face training model to obtain a third face feature set;
a comparison unit 409, configured to compare the second face feature set with the third face feature set;
an unlocking unit 410, configured to perform an unlock operation when the comparison between the second face feature set and the third face feature set succeeds.
It can be seen that, in the image processing apparatus described in this embodiment of the present application, a first face training model is obtained, where the first face training model corresponds to a trusted application TA, the TA is used to store a first face feature set and a preset face template, and the first face feature set is obtained by performing feature extraction on the preset face template with the first face training model; the first face training model is upgraded to obtain a second face training model; feature extraction is performed on the preset face template according to the second face training model to obtain a second face feature set, and the second face feature set is stored in the TA, where the second face feature set is used for feature comparison during face recognition. Thus, after the face training model is upgraded, the upgraded model can be used to extract features from the face template, and the resulting feature set is more conducive to ensuring the face recognition success rate.
It can be understood that the functions of the program modules of the image processing apparatus of this embodiment may be specifically implemented according to the methods in the above method embodiments; for the specific implementation process, refer to the related descriptions of the above method embodiments, which are not repeated here.
An embodiment of the present application further provides another mobile terminal. As shown in Fig. 5, for convenience of description, only the parts related to this embodiment of the present application are shown; for specific technical details not disclosed, refer to the method parts of the embodiments of the present application. The mobile terminal may be any terminal device including a mobile phone, a tablet computer, a PDA (personal digital assistant), a POS (point-of-sale) terminal, an in-vehicle computer, and so on. The following takes a mobile phone as an example:
Fig. 5 is a block diagram of a partial structure of a mobile phone related to the mobile terminal provided by an embodiment of the present application. Referring to Fig. 5, the mobile phone includes: a radio frequency (RF) circuit 910, a memory 920, an input unit 930, a sensor 950, an audio circuit 960, a wireless fidelity (WiFi) module 970, an application processor (AP) 980, a power supply 990, and other components. Those skilled in the art will understand that the mobile phone structure shown in Fig. 5 does not constitute a limitation on the mobile phone, which may include more or fewer parts than shown, combine some parts, or arrange the parts differently.
Each component of the mobile phone is described in detail below with reference to Fig. 5:
The input unit 930 may be used to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the mobile phone. Specifically, the input unit 930 may include a touch display screen 933, a face recognition device 931, and other input devices 932. The face recognition device 931 may adopt the structure described above; for its specific structural composition, refer to the above description, which is not repeated here. The other input devices 932 may include but are not limited to one or more of physical buttons, function keys (such as volume control keys and power keys), a trackball, a mouse, a joystick, and so on.
The AP 980 is configured to perform the following steps:
obtaining a first face training model, where the first face training model corresponds to a trusted application TA, the TA is used to store a first face feature set and a preset face template, and the first face feature set is obtained by performing feature extraction on the preset face template with the first face training model;
upgrading the first face training model to obtain a second face training model;
performing feature extraction on the preset face template according to the second face training model to obtain a second face feature set, and storing the second face feature set in the TA, where the second face feature set is used for feature comparison during face recognition.
The AP 980 is the control center of the mobile phone: it connects all parts of the whole phone through various interfaces, and performs the various functions of the phone and processes data by running or executing the software programs and/or modules stored in the memory 920 and calling the data stored in the memory 920, thereby monitoring the phone as a whole. Optionally, the AP 980 may include one or more processing units, which may be artificial intelligence chips or quantum chips. Preferably, the AP 980 may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, the user interface, application programs, and so on, and the modem processor mainly handles wireless communication. It can be understood that the modem processor may also not be integrated into the AP 980.
In addition, the memory 920 may include high-speed random access memory, and may further include nonvolatile memory, such as at least one magnetic disk storage device, a flash memory device, or another volatile solid-state storage component.
The RF circuit 910 may be used to receive and transmit information. Generally, the RF circuit 910 includes but is not limited to an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier (LNA), a duplexer, and so on. In addition, the RF circuit 910 may also communicate with networks and other devices through wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to the global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), long term evolution (LTE), email, the short messaging service (SMS), and so on.
The mobile phone may further include at least one sensor 950, such as an optical sensor, a motion sensor, and other sensors. Specifically, the optical sensor may include an environment sensor and a proximity sensor, where the environment sensor can adjust the brightness of the touch display screen according to the brightness of ambient light, and the proximity sensor can turn off the touch display screen and/or its backlight when the phone is moved to the ear. As one kind of motion sensor, an accelerometer sensor can detect the magnitude of acceleration in all directions (generally along three axes) and can detect the magnitude and direction of gravity when at rest; it can be used in applications that recognize the phone's attitude (such as landscape/portrait switching, related games, and magnetometer attitude calibration), vibration-recognition-related functions (such as a pedometer and tap detection), and so on. Other sensors such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor may also be configured on the phone; details are not repeated here.
The audio circuit 960, a loudspeaker 961, and a microphone 962 may provide an audio interface between the user and the mobile phone. The audio circuit 960 may transmit the electrical signal converted from received audio data to the loudspeaker 961, which converts it into a sound signal for playback; on the other hand, the microphone 962 converts a collected sound signal into an electrical signal, which is received by the audio circuit 960 and converted into audio data; after being processed by the AP 980, the audio data is sent via the RF circuit 910 to, for example, another mobile phone, or output to the memory 920 for further processing.
WiFi is a short-range wireless transmission technology. Through the WiFi module 970, the mobile phone can help the user send and receive email, browse web pages, access streaming media, and so on; it provides the user with wireless broadband internet access. Although Fig. 5 shows the WiFi module 970, it can be understood that it is not an essential component of the mobile phone and may be omitted as needed without changing the essence of the invention.
The mobile phone further includes a power supply 990 (such as a battery) that powers all the components. Preferably, the power supply may be logically connected to the AP 980 through a power management system, so as to implement functions such as charge management, discharge management, and power consumption management through the power management system.
Although not shown, the mobile phone may also include a camera, a Bluetooth module, and so on; details are not repeated here.
In the embodiments shown in Fig. 1C or Fig. 2 above, the method flow of each step may be implemented based on this mobile phone structure. In the embodiments shown in Fig. 3 and Figs. 4A to 4E above, the function of each unit may be implemented based on this mobile phone structure.
An embodiment of the present application further provides a computer storage medium, where the computer storage medium stores a computer program for electronic data exchange, and the computer program causes a computer to perform some or all of the steps of any image processing method described in the above method embodiments.
An embodiment of the present application further provides a computer program product, where the computer program product includes a non-transitory computer-readable storage medium storing a computer program, and the computer program is operable to cause a computer to perform some or all of the steps of any image processing method described in the above method embodiments.
It should be noted that, for brevity of description, the foregoing method embodiments are all expressed as series of action combinations; however, those skilled in the art should know that the present application is not limited by the described order of actions, because according to the present application some steps may be performed in other orders or simultaneously. Secondly, those skilled in the art should also know that the embodiments described in this specification are all preferred embodiments, and the actions and modules involved are not necessarily required by the present application.
In the above embodiments, the description of each embodiment has its own emphasis; for parts not described in detail in a certain embodiment, refer to the related descriptions of other embodiments.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative; for instance, the division into units is only a division by logical function, and there may be other divisions in actual implementation: multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed. In addition, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, apparatuses, or units, and may be electrical or in other forms.
The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, the functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software program module.
If the integrated unit is implemented in the form of a software program module and sold or used as an independent product, it may be stored in a computer-readable memory. Based on such an understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a memory and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or some of the steps of the methods described in the embodiments of the present application. The foregoing memory includes various media capable of storing program code, such as a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, or an optical disc.
Those of ordinary skill in the art will understand that all or some of the steps in the various methods of the above embodiments may be completed by a program instructing related hardware. The program may be stored in a computer-readable memory, and the memory may include a flash disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc, or the like.
The embodiments of the present application have been described in detail above. Specific examples are used herein to explain the principles and implementations of the present application, and the description of the above embodiments is only intended to help understand the method of the present application and its core idea. Meanwhile, those of ordinary skill in the art may, according to the idea of the present application, make changes to the specific implementations and the application scope. In summary, the contents of this specification should not be construed as limiting the present application.
Claims (14)
- 1. A mobile terminal, comprising an application processor (AP) and a memory connected to the AP, wherein:
the memory is configured to store a first face training model and a trusted application (TA), the TA being configured to save a first face feature set and a preset face template, wherein the first face feature set is obtained by performing feature extraction on the preset face template with the first face training model; and
the AP is configured to: obtain the first face training model; upgrade the first face training model to obtain a second face training model; perform feature extraction on the preset face template according to the second face training model to obtain a second face feature set; and store the second face feature set in the TA, wherein the second face feature set is used for feature comparison in a face recognition process.
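The memory/TA/AP interaction recited in claim 1 can be sketched in a few lines of Python. This is an illustrative toy, not the patented implementation: the matrix "model", the `extract_features` projection, and the `TrustedApplication` container are assumed placeholders for the real face training model and the trusted execution environment.

```python
import numpy as np

def extract_features(model: np.ndarray, face_template: np.ndarray) -> np.ndarray:
    """Toy 'feature extraction': project the template through the model
    matrix. Stands in for whatever extraction the training model performs."""
    return model @ face_template

class TrustedApplication:
    """Stand-in for the TA in the secure environment: it holds the preset
    face template and the feature set later used for comparison."""
    def __init__(self, face_template: np.ndarray, feature_set: np.ndarray):
        self.face_template = face_template
        self.feature_set = feature_set

def upgrade_model(first_model: np.ndarray) -> np.ndarray:
    # Placeholder 'upgrade': the patent leaves the mechanism open (claim 2
    # suggests user habit parameters); here we just perturb the weights.
    return first_model * 1.01

def ap_upgrade_flow(first_model: np.ndarray, ta: TrustedApplication) -> np.ndarray:
    # 1. upgrade the first face training model -> second face training model
    second_model = upgrade_model(first_model)
    # 2. re-extract features from the preset template with the new model
    second_features = extract_features(second_model, ta.face_template)
    # 3. store the second face feature set in the TA for later comparison
    ta.feature_set = second_features
    return second_model
```

The point of re-extracting from the stored template is that the old feature set becomes incomparable once the model changes, so the TA's reference features must be regenerated under the new model.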
- 2. The mobile terminal according to claim 1, wherein, in upgrading the first face training model, the AP is specifically configured to: obtain user habit parameters; and upgrade the first face training model according to the user habit parameters.
- 3. The mobile terminal according to claim 1 or 2, wherein the AP is further configured to: obtain face unlock records of a specified time period; analyze the face unlock records to obtain a face recognition evaluation value; and when the face recognition evaluation value is lower than a preset evaluation threshold, perform the step of upgrading the first face training model.
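Claim 3's trigger condition can be illustrated as follows. The success-rate metric, the `UnlockRecord` shape, and the 0.8 threshold are assumptions made for illustration; the claim does not fix the exact evaluation value or threshold.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class UnlockRecord:
    timestamp: float  # seconds since epoch, say
    success: bool     # did the face unlock attempt succeed?

def face_recognition_score(records: List[UnlockRecord],
                           start: float, end: float) -> float:
    """One plausible 'face recognition evaluation value': the success rate
    of unlock attempts within the specified time period."""
    window = [r for r in records if start <= r.timestamp <= end]
    if not window:
        return 1.0  # no attempts -> no evidence that an upgrade is needed
    return sum(r.success for r in window) / len(window)

def should_upgrade(records: List[UnlockRecord], start: float, end: float,
                   threshold: float = 0.8) -> bool:
    # Claim 3: upgrade when the evaluation value falls below a preset threshold.
    return face_recognition_score(records, start, end) < threshold
```

The design intuition is that a falling unlock success rate signals that the stored model no longer matches the user's current appearance or habits, which is exactly when retraining pays off.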
- 4. The mobile terminal according to claim 1 or 2, wherein the AP is further configured to: upon receiving an upgrade instruction sent by a server, perform the step of upgrading the first face training model.
- 5. The mobile terminal according to any one of claims 1 to 4, wherein the AP is further configured to: obtain a target face image; perform feature extraction on the target face image according to the second face training model to obtain a third face feature set; compare the second face feature set with the third face feature set; and when the second face feature set and the third face feature set are successfully matched, perform an unlocking operation.
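The comparison-and-unlock flow of claim 5 (and the parallel method claim 11) might look like the sketch below. Cosine similarity and the 0.9 threshold are illustrative assumptions standing in for whatever feature comparison the terminal actually performs.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # Standard cosine similarity between two feature vectors.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def try_unlock(second_feature_set: np.ndarray,
               third_feature_set: np.ndarray,
               threshold: float = 0.9) -> str:
    """Compare the stored second feature set against the third feature set
    extracted from the target face image; unlock on a successful match."""
    if cosine_similarity(second_feature_set, third_feature_set) >= threshold:
        return "unlocked"
    return "locked"
```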
- 6. An image processing method, applied to a mobile terminal comprising an application processor (AP) and a memory connected to the AP, the method comprising:
storing, by the memory, a first face training model and a trusted application (TA), the TA being used to save a first face feature set and a preset face template, wherein the first face feature set is obtained by performing feature extraction on the preset face template with the first face training model; and
obtaining, by the AP, the first face training model; upgrading the first face training model to obtain a second face training model; performing feature extraction on the preset face template according to the second face training model to obtain a second face feature set; and storing the second face feature set in the TA, wherein the second face feature set is used for feature comparison in a face recognition process.
- 7. An image processing method, comprising:
obtaining a first face training model, the first face training model corresponding to a trusted application (TA), the TA being used to save a first face feature set and a preset face template, wherein the first face feature set is obtained by performing feature extraction on the preset face template with the first face training model;
upgrading the first face training model to obtain a second face training model; and
performing feature extraction on the preset face template according to the second face training model to obtain a second face feature set, and storing the second face feature set in the TA, wherein the second face feature set is used for feature comparison in a face recognition process.
- 8. The method according to claim 7, wherein upgrading the first face training model comprises: obtaining user habit parameters; and upgrading the first face training model according to the user habit parameters.
- 9. The method according to claim 7 or 8, further comprising: obtaining face unlock records of a specified time period; analyzing the face unlock records to obtain a face recognition evaluation value; and when the face recognition evaluation value is lower than a preset evaluation threshold, performing the step of upgrading the first face training model.
- 10. The method according to claim 7 or 8, further comprising: upon receiving an upgrade instruction sent by a server, performing the step of upgrading the first face training model.
- 11. The method according to any one of claims 7 to 10, further comprising: obtaining a target face image; performing feature extraction on the target face image according to the second face training model to obtain a third face feature set; comparing the second face feature set with the third face feature set; and when the second face feature set and the third face feature set are successfully matched, performing an unlocking operation.
- 12. An image processing apparatus, comprising:
a first acquisition unit, configured to obtain a first face training model, the first face training model corresponding to a trusted application (TA), the TA being used to save a first face feature set and a preset face template, wherein the first face feature set is obtained by performing feature extraction on the preset face template with the first face training model;
an upgrade unit, configured to upgrade the first face training model to obtain a second face training model; and
a first extraction unit, configured to perform feature extraction on the preset face template according to the second face training model to obtain a second face feature set, and to store the second face feature set in the TA, wherein the second face feature set is used for feature comparison in a face recognition process.
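The three-unit division of claim 12 can be mirrored as a minimal class, one method per unit. The names, the dict standing in for the TA, and the element-wise toy "feature extraction" are hypothetical stand-ins, not the patent's implementation.

```python
class ImageProcessingApparatus:
    def __init__(self, model, ta_store):
        self.model = model  # first face training model (list of weights)
        self.ta = ta_store  # dict standing in for the trusted application

    def first_acquisition_unit(self):
        # Obtain the first face training model.
        return self.model

    def upgrade_unit(self, model):
        # Upgrade -> second face training model (placeholder: scale weights).
        return [w * 1.01 for w in model]

    def first_extraction_unit(self, second_model):
        # Extract features from the preset template with the upgraded model
        # and store the resulting second feature set in the TA.
        template = self.ta["preset_face_template"]
        features = [w * x for w, x in zip(second_model, template)]
        self.ta["second_feature_set"] = features
        return features
```

Splitting acquisition, upgrade, and extraction into separate units matches the logical-function division the description above allows for apparatus embodiments.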
- 13. A mobile terminal, comprising an application processor (AP), a memory, and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the AP, the programs comprising instructions for performing the method of any one of claims 7 to 11.
- 14. A computer-readable storage medium storing a computer program, wherein the computer program causes a computer to perform the method of any one of claims 7 to 11.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711035259.XA CN107862266A (en) | 2017-10-30 | 2017-10-30 | Image processing method and related product |
Publications (1)
Publication Number | Publication Date |
---|---|
CN107862266A true CN107862266A (en) | 2018-03-30 |
Family
ID=61696988
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201711035259.XA Pending CN107862266A (en) | 2017-10-30 | 2017-10-30 | Image processing method and related product |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107862266A (en) |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106934364A (en) * | 2017-03-09 | 2017-07-07 | 腾讯科技(上海)有限公司 | The recognition methods of face picture and device |
CN107273871A (en) * | 2017-07-11 | 2017-10-20 | 夏立 | The training method and device of a kind of face characteristic model |
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108804900A (en) * | 2018-05-29 | 2018-11-13 | Oppo广东移动通信有限公司 | The generation method and generation system of validation template, terminal and computer equipment |
US11210800B2 (en) | 2018-05-29 | 2021-12-28 | Shenzhen Heytap Technology Corp., Ltd. | Method, system and terminal for generating verification template |
CN108804900B (en) * | 2018-05-29 | 2022-04-15 | Oppo广东移动通信有限公司 | Verification template generation method and generation system, terminal and computer equipment |
CN108965716A (en) * | 2018-08-01 | 2018-12-07 | Oppo广东移动通信有限公司 | Image processing method and device, electronic equipment, computer readable storage medium |
CN109145653A (en) * | 2018-08-01 | 2019-01-04 | Oppo广东移动通信有限公司 | Data processing method and device, electronic equipment, computer readable storage medium |
US10929988B2 (en) | 2018-08-01 | 2021-02-23 | Guangdong Oppo Mobile Telecommunications Corp., Ltd. | Method and device for processing image, and electronic device |
CN109683938A (en) * | 2018-12-26 | 2019-04-26 | 苏州思必驰信息科技有限公司 | Sound-groove model upgrade method and device for mobile terminal |
CN109683938B (en) * | 2018-12-26 | 2022-08-02 | 思必驰科技股份有限公司 | Voiceprint model upgrading method and device for mobile terminal |
CN112702751A (en) * | 2019-10-23 | 2021-04-23 | 中国移动通信有限公司研究院 | Method for training and upgrading wireless communication model, network equipment and storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107679482A (en) | Unlocking control method and related product | |
CN107832675A (en) | Photographing processing method and related product | |
CN107862266A (en) | Image processing method and related product | |
CN107862265A (en) | Image processing method and related product | |
CN107480496A (en) | Unlocking control method and related product | |
CN107590461B (en) | Face recognition method and related product | |
CN107423699A (en) | Living-body detection method and related product | |
CN109241908A (en) | Face recognition method and related apparatus | |
CN107292285A (en) | Iris living-body detection method and related product | |
CN107633499A (en) | Image processing method and related product | |
CN107506687A (en) | Living-body detection method and related product | |
CN107679481A (en) | Unlocking control method and related product | |
CN108985212A (en) | Face recognition method and device | |
CN107506696A (en) | Anti-counterfeiting processing method and related product | |
CN107609514A (en) | Face recognition method and related product | |
CN106558025A (en) | Picture processing method and apparatus | |
CN107657218B (en) | Face recognition method and related product | |
CN105956564B (en) | Fingerprint image processing method and device | |
CN107451446A (en) | Unlocking control method and related product | |
CN109117725A (en) | Face recognition method and device | |
CN107451449A (en) | Biometric unlocking method and related product | |
CN107633235A (en) | Unlocking control method and related product | |
CN108024065A (en) | Terminal photographing method, terminal, and computer-readable storage medium | |
CN107613550A (en) | Unlocking control method and related product | |
CN107403147A (en) | Iris living-body detection method and related product |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
Application publication date: 20180330 |