CN106295468B - Face identification method and device

Face identification method and device

Info

Publication number
CN106295468B
Authority
CN
China
Prior art keywords
image
area
facial image
face
band
Prior art date
Legal status
Active
Application number
CN201510257299.3A
Other languages
Chinese (zh)
Other versions
CN106295468A (en)
Inventor
刘霖
陈小龙
冯静敏
Current Assignee
Xiaomi Inc
Original Assignee
Xiaomi Inc
Priority date
Filing date
Publication date
Application filed by Xiaomi Inc
Priority to CN201510257299.3A
Publication of CN106295468A
Application granted
Publication of CN106295468B
Status: Active
Anticipated expiration


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G06V40/166 Detection; Localisation; Normalisation using acquisition arrangements
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/61 Control of cameras or camera modules based on recognised objects
    • H04N23/611 Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body


Abstract

The present disclosure relates to a face recognition method and device, and belongs to the field of face recognition. The method includes: collecting an image in a viewfinder to obtain a preview image; obtaining the region of the viewfinder in which a face image included in the preview image is located, to obtain at least one position region; obtaining the area occupied by the face image in the at least one position region, to obtain at least one image area; and performing face recognition on the face image based on the at least one position region and the at least one image area. By using the at least one position region and the at least one image area of the face image to determine whether the face image needs to be recognized, the embodiments of the present disclosure avoid interference with image shooting when face recognition is not required.

Description

Face identification method and device
Technical field
The present disclosure relates to the field of face recognition, and in particular to a face recognition method and device.
Background technique
With the rapid development of technology, more and more mobile terminals have a camera function. When a photograph is taken with a mobile terminal, the image in the viewfinder of the mobile terminal is shot to obtain the captured image. At present, most images shot by mobile terminals include face images. To improve the clarity of face images, the face images in the viewfinder need to be recognized when the image in the viewfinder is shot. In the related art, the face recognition operation may be as follows: the mobile terminal collects the image in the viewfinder, performs face recognition on the collected image to obtain a face image, and determines the position of the face image in the viewfinder and the size of the face image.
Summary of the invention
To overcome the problems in the related art, the present disclosure provides a face recognition method and device.
According to a first aspect of the embodiments of the present disclosure, a face recognition method is provided, including:
collecting an image in a viewfinder to obtain a preview image;
obtaining the region of the viewfinder in which a face image included in the preview image is located, to obtain at least one position region;
obtaining the area occupied by the face image in the at least one position region, to obtain at least one image area;
performing face recognition on the face image based on the at least one position region and the at least one image area.
With reference to the first aspect, in a first possible implementation of the first aspect, performing face recognition on the face image based on the at least one position region and the at least one image area includes:
determining a weighted value of the face image based on the at least one position region and the at least one image area;
performing face recognition on the face image when the weighted value of the face image is greater than or equal to a specified weighting threshold.
With reference to the first possible implementation of the first aspect, in a second possible implementation of the first aspect, determining the weighted value of the face image based on the at least one position region and the at least one image area includes:
obtaining a face weight corresponding to each of the at least one position region, to obtain at least one face weight;
determining products of the at least one image area and the at least one face weight respectively, to obtain at least one region weighted value;
determining a sum of the at least one region weighted value, to obtain the weighted value of the face image.
With reference to the first possible implementation of the first aspect, in a third possible implementation of the first aspect, determining the weighted value of the face image based on the at least one position region and the at least one image area includes:
selecting a maximum image area from the at least one image area, to obtain a target image area;
obtaining, from the at least one position region, the position region corresponding to the target image area, to obtain a target position region;
obtaining a face weight corresponding to the target position region, to obtain a target face weight;
determining a product of the target image area and the target face weight, to obtain the weighted value of the face image.
With reference to the first, second, or third possible implementation of the first aspect, in a fourth possible implementation of the first aspect, before determining the weighted value of the face image based on the at least one position region and the at least one image area, the method further includes:
judging whether the at least one position region is located in a central region of the viewfinder;
when all of the at least one position region is located in the central region, or when a part of the at least one position region is located in the central region, executing the step of determining the weighted value of the face image based on the at least one position region and the at least one image area.
With reference to the fourth possible implementation of the first aspect, in a fifth possible implementation of the first aspect, after judging whether the at least one position region is located in the central region of the viewfinder, the method further includes:
when all of the at least one position region is located in an edge region of the viewfinder, determining a sum of the at least one image area to obtain a face image area, the edge region being the region of the viewfinder other than the central region;
determining the ratio of the face image area to the area of the preview image, to obtain an area ratio;
performing face recognition on the face image when the area ratio is greater than or equal to a specified ratio threshold.
According to a second aspect of the embodiments of the present disclosure, a face recognition device is provided, including:
an acquisition module, configured to collect an image in a viewfinder to obtain a preview image;
a first obtaining module, configured to obtain the region of the viewfinder in which a face image included in the preview image is located, to obtain at least one position region;
a second obtaining module, configured to obtain the area occupied by the face image in the at least one position region, to obtain at least one image area;
a face recognition module, configured to perform face recognition on the face image based on the at least one position region and the at least one image area.
With reference to the second aspect, in a first possible implementation of the second aspect, the face recognition module includes:
a first determination unit, configured to determine a weighted value of the face image based on the at least one position region and the at least one image area;
a first face recognition unit, configured to perform face recognition on the face image when the weighted value of the face image is greater than or equal to a specified weighting threshold.
With reference to the first possible implementation of the second aspect, in a second possible implementation of the second aspect, the first determination unit includes:
a first obtaining subunit, configured to obtain a face weight corresponding to each of the at least one position region, to obtain at least one face weight;
a first determination subunit, configured to determine products of the at least one image area and the at least one face weight respectively, to obtain at least one region weighted value;
a second determination subunit, configured to determine a sum of the at least one region weighted value, to obtain the weighted value of the face image.
With reference to the first possible implementation of the second aspect, in a third possible implementation of the second aspect, the first determination unit includes:
a selection subunit, configured to select a maximum image area from the at least one image area, to obtain a target image area;
a second obtaining subunit, configured to obtain, from the at least one position region, the position region corresponding to the target image area, to obtain a target position region;
a third obtaining subunit, configured to obtain a face weight corresponding to the target position region, to obtain a target face weight;
a third determination subunit, configured to determine a product of the target image area and the target face weight, to obtain the weighted value of the face image.
With reference to the first, second, or third possible implementation of the second aspect, in a fourth possible implementation of the second aspect, the face recognition module further includes:
a judgment unit, configured to judge whether the at least one position region is located in a central region of the viewfinder;
an execution unit, configured to execute the step of determining the weighted value of the face image based on the at least one position region and the at least one image area when all of the at least one position region is located in the central region, or when a part of the at least one position region is located in the central region.
With reference to the fourth possible implementation of the second aspect, in a fifth possible implementation of the second aspect, the face recognition module further includes:
a second determination unit, configured to determine a sum of the at least one image area to obtain a face image area when all of the at least one position region is located in an edge region of the viewfinder, the edge region being the region of the viewfinder other than the central region;
a third determination unit, configured to determine the ratio of the face image area to the area of the preview image, to obtain an area ratio;
a second face recognition unit, configured to perform face recognition on the face image when the area ratio is greater than or equal to a specified ratio threshold.
According to a third aspect of the embodiments of the present disclosure, a face recognition device is provided, including:
a processor;
a memory for storing instructions executable by the processor;
wherein the processor is configured to:
collect an image in a viewfinder to obtain a preview image;
obtain the region of the viewfinder in which a face image included in the preview image is located, to obtain at least one position region;
obtain the area occupied by the face image in the at least one position region, to obtain at least one image area;
perform face recognition on the face image based on the at least one position region and the at least one image area.
The technical solutions provided by the embodiments of the present disclosure may include the following beneficial effects: in the embodiments of the present disclosure, when an image is shot with a mobile terminal, the region of the viewfinder in which a face image included in the preview image is located may be obtained to obtain at least one position region, and the area occupied by the face image in the at least one position region may be obtained to obtain at least one image area, so that a weighted value of the face image is determined based on the at least one position region and the at least one image area. Face recognition is performed on the face image only when the weighted value of the face image is greater than or equal to a specified weighting threshold, so that the position and size of the recognized face are displayed on the mobile terminal; when the weighted value of the face image is less than the specified weighting threshold, face recognition is not performed on the face image, thereby avoiding interference of face recognition with image shooting.
It should be understood that the above general description and the following detailed description are merely exemplary and explanatory, and do not limit the present disclosure.
Detailed description of the invention
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present invention and, together with the description, serve to explain the principles of the present invention.
Fig. 1 is a flowchart of a face recognition method according to an exemplary embodiment.
Fig. 2 is a flowchart of another face recognition method according to an exemplary embodiment.
Fig. 3(a) is a schematic diagram of a division of a viewfinder into multiple position regions according to an exemplary embodiment.
Fig. 3(b) is a schematic diagram of another division of a viewfinder into multiple position regions according to an exemplary embodiment.
Fig. 4(a) is a schematic diagram of the positions of the central region and the edge region of a viewfinder according to an exemplary embodiment.
Fig. 4(b) is a schematic diagram of the positions of the central region and the edge region of another viewfinder according to an exemplary embodiment.
Fig. 5 is a block diagram of a face recognition device according to an exemplary embodiment.
Fig. 6 is a block diagram of a face recognition device according to an exemplary embodiment.
Specific embodiment
Exemplary embodiments are described in detail here, and examples thereof are illustrated in the accompanying drawings. When the following description refers to the drawings, unless otherwise indicated, the same numerals in different drawings denote the same or similar elements. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present invention. Rather, they are merely examples of devices and methods consistent with some aspects of the present invention as detailed in the appended claims.
Before the embodiments of the present disclosure are explained in detail, an application scenario of the embodiments of the present disclosure is introduced. When an image is shot with a mobile terminal, the mobile terminal collects the image in the viewfinder to obtain a preview image. If the preview image includes a face image, then in the related art the mobile terminal recognizes the face image regardless of where it is located in the preview image, and displays the position and size of the face image in the preview image. However, when the face image is not the intended subject of the shot, for example when a passer-by appears at the edge of the preview image, the face image is not the focus of the shooting, and performing face recognition on it would interfere with the image being shot. Therefore, the embodiments of the present disclosure provide a face recognition method that prevents face images which do not need to be recognized from interfering with the image being shot.
Fig. 1 is a flowchart of a face recognition method according to an exemplary embodiment. As shown in Fig. 1, the face recognition method is used in a terminal and includes the following steps.
In step 101, an image in a viewfinder is collected to obtain a preview image.
In step 102, the region of the viewfinder in which a face image included in the preview image is located is obtained, to obtain at least one position region.
In step 103, the area occupied by the face image in the at least one position region is obtained, to obtain at least one image area.
In step 104, face recognition is performed on the face image based on the at least one position region and the at least one image area.
In the embodiments of the present disclosure, when an image is shot with a mobile terminal, the region of the viewfinder in which a face image included in the preview image is located may be obtained to obtain at least one position region, and the area occupied by the face image in the at least one position region may be obtained to obtain at least one image area, so that a weighted value of the face image is determined based on the at least one position region and the at least one image area. Face recognition is performed on the face image only when the weighted value of the face image is greater than or equal to a specified weighting threshold, so that the position and size of the recognized face are displayed on the mobile terminal; when the weighted value of the face image is less than the specified weighting threshold, face recognition is not performed on the face image, thereby avoiding interference of face recognition with image shooting.
In another embodiment of the present disclosure, performing face recognition on the face image based on the at least one position region and the at least one image area includes:
determining a weighted value of the face image based on the at least one position region and the at least one image area;
performing face recognition on the face image when the weighted value of the face image is greater than or equal to a specified weighting threshold.
In another embodiment of the present disclosure, determining the weighted value of the face image based on the at least one position region and the at least one image area includes:
obtaining a face weight corresponding to each of the at least one position region, to obtain at least one face weight;
determining products of the at least one image area and the at least one face weight respectively, to obtain at least one region weighted value;
determining a sum of the at least one region weighted value, to obtain the weighted value of the face image.
In another embodiment of the present disclosure, determining the weighted value of the face image based on the at least one position region and the at least one image area includes:
selecting a maximum image area from the at least one image area, to obtain a target image area;
obtaining, from the at least one position region, the position region corresponding to the target image area, to obtain a target position region;
obtaining a face weight corresponding to the target position region, to obtain a target face weight;
determining a product of the target image area and the target face weight, to obtain the weighted value of the face image.
In another embodiment of the present disclosure, before determining the weighted value of the face image based on the at least one position region and the at least one image area, the method further includes:
judging whether the at least one position region is located in a central region of the viewfinder;
when all of the at least one position region is located in the central region of the viewfinder, or when a part of the at least one position region is located in the central region of the viewfinder, executing the step of determining the weighted value of the face image based on the at least one position region and the at least one image area.
In another embodiment of the present disclosure, after judging whether the at least one position region is located in the central region of the viewfinder, the method further includes:
when all of the at least one position region is located in an edge region of the viewfinder, determining a sum of the at least one image area to obtain a face image area, the edge region of the viewfinder being the region of the viewfinder other than the central region;
determining the ratio of the face image area to the area of the preview image, to obtain an area ratio;
performing face recognition on the face image when the area ratio is greater than or equal to a specified ratio threshold.
All of the above optional technical solutions may be combined in any manner to form optional embodiments of the present disclosure, which will not be described in detail here.
Fig. 2 is a flowchart of a face recognition method according to an exemplary embodiment. Referring to Fig. 2, the method is used in a terminal and includes the following steps.
In step 201, an image in a viewfinder is collected to obtain a preview image.
The viewfinder is the display frame for the image collected by the camera of the mobile terminal; that is, the image displayed in the viewfinder is the preview image, and when the mobile terminal receives a shooting instruction, the image it shoots is identical to the preview image. Therefore, the mobile terminal may collect the image in the viewfinder to obtain the preview image.
It should be noted that the shooting instruction is used to shoot the image collected by the camera, and the shooting instruction may be triggered by a user or by the mobile terminal.
When the shooting instruction is triggered by a user, it may be triggered by a specified operation, which may be a click operation, a slide operation, a flick operation, a tap operation, a voice operation, or the like; this is not specifically limited in the embodiments of the present disclosure.
When the shooting instruction is triggered by the mobile terminal, the mobile terminal may start timing when the camera collects an image and trigger the shooting instruction when the timed duration reaches a specified duration. The specified duration may be set in advance, and may be 5 seconds, 10 seconds, 20 seconds, or the like; this is likewise not specifically limited in the embodiments of the present disclosure.
In step 202, the region of the viewfinder in which a face image included in the preview image is located is obtained, to obtain at least one position region.
The mobile terminal may divide the viewfinder in advance to obtain multiple position regions. Afterwards, the mobile terminal may detect the preview image, and when it detects that the preview image includes a face image, the mobile terminal may obtain the region of the viewfinder in which the face image included in the preview image is located, to obtain at least one position region.
The operation in which the mobile terminal divides the viewfinder to obtain multiple position regions may be as follows: the mobile terminal divides the viewfinder evenly based on the area of the viewfinder to obtain multiple position regions. Alternatively, the mobile terminal may divide the viewfinder along its width direction to obtain multiple rectangular regions, the height of each rectangular region being the height of the viewfinder. Since there are many methods by which the mobile terminal may divide the viewfinder into multiple position regions, the embodiments of the present disclosure do not enumerate them all.
For example, the mobile terminal divides the viewfinder evenly based on its area to obtain nine position regions, as shown in Fig. 3(a). Alternatively, the mobile terminal divides the viewfinder along its width direction to obtain three rectangular regions whose height is the height of the viewfinder, as shown in Fig. 3(b).
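The two division schemes may be sketched as follows. This is a minimal, non-limiting illustration only; the function names, the coordinate representation of regions, and the default grid and strip counts are assumptions of the sketch and are not part of the disclosed embodiments.

```python
# Sketch of the two viewfinder divisions described above (assumed names and parameters).

def divide_into_grid(width, height, rows=3, cols=3):
    """Divide the viewfinder evenly into rows x cols position regions (Fig. 3(a) style)."""
    cell_w, cell_h = width / cols, height / rows
    regions = {}
    region_id = 1
    for r in range(rows):
        for c in range(cols):
            # Each region is stored as (left, top, right, bottom) in viewfinder coordinates.
            regions[region_id] = (c * cell_w, r * cell_h, (c + 1) * cell_w, (r + 1) * cell_h)
            region_id += 1
    return regions

def divide_into_strips(width, height, strips=3):
    """Divide the viewfinder along its width into vertical strips whose height equals
    the viewfinder height (Fig. 3(b) style)."""
    strip_w = width / strips
    return {i + 1: (i * strip_w, 0.0, (i + 1) * strip_w, height) for i in range(strips)}
```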
In step 203, the area occupied by the face image in the at least one position region is obtained, to obtain at least one image area.
After the at least one position region of the viewfinder in which the face image is located is determined, in order to determine the weighted value of the face image, the area occupied by the face image in each of the at least one position region needs to be obtained from the at least one position region, to obtain at least one image area.
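One way to obtain these per-region areas, assuming the detected face is approximated by a bounding rectangle (an assumption of this sketch rather than something the embodiments prescribe), is to intersect the face rectangle with each position region produced by a division like the one sketched above:

```python
def face_area_per_region(face_box, regions):
    """face_box and each region are (left, top, right, bottom); returns {region_id: overlap area},
    keeping only the regions the face actually covers."""
    fl, ft, fr, fb = face_box
    areas = {}
    for region_id, (rl, rt, rr, rb) in regions.items():
        w = min(fr, rr) - max(fl, rl)
        h = min(fb, rb) - max(ft, rt)
        if w > 0 and h > 0:
            areas[region_id] = w * h
    return areas
```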
In step 204, the weighted value of the face image is determined based on the at least one position region and the at least one image area.
In the embodiments of the present disclosure, the mobile terminal may determine the weighted value of the face image based on the at least one position region and the at least one image area in either of two ways.
In the first way, the mobile terminal obtains the face weight corresponding to each of the at least one position region, to obtain at least one face weight; multiplies the at least one image area by the at least one face weight respectively, to obtain at least one region weighted value; and adds the at least one region weighted value to obtain the weighted value of the face image.
For example, the at least one position region consists of region 1, region 3, and region 4. The face weight corresponding to region 1 is 0.5, the face weight corresponding to region 3 is 0.8, and the face weight corresponding to region 4 is 0.9; the image area corresponding to region 1 is 0.2 square centimeters, the image area corresponding to region 3 is 0.3 square centimeters, and the image area corresponding to region 4 is 0.6 square centimeters. Therefore, the image area of 0.2 square centimeters corresponding to region 1 is multiplied by the face weight 0.5 corresponding to region 1 to obtain a region weighted value of 0.1; the image area of 0.3 square centimeters corresponding to region 3 is multiplied by the face weight 0.8 corresponding to region 3 to obtain a region weighted value of 0.24; and the image area of 0.6 square centimeters corresponding to region 4 is multiplied by the face weight 0.9 corresponding to region 4 to obtain a region weighted value of 0.54. The region weighted value 0.1 of region 1, the region weighted value 0.24 of region 3, and the region weighted value 0.54 of region 4 are added to obtain a weighted value of 0.88 for the face image.
After dividing the viewfinder and obtaining the multiple position regions, the mobile terminal may set a face weight for each of the multiple position regions, the face weight being used to measure the importance of a face image located in the position region corresponding to that face weight. The mobile terminal may then store the identifiers of the multiple position regions and the face weights corresponding to the multiple position regions in a correspondence between position region identifiers and face weights. Therefore, when the mobile terminal obtains the at least one position region in which the face image included in the preview image is located, the mobile terminal may, based on the identifier of the at least one position region, obtain the corresponding face weight from the stored correspondence between position region identifiers and face weights, to obtain at least one face weight.
It should be noted that the identifier of a position region may be obtained by the mobile terminal numbering or naming the multiple position regions; that is, the identifier of a position region may be the number of the position region, the name of the position region, or the like, which is not specifically limited in the embodiments of the present disclosure.
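As a minimal sketch of the first way, using the numbers of the example above (representing the stored correspondence as an in-memory dictionary is an assumption of the sketch, not a requirement of the embodiments):

```python
# First way: weighted value = sum over regions of (image area in region) * (face weight of region).
# The table below stands in for the stored correspondence between region identifiers and face weights;
# the weight for region 2 is an assumed example value, the others match the text.
FACE_WEIGHTS = {1: 0.5, 2: 0.7, 3: 0.8, 4: 0.9}

def weighted_value_sum(areas_by_region, face_weights=FACE_WEIGHTS):
    """areas_by_region maps a region identifier to the face area (cm^2) it contains."""
    return sum(area * face_weights[region_id] for region_id, area in areas_by_region.items())

# Reproduces the worked example: 0.2*0.5 + 0.3*0.8 + 0.6*0.9 = 0.88
value = weighted_value_sum({1: 0.2, 3: 0.3, 4: 0.6})
assert abs(value - 0.88) < 1e-9
```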
In the second way, the mobile terminal selects the maximum image area from the at least one image area, to obtain a target image area; obtains, from the at least one position region, the position region corresponding to the target image area, to obtain a target position region; obtains the face weight corresponding to the target position region, to obtain a target face weight; and multiplies the target image area by the target face weight to obtain the weighted value of the face image.
For example, the at least one position region consists of region 1, region 3, and region 4; the image area corresponding to region 1 is 0.2 square centimeters, the image area corresponding to region 3 is 0.3 square centimeters, and the image area corresponding to region 4 is 0.6 square centimeters. From these three image areas, the maximum image area of 0.6 square centimeters is selected and determined as the target image area. The position region corresponding to the target image area is obtained from the at least one position region, and the resulting target position region is region 4. The face weight corresponding to the target position region is obtained, and the resulting target face weight is 0.9. The target image area of 0.6 square centimeters is multiplied by the target face weight 0.9 to obtain a weighted value of 0.54 for the face image.
In the second way, since the target image area is the maximum image area among the at least one image area, and the maximum image area is the most representative of the position of the face image, determining the weighted value of the face image based on the target image area and the target face weight reduces the complexity of the calculation.
Further, when the mobile terminal determines the weighted value of the face image in the second way, the mobile terminal may also sort the at least one image area in descending order to obtain an image area order, and select two image areas from the at least one image area according to that order; that is, the two largest image areas are selected from the at least one image area. If the difference between the two selected image areas is large, it indicates that the target image area is more representative of the position of the face image, which improves the accuracy of determining the weighted value of the face image.
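A corresponding sketch of the second way, again with assumed function names and the example numbers:

```python
# Second way: keep only the region holding the largest face area and weight that single area.

def weighted_value_max(areas_by_region, face_weights):
    """Pick the maximum image area, find its position region, and multiply by that region's face weight."""
    target_region = max(areas_by_region, key=areas_by_region.get)  # region holding the target image area
    return areas_by_region[target_region] * face_weights[target_region]

# Reproduces the worked example: region 4 holds 0.6 cm^2 with weight 0.9, so the weighted value is 0.54.
value = weighted_value_max({1: 0.2, 3: 0.3, 4: 0.6}, {1: 0.5, 3: 0.8, 4: 0.9})
assert abs(value - 0.54) < 1e-9
```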
In step 205, when the weighted value of the face image is greater than the specified weighting threshold, face recognition is performed on the face image.
After determining the weighted value of the face image, the mobile terminal may compare the weighted value of the face image with the specified weighting threshold. When the weighted value of the face image is greater than the specified weighting threshold, the mobile terminal determines that the face image is a face image on which face recognition needs to be performed. In this case, the mobile terminal may perform face recognition on the face image to obtain the position of the face image and the size of the face image, so as to display the position and size of the face image in the viewfinder.
Further, when the weighted value of the face image is less than or equal to the specified weighting threshold, it is determined that the face image is not the shooting focus of the preview image; that is, face recognition does not need to be performed on the face image. In this case, the face recognition step may be omitted, so that the position and size of the face image are not displayed in the viewfinder, thereby avoiding interference with image shooting.
In the embodiments of the present disclosure, when the mobile terminal obtains the at least one position region corresponding to the face image and the at least one image area, face recognition may be performed not only directly through steps 204 to 205 above, but also in another way. The mobile terminal judges whether the at least one position region is located in the central region of the viewfinder; when all of the at least one position region is located in the central region of the viewfinder, or when a part of the at least one position region is located in the central region of the viewfinder, the mobile terminal then determines the weighted value of the face image based on the at least one position region and the at least one image area, so that face recognition is performed on the face image when the weighted value of the face image is greater than the specified weighting threshold. When all of the at least one position region is located in the edge region of the viewfinder, the mobile terminal adds the at least one image area to obtain a face image area, the edge region being the region of the viewfinder other than the central region; determines the ratio of the face image area to the area of the preview image, to obtain an area ratio; and performs face recognition on the face image when the area ratio is greater than or equal to a specified ratio threshold.
When all of the at least one position region is located in the edge region of the viewfinder, it is determined that the face image may not be the shooting focus of the preview image. In this case, the mobile terminal divides the face image area by the area of the preview image to obtain the area ratio. When the area ratio is less than the specified ratio threshold, it may further be determined that the face image is not the shooting focus of the preview image; the mobile terminal may then refrain from performing face recognition on the face image, so that the position and size of the face image are not displayed in the viewfinder, thereby avoiding interference with image shooting. When the area ratio is greater than or equal to the specified ratio threshold, it is determined that the face image is the shooting focus of the preview image, and the mobile terminal therefore needs to perform face recognition on the face image.
To further improve the efficiency of face recognition, when the mobile terminal judges that all of the at least one position region is located in the central region of the viewfinder, the mobile terminal may perform face recognition directly, without determining the weighted value of the face image.
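Putting the branches of this alternative flow together, a minimal sketch might look as follows; the helper names, the set-based representation of the central region, and the default thresholds are assumptions of the sketch only.

```python
# Decision flow sketch: central/edge check first, then either the weighted value or the area ratio.

def should_recognize(position_regions, areas_by_region, face_weights,
                     central_regions, preview_area,
                     weight_threshold=0.5, ratio_threshold=0.45):
    """position_regions: region ids covered by the face; central_regions: ids inside the central region."""
    in_center = [r for r in position_regions if r in central_regions]
    if len(in_center) == len(position_regions):
        # All regions lie in the central region: recognize directly, no weighted value needed.
        return True
    if in_center:
        # Part of the face lies in the central region: fall back to the weighted value (first way).
        weighted = sum(areas_by_region[r] * face_weights[r] for r in position_regions)
        return weighted > weight_threshold
    # All regions lie in the edge region: use the face-area-to-preview-area ratio instead.
    face_area = sum(areas_by_region[r] for r in position_regions)
    return face_area / preview_area >= ratio_threshold
```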
It should be noted that the specified weighting threshold and the specified ratio threshold may be set in advance. For example, the specified weighting threshold may be 0.5, 1, 2, 3, or the like, and the specified ratio threshold may be 0.45, 0.5, or the like; this is not specifically limited in the embodiments of the present disclosure.
In addition, the central region and the edge region of the viewfinder are divided in advance. For example, the mobile terminal may obtain the central point of the viewfinder, take the central point as the center of a circle and a first specified length as the radius to obtain a circular region, determine the circular region as the central region of the viewfinder, and determine the region of the viewfinder other than the central region as the edge region, as shown in Fig. 4(a). As another example, the mobile terminal may obtain the central point of the viewfinder and obtain a rectangular region whose width is a second specified length, whose height is the height of the viewfinder, and whose width is centered on the central point; the rectangular region is determined as the central region of the viewfinder, and the region of the viewfinder other than the central region is determined as the edge region, as shown in Fig. 4(b).
The first specified length and the second specified length may be set in advance, and the first specified length may be less than the second specified length; this is not specifically limited in the embodiments of the present disclosure.
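The two ways of marking out the central region can be sketched as point-membership tests; the function names and the idea of testing individual points are assumptions made for the illustration.

```python
import math

def in_circular_center(x, y, viewfinder_w, viewfinder_h, first_length):
    """Fig. 4(a): the central region is a circle of radius `first_length` around the viewfinder's central point."""
    cx, cy = viewfinder_w / 2, viewfinder_h / 2
    return math.hypot(x - cx, y - cy) <= first_length

def in_rectangular_center(x, y, viewfinder_w, viewfinder_h, second_length):
    """Fig. 4(b): the central region is a rectangle of width `second_length` and full viewfinder height,
    centered horizontally on the central point; only the horizontal distance matters here."""
    cx = viewfinder_w / 2
    return abs(x - cx) <= second_length / 2
```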
In the embodiments of the present disclosure, when an image is shot with a mobile terminal, the region of the viewfinder in which a face image included in the preview image is located may be obtained to obtain at least one position region, and the area occupied by the face image in the at least one position region may be obtained to obtain at least one image area, so that a weighted value of the face image is determined based on the at least one position region and the at least one image area. Face recognition is performed on the face image only when the weighted value of the face image is greater than or equal to a specified weighting threshold, so that the position and size of the recognized face are displayed on the mobile terminal; when the weighted value of the face image is less than the specified weighting threshold, face recognition is not performed on the face image, thereby avoiding interference of face recognition with image shooting.
Fig. 5 is a block diagram of a face recognition device according to an exemplary embodiment. Referring to Fig. 5, the device includes an acquisition module 501, a first obtaining module 502, a second obtaining module 503, and a face recognition module 504.
The acquisition module 501 is configured to collect an image in a viewfinder to obtain a preview image;
the first obtaining module 502 is configured to obtain the region of the viewfinder in which a face image included in the preview image is located, to obtain at least one position region;
the second obtaining module 503 is configured to obtain the area occupied by the face image in the at least one position region, to obtain at least one image area;
the face recognition module 504 is configured to perform face recognition on the face image based on the at least one position region and the at least one image area.
In another embodiment of the present disclosure, the face recognition module 504 includes:
a first determination unit, configured to determine a weighted value of the face image based on the at least one position region and the at least one image area;
a first face recognition unit, configured to perform face recognition on the face image when the weighted value of the face image is greater than or equal to a specified weighting threshold.
In another embodiment of the present disclosure, the first determination unit includes:
a first obtaining subunit, configured to obtain a face weight corresponding to each of the at least one position region, to obtain at least one face weight;
a first determination subunit, configured to determine products of the at least one image area and the at least one face weight respectively, to obtain at least one region weighted value;
a second determination subunit, configured to determine a sum of the at least one region weighted value, to obtain the weighted value of the face image.
In another embodiment of the present disclosure, the first determination unit includes:
a selection subunit, configured to select a maximum image area from the at least one image area, to obtain a target image area;
a second obtaining subunit, configured to obtain, from the at least one position region, the position region corresponding to the target image area, to obtain a target position region;
a third obtaining subunit, configured to obtain a face weight corresponding to the target position region, to obtain a target face weight;
a third determination subunit, configured to determine a product of the target image area and the target face weight, to obtain the weighted value of the face image.
In another embodiment of the present disclosure, the face recognition module 504 further includes:
a judgment unit, configured to judge whether the at least one position region is located in a central region of the viewfinder;
an execution unit, configured to execute the step of determining the weighted value of the face image based on the at least one position region and the at least one image area when all of the at least one position region is located in the central region of the viewfinder, or when a part of the at least one position region is located in the central region of the viewfinder.
In another embodiment of the present disclosure, the face recognition module 504 further includes:
a second determination unit, configured to determine a sum of the at least one image area to obtain a face image area when all of the at least one position region is located in an edge region of the viewfinder, the edge region being the region of the viewfinder other than the central region;
a third determination unit, configured to determine the ratio of the face image area to the area of the preview image, to obtain an area ratio;
a second face recognition unit, configured to perform face recognition on the face image when the area ratio is greater than or equal to a specified ratio threshold.
In the embodiments of the present disclosure, when an image is shot with a mobile terminal, the region of the viewfinder in which a face image included in the preview image is located may be obtained to obtain at least one position region, and the area occupied by the face image in the at least one position region may be obtained to obtain at least one image area, so that a weighted value of the face image is determined based on the at least one position region and the at least one image area. Face recognition is performed on the face image only when the weighted value of the face image is greater than or equal to a specified weighting threshold, so that the position and size of the recognized face are displayed on the mobile terminal; when the weighted value of the face image is less than the specified weighting threshold, face recognition is not performed on the face image, thereby avoiding interference of face recognition with image shooting.
With regard to the device in the above embodiments, the specific manner in which each module performs operations has been described in detail in the embodiments of the related method, and will not be elaborated here.
Fig. 6 is a block diagram of a device 600 for face recognition according to an exemplary embodiment. For example, the device 600 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, fitness equipment, a personal digital assistant, or the like.
Referring to Fig. 6, the device 600 may include one or more of the following components: a processing component 602, a memory 604, a power component 606, a multimedia component 608, an audio component 610, an input/output (I/O) interface 612, a sensor component 614, and a communication component 616.
The processing component 602 generally controls the overall operations of the device 600, such as operations associated with display, telephone calls, data communication, camera operations, and recording operations. The processing component 602 may include one or more processors 620 to execute instructions so as to perform all or part of the steps of the methods described above. In addition, the processing component 602 may include one or more modules to facilitate interaction between the processing component 602 and other components. For example, the processing component 602 may include a multimedia module to facilitate interaction between the multimedia component 608 and the processing component 602.
The memory 604 is configured to store various types of data to support operation of the device 600. Examples of such data include instructions for any application or method operating on the device 600, contact data, phone book data, messages, pictures, video, and so on. The memory 604 may be implemented by any type of volatile or non-volatile storage device, or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, a magnetic disk, or an optical disk.
The power component 606 provides power for the various components of the device 600. The power component 606 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the device 600.
The multimedia component 608 includes a screen that provides an output interface between the device 600 and the user. In some embodiments, the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensors may not only sense the boundary of a touch or swipe action, but also detect the duration and pressure associated with the touch or swipe operation. In some embodiments, the multimedia component 608 includes a front camera and/or a rear camera. When the device 600 is in an operation mode, such as a shooting mode or a video mode, the front camera and/or the rear camera can receive external multimedia data. Each of the front camera and the rear camera may be a fixed optical lens system or have focusing and optical zoom capability.
The audio component 610 is configured to output and/or input audio signals. For example, the audio component 610 includes a microphone (MIC), which is configured to receive external audio signals when the device 600 is in an operation mode, such as a call mode, a recording mode, or a voice recognition mode. The received audio signal may be further stored in the memory 604 or sent via the communication component 616. In some embodiments, the audio component 610 further includes a speaker for outputting audio signals.
The I/O interface 612 provides an interface between the processing component 602 and peripheral interface modules, which may be a keyboard, a click wheel, buttons, or the like. These buttons may include, but are not limited to, a home button, volume buttons, a start button, and a lock button.
The sensor component 614 includes one or more sensors for providing status assessments of various aspects of the device 600. For example, the sensor component 614 may detect the open/closed state of the device 600 and the relative positioning of components, such as the display and keypad of the device 600; the sensor component 614 may also detect a change in position of the device 600 or of a component of the device 600, the presence or absence of user contact with the device 600, the orientation or acceleration/deceleration of the device 600, and a change in the temperature of the device 600. The sensor component 614 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact. The sensor component 614 may also include an optical sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor component 614 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 616 is configured to facilitate wired or wireless communication between the device 600 and other devices. The device 600 can access a wireless network based on a communication standard, such as WiFi, 2G, or 3G, or a combination thereof. In an exemplary embodiment, the communication component 616 receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 616 further includes a near-field communication (NFC) module to facilitate short-range communication. For example, the NFC module may be implemented based on radio frequency identification (RFID) technology, Infrared Data Association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the device 600 may be implemented by one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components, for performing the above method.
In an exemplary embodiment, a non-transitory computer-readable storage medium including instructions is also provided, for example the memory 604 including instructions, which can be executed by the processor 620 of the device 600 to perform the above method. For example, the non-transitory computer-readable storage medium may be a ROM, a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.
A non-transitory computer-readable storage medium is provided such that, when instructions in the storage medium are executed by a processor of a mobile terminal, the mobile terminal is enabled to perform a face recognition method, the method including:
collecting an image in a viewfinder to obtain a preview image;
obtaining the region of the viewfinder in which a face image included in the preview image is located, to obtain at least one position region;
obtaining the area occupied by the face image in the at least one position region, to obtain at least one image area;
performing face recognition on the face image based on the at least one position region and the at least one image area.
In another embodiment of the present disclosure, performing face recognition on the face image based on the at least one position region and the at least one image area includes:
determining a weighted value of the face image based on the at least one position region and the at least one image area;
performing face recognition on the face image when the weighted value of the face image is greater than or equal to a specified weighting threshold.
In another embodiment of the present disclosure, determining the weighted value of the face image based on the at least one position region and the at least one image area includes:
obtaining a face weight corresponding to each of the at least one position region, to obtain at least one face weight;
determining products of the at least one image area and the at least one face weight respectively, to obtain at least one region weighted value;
determining a sum of the at least one region weighted value, to obtain the weighted value of the face image.
In another embodiment of the present disclosure, determining the weighted value of the face image based on the at least one position region and the at least one image area includes:
selecting a maximum image area from the at least one image area, to obtain a target image area;
obtaining, from the at least one position region, the position region corresponding to the target image area, to obtain a target position region;
obtaining a face weight corresponding to the target position region, to obtain a target face weight;
determining a product of the target image area and the target face weight, to obtain the weighted value of the face image.
In another embodiment of the present disclosure, before determining the weighted value of the face image based on the at least one position region and the at least one image area, the method further includes:
judging whether the at least one position region is located in a central region of the viewfinder;
when all of the at least one position region is located in the central region of the viewfinder, or when a part of the at least one position region is located in the central region of the viewfinder, executing the step of determining the weighted value of the face image based on the at least one position region and the at least one image area.
In another embodiment of the present disclosure, after judging whether the at least one position region is located in the central region of the viewfinder, the method further includes:
when all of the at least one position region is located in an edge region of the viewfinder, determining a sum of the at least one image area to obtain a face image area, the edge region of the viewfinder being the region of the viewfinder other than the central region;
determining the ratio of the face image area to the area of the preview image, to obtain an area ratio;
performing face recognition on the face image when the area ratio is greater than or equal to a specified ratio threshold.
In the embodiments of the present disclosure, when an image is shot with a mobile terminal, the region of the viewfinder in which a face image included in the preview image is located may be obtained to obtain at least one position region, and the area occupied by the face image in the at least one position region may be obtained to obtain at least one image area, so that a weighted value of the face image is determined based on the at least one position region and the at least one image area. Face recognition is performed on the face image only when the weighted value of the face image is greater than or equal to a specified weighting threshold, so that the position and size of the recognized face are displayed on the mobile terminal; when the weighted value of the face image is less than the specified weighting threshold, face recognition is not performed on the face image, thereby avoiding interference of face recognition with image shooting.
Other embodiments of the invention will be readily apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed here. This application is intended to cover any variations, uses, or adaptations of the invention that follow its general principles and include such departures from the present disclosure as come within known or customary practice in the art. The specification and examples are to be considered exemplary only, with the true scope and spirit of the invention being indicated by the following claims.
It should be understood that the present invention is not limited to the precise constructions described above and shown in the accompanying drawings, and that various modifications and changes may be made without departing from its scope. The scope of the present invention is limited only by the appended claims.

Claims (9)

1. A face identification method, characterized in that the method comprises:
collecting an image in a view-finder to obtain a preview image;
obtaining the region in which a facial image included in the preview image is located within the view-finder, to obtain at least one position region;
obtaining the area occupied by the facial image in the at least one position region, to obtain at least one image area;
judging whether the at least one position region is located in the central area of the view-finder;
when the at least one position region is entirely located in the central area, or when a part of the at least one position region is located in the central area, executing a step of determining a weighted value of the facial image based on the at least one position region and the at least one image area;
when the weighted value of the facial image is greater than or equal to a specified weighting threshold, performing face recognition on the facial image.
2. The method according to claim 1, characterized in that determining the weighted value of the facial image based on the at least one position region and the at least one image area comprises:
obtaining a face weight corresponding to each of the at least one position region, to obtain at least one face weight;
determining the product of each of the at least one image area and the corresponding face weight, to obtain at least one region weighted value;
determining the sum of the at least one region weighted value, to obtain the weighted value of the facial image.
3. The method according to claim 1, characterized in that determining the weighted value of the facial image based on the at least one position region and the at least one image area comprises:
selecting the largest image area from the at least one image area, to obtain a target image area;
obtaining, from the at least one position region, the position region corresponding to the target image area, to obtain a target position region;
obtaining the face weight corresponding to the target position region, to obtain a target face weight;
determining the product of the target image area and the target face weight, to obtain the weighted value of the facial image.
4. The method according to claim 1, characterized in that after judging whether the at least one position region is located in the central area of the view-finder, the method further comprises:
when the at least one position region is entirely located in the edge region of the view-finder, determining the sum of the at least one image area to obtain a facial image area, the edge region being the region of the view-finder other than the central area;
determining the proportion of the facial image area in the area of the preview image, to obtain an area ratio;
when the area ratio is greater than or equal to a specified ratio threshold, performing face recognition on the facial image.
5. A face identification device, characterized in that the device comprises:
an acquisition module, configured to collect an image in a view-finder to obtain a preview image;
a first obtaining module, configured to obtain the region in which a facial image included in the preview image is located within the view-finder, to obtain at least one position region;
a second obtaining module, configured to obtain the area occupied by the facial image in the at least one position region, to obtain at least one image area;
a face recognition module, comprising:
a judging unit, configured to judge whether the at least one position region is located in the central area of the view-finder;
an execution unit, configured to, when the at least one position region is entirely located in the central area or a part of the at least one position region is located in the central area, execute a step of determining a weighted value of the facial image based on the at least one position region and the at least one image area;
a first determination unit, configured to determine the weighted value of the facial image based on the at least one position region and the at least one image area;
a first face recognition unit, configured to perform face recognition on the facial image when the weighted value of the facial image is greater than or equal to a specified weighting threshold.
6. The device according to claim 5, characterized in that the first determination unit comprises:
a first obtaining subunit, configured to obtain a face weight corresponding to each of the at least one position region, to obtain at least one face weight;
a first determination subunit, configured to determine the product of each of the at least one image area and the corresponding face weight, to obtain at least one region weighted value;
a second determination subunit, configured to determine the sum of the at least one region weighted value, to obtain the weighted value of the facial image.
7. The device according to claim 5, characterized in that the first determination unit comprises:
a selection subunit, configured to select the largest image area from the at least one image area, to obtain a target image area;
a second obtaining subunit, configured to obtain, from the at least one position region, the position region corresponding to the target image area, to obtain a target position region;
a third obtaining subunit, configured to obtain the face weight corresponding to the target position region, to obtain a target face weight;
a third determination subunit, configured to determine the product of the target image area and the target face weight, to obtain the weighted value of the facial image.
8. The device according to claim 5, characterized in that the face recognition module further comprises:
a second determination unit, configured to, when the at least one position region is entirely located in the edge region of the view-finder, determine the sum of the at least one image area to obtain a facial image area, the edge region being the region of the view-finder other than the central area;
a third determination unit, configured to determine the proportion of the facial image area in the area of the preview image, to obtain an area ratio;
a second face recognition unit, configured to perform face recognition on the facial image when the area ratio is greater than or equal to a specified ratio threshold.
9. A face identification device, characterized in that the device comprises:
a processor;
a memory for storing instructions executable by the processor;
wherein the processor is configured to:
collect an image in a view-finder to obtain a preview image;
obtain the region in which a facial image included in the preview image is located within the view-finder, to obtain at least one position region;
obtain the area occupied by the facial image in the at least one position region, to obtain at least one image area;
judge whether the at least one position region is located in the central area of the view-finder;
when the at least one position region is entirely located in the central area, or when a part of the at least one position region is located in the central area, execute a step of determining a weighted value of the facial image based on the at least one position region and the at least one image area;
when the weighted value of the facial image is greater than or equal to a specified weighting threshold, perform face recognition on the facial image.
CN201510257299.3A 2015-05-19 2015-05-19 Face identification method and device Active CN106295468B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510257299.3A CN106295468B (en) 2015-05-19 2015-05-19 Face identification method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510257299.3A CN106295468B (en) 2015-05-19 2015-05-19 Face identification method and device

Publications (2)

Publication Number Publication Date
CN106295468A (en) 2017-01-04
CN106295468B (en) 2019-06-14

Family

ID=57632637

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510257299.3A Active CN106295468B (en) 2015-05-19 2015-05-19 Face identification method and device

Country Status (1)

Country Link
CN (1) CN106295468B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108259739A (en) * 2017-12-29 2018-07-06 Vivo Mobile Communication Co., Ltd. Image shooting method and device, and mobile terminal
CN108875534B (en) * 2018-02-05 2023-02-28 Beijing Kuangshi Technology Co., Ltd. Face recognition method, device, system and computer storage medium
CN111263066B (en) * 2020-02-18 2021-07-20 Guangdong OPPO Mobile Telecommunications Corp., Ltd. Composition guiding method, composition guiding device, electronic equipment and storage medium
CN111401324A (en) * 2020-04-20 2020-07-10 Guangdong OPPO Mobile Telecommunications Corp., Ltd. Image quality evaluation method, device, storage medium and electronic equipment

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5127686B2 (en) * 2008-12-11 2013-01-23 キヤノン株式会社 Image processing apparatus, image processing method, and imaging apparatus

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2008156184A1 (en) * 2007-06-18 2008-12-24 Canon Kabushiki Kaisha Facial expression recognition apparatus and method, and image capturing apparatus
CN102799877A (en) * 2012-09-11 2012-11-28 Shanghai Zhongyuan Electronic Technology Engineering Co., Ltd. Method and system for screening face images

Also Published As

Publication number Publication date
CN106295468A (en) 2017-01-04

Similar Documents

Publication Title
CN106454336B (en) The method and device and terminal that detection terminal camera is blocked
CN105094967B (en) Process operation method and device
CN105631803B (en) The method and apparatus of filter processing
CN105100634B (en) Image capturing method and device
CN105069426B (en) Similar pictures judgment method and device
CN105758319B (en) The method and apparatus for measuring target object height by mobile terminal
CN105046260B (en) Image pre-processing method and device
CN105205494B (en) Similar pictures recognition methods and device
CN106295468B (en) Face identification method and device
CN105938412B (en) Volume icon display methods and device
CN105208284B (en) Shoot based reminding method and device
CN104216525B (en) Method and device for mode control of camera application
CN105376410B (en) Alarm clock setting method and device
CN106210495A (en) Image capturing method and device
CN106303198A (en) Photographing information acquisition methods and device
CN108989687A (en) camera focusing method and device
CN108307308A (en) Localization method, device and the storage medium of WLAN devices
CN105242837B (en) Five application page acquisition methods and terminal
CN108848303A (en) Shoot reminding method and device
CN108881634A (en) Terminal control method, device and computer readable storage medium
CN109451813A (en) Cell accessing method and device
CN105487774B (en) Image group technology and device
CN107544686B (en) Operation execution method and device
CN109803051A (en) Ambient brightness value-acquiring method and device
CN106775240B (en) Triggering method, device and the terminal of application program

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant