CN109034013A - Facial image recognition method, device and storage medium - Google Patents

Facial image recognition method, device and storage medium

Info

Publication number
CN109034013A
CN109034013A (application CN201810750438.XA; granted as CN109034013B)
Authority
CN
China
Prior art keywords
facial image
face
frame
multiframe
score value
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201810750438.XA
Other languages
Chinese (zh)
Other versions
CN109034013B (en)
Inventor
陈志博
石楷弘
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN201810750438.XA priority Critical patent/CN109034013B/en
Publication of CN109034013A publication Critical patent/CN109034013A/en
Application granted granted Critical
Publication of CN109034013B publication Critical patent/CN109034013B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G06V40/166 Detection; Localisation; Normalisation using acquisition arrangements
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • G PHYSICS
    • G07 CHECKING-DEVICES
    • G07C TIME OR ATTENDANCE REGISTERS; REGISTERING OR INDICATING THE WORKING OF MACHINES; GENERATING RANDOM NUMBERS; VOTING OR LOTTERY APPARATUS; ARRANGEMENTS, SYSTEMS OR APPARATUS FOR CHECKING NOT PROVIDED FOR ELSEWHERE
    • G07C9/00 Individual registration on entry or exit
    • G07C9/30 Individual registration on entry or exit not involving the use of a pass
    • G07C9/32 Individual registration on entry or exit not involving the use of a pass in combination with an identity check
    • G07C9/37 Individual registration on entry or exit not involving the use of a pass in combination with an identity check using biometric data, e.g. fingerprints, iris scans or voice recognition
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The embodiments of the invention disclose a facial image recognition method, device and storage medium. An embodiment of the invention can collect multiple frames of facial images and determine the face region to be identified according to the multiple frames; perform pose detection on the face in each frame according to the face region, obtaining a pose parameter; obtain the movement speed of the facial image between every two adjacent frames, and determine each frame's clarity according to the movement speed; screen out, from the multiple frames according to the pose parameter and clarity, the facial image satisfying a preset condition, obtaining a target facial image; and perform face recognition on the target facial image. The scheme can screen out, from the collected frames, a facial image that meets the requirements and run face recognition only on it, avoiding the recognition errors that arise from recognizing low-quality facial images and improving the accuracy of facial image recognition.

Description

Facial image recognition method, device and storage medium
Technical field
The present invention relates to the technical field of image processing, and in particular to a facial image recognition method, device and storage medium.
Background technique
With the growing demand for applications such as information security, access control and video surveillance, face recognition technology has attracted more and more attention and is widely used in many areas of social life. For example, face recognition technology can be applied to face-based access-control scenarios, providing automatic security-check services for residents, security personnel and others.
In the prior art, in the process of identifying a facial image, an access-control camera generally first collects the user's facial image, and the face in the facial image is then recognized directly; the recognition result determines whether the user has permission to open the gate. If the user has permission, the gate is opened; if not, the gate remains closed.
In researching and practicing the prior art, the inventors of the present invention found that the collected facial image may be blurred, the user may be in profile so that most of the face area is occluded (for example, the pose is strongly deflected), or only a very small face area may be collected. Recognizing the collected facial image directly is therefore likely to make the recognition result very inaccurate; low-quality images may even be misrecognized, which can cause security problems.
Summary of the invention
Embodiments of the present invention provide a facial image recognition method, device and storage medium, intended to improve the accuracy of facial image recognition.
To solve the above technical problem, embodiments of the present invention provide the following technical solutions:
A facial image recognition method, comprising:
collecting multiple frames of facial images, and determining a face region to be identified according to the multiple frames;
performing pose detection on the face in each frame according to the face region, obtaining a pose parameter;
obtaining the movement speed of the facial image between every two adjacent frames, and determining each frame's clarity according to the movement speed;
screening out, from the multiple frames according to the pose parameter and clarity, the facial image satisfying a preset condition, obtaining a target facial image;
performing face recognition on the target facial image.
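As a rough illustration of how these steps fit together, the following Python sketch wires per-frame pose and clarity scoring into the screening step. The function names, the multiplicative quality combination, and the 0.8 threshold are assumptions for illustration, not taken from the patent:

```python
def recognize_pipeline(frames, pose_score, clarity_score, face_match, threshold=0.8):
    """Screen multiple candidate frames, then recognize only the best one."""
    # per-frame pose and clarity scores, combined into one quality score
    quality = [pose_score(f) * clarity_score(f) for f in frames]
    # keep frames meeting the preset condition; fall back to the best frame
    passing = [i for i, q in enumerate(quality) if q > threshold]
    target = frames[passing[0]] if passing else frames[quality.index(max(quality))]
    # run recognition only on the selected target frame
    return face_match(target)
```

Running recognition on a single screened frame, rather than on every captured frame, is what lets the method avoid misrecognition on blurred or strongly deflected faces.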
A facial image recognition device, comprising:
a determination unit, configured to collect multiple frames of facial images and determine a face region to be identified according to the multiple frames;
a detection unit, configured to perform pose detection on the face in each frame according to the face region, obtaining a pose parameter;
an acquisition unit, configured to obtain the movement speed of the facial image between every two adjacent frames, and determine each frame's clarity according to the movement speed;
a screening unit, configured to screen out, from the multiple frames according to the pose parameter and clarity, the facial image satisfying a preset condition, obtaining a target facial image;
a recognition unit, configured to perform face recognition on the target facial image.
Optionally, the determination unit includes:
an overlap-obtaining subunit, configured to obtain the overlap of the face region between every two adjacent frames among the multiple frames, obtaining multiple overlap values;
a first screening subunit, configured to screen out, from the multiple frames according to the multiple overlap values, the face region with the highest overlap, obtaining the face region to be identified.
Optionally, the overlap-obtaining subunit is specifically configured to:
obtain the intersection area of the face region between every two adjacent frames among the multiple frames;
obtain the union area of the face region between every two adjacent frames among the multiple frames;
calculate the overlap of the face region between every two adjacent frames according to the intersection area and the union area, obtaining multiple overlap values.
Optionally, the detection unit includes:
a score-obtaining subunit, configured to obtain the area score of the face region in each frame;
a second screening subunit, configured to screen out, from the multiple frames, the facial images whose area score is greater than a preset value, obtaining screened facial images;
a detection subunit, configured to perform pose detection on the faces in the screened facial images, obtaining the pose parameter.
Optionally, the score-obtaining subunit is specifically configured to:
obtain the area of the face region in each frame;
obtain a first mapping relation between area and area score;
determine the area score of the face region in each frame according to the first mapping relation.
Optionally, the detection subunit includes:
a parameter-obtaining module, configured to obtain the deflection parameter of the face in the face region of each screened frame;
a mapping-relation-obtaining module, configured to obtain a second mapping relation between deflection parameter and pose score;
a determining module, configured to determine the pose score of the face region in each frame according to the second mapping relation, and to set the pose score as the pose parameter.
Optionally, the parameter-obtaining module is specifically configured to:
obtain a first projection parameter of the face in the face region of each screened frame projected onto a two-dimensional plane;
obtain a second projection parameter of a preset face model projected onto the two-dimensional plane;
obtain the deflection parameter of the face in the face region according to the first projection parameter and the second projection parameter.
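The patent derives the deflection parameter by comparing the 2D projection of the detected face with that of a preset frontal face model. As a heavily simplified, hypothetical stand-in for that comparison, one can estimate the yaw angle from how far the projected nose tip sits from the midpoint of the projected eye positions; the landmark-based heuristic below is an assumption, not the patent's actual method:

```python
import math

def yaw_from_projection(left_eye_x, right_eye_x, nose_x):
    """Rough yaw estimate in degrees from projected landmark x-coordinates.

    For a frontal face the nose tip projects midway between the eyes; as the
    head turns, it shifts toward one eye. This heuristic is illustrative only.
    """
    mid = (left_eye_x + right_eye_x) / 2.0
    half_span = (right_eye_x - left_eye_x) / 2.0
    ratio = max(-1.0, min(1.0, (nose_x - mid) / half_span))
    return math.degrees(math.asin(ratio))
```

A yaw of 0 degrees then corresponds to a frontal face, and larger magnitudes to a stronger profile view, which would map to a lower pose score.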
Optionally, the acquisition unit is specifically configured to:
obtain the distance and the time interval between every two adjacent frames among the multiple frames;
calculate the movement speed of each frame according to the distance and the time interval;
obtain a third mapping relation between movement speed and clarity score;
determine the clarity score of each frame according to the third mapping relation, and determine each frame's clarity according to the clarity score.
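A minimal sketch of this speed-to-clarity mapping, assuming the face-region centers and the capture interval are known. The exponential mapping and the constant k are illustrative assumptions; the patent only requires a preset third mapping relation in which faster motion yields a lower clarity score:

```python
import math

def movement_speed(center_a, center_b, dt):
    """Displacement of the face-region center between two adjacent frames,
    in pixels, divided by the capture interval dt in seconds."""
    dx = center_b[0] - center_a[0]
    dy = center_b[1] - center_a[1]
    return (dx * dx + dy * dy) ** 0.5 / dt

def clarity_score(speed, k=0.05):
    """Map movement speed to a clarity score in (0, 1]: faster motion
    implies more motion blur, hence a lower score."""
    return math.exp(-k * speed)
```

A stationary face scores 1.0, and the score decays smoothly as the face moves faster between frames.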
Optionally, the screening unit includes:
a computation subunit, configured to calculate the face quality score of each frame according to the pose parameter and clarity;
a third screening subunit, configured to screen out, from the multiple frames, the facial image whose face quality score is greater than a preset threshold, obtaining the target facial image.
Optionally, the computation subunit is specifically configured to:
determine the area score, pose score and clarity score of each frame according to the pose parameter and clarity;
set a corresponding weight for each of the area score, pose score and clarity score;
calculate each frame's face quality score according to the area score, pose score, clarity score and their corresponding weights.
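The weighted combination can be sketched as follows; the weight values are placeholders, since the patent leaves them to be set as needed:

```python
def face_quality(area_score, pose_score, clarity_score, weights=(0.3, 0.4, 0.3)):
    """Weighted sum of the three per-frame scores; each weight reflects how
    much that factor should influence the overall face quality score."""
    wa, wp, wc = weights
    return wa * area_score + wp * pose_score + wc * clarity_score
```

With all three sub-scores normalized to [0, 1] and weights summing to 1, the face quality score also stays in [0, 1], which makes a single preset threshold meaningful across frames.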
Optionally, the screening unit further includes:
a fourth screening subunit, configured, when no face quality score is greater than the preset threshold, to screen out from the multiple frames the facial image with the largest face quality score, obtaining the target facial image.
Optionally, the third screening subunit is specifically configured to:
screen out, from the face quality scores, those greater than the preset threshold, obtaining candidate face quality scores;
when there are multiple candidate face quality scores, select one of them according to a preset algorithm, obtaining a target face quality score;
determine, from the multiple frames, the facial image corresponding to the target face quality score, obtaining the target facial image.
Optionally, the third screening subunit is specifically configured to:
take the first frame among the multiple frames as the current facial image;
compare the face quality score of the current facial image with the preset threshold;
if the face quality score of the current facial image is less than the preset threshold, take the next frame among the multiple frames as the current facial image and return to the comparing step, until the face quality score of the current facial image is greater than the preset threshold, obtaining the target facial image.
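This sequential variant can be sketched as a simple scan in capture order; this is a non-authoritative illustration, and the fallback when no frame passes is left to the caller (the patent handles it with a separate screening subunit that picks the highest score):

```python
def first_passing_frame(frames, quality_scores, threshold):
    """Return the first frame, in capture order, whose face quality score
    exceeds the threshold; None signals that no frame passed."""
    for frame, score in zip(frames, quality_scores):
        if score > threshold:
            return frame
    return None  # caller may fall back to the highest-scoring frame
```

Scanning in capture order lets the device stop early and recognize the first acceptable frame instead of scoring the whole sequence first.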
A storage medium storing a plurality of instructions, the instructions being suitable for being loaded by a processor to execute the steps of any facial image recognition method provided by the embodiments of the present invention.
Embodiments of the present invention can collect multiple frames of facial images and determine the face region to be identified according to the multiple frames; then perform pose detection on the face in each frame according to the face region, obtaining a pose parameter; obtain the movement speed of the facial image between every two adjacent frames, and determine each frame's clarity according to the movement speed. At this point, the facial image satisfying a preset condition can be screened out from the multiple frames according to the pose parameter and clarity, obtaining a target facial image, for example the image with the best face quality, and face recognition is performed on that target facial image. The scheme can screen out, from the collected frames, the facial image of the highest quality that meets the condition and recognize only that image, avoiding the recognition errors that arise from recognizing low-quality facial images and improving the accuracy of facial image recognition.
Brief description of the drawings
To describe the technical solutions in the embodiments of the present invention more clearly, the accompanying drawings needed for describing the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention, and those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic diagram of a scenario of the facial image recognition system provided in an embodiment of the present invention;
Fig. 2 is a schematic flowchart of the facial image recognition method provided in an embodiment of the present invention;
Fig. 3 is a schematic diagram of the first mapping relation between area and area score provided in an embodiment of the present invention;
Fig. 4 is a schematic diagram of the second mapping relation between deflection parameter and pose score provided in an embodiment of the present invention;
Fig. 5 is a schematic diagram of the third mapping relation between movement speed and clarity score provided in an embodiment of the present invention;
Fig. 6 is another schematic flowchart of the facial image recognition method provided in an embodiment of the present invention;
Fig. 7 is a schematic diagram of determining the face region to be identified provided in an embodiment of the present invention;
Fig. 8 is a schematic diagram of determining the area score of a facial image provided in an embodiment of the present invention;
Fig. 9 is a schematic diagram of determining the pose score of a facial image provided in an embodiment of the present invention;
Fig. 10 is a schematic diagram of determining the clarity score of a facial image provided in an embodiment of the present invention;
Fig. 11 is a schematic structural diagram of the facial image recognition device provided in an embodiment of the present invention;
Fig. 12 is another schematic structural diagram of the facial image recognition device provided in an embodiment of the present invention;
Fig. 13 is another schematic structural diagram of the facial image recognition device provided in an embodiment of the present invention;
Fig. 14 is another schematic structural diagram of the facial image recognition device provided in an embodiment of the present invention;
Fig. 15 is a schematic structural diagram of the network device provided in an embodiment of the present invention.
Detailed description of the embodiments
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings in the embodiments of the present invention. Obviously, the described embodiments are only some, rather than all, of the embodiments of the present invention. All other embodiments obtained by those skilled in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
Embodiments of the present invention provide a facial image recognition method, device and storage medium.
Referring to Fig. 1, Fig. 1 is a schematic diagram of a scenario of the facial image recognition system provided by an embodiment of the present invention. The system may include a facial image recognition device, which may be integrated in a network device such as a terminal or a server. For example, the network device may receive multiple frames of facial images sent by an access-control system or another terminal. The access control may be installed at places such as companies, airports, shopping malls, schools, residential communities or residents' doors, and its form can be set flexibly as needed: when a user enters and/or exits, it may collect facial images in real time or at preset intervals through a preset image collector (such as a camera), and send the collected images to the network device. The network device can determine the face region to be identified according to the multiple frames; for example, the one or more face regions contained in the facial images can be detected, and the face region with the highest overlap can be determined as the face region to be identified, of which there may be only one. Then, pose detection is performed on the face in each frame according to the face region, obtaining a pose parameter, which may include the area score and pose score of the face region in each frame, where the area score is related to the area of the face region and the pose score is related to the deflection parameter of the face in the region; for example, the area score of the face region in each frame can be obtained, the facial images whose area score is greater than a preset value can be screened out, and pose detection can be performed on the faces in the screened images.
The movement speed of the facial image between every two adjacent frames is also obtained, and each frame's clarity is determined according to the movement speed; the clarity is related to a clarity score, which in turn is related to the movement speed of the face region (that is, to the movement speed of the facial image). At this point, the facial image satisfying a preset condition can be screened out from the multiple frames according to the pose parameter and clarity, obtaining a target facial image, which is an image of higher quality; for example, the facial image whose face quality score is greater than a preset threshold can be screened out from the multiple frames. Finally, face recognition can be performed on the target facial image; when the recognized face matches a pre-stored face, the user has permission to open the gate, and the network device can send a control instruction to the access control, so that the gate opens and the user is let through.
It should be noted that the scenario diagram of the facial image recognition system shown in Fig. 1 is only an example. The system and scenario described in the embodiments of the present invention are intended to explain the technical solutions of the embodiments more clearly and do not constitute a limitation on them. Those of ordinary skill in the art will understand that, as facial image recognition systems evolve and new business scenarios appear, the technical solutions provided by the embodiments of the present invention remain equally applicable to similar technical problems.
Each aspect is described in detail below.
In this embodiment, the description is given from the perspective of the facial image recognition device, which may be integrated in a network device such as a server or a terminal.
A facial image recognition method, comprising: collecting multiple frames of facial images, and determining a face region to be identified according to the multiple frames; performing pose detection on the face in each frame according to the face region, obtaining a pose parameter; obtaining the movement speed of the facial image between every two adjacent frames, and determining each frame's clarity according to the movement speed; screening out, from the multiple frames according to the pose parameter and clarity, the facial image satisfying a preset condition, obtaining a target facial image; and performing face recognition on the target facial image.
Referring to Fig. 2, Fig. 2 is a schematic flowchart of the facial image recognition method provided by an embodiment of the present invention. The method may include the following steps.
In step S101, multiple frames of facial images are collected, and the face region to be identified is determined according to the multiple frames.
The facial image recognition device may continuously collect multiple frames of the user's facial image through a preset camera, video camera or webcam, or it may receive multiple frames of facial images sent by a terminal or a server.
The multiple frames may be images continuously collected at preset intervals within a preset period, for example 30 frames obtained in 1 minute by collecting one frame every 2 seconds. A facial image may contain one or more faces and may also contain other objects; the specific content is not limited here.
To identify the required facial image, the facial image recognition device can track the faces in the multiple frames to determine the face region to be identified.
In some embodiments, the step of determining the face region to be identified according to the multiple frames may include:
obtaining the overlap of the face region between every two adjacent frames, obtaining multiple overlap values; and screening out, from the multiple frames according to the multiple overlap values, the face region with the highest overlap, obtaining the face region to be identified.
The facial image recognition device can use a face tracking algorithm to perform face detection on each frame and to calculate the overlap of the face region between every pair of adjacent frames, one overlap value per pair, so that the pairs together yield multiple overlap values. The overlap may be the overlapping area between the face regions of the same person in two adjacent frames. Based on these values, the pair of adjacent frames with the highest overlap can be screened out, and the face region in that pair is the face region to be identified, which may also be called the tracked face region. The shape of the face region can be set flexibly as needed; for example, it can be a rectangular, square or circular region.
For example, when a facial image contains a first user's face and a second user's face, the overlap values of the two users' face regions can be calculated separately; if the overlap of the first user's face region is greater than that of the second user's, the first user's face region is determined as the face region to be identified.
In some embodiments, the step of obtaining the overlap of the face region between every two adjacent frames, obtaining multiple overlap values, may include:
obtaining the intersection area of the face region between every two adjacent frames;
obtaining the union area of the face region between every two adjacent frames;
calculating the overlap of the face region between every two adjacent frames according to the intersection area and the union area, obtaining multiple overlap values.
The overlap of the face region between two adjacent frames can be calculated by the following formula (1):
IoU(A, B) = |A ∩ B| / |A ∪ B|    (1)
In formula (1), IoU(A, B) denotes the overlap of the face region between adjacent frame A and frame B; |A ∩ B| is the intersection area of the face regions, that is, the area where the face regions of the same person in frame A and frame B coincide; |A ∪ B| is the union area of the face regions, that is, the area of the union of the face regions of the same person in frame A and frame B.
For every pair of adjacent frames among the multiple frames, the overlap of the corresponding face region can be calculated by formula (1). When a facial image contains multiple faces, the overlap of each person's face region can be calculated separately for every pair of adjacent frames.
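Formula (1) is the standard intersection-over-union measure. A direct sketch for axis-aligned face boxes follows; the (x1, y1, x2, y2) box representation is an assumption about how the regions are stored:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned face boxes
    (x1, y1, x2, y2), per formula (1): IoU = |A n B| / |A u B|."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # overlap width and height, clamped at zero for disjoint boxes
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union else 0.0
```

The value ranges from 0.0 (disjoint regions) to 1.0 (identical regions), so the pair of adjacent frames with the highest value gives the most stable track to identify.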
In step S102, pose detection is performed on the face in each frame according to the face region, obtaining a pose parameter.
In some embodiments, the step of performing pose detection on the face in each frame according to the face region, obtaining a pose parameter, may include:
obtaining the area score of the face region in each frame; screening out, from the multiple frames, the facial images whose area score is greater than a preset value, obtaining screened facial images; and performing pose detection on the faces in the screened facial images, obtaining the pose parameter.
To improve the accuracy of pose detection on the faces, the multiple frames can first be screened preliminarily; for example, the facial images whose face-region area satisfies a condition can be kept. Specifically, the area score of the face region in each frame can be obtained; this score is related to, and directly proportional to, the area (i.e. the size) of the face region. The facial images whose area score is greater than a preset value (which can be set flexibly as needed) are then screened out from the multiple frames, keeping the frames with larger face regions and yielding one or more screened facial images. Pose detection can then be performed on the faces in the screened frames to obtain the pose parameter, which may include a pose score related to the deflection angle of the face relative to a frontal face.
In some embodiments, the step of obtaining the area score of the face region in each frame of the facial images may include:
obtaining the area of the face region in each frame of the facial images;
obtaining a first mapping relationship between area and area score; and
determining the area score of the face region in each frame according to the first mapping relationship.
The facial image recognition apparatus may preset the first mapping relationship between the area of the face region and the area score. The first mapping relationship may be stored locally, for example in the form of a list or text, or stored on a server, and may be evaluated using a sigmoid function, as shown in formula (2):
where x indicates the area of the face region and f(x) indicates the area score.
For example, as shown in Fig. 3, the larger the area of the face region, the larger the area score, and conversely the smaller the area, the smaller the area score; once the face-region area grows beyond a certain extent, the area score no longer increases with it. The value range of the area score f(x) can be flexibly set according to actual needs and is not limited here; for example, it may be set to the range 0 to 1.
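The sigmoid-shaped mapping from face-region area to area score could be sketched as follows; since the patent does not fix the parameters of formula (2), the normalisation constant `scale` below is a hypothetical choice:

```python
import math

def area_score(area: float, scale: float = 5000.0) -> float:
    """Sigmoid-shaped area score in (0, 1): grows with the face-region
    area and saturates once the area is large enough (cf. Fig. 3).
    `scale` is a hypothetical normalisation constant."""
    return 1.0 / (1.0 + math.exp(-(area / scale - 1.0)))
```

A lookup table stored locally or on a server, as the embodiment describes, would serve equally well; the mapping only has to be monotone and saturating.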
After the face region to be recognized is determined based on the face tracking result, the facial image recognition apparatus may calculate the area (i.e. the size) of the face region in each frame. For example, when the face region is rectangular, its area can be calculated from its length and width; when the face region is circular, its area can be calculated from its radius.
The facial image recognition apparatus may then obtain the first mapping relationship between area and area score from the local store or the server, and, for the area of the face region in each frame, look up the corresponding area score in the first mapping relationship. For example, if the area of the face region in frame A is A1, the area score of the face region in frame A is a1; if the area of the face region in frame B is B1, the area score of the face region in frame B is b1.
In some embodiments, the step of performing pose detection on the faces in the screened facial images to obtain the pose parameter may include:
obtaining a deflection parameter of the face in the face region of each screened frame;
obtaining a second mapping relationship between deflection parameter and pose score; and
determining the pose score of the face region in each frame according to the second mapping relationship, and setting the pose score as the pose parameter.
The facial image recognition apparatus may preset the second mapping relationship between deflection parameter and pose score. The second mapping relationship may be stored locally, for example in the form of a list or text, or stored on a server, and may be evaluated using a standard normal distribution, as shown in formula (3):
where x indicates the deflection parameter, that is, the deflection angle of the face relative to a frontal face (the deflection angle of a frontal face may be set to 0), and g(x) indicates the pose score.
For example, as shown in Fig. 4, within a first interval [a, b], the larger the deflection parameter indicating that the face deflects to the right, the smaller the pose score g(x); within a second interval [b, c], the larger the deflection parameter indicating that the face deflects to the left, the smaller the pose score g(x). When g(x) reaches its maximum, the face is frontal, i.e. the facial image was captured while the user faced the camera directly. The value range of g(x) can be flexibly set according to actual needs and is not limited here; for example, it may be set to the range 0 to 1.
After the face region to be recognized is determined based on the face tracking result and the facial images whose area score is greater than the preset value are screened out, the facial image recognition apparatus may obtain the deflection parameter of the face in the face region of each screened frame. The deflection parameter may be the deflection angles of the face about the x, y and z axes, or may include a pitch angle, a yaw angle and a roll angle. The apparatus then obtains the second mapping relationship between deflection parameter and pose score from the local store or the server and, for the deflection parameter of the face in each frame, looks up the corresponding pose score in the second mapping relationship, giving the pose score of the face region in each frame. For example, if the deflection parameter of the face in frame A is θ1, the pose score of the face region in frame A is a2; if the deflection parameter of the face in frame B is θ2, the pose score of the face region in frame B is b2.
In some embodiments, the step of obtaining the deflection parameter of the face in the face region of each screened frame may include:
obtaining a first projection parameter of the face in the face region of each screened frame projected onto a two-dimensional plane;
obtaining a second projection parameter of a default face model projected onto the two-dimensional plane; and
obtaining the deflection parameter of the face in the face region according to the first projection parameter and the second projection parameter.
Specifically, the facial image recognition apparatus may obtain the first projection parameter of the face in each frame's face region projected onto a two-dimensional plane, which may be the coordinate points of the face in that plane, and obtain the second projection parameter of a default face model projected onto the same plane, which may be the coordinate points of the default face model in that plane. The default face model may be a preset three-dimensional average face model: three-dimensional face models of users with different expressions and identities can be generated by changing the positions of its three-dimensional coordinate points. That is, any three-dimensional face model can be represented by adding offsets to the three-dimensional coordinate points of the three-dimensional average face model, and the three-dimensional face model of a user's face can be expressed by formula (4):
M = M̄ + A_id P_id + A_exp P_exp (4)
where M indicates the three-dimensional face model of the face; M̄ indicates the default face model (i.e. the three-dimensional average face model), with n the number of three-dimensional coordinate points the average model contains; A_id P_id indicates the identity offset term, with A_id an m_id-dimensional identity basis and P_id the identity parameters; and A_exp P_exp indicates the expression offset term, with A_exp an m_exp-dimensional expression basis and P_exp the expression parameters.
After obtaining the first projection parameter of the face in the two-dimensional plane and the second projection parameter of the default face model in the two-dimensional plane, the facial image recognition apparatus can fit the model to the face by minimizing the distance between the two, as shown in formulas (5) and (6):
arg min ||X3d - X2d|| (5)
X3d = p(M_keypoint) (6)
where M_keypoint indicates the keypoints of the default face model, p is the projection function, X3d is the projection of the model keypoints onto the two-dimensional plane (i.e. the second projection parameter), and X2d is the first projection parameter, i.e. the projection of the face in the two-dimensional plane.
The projection may be an orthographic projection, a perspective projection, or the like. Taking three-dimensional modeling with orthographic projection as an example, the projection may be carried out by formula (7):
X3d = S × R × M_keypoint + T (7)
where S is a scaling coefficient, R is a rotation coefficient, and T is a two-dimensional translation component. By iteratively solving formula (5) through formulas (4), (6) and (7), the identity parameters P_id, the expression parameters P_exp, the scaling coefficient S, the rotation coefficient R and the translation component T can be obtained; the three-dimensional coordinate points are then obtained from the parameters [P_id, P_exp, S, R, T], and a three-dimensional face model is generated from the three-dimensional coordinate points. That is, if a projection of a three-dimensional face model at some angle onto the two-dimensional plane matches the face in the two-dimensional image, that three-dimensional face model is the one to be obtained.
After the three-dimensional model is obtained, the deflection parameter of the face in the face region of the facial image can be derived from it; the deflection parameter is the rotation coefficient R in formula (7).
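Turning the rotation coefficient R of formula (7) into per-axis deflection angles is a standard rotation-matrix decomposition; the sketch below assumes a Z-Y-X (roll-yaw-pitch) rotation order and no gimbal lock, neither of which the patent specifies:

```python
import math

def euler_from_rotation(R):
    """Recover (pitch, yaw, roll) in degrees from a 3x3 rotation
    matrix R, assuming R = Rz(roll) @ Ry(yaw) @ Rx(pitch) and that
    the yaw stays away from +/-90 degrees (no gimbal lock)."""
    yaw = math.asin(-R[2][0])
    pitch = math.atan2(R[2][1], R[2][2])
    roll = math.atan2(R[1][0], R[0][0])
    return tuple(math.degrees(a) for a in (pitch, yaw, roll))
```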
Optionally, the facial image recognition apparatus may instead use a head pose estimation algorithm to calculate the deflection parameter of the face in each frame's face region. For example, the feature points of the facial features (eyes, nose, mouth, and the like) in the face region of each frame may be obtained, the proportional relationships between the facial features calculated from the feature points, and the deflection parameter determined from those proportions. Alternatively, the three-dimensional average face model may be rotated by some angle until the two-dimensional projections of the three-dimensional feature points of its facial features (eyes, eyebrows, nose, mouth, and the like) coincide, as far as possible, with the two-dimensional feature points of the facial features in the facial image; the rotation angle of the three-dimensional average face model is then the deflection parameter of the face.
For example, the positions of feature points such as the left outer eye corner 1, the right outer eye corner 2, the nose tip 3, the left mouth corner 4 and the right mouth corner 5 in the face region may be obtained; midpoint 6 between the left outer eye corner 1 and the right outer eye corner 2, and midpoint 8 between the left mouth corner 4 and the right mouth corner 5, are then computed, and connecting the two midpoints gives segment 68 (the line between midpoint 6 and midpoint 8; segment 67, segment 78 and segment 37 below are abbreviated likewise). The foot 7 of the perpendicular from the nose tip 3 onto segment 68 is obtained; the pitch angle is determined by calculating the ratio of segment 67 to segment 78, the yaw angle by calculating the ratio of segment 37 to segment 68, and the roll angle by calculating the angle between segment 68 and the vertical direction. The pitch, yaw and roll angles form the deflection parameter.
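The landmark-ratio estimate above can be sketched as follows; how the two ratios are converted into actual pitch and yaw angles is left open by the description, so this sketch returns the raw ratios and only the roll as an angle:

```python
import math

def head_pose_from_landmarks(eye_l, eye_r, nose, mouth_l, mouth_r):
    """Sketch of the landmark-ratio estimate. Points are (x, y) image
    coordinates: midpoint 6 between the eye corners, midpoint 8 between
    the mouth corners, point 7 the foot of the perpendicular from the
    nose tip 3 onto segment 6-8."""
    m6 = ((eye_l[0] + eye_r[0]) / 2, (eye_l[1] + eye_r[1]) / 2)
    m8 = ((mouth_l[0] + mouth_r[0]) / 2, (mouth_l[1] + mouth_r[1]) / 2)
    vx, vy = m8[0] - m6[0], m8[1] - m6[1]
    seg68 = math.hypot(vx, vy)
    # project the nose tip onto segment 6-8 to get point 7
    t = ((nose[0] - m6[0]) * vx + (nose[1] - m6[1]) * vy) / (seg68 ** 2)
    p7 = (m6[0] + t * vx, m6[1] + t * vy)
    seg67 = math.hypot(p7[0] - m6[0], p7[1] - m6[1])
    seg78 = math.hypot(m8[0] - p7[0], m8[1] - p7[1])
    seg37 = math.hypot(nose[0] - p7[0], nose[1] - p7[1])
    pitch_ratio = seg67 / seg78              # ratio of segments 67 and 78
    yaw_ratio = seg37 / seg68                # ratio of segments 37 and 68
    roll = math.degrees(math.atan2(vx, vy))  # tilt of 6-8 from vertical
    return pitch_ratio, yaw_ratio, roll
```

For a symmetric frontal face the yaw ratio is 0, the pitch ratio 1 and the roll 0 degrees.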
It should be noted that deflection parameter can also be logical other than it can obtain deflection parameter through the above way Other modes acquisition is crossed, specific acquisition modes are it is not limited here.
After obtaining deflection parameter, pre-set deflection parameter and posture score value can be obtained from local or server etc. Between the second mapping relations, and according to deflection parameter corresponding posture score value is inquired from the second mapping relations, obtain every frame Human face region in facial image, which is attitude parameter.
In step S103, the movement speed of the facial image between every two adjacent frames of the multiple frames is obtained, and the clarity of each frame is determined according to the movement speed.
For example, when the multiple frames include a first frame, a second frame and a third frame, the first frame may be taken as the origin; the movement speed of the second frame is then determined from the distance and the time interval between the first and second frames, and the movement speed of the third frame from the distance and the time interval between the second and third frames, or from the distance and the time interval between the first and third frames, and so on. The movement speed may be that of the facial image or that of the face region: the greater the movement speed, the lower the clarity, and the smaller the movement speed, the higher the clarity.
In some embodiments, the step of obtaining the movement speed of the facial image between every two adjacent frames and determining the clarity of each frame according to the movement speed may include:
obtaining the distance and the time interval between every two adjacent frames of the multiple frames;
calculating the movement speed of each frame according to the distance and the time interval;
obtaining a third mapping relationship between movement speed and clarity score; and
determining the clarity score of each frame according to the third mapping relationship, and determining the clarity of each frame according to its clarity score.
The facial image recognition apparatus may preset the third mapping relationship between movement speed and clarity score. The third mapping relationship may be stored locally, for example in the form of a list or text, or stored on a server, and may be evaluated using an inverse proportion function, as shown in formula (8):
where y indicates the clarity score of the face region, k is a constant whose value can be flexibly set according to actual needs, and t indicates the movement speed of the facial image, i.e. the movement speed of the face region.
For example, as shown in Fig. 5, the greater the movement speed of the face region (i.e. of the facial image), the smaller the clarity score of the face region, and conversely, the smaller the movement speed, the greater the clarity score. The value range of the clarity score can be flexibly set according to actual needs, and the specific content is not limited here. For example, when the movement speed takes value B, the clarity score obtained is C.
It should be noted that, when the time interval between every two adjacent frames is constant, a mapping relationship may be established directly between the distance between adjacent frames and the clarity score: the greater the distance, the smaller the clarity score, and conversely, the smaller the distance, the greater the clarity score.
After the face region to be recognized is determined based on the face tracking result, the facial image recognition apparatus may obtain the distance and the time interval between every two adjacent frames of the multiple frames, calculate the movement speed of each frame from the distance and the time interval, obtain the third mapping relationship between movement speed and clarity score from the local store or the server, and look up the clarity score corresponding to each frame's movement speed in the third mapping relationship, giving the clarity score of the face region in each frame. For example, if the movement speed of the face in frame A is v1, the clarity score of the face region in frame A is a3; if the movement speed of the face in frame B is v2, the clarity score of the face region in frame B is b3. The clarity of each frame can then be determined from its clarity score: the greater the clarity score, the higher the clarity, and the smaller the clarity score, the lower the clarity. For instance, the clarity corresponding to each frame's clarity score may be determined from a correspondence between clarity scores and clarity.
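The speed and clarity-score computation of formula (8) could be sketched as follows; the constant k and the treatment of the first frame (which has no preceding frame and hence no speed) are hypothetical choices:

```python
def clarity_score(positions, timestamps, k=10.0):
    """For consecutive face-region positions (x, y) and their capture
    times, compute per-frame movement speed and the inverse-proportion
    clarity score y = k / t of formula (8). The first frame, and any
    frame with zero speed, is capped at the maximal score k."""
    scores = [k]  # the first frame has no motion to measure
    for i in range(1, len(positions)):
        dx = positions[i][0] - positions[i - 1][0]
        dy = positions[i][1] - positions[i - 1][1]
        dist = (dx * dx + dy * dy) ** 0.5
        speed = dist / (timestamps[i] - timestamps[i - 1])
        scores.append(k / speed if speed > 0 else k)
    return scores
```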
In step S104, the facial images meeting a preset condition are screened out of the multiple frames according to the pose parameter and the clarity, to obtain a target facial image.
In step S105, face recognition is performed on the target facial image.
The preset condition can be flexibly set according to actual needs, so as to screen out facial images of higher quality.
In some embodiments, the step of screening the facial images meeting the preset condition out of the multiple frames according to the pose parameter and the clarity to obtain the target facial image may include:
calculating a face quality score of each frame according to the pose parameter and the clarity; and screening out, from the multiple frames, the facial images whose face quality score is greater than a preset threshold, to obtain the target facial image.
In some embodiments, the step of calculating the face quality score of each frame according to the pose parameter and the clarity may include:
determining the area score, pose score and clarity score of each frame according to the pose parameter and the clarity;
setting corresponding weights for the area score, the pose score and the clarity score; and
calculating the face quality score of each frame from the area score, pose score, clarity score and their corresponding weights.
Since the pose parameter is related to the area score and the pose score, and the clarity is related to the clarity score, the facial image recognition apparatus can obtain the area score, pose score and clarity score of the face region in each frame from the pose parameter and the clarity. Once these three scores have been obtained for each frame, a weighted sum can be taken to obtain the face quality score of each frame, that is, the face quality score of the face region in each frame.
For example, the face quality score may be calculated by formula (9):
S = f × a + g × b + y × c (9)
where S indicates the face quality score, f indicates the area score and a its weight, g indicates the pose score and b its weight, and y indicates the clarity score and c its weight. The weights corresponding to the area score, pose score and clarity score can be flexibly set according to actual needs, and the face quality score of each frame can be calculated using formula (9).
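Formula (9) is a plain weighted sum; the sketch below uses illustrative weights only, since the patent leaves a, b and c to be set according to actual needs:

```python
def face_quality(f, g, y, a=0.3, b=0.3, c=0.4):
    """Formula (9): weighted sum of area score f, pose score g and
    clarity score y. The default weights are illustrative values."""
    return f * a + g * b + y * c
```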
It should be noted that, for the area score, pose score, clarity score or face quality score, other functions or even a neural network may also be used for the calculation, so that a high-quality facial image can subsequently be screened out for face recognition.
To screen out facial images of higher quality, the preset condition may be that the face quality score is greater than a preset threshold. For example, the facial image recognition apparatus may screen out of the multiple frames the facial images whose face quality score is greater than the preset threshold, obtaining one or more target facial images. When one target facial image is obtained, face recognition can be performed on it; when multiple target facial images are obtained, one of them can be selected for face recognition.
In some embodiments, the step of screening out of the multiple frames the facial images whose face quality score is greater than the preset threshold to obtain the target facial image may include:
screening the face quality scores greater than the preset threshold out of the face quality scores, to obtain candidate face quality scores;
when there are multiple candidate face quality scores, selecting one of them according to a preset algorithm, to obtain a target face quality score; and
determining, from the multiple frames, the facial image corresponding to the target face quality score, to obtain the target facial image.
The facial image recognition apparatus may, after continuously acquiring multiple frames at preset intervals within a preset time period, calculate the face quality score of each frame to obtain a set of face quality scores, compare each score in the set with the preset threshold, and screen out of the set the face quality scores greater than the preset threshold, obtaining the candidate face quality scores. The preset threshold can be flexibly set according to actual needs, and there may be one or more candidate face quality scores.
When there is one candidate face quality score, face recognition can be performed on the corresponding facial image (i.e. the target facial image). When there are multiple candidate face quality scores, one of them can be selected according to a preset algorithm, to obtain the target face quality score; the preset algorithm can be flexibly set according to actual needs. For example, one candidate may be selected at random from the multiple candidate face quality scores, or the candidate with the highest score may be selected, or the candidate corresponding to the earliest facial image frame may be selected, and so on. The facial image corresponding to the target face quality score is then determined from the multiple frames, the target facial image is obtained, and face recognition is performed on it.
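The candidate screening and preset-algorithm selection could be sketched as follows, covering two of the strategies mentioned (highest score, earliest frame); the function name and parameters are illustrative:

```python
def pick_target(scores, threshold, strategy="max"):
    """Select a target frame index from per-frame face quality scores.
    Candidates are the scores above the threshold; among them, pick
    either the highest score ('max') or the earliest frame ('first').
    Returns None when no score exceeds the threshold."""
    candidates = [(i, s) for i, s in enumerate(scores) if s > threshold]
    if not candidates:
        return None
    if strategy == "first":
        return candidates[0][0]
    return max(candidates, key=lambda c: c[1])[0]
```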
In some embodiments, the step of screening out of the multiple frames the facial images whose face quality score is greater than the preset threshold to obtain the target facial image may include:
taking the first frame of the multiple frames as the current facial image;
comparing the face quality score of the current facial image with the preset threshold; and
if the face quality score of the current facial image is less than the preset threshold, taking the second frame of the multiple frames as the current facial image and returning to the step of comparing the face quality score of the current facial image with the preset threshold, until the face quality score of the current facial image is greater than the preset threshold, to obtain the target facial image.
The facial image recognition apparatus may calculate face quality scores while acquiring facial images within the preset time period; for example, after at least two frames have been acquired, their face quality scores may be calculated first. The first frame among those for which a face quality score has been calculated is taken as the current facial image, its face quality score is compared with the preset threshold, and it is judged whether the score exceeds the threshold. If it does, face recognition is performed on the current facial image. If it does not, it is judged whether the detection time has reached the preset time period, which can be flexibly set according to actual needs; if so, the facial image recognition process can be ended to avoid unbounded detection; if not, the second frame is taken as the current facial image and the comparing step is repeated, until the face quality score of the current facial image is greater than the preset threshold, giving the target facial image, on which face recognition is then performed.
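The frame-by-frame variant, which compares scores while frames are still arriving and gives up after the preset time period, could be sketched as follows (a frame budget stands in for the time limit; names are illustrative):

```python
def first_good_frame(frames, score_fn, threshold, max_frames):
    """Streaming variant: score frames as they arrive and stop at the
    first one whose quality exceeds the threshold; give up after
    max_frames to avoid unbounded detection. Returns the frame index
    or None."""
    for i, frame in enumerate(frames):
        if i >= max_frames:
            return None
        if score_fn(frame) > threshold:
            return i
    return None
```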
In some embodiments, after the step of calculating the face quality score of each frame according to the pose parameter and the clarity, the facial image recognition method may further include:
when no face quality score greater than the preset threshold exists, screening out of the multiple frames the facial image corresponding to the largest face quality score, to obtain the target facial image.
If, among the multiple frames continuously acquired at preset intervals within the preset time period, no frame's face quality score is greater than the preset threshold, then, so that the acquired facial images can still be recognized, the facial image corresponding to the largest face quality score may be screened out of the multiple frames to obtain the target facial image, and face recognition performed on it.
Alternatively, if the facial image recognition apparatus calculates face quality scores while acquiring images within the preset time period and the detection time reaches the preset time period without obtaining a facial image whose face quality score exceeds the preset value, the facial image corresponding to the largest face quality score may likewise be screened out of the multiple frames to obtain the target facial image, and face recognition performed on it. This ensures that face recognition is performed on the best available facial image, guaranteeing the stability and reliability of the face recognition result.
As can be seen from the above, the embodiment of the present invention can acquire multiple frames of facial images and determine the face region to be recognized from them. Pose detection is then performed on the face in each frame according to the face region to obtain a pose parameter, which is related to the area score and the pose score, where the area score is related to the area of the face region and the pose score to the deflection parameter of the face; the movement speed of the facial image between every two adjacent frames is obtained, and the clarity of each frame, which is related to the clarity score, is determined from the movement speed. The facial images meeting a preset condition can then be screened out of the multiple frames according to the pose parameter and the clarity to obtain a target facial image, i.e. a facial image of good image quality, on which face recognition is performed. This scheme screens the highest-quality facial images out of the acquired frames for recognition, avoiding face recognition on low-quality facial images that are blurred, have a small face area or a poor face pose, which would cause erroneous recognition results, and thereby improves the accuracy of facial image recognition.
The method described in the above embodiments is further illustrated in detail below by way of example.
In this embodiment, the facial image recognition apparatus is taken to be a network device, applied to scenes such as companies, airports, shopping malls, schools, residential communities or residents' doors, mainly executing the face recognition task of access control and providing automatic security detection services for householders, security personnel and the like. For example, the network device can control a camera to acquire multiple frames of facial images in front of the access control and, using face tracking, score evaluation and similar means on the frames acquired in the face access-control scene, quickly select a high-quality facial image (i.e. one whose face quality score is greater than the preset threshold) for recognition, ensuring that the recognized facial image is large, well-posed and clear, and improving the effect of face recognition.
Referring to Fig. 6, Fig. 6 is another schematic flowchart of the facial image recognition method provided by an embodiment of the present invention. The method flow may include:
S201, the network equipment acquire multiframe facial image, obtain multiframe facial image in per adjacent two frames facial image it Between human face region registration, and human face region corresponding to registration highest is determined as human face region to be identified.
The network equipment can control the face of the continuous acquisitions multiframe user such as the pre-set camera of gate inhibition or video camera Image, wherein multiframe facial image can be the image obtained within a preset period of time at interval of preset time continuous acquisition, example Such as, the 20 frame images obtained in 1 minute at interval of 3 seconds continuous acquisitions.It may include one or more people in the facial image Face, can also include other objects, and particular content is not construed as limiting here.
Then, the network device can track the faces in the multiple frames of facial images to determine the face region to be recognized. For example, the network device can obtain the intersection area and the union area of the face regions between every two adjacent frames of facial images, calculate the overlap degree of the face regions between every two adjacent frames according to the intersection area and the union area, and obtain multiple overlap degrees. The face region corresponding to the highest overlap degree can then be selected from the multiple frames of facial images according to the multiple overlap degrees, yielding the face region to be recognized.
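The overlap degree of two face regions can be sketched as an intersection-over-union ratio, consistent with the intersection area and union area described above. The rectangle representation `(x1, y1, x2, y2)` is an illustrative assumption, not fixed by the original method:

```python
def overlap_degree(box_a, box_b):
    """Overlap degree of two face regions between adjacent frames.

    Boxes are (x1, y1, x2, y2) rectangles (an assumed
    representation). Returns intersection area / union area.
    """
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0
```

Identical regions score 1.0 and disjoint regions 0.0, so the pair with the highest value corresponds to the most stably tracked face.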
For example, user A is a resident of community A. When entering community A, user A needs to have their face recognized by the face recognition system of the access control. User A walks to the entrance of community A and stands in front of the access-control camera, and the network device can control the camera to continuously acquire multiple frames of facial images of user A. Meanwhile, a passerby, user B, walks past, and as user B passes, the camera also captures some facial images of user B. For example, as shown in Fig. 7, the n acquired frames of facial images include the a-th frame, the b-th frame, the c-th frame, and so on, where the value of n can be flexibly set according to actual needs. The a-th and b-th frames include the faces of both user A (the left face in Fig. 7) and user B (the right face in Fig. 7), while subsequent frames (such as the c-th frame) include only the face of user A. The network device then performs tracking detection on the faces in the n acquired frames: it calculates the overlap degree of the face region of user A and the overlap degree of the face region of user B between the adjacent a-th and b-th frames, between the b-th frame and the next frame, between the c-th frame and the previous frame, and so on.
After the overlap degrees of the face regions between every two adjacent frames have been calculated, the face region corresponding to the highest overlap degree — the face region of user A — can be selected from the multiple frames of facial images as the face region to be recognized. The subsequent calculations of the area score, posture score, clarity score, face quality score, and so on are performed only for the face region of user A, excluding the face regions of other users.
S202: The network device obtains the area of the face region in each frame of facial image, and determines the area score of the face region in each frame according to a first mapping relationship between area and area score.
The network device can preset the first mapping relationship between the area of the face region and the area score, and store the first mapping relationship locally in the form of a list or text, or store it on a server. The first mapping relationship may be such that the larger the area of the face region, the larger the area score, and conversely, the smaller the area, the smaller the area score; once the area of the face region becomes large to a certain extent, the area score no longer increases with the area. The value range of the area score can be flexibly set according to actual needs; the specific values are not limited here.
After the face region to be recognized is determined based on the result of face tracking, the network device can calculate the area of the face region in each frame of facial image. For example, when the face region is a rectangular region, its area can be calculated from the length and width of the rectangle. The first mapping relationship between area and area score can then be obtained from a local store or a server, and according to the area of the face region in each frame, the corresponding area score is queried from the first mapping relationship, yielding the area score of the face region in each frame of facial image.
For example, as shown in Fig. 8, if the face of user A is not entirely within the acquisition field of view of the camera, only part of the face of user A is captured; if the face of user A is entirely within the field of view, the whole face is captured. The face region included in each of the n acquired frames may therefore be partial or complete. At this point, the network device can calculate the area of the face region of user A in each frame — for example, in the d-th, e-th, and f-th frames — and then determine the area score of the face region of user A in each frame according to the first mapping relationship between area and area score. The larger the area of the face region of user A, the higher the corresponding area score; for example, the facial image containing the complete face region of user A has the highest area score, i.e., the area score of the e-th frame is greater than that of the d-th frame and may also be greater than that of the f-th frame.
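A later passage notes that this first mapping relationship can be evaluated with a sigmoid function; a minimal sketch of such a saturating mapping follows. The `midpoint` and `steepness` constants are purely illustrative assumptions, not values from the original method:

```python
import math

def area_score(area, midpoint=10000.0, steepness=0.0005):
    """Area score of a face region via a sigmoid mapping.

    The score grows with area but flattens out once the face
    region is large enough, matching the described behaviour that
    beyond a certain size the score no longer increases notably.
    midpoint and steepness are assumed tuning constants.
    """
    return 1.0 / (1.0 + math.exp(-steepness * (area - midpoint)))
```

The output lies in (0, 1), so larger face regions score higher while very large regions all score close to the maximum.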
S203: The network device obtains the deflection parameter of the face in the face region of each frame of facial image, and determines the posture score of the face region in each frame according to a second mapping relationship between deflection parameter and posture score.
The network device can preset the second mapping relationship between the deflection parameter and the posture score, and store it locally in the form of a list or text, or store it on a server. The second mapping relationship may be such that, within a first interval, the larger the deflection parameter of a face deflecting to the right, the smaller the posture score; conversely, the smaller the rightward deflection parameter (i.e., the more directly the face faces the camera), the larger the posture score. Within a second interval, the larger the deflection parameter of a face deflecting to the left, the smaller the posture score; conversely, the smaller the leftward deflection parameter, the larger the posture score. When the posture score reaches its maximum, the face is a frontal face, i.e., the facial image was acquired while the user faced the camera directly. The value range of the posture score can be flexibly set according to actual needs; the specific values are not limited here.
After the face region to be recognized is determined based on the result of face tracking, the network device can obtain the deflection parameter of the face in the face region of each frame. The deflection parameter may be the deflection angles of the face about the x-, y-, and z-axes, or may include the pitch, yaw, and roll angles; for example, a head pose estimation algorithm can be used to calculate the deflection parameter of the face in each frame's face region. The second mapping relationship between deflection parameter and posture score is then obtained from a local store or a server, and according to the deflection parameter of the face corresponding to the face region in each frame, the corresponding posture score is queried from the second mapping relationship, yielding the posture score of the face region in each frame of facial image.
For example, as shown in Fig. 9, since user A may face the camera frontally or present a side face, and user A may move their head, the n acquired frames of user A may include side faces or frontal faces. At this point, the network device can calculate the deflection parameter of the face in the face region of user A in each frame — for example, in the h-th, i-th, and j-th frames — and then determine the posture score of the face region of user A in each frame according to the second mapping relationship between deflection parameter and posture score. The more directly the face of user A faces the camera, the larger the posture score; the facial image with a deflection parameter of zero has the highest posture score. For example, the posture score of the i-th frame is greater than that of the h-th frame and may also be greater than that of the j-th frame.
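A later passage notes that this second mapping relationship can be evaluated with a standard normal distribution; a minimal sketch using a normal-distribution-shaped curve over the yaw angle follows. The spread `sigma` is an assumed constant, and treating left/right deflection as a single signed yaw angle is likewise an assumption:

```python
import math

def posture_score(yaw_deg, sigma=30.0):
    """Posture score from the left/right deflection (yaw) angle.

    The score peaks at 0 degrees (frontal face) and decreases
    symmetrically for leftward (negative) and rightward (positive)
    deflection; sigma controls how quickly it falls off and is an
    assumed tuning constant.
    """
    return math.exp(-(yaw_deg ** 2) / (2.0 * sigma ** 2))
```

A frontal face scores 1.0, and the score falls toward 0 as the face turns away from the camera in either direction.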
S204: The network device obtains the movement speed of each frame of facial image, and determines the clarity score of the face region in each frame according to a third mapping relationship between movement speed and clarity score.
The network device can preset the third mapping relationship between movement speed and clarity score, and store it locally in the form of a list or text, or store it on a server. The third mapping relationship may be such that the larger the movement speed of the face region, the smaller the clarity score of the face region; conversely, the smaller the movement speed, the larger the clarity score. The value range of the clarity score can be flexibly set according to actual needs; the specific content is not limited here.
After the face region to be recognized is determined based on the result of face tracking, the network device can obtain the distance and time interval between every two adjacent frames in the multiple frames of facial images, calculate the movement speed of each frame according to the distance and time interval, and obtain the third mapping relationship between movement speed and clarity score from a local store or a server. Then, according to the movement speed of each frame, the corresponding clarity score is queried from the third mapping relationship, yielding the clarity score of the face region in each frame of facial image.
For example, as shown in Fig. 10, user A may move while standing in front of the camera, so that among the n acquired frames of user A some may be clearer and some blurrier: the faster the movement, the blurrier the resulting facial image. At this point, the network device can calculate the movement speed of the face region of user A in each frame (i.e., the movement speed of each frame) — for example, in the k-th, r-th, and s-th frames — and then determine the clarity score of the face region of user A in each frame according to the third mapping relationship between movement speed and clarity score. The less the face of user A moves, the larger the clarity score; the facial image with zero movement speed has the highest clarity score. For example, the clarity score of the r-th frame is greater than that of the k-th frame and may also be greater than that of the s-th frame.
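The monotonically decreasing mapping from movement speed to clarity score can be sketched as follows; the exponential form and the `decay` rate are illustrative assumptions, since the original method only fixes that faster movement means a lower score:

```python
import math

def clarity_score(distance_px, interval_s, decay=0.01):
    """Clarity score from the movement speed of the face region.

    Speed is taken as the displacement between two adjacent frames
    divided by their time interval. Zero speed gives the maximum
    score of 1.0; faster movement gives a lower score. The decay
    constant is an assumed tuning parameter.
    """
    speed = distance_px / interval_s
    return math.exp(-decay * speed)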
S205: The network device calculates the face quality score corresponding to each frame of facial image according to the area score, posture score, and clarity score.
After the area score, posture score, and clarity score corresponding to the face region in each frame have been obtained, the network device can set a first weight for the area score, a second weight for the posture score, and a third weight for the clarity score. For each frame, the area score is multiplied by the first weight to obtain a first weighted value, the posture score by the second weight to obtain a second weighted value, and the clarity score by the third weight to obtain a third weighted value; the first, second, and third weighted values are then added to obtain the face quality score corresponding to that frame of facial image.
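The weighted combination just described can be sketched directly; the weight values below are illustrative assumptions (the original method only requires that the same weights be applied consistently across frames):

```python
def face_quality_score(area_s, posture_s, clarity_s,
                       w_area=0.3, w_posture=0.4, w_clarity=0.3):
    """Face quality score as the weighted sum of the three
    per-frame scores. The weights are assumed example values."""
    return w_area * area_s + w_posture * posture_s + w_clarity * clarity_s
```

With weights summing to 1 and each score in [0, 1], the quality score also stays in [0, 1], which makes it easy to compare against a preset threshold.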
S206: The network device takes the first frame of facial image in the multiple frames of facial images as the current facial image.
S207: The network device judges whether the face quality score of the current facial image is greater than a preset threshold; if so, step S208 is executed; if not, step S209 is executed.
During detection, after the face quality score of the first frame of facial image has been calculated, the network device can first take the first frame as the current facial image, compare its face quality score with the preset threshold, and judge whether the face quality score of the current facial image is greater than the preset threshold.
S208: The network device performs face recognition on the current facial image.
If the face quality score of the current facial image is greater than the preset threshold, the network device can perform face recognition on the current facial image.
S209: The network device takes the second frame of facial image in the multiple frames of facial images as the current facial image.
If the face quality score of the current facial image is not greater than the preset threshold, the network device may judge whether the detection time has reached a preset time, which can be flexibly set according to actual needs. If the preset time has been reached, the facial image recognition process can be ended; if not, the second frame of facial image in the multiple frames is taken as the current facial image and the step of comparing the face quality score of the current facial image with the preset threshold is executed again, until the face quality score of the current facial image is greater than the preset threshold, the required facial image is obtained, and face recognition is performed on the required facial image.
Alternatively, if the face quality score of the current facial image is not greater than the preset threshold, it is judged whether all of the acquired frames of facial images have been compared. If not, the second frame of facial image in the multiple frames is taken as the current facial image, and the step of comparing the face quality score of the current facial image with the preset threshold is executed again, until the face quality score of the current facial image is greater than the preset threshold, the required facial image is obtained, and face recognition is performed on the required facial image.
When no face quality score is greater than the preset threshold, the network device can select from the multiple frames the facial image corresponding to the maximum face quality score, and perform face recognition on that facial image.
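The frame-selection logic of steps S206 to S209 — scan frames in order, recognize the first whose quality score exceeds the threshold, and otherwise fall back to the best-scoring frame — can be sketched as follows. Representing the frames by a list of precomputed scores is an assumption for illustration:

```python
def select_frame(scores, threshold):
    """Return the index of the frame to recognize.

    scores holds the per-frame face quality scores in acquisition
    order (an assumed representation). The first frame whose score
    exceeds the threshold wins; if none does, the frame with the
    maximum score is used instead.
    """
    for i, s in enumerate(scores):
        if s > threshold:
            return i
    return max(range(len(scores)), key=lambda i: scores[i])
```

Scanning in acquisition order means recognition can start as soon as one good-enough frame arrives, without waiting for all n frames.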
The network device can store in advance the faces of users with permission to open the access control (referred to as permitted faces), for example in a database. In the process of performing face recognition on a facial image, the network device extracts the face region from the facial image and compares the extracted face region one by one with the permitted faces in the database. If there is a permitted face in the database whose similarity to the extracted face region is greater than a preset similarity threshold (which can be flexibly set according to actual needs), the permitted face and the face region match successfully, and the network device can control the access control to open and let the user pass. Conversely, if no permitted face in the database has a similarity to the extracted face region greater than the preset similarity threshold, the matching fails, and the network device can keep the access control closed to forbid the user from passing.
The embodiment of the present invention can acquire multiple frames of facial images; calculate the area score, posture score, clarity score, and so on of the face region to be recognized in each frame; and, from the area score, posture score, and clarity score, calculate the face quality score corresponding to each frame. A higher face quality score indicates a better-quality acquired facial image, so a facial image of good face quality can be selected from the multiple frames according to the face quality scores, and the face in that facial image can be precisely recognized. This addresses the problem of recognition errors caused by performing face recognition on poor-quality facial images, improves the accuracy of facial image recognition, improves the security of access control, and also improves the user experience.
To facilitate better implementation of the facial image recognition method provided in the embodiments of the present invention, an embodiment of the present invention also provides an apparatus based on the above facial image recognition method. The meanings of the terms are the same as in the above facial image recognition method, and for specific implementation details reference can be made to the explanations in the method embodiments.
Please refer to Fig. 11, which is a structural schematic diagram of the facial image recognition apparatus provided in an embodiment of the present invention. The facial image recognition apparatus may include a determination unit 301, a detection unit 302, an acquisition unit 303, a screening unit 304, a recognition unit 305, and so on.
The determination unit 301 is configured to acquire multiple frames of facial images and determine the face region to be recognized according to the multiple frames of facial images.
The determination unit 301 can continuously acquire multiple frames of facial images of a user through a preset camera, video camera, webcam, or the like, or receive multiple frames of facial images sent by a terminal, a server, or the like.
The multiple frames of facial images may be images obtained by continuous acquisition at preset intervals within a preset time period, for example, 30 frames obtained within 1 minute at intervals of 2 seconds. A facial image may include one or more faces, and may also include other objects; the specific content is not limited here.
In order to recognize the required facial image, the determination unit 301 can track the faces in the multiple frames of facial images to determine the face region to be recognized.
In some embodiments, as shown in Fig. 12, the determination unit 301 may include an overlap-degree obtaining subunit 3011, a first screening subunit 3012, and so on, specifically as follows:
The overlap-degree obtaining subunit 3011 is configured to obtain the overlap degree of the face regions between every two adjacent frames of facial images in the multiple frames, yielding multiple overlap degrees;
The first screening subunit 3012 is configured to select, from the multiple frames of facial images according to the multiple overlap degrees, the face region corresponding to the highest overlap degree, yielding the face region to be recognized.
The overlap-degree obtaining subunit 3011 can use a face tracking algorithm to perform face detection on each frame of facial image, and calculate the overlap degree of the face regions between each pair of adjacent frames in the multiple frames, obtaining one overlap degree per pair of adjacent frames; the overlap degrees of the multiple pairs of adjacent facial images together form the multiple overlap degrees. The overlap degree may be the overlapping area between the face regions of the same person in two adjacent frames. After the multiple overlap degrees are obtained, the first screening subunit 3012 can select, from the multiple frames based on the multiple overlap degrees, the pair of adjacent frames corresponding to the highest overlap degree; the face region in that pair of adjacent frames is the face region to be recognized, which may also be referred to as the tracked face region. The shape of the face region can be flexibly set according to actual needs; for example, the face region may be a rectangular region, a square region, or a circular region.
In some embodiments, the overlap-degree obtaining subunit 3011 can specifically be configured to: obtain the intersection area of the face regions between every two adjacent frames in the multiple frames of facial images; obtain the union area of the face regions between every two adjacent frames; and calculate the overlap degree of the face regions between every two adjacent frames according to the intersection area and the union area, yielding multiple overlap degrees. That is, the overlap-degree obtaining subunit 3011 can use the above formula (1) to calculate the overlap degree of the face regions between every two adjacent frames.
The detection unit 302 is configured to perform posture detection on the face in each frame of facial image according to the face region, obtaining a posture parameter.
In some embodiments, as shown in Fig. 13, the detection unit 302 may include a score obtaining subunit 3021, a second screening subunit 3022, a detection subunit 3023, and so on, specifically as follows:
The score obtaining subunit 3021 is configured to obtain the area score of the face region in each frame of facial image;
The second screening subunit 3022 is configured to select, from the multiple frames of facial images, the facial images whose area scores are greater than a preset value, yielding the screened facial images;
The detection subunit 3023 is configured to perform posture detection on the faces in the screened facial images, obtaining posture parameters.
In order to improve the accuracy of posture detection on the faces in the facial images, the multiple frames of facial images can be preliminarily screened; for example, the facial images whose face region areas satisfy a condition can be selected. Specifically, the score obtaining subunit 3021 can obtain the area score of the face region in each frame, the area score being related to, and directly proportional to, the area of the face region. The second screening subunit 3022 then selects from the multiple frames the facial images whose area scores are greater than a preset value, where the preset value can be flexibly set according to actual needs, so as to select the facial images with larger face regions and obtain the screened facial images, which may include one or more images. At this point, the detection subunit 3023 can perform posture detection on the faces in the screened facial images, obtaining a posture parameter, which may include a posture score related to the deflection angle of the face in the facial image relative to a frontal face.
In some embodiments, the score obtaining subunit 3021 can specifically be configured to:
obtain the area of the face region in each frame of facial image;
obtain the first mapping relationship between area and area score, and determine the area score of the face region in each frame of facial image according to the first mapping relationship.
The score obtaining subunit 3021 can preset the first mapping relationship between the area of the face region and the area score, and store the first mapping relationship locally in the form of a list or text, or store it on a server. The first mapping relationship can be evaluated with a sigmoid function, which may be as shown in the above formula (2). The larger the area of the face region, the larger the area score; conversely, the smaller the area, the smaller the area score; once the area of the face region becomes large to a certain extent, the area score no longer increases with the area. The value range of the area score can be flexibly set according to actual needs; the specific values are not limited here.
After the face region to be recognized is determined based on the result of face tracking, the score obtaining subunit 3021 can calculate the area of the face region in each frame. For example, when the face region is rectangular, its area can be calculated from the length and width of the rectangle; as another example, when the face region is circular, its area can be calculated from the radius of the circle. At this point, the score obtaining subunit 3021 can obtain the first mapping relationship between area and area score from a local store, a server, or the like, and according to the area of the face region in each frame, query the corresponding area score from the first mapping relationship, yielding the area score of the face region in each frame of facial image.
In some embodiments, the detection subunit 3023 may include a parameter obtaining module, a mapping-relationship obtaining module, a determining module, and so on, specifically as follows:
The parameter obtaining module is configured to obtain the deflection parameter of the face in the face region of each frame of the screened facial images;
The mapping-relationship obtaining module is configured to obtain the second mapping relationship between deflection parameter and posture score;
The determining module is configured to determine the posture score of the face region in each frame of facial image according to the second mapping relationship, and set the posture score as the posture parameter.
The detection subunit 3023 can preset the second mapping relationship between deflection parameter and posture score, and store the second mapping relationship locally in the form of a list or text, or store it on a server. The second mapping relationship can be evaluated with a standard normal distribution, which may be as shown in the above formula (3). Within the first interval, the larger the rightward deflection parameter of the face, the smaller the posture score; within the second interval, the larger the leftward deflection parameter, the smaller the posture score. When the posture score reaches its maximum, the face is a frontal face, i.e., the facial image was acquired while the user faced the camera directly. The value range of the posture score can be flexibly set according to actual needs; the specific values are not limited here.
After the face region to be recognized has been determined based on the result of face tracking, and the facial images whose area scores are greater than the preset value have been selected from the multiple frames, the parameter obtaining module can obtain the deflection parameter of the face in the face region of each frame of the screened facial images. The deflection parameter may be the deflection angles of the face about the x-, y-, and z-axes, or may include the pitch, yaw, and roll angles. At this point, the mapping-relationship obtaining module can obtain the second mapping relationship between deflection parameter and posture score from a local store or a server, and according to the deflection parameter of the face corresponding to the face region in each frame, the corresponding posture score is queried from the second mapping relationship, yielding the posture score of the face region in each frame of facial image.
In some embodiments, the parameter obtaining module can specifically be configured to: obtain the first projection parameter with which the face in the face region of each frame of the screened facial images is projected onto a two-dimensional plane; obtain the second projection parameter with which a preset face model is projected onto the two-dimensional plane; and obtain the deflection parameter of the face in the face region according to the first projection parameter and the second projection parameter.
The parameter obtaining module can obtain the first projection parameter with which the face in the face region of each frame is projected onto a two-dimensional plane, where the first projection parameter may be the coordinate points of the face in the two-dimensional plane, and obtain the second projection parameter with which the preset face model is projected onto the two-dimensional plane, where the second projection parameter may be the coordinate points of the preset face model in the two-dimensional plane. The preset face model may be a preset three-dimensional average face model: by changing the positions of the three-dimensional coordinate points of the three-dimensional average face model, three-dimensional face models of users with different expressions and different identities can be generated. That is, every three-dimensional face model can be represented by adding offsets to the three-dimensional coordinate points of the three-dimensional average face model, and the three-dimensional face model of a user's face can be represented by the above formula (4).
After obtaining the first projection parameter with which the face is projected onto the two-dimensional plane and the second projection parameter with which the preset face model is projected onto the two-dimensional plane, the facial image recognition apparatus can use the above formulas (5) and (6) to obtain the three-dimensional coordinate points of the face in three-dimensional space according to the first and second projection parameters, and can perform orthographic projection through the above formula (7). Formula (5) can be iteratively solved through formulas (4), (6), and (7) to obtain the three-dimensional coordinate points, from which the three-dimensional face model is generated. That is, if the projection of a three-dimensional face model at every angle onto the two-dimensional plane can be matched with the face in the two-dimensional image, then that three-dimensional face model is the three-dimensional face model to be obtained. After the three-dimensional model is obtained, the deflection parameter of the face in the face region of the facial image can be obtained based on the three-dimensional model; the deflection parameter is the rotation coefficient R in the above formula (7).
Optionally, the parameter acquisition module may also use a face pose estimation algorithm to calculate the deflection parameter of the face in the face region of each frame of facial image. For example, the feature points of the facial features (including the eyes, nose, mouth, and the like) in the face region of each frame of facial image may be obtained, the proportional relationships among the facial features may be calculated according to the feature points, and the deflection parameter of the face in the face region of each frame of facial image may be determined according to the proportional relationships. Alternatively, the three-dimensional average face model may be rotated by a certain angle until the two-dimensional projections of the three-dimensional feature points of the facial features (including the eyes, eyebrows, nose, mouth, and the like) in the three-dimensional average face model coincide, or coincide as closely as possible, with the two-dimensional feature points of the facial features in the facial image; the rotation angle of the three-dimensional average face model is then obtained, and this rotation angle is the deflection parameter of the face.
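As a rough illustration of the first approach (inferring deflection from proportional relationships among feature points), the signed offset of the nose tip from the midpoint of the eyes, normalized by the inter-eye distance, gives a crude yaw indicator. The landmark layout and the ratio are illustrative assumptions, not the specific pose estimation algorithm of the embodiment.

```python
def yaw_from_landmarks(left_eye, right_eye, nose_tip):
    # Feature points are (x, y) coordinates in the image plane.
    # Returns a signed ratio: 0 for a frontal face, positive when the
    # nose tip is displaced toward the right eye.
    mid_x = (left_eye[0] + right_eye[0]) / 2.0
    half_eye_dist = (right_eye[0] - left_eye[0]) / 2.0
    return (nose_tip[0] - mid_x) / half_eye_dist
```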
It should be noted that, in addition to being obtained in the above manners, the deflection parameter may also be obtained in other manners; the specific acquisition manner is not limited here.
After the deflection parameter is obtained, the mapping relation acquisition module may obtain a preset second mapping relation between deflection parameters and posture score values from a local store, a server, or the like, and the determining module may query the corresponding posture score value from the second mapping relation according to the deflection parameter, thereby obtaining the posture score value of the face region in each frame of facial image; this posture score value serves as the attitude parameter.
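The second mapping relation between deflection parameters and posture score values can be held as a small lookup table; the angle thresholds and score values below are purely illustrative.

```python
def posture_score(deflection_deg, mapping=((15, 1.0), (30, 0.7), (45, 0.4))):
    # Query the preset second mapping relation: the smaller the deflection
    # of the face, the higher the posture score value.
    for limit, score in mapping:
        if abs(deflection_deg) <= limit:
            return score
    return 0.1  # strongly deflected faces get a low score
```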
The acquiring unit 303 is configured to obtain the movement speed of the facial image between every two adjacent frames in the multiple frames of facial images, and determine the clarity corresponding to each frame of facial image according to the movement speed.
For example, when the multiple frames of facial images include a first frame, a second frame and a third frame of facial image, the first frame may be taken as the origin; the movement speed of the second frame may be determined according to the distance and the time interval between the first frame and the second frame, and the movement speed of the third frame may be determined according to the distance and the time interval between the second frame and the third frame, or according to the distance and the time interval between the first frame and the third frame, and so on. The movement speed may be the movement speed of the facial image as a whole, or the movement speed of the face region; the greater the movement speed, the lower the clarity, and conversely, the smaller the movement speed, the higher the clarity.
In some embodiments, the acquiring unit 303 may specifically be configured to:
obtain a distance and a time interval between every two adjacent frames of facial images in the multiple frames of facial images;
calculate the movement speed of each frame of facial image according to the distance and the time interval;
obtain a third mapping relation between movement speeds and clarity score values;
determine the clarity score value of each frame of facial image according to the third mapping relation, and determine the clarity corresponding to each frame of facial image according to the clarity score value.
The acquiring unit 303 may preset the third mapping relation between movement speeds and clarity score values, and the third mapping relation may be stored locally or on a server in the form of a list, text, or the like. The third mapping relation may be evaluated using an inverse proportion function, which may be as shown in the above formula (8): the greater the movement speed of the face region, the smaller the clarity score value of the face region, and conversely, the smaller the movement speed of the face region, the greater the clarity score value of the face region. The value range of the clarity score value can be flexibly set according to actual needs, and the specific content is not limited here.
After the face region to be identified is determined based on the result of face tracking, the acquiring unit 303 may obtain the distance and the time interval between every two adjacent frames of facial images in the multiple frames of facial images, and then calculate the movement speed of each frame of facial image according to the distance and the time interval. The acquiring unit 303 may then obtain the third mapping relation between movement speeds and clarity score values from a local store or a server, and query the corresponding clarity score value from the third mapping relation according to the movement speed corresponding to each frame of facial image, thereby obtaining the clarity score value of the face region in each frame of facial image. The clarity corresponding to each frame of facial image can then be determined according to the clarity score value: the greater the clarity score value, the higher the clarity, and conversely, the smaller the clarity score value, the lower the clarity. For example, the clarity corresponding to the clarity score value obtained for each frame of facial image may be determined according to a correspondence between clarity score values and clarity.
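The computation described above can be sketched as follows; since formula (8) is not reproduced in this excerpt, a generic inverse-proportion form k / (k + v) is assumed for the third mapping relation.

```python
def movement_speed(center_prev, center_curr, dt):
    # Distance travelled by the face region between two adjacent frames,
    # divided by the time interval between their acquisitions.
    dx = center_curr[0] - center_prev[0]
    dy = center_curr[1] - center_prev[1]
    return (dx * dx + dy * dy) ** 0.5 / dt

def clarity_score(speed, k=10.0):
    # Inverse-proportion mapping from movement speed to a clarity score
    # in (0, 1]: the greater the speed, the smaller the score.
    return k / (k + speed)
```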
The screening unit 304 is configured to screen out, from the multiple frames of facial images according to the attitude parameter and the clarity, the facial image that meets a preset condition, to obtain a target facial image.
The recognition unit 305 is configured to perform face recognition on the target facial image.
The preset condition can be flexibly set according to actual needs, so as to screen out facial images of higher quality.
In some embodiments, as shown in FIG. 14, the screening unit 304 may include a computation subunit 3041, a third screening subunit 3042, and the like, which may specifically be as follows:
The computation subunit 3041 is configured to calculate a face quality score of each frame of facial image according to the attitude parameter and the clarity;
The third screening subunit 3042 is configured to screen out, from the multiple frames of facial images, the facial image whose face quality score is greater than a preset threshold, to obtain the target facial image.
In some embodiments, the computation subunit 3041 may specifically be configured to: determine the area score value, the posture score value and the clarity score value of each frame of facial image according to the attitude parameter and the clarity; set corresponding weight values for the area score value, the posture score value and the clarity score value respectively; and calculate the face quality score corresponding to each frame of facial image according to the area score value, the posture score value, the clarity score value and their corresponding weight values. That is, the corresponding face quality score of each frame of facial image can be calculated using the above formula (9).
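Since formula (9) is not reproduced in this excerpt, the sketch below assumes a weighted sum of the three score values; the weight values are illustrative stand-ins for the ones set in the embodiment.

```python
def face_quality_score(area_score, posture_score, clarity_score,
                       w_area=0.3, w_posture=0.4, w_clarity=0.3):
    # Combine the per-frame area, posture and clarity score values with
    # their corresponding weight values into one face quality score.
    return (w_area * area_score
            + w_posture * posture_score
            + w_clarity * clarity_score)
```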
It should be noted that the area score value, the posture score value, the clarity score value, the face quality score and the like may also be calculated using other functions, or even a neural network, so that a high-quality facial image can subsequently be screened out for face recognition.
In order to screen out facial images of higher quality, the preset condition may be that the face quality score is greater than a preset threshold. For example, the third screening subunit 3042 may screen out, from the multiple frames of facial images, the facial image whose face quality score is greater than the preset threshold, to obtain one or more frames of target facial image. When one frame of target facial image is obtained, face recognition may be performed on that frame; when multiple frames of target facial image are obtained, one of them may be selected for face recognition.
In some embodiments, the third screening subunit 3042 may specifically be configured to: screen out, from the face quality scores, the face quality scores greater than the preset threshold, to obtain candidate face quality scores; when there are multiple candidate face quality scores, select one of them according to a preset algorithm, to obtain a target face quality score; and determine, from the multiple frames of facial images, the facial image corresponding to the target face quality score, to obtain the target facial image.
The third screening subunit 3042 may, after the multiple frames of facial images are continuously acquired at preset intervals within a preset time period, calculate the face quality score corresponding to each frame of facial image to obtain a set of face quality scores, compare each face quality score in the set with the preset threshold, and screen out from the set the face quality scores greater than the preset threshold, to obtain the candidate face quality scores. The preset threshold can be flexibly set according to actual needs, and there may be one or more candidate face quality scores.
When there is one candidate face quality score, the third screening subunit 3042 may perform face recognition on the facial image corresponding to that candidate face quality score. When there are multiple candidate face quality scores, one of them may be selected according to a preset algorithm to obtain the target face quality score; the preset algorithm can be flexibly set according to actual needs. For example, one of the multiple candidate face quality scores may be selected at random, or the candidate with the highest score value may be selected, or the candidate corresponding to the earliest facial image frame may be selected, and so on. Then, the facial image corresponding to the target face quality score may be determined from the multiple frames of facial images to obtain the target facial image, and face recognition may be performed on the target facial image.
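The preset selection algorithms enumerated above (random choice, highest score value, earliest frame) can be sketched as:

```python
import random

def select_target(candidates, strategy="highest"):
    # candidates: list of (frame_index, face_quality_score) pairs whose
    # scores already exceed the preset threshold.
    if strategy == "highest":
        return max(candidates, key=lambda c: c[1])   # highest score value
    if strategy == "earliest":
        return min(candidates, key=lambda c: c[0])   # earliest acquired frame
    return random.choice(candidates)                 # random selection
```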
In some embodiments, the third screening subunit 3042 may specifically be configured to: take the first frame of facial image in the multiple frames of facial images as the current facial image; compare the face quality score of the current facial image with the preset threshold; and, if the face quality score of the current facial image is less than the preset threshold, take the second frame of facial image in the multiple frames of facial images as the current facial image and return to the step of comparing the face quality score of the current facial image with the preset threshold, until the face quality score of the current facial image is greater than the preset threshold, to obtain the target facial image.
The third screening subunit 3042 may calculate the face quality scores of the facial images while they are being acquired within the preset time period; for example, after at least two frames of facial images are collected, the face quality scores of these at least two frames may be calculated first. After the face quality scores are calculated, the third screening subunit 3042 may take the first frame of facial image in the multiple frames of facial images as the current facial image, compare its face quality score with the preset threshold, and judge whether the face quality score of the current facial image is greater than the preset threshold. If the face quality score of the current facial image is greater than the preset threshold, face recognition is performed on the current facial image. If the face quality score of the current facial image is less than the preset threshold, it is judged whether the detection time has reached the preset time period, which can be flexibly set according to actual needs; to avoid unbounded detection, the facial image recognition process may be terminated once the preset time period is reached. If the preset time period has not been reached, the second frame of facial image in the multiple frames of facial images is taken as the current facial image, and the process returns to the step of comparing the face quality score of the current facial image with the preset threshold, until the face quality score of the current facial image is greater than the preset threshold, to obtain the target facial image, on which face recognition is performed.
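The frame-by-frame comparison loop described above can be sketched as follows, with a frame budget standing in for the preset time period:

```python
def pick_first_good_frame(quality_scores, threshold, max_frames):
    # Walk the frames in acquisition order; return the index of the first
    # frame whose face quality score exceeds the threshold, or None when
    # the detection budget runs out before any frame qualifies.
    for i, score in enumerate(quality_scores[:max_frames]):
        if score > threshold:
            return i
    return None
```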
In some embodiments, the facial image recognition apparatus further includes:
a fourth screening subunit, configured to, when no face quality score is greater than the preset threshold, screen out from the multiple frames of facial images the facial image corresponding to the maximum face quality score, to obtain the target facial image.
Among the multiple frames of facial images continuously acquired at preset intervals within the preset time period, each frame has a corresponding face quality score. If no face quality score greater than the preset threshold exists, then, so that the collected facial images can still be identified, the fourth screening subunit may screen out from the multiple frames of facial images the facial image corresponding to the maximum face quality score, to obtain the target facial image, and face recognition is performed on the target facial image. Alternatively, while the face quality scores of the facial images are being calculated during acquisition within the preset time period, if the detection time reaches the preset time period and no facial image whose face quality score is greater than the preset value has been obtained, the fourth screening subunit may screen out from the multiple frames of facial images the facial image corresponding to the maximum face quality score, to obtain the target facial image, and perform face recognition on it. This ensures that face recognition can be performed on the facial image of the best available quality, so as to guarantee the stability and reliability of the face recognition result.
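Combining the threshold screening with the fallback of the fourth screening subunit gives the following sketch:

```python
def screen_target_frame(quality_scores, threshold):
    # Prefer the best frame among those above the preset threshold;
    # when no score exceeds the threshold, fall back to the frame with
    # the maximum face quality score so recognition can still proceed.
    above = [i for i, s in enumerate(quality_scores) if s > threshold]
    pool = above if above else range(len(quality_scores))
    return max(pool, key=lambda i: quality_scores[i])
```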
As can be seen from the above, in the embodiment of the present invention, the determining unit 301 may acquire multiple frames of facial images and determine the face region to be identified according to the multiple frames of facial images; the detection unit 302 then performs pose detection on the face in each frame of facial image according to the face region, to obtain the attitude parameter. For example, the attitude parameter is related to the area score value, the posture score value and the like, where the area score value is related to the area of the face region and the posture score value is related to the deflection parameter of the face. The acquiring unit 303 obtains the movement speed of the facial image between every two adjacent frames in the multiple frames of facial images, and determines the clarity corresponding to each frame of facial image according to the movement speed, the clarity being related to the clarity score value. At this point, the screening unit 304 may screen out, from the multiple frames of facial images according to the attitude parameter and the clarity, the facial image that meets the preset condition, to obtain the target facial image (for example, the image with the best facial image quality), and the recognition unit 305 performs face recognition on the target facial image. This scheme can screen out, from the collected multiple frames of facial images, the best-quality facial image that meets the condition, i.e., perform face recognition on the facial image of the highest quality, avoiding the problem of erroneous recognition results caused by performing face recognition on poor-quality facial images that are blurred, have a small face area, or have a poor face pose, thereby improving the accuracy of facial image recognition.
The embodiment of the present invention also provides a network device, which may be a device such as a server or a terminal. FIG. 15 illustrates a structural schematic diagram of the network device involved in the embodiment of the present invention. Specifically:
The network device may include components such as a processor 401 with one or more processing cores, a memory 402 with one or more computer-readable storage media, a power supply 403, and an input unit 404. Those skilled in the art can understand that the network device structure shown in FIG. 15 does not constitute a limitation on the network device, which may include more or fewer components than illustrated, combine certain components, or have a different component arrangement. Wherein:
The processor 401 is the control center of the network device, connecting the various parts of the whole network device using various interfaces and lines. By running or executing the software programs and/or modules stored in the memory 402, and calling the data stored in the memory 402, it executes the various functions of the network device and processes data, thereby monitoring the network device as a whole. Optionally, the processor 401 may include one or more processing cores; preferably, the processor 401 may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, user interfaces, application programs and the like, and the modem processor mainly handles wireless communication. It can be understood that the above modem processor may also not be integrated into the processor 401.
The memory 402 may be configured to store software programs and modules, and the processor 401 executes various functional applications and data processing by running the software programs and modules stored in the memory 402. The memory 402 may mainly include a program storage area and a data storage area, where the program storage area may store the operating system, an application program required by at least one function (such as a sound playing function or an image playing function), and the like, and the data storage area may store data created according to the use of the network device, and the like. In addition, the memory 402 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another solid-state storage component. Correspondingly, the memory 402 may also include a memory controller to provide the processor 401 with access to the memory 402.
The network device further includes a power supply 403 that supplies power to the various components. Preferably, the power supply 403 may be logically connected to the processor 401 through a power management system, so that functions such as charging, discharging, and power consumption management are realized through the power management system. The power supply 403 may also include any components such as one or more direct-current or alternating-current power sources, a recharging system, a power failure detection circuit, a power converter or inverter, and a power status indicator.
The network device may also include an input unit 404, which may be configured to receive input numeric or character information and generate keyboard, mouse, joystick, optical, or trackball signal inputs related to user settings and function control.
Although not shown, the network device may also include a display unit and the like, which will not be described in detail here. Specifically, in this embodiment, the processor 401 in the network device loads the executable files corresponding to the processes of one or more application programs into the memory 402 according to the following instructions, and the processor 401 runs the application programs stored in the memory 402, thereby realizing the facial image recognition method provided in the embodiment of the present invention, as follows:
Multiframe facial image is acquired, and human face region to be identified is determined according to multiframe facial image;According to human face region Attitude detection is carried out to the face in every frame facial image, obtains attitude parameter;It obtains in multiframe facial image per adjacent two frame The movement speed of facial image between image determines the corresponding clarity of every frame facial image according to movement speed;According to appearance State parameter and clarity are filtered out from multiframe facial image meets facial image corresponding to preset condition, obtains target face Image;Recognition of face is carried out to target facial image.
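The steps just listed can be condensed into an illustrative flow; the frames are represented as dicts with precomputed per-frame scores, and all field names and weights are hypothetical stand-ins for the attitude parameter and clarity of the method.

```python
def recognize_best_frame(frames, threshold):
    # Each frame dict carries an "id" plus precomputed "pose" and
    # "clarity" scores (standing in for the attitude parameter and
    # clarity determined for each frame of facial image).
    def quality(f):
        return 0.5 * f["pose"] + 0.5 * f["clarity"]  # illustrative weights
    best = max(frames, key=quality)
    if quality(best) > threshold:
        return best["id"]   # this frame would be passed to face recognition
    return None             # no frame met the preset condition
```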
Optionally, the step of determining the face region to be identified according to the multiple frames of facial images may include: obtaining a degree of overlap of the face region between every two adjacent frames of facial images in the multiple frames of facial images, to obtain multiple degrees of overlap; and screening out, from the multiple frames of facial images according to the multiple degrees of overlap, the face region corresponding to the highest degree of overlap, to obtain the face region to be identified.
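The degree of overlap referred to here is computed, per claim 3, from the intersection area and the union area of the face regions; for axis-aligned bounding boxes that is:

```python
def overlap_degree(box_a, box_b):
    # Boxes are (x1, y1, x2, y2) face regions; the degree of overlap is
    # the intersection area divided by the union area.
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0
```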
Optionally, the step of performing pose detection on the face in each frame of facial image according to the face region, to obtain the attitude parameter, may include: obtaining an area score value of the face region in each frame of facial image; screening out, from the multiple frames of facial images, the facial image whose area score value is greater than a preset value, to obtain screened facial images; and performing pose detection on the face in the screened facial images, to obtain the attitude parameter.
Optionally, the step of screening out, from the multiple frames of facial images according to the attitude parameter and the clarity, the facial image that meets the preset condition, to obtain the target facial image, may include: calculating a face quality score of each frame of facial image according to the attitude parameter and the clarity; and screening out, from the multiple frames of facial images, the facial image whose face quality score is greater than a preset threshold, to obtain the target facial image.
In the above embodiments, the description of each embodiment has its own emphasis. For parts not described in detail in a certain embodiment, reference may be made to the detailed description of the facial image recognition method above, which will not be repeated here.
As can be seen from the above, the embodiment of the present invention can acquire multiple frames of facial images and determine the face region to be identified according to the multiple frames of facial images, then perform pose detection on the face in each frame of facial image according to the face region to obtain the attitude parameter, obtain the movement speed of the facial image between every two adjacent frames in the multiple frames of facial images, and determine the clarity corresponding to each frame of facial image according to the movement speed. At this point, the facial image that meets the preset condition can be screened out from the multiple frames of facial images according to the attitude parameter and the clarity, to obtain the target facial image (for example, the image with the best facial image quality), and face recognition is performed on the target facial image. This scheme can screen out, from the collected multiple frames of facial images, the best-quality facial image that meets the condition, i.e., perform face recognition on the facial image of the highest quality, avoiding the problem of erroneous recognition results caused by performing face recognition on poor-quality facial images, thereby improving the accuracy of facial image recognition.
Those of ordinary skill in the art can understand that all or part of the steps in the various methods of the above embodiments can be completed by instructions, or completed by controlling relevant hardware through instructions; the instructions can be stored in a computer-readable storage medium, and loaded and executed by a processor.
To this end, the embodiment of the present invention provides a storage medium in which multiple instructions are stored, and the instructions can be loaded by a processor to execute the steps in any of the facial image recognition methods provided by the embodiments of the present invention. For example, the instructions can execute the following steps:
Multiframe facial image is acquired, and human face region to be identified is determined according to multiframe facial image;According to human face region Attitude detection is carried out to the face in every frame facial image, obtains attitude parameter;It obtains in multiframe facial image per adjacent two frame The movement speed of facial image between image determines the corresponding clarity of every frame facial image according to movement speed;According to appearance State parameter and clarity are filtered out from multiframe facial image meets facial image corresponding to preset condition, obtains target face Image;Recognition of face is carried out to target facial image.
Optionally, the step of determining the face region to be identified according to the multiple frames of facial images may include: obtaining a degree of overlap of the face region between every two adjacent frames of facial images in the multiple frames of facial images, to obtain multiple degrees of overlap; and screening out, from the multiple frames of facial images according to the multiple degrees of overlap, the face region corresponding to the highest degree of overlap, to obtain the face region to be identified.
Optionally, the step of performing pose detection on the face in each frame of facial image according to the face region, to obtain the attitude parameter, may include: obtaining an area score value of the face region in each frame of facial image; screening out, from the multiple frames of facial images, the facial image whose area score value is greater than a preset value, to obtain screened facial images; and performing pose detection on the face in the screened facial images, to obtain the attitude parameter.
Optionally, the step of screening out, from the multiple frames of facial images according to the attitude parameter and the clarity, the facial image that meets the preset condition, to obtain the target facial image, may include: calculating a face quality score of each frame of facial image according to the attitude parameter and the clarity; and screening out, from the multiple frames of facial images, the facial image whose face quality score is greater than a preset threshold, to obtain the target facial image.
For the specific implementation of each of the above operations, reference may be made to the foregoing embodiments, which will not be repeated here.
The storage medium may include: a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc, or the like.
Since the instructions stored in the storage medium can execute the steps in any of the facial image recognition methods provided by the embodiments of the present invention, the beneficial effects achievable by any of the facial image recognition methods provided by the embodiments of the present invention can be realized; see the foregoing embodiments for details, which will not be repeated here.
The facial image recognition method, apparatus and storage medium provided by the embodiments of the present invention have been described in detail above. Specific examples are used herein to illustrate the principles and implementations of the present invention, and the description of the above embodiments is merely intended to help understand the method of the present invention and its core ideas. Meanwhile, for those skilled in the art, there will be changes in the specific implementation and application scope according to the ideas of the present invention. In conclusion, the content of this specification should not be understood as a limitation of the present invention.

Claims (15)

1. A facial image recognition method, characterized by comprising:
acquiring multiple frames of facial images, and determining a face region to be identified according to the multiple frames of facial images;
performing pose detection on a face in each frame of facial image according to the face region, to obtain an attitude parameter;
obtaining a movement speed of the facial image between every two adjacent frames in the multiple frames of facial images, and determining a clarity corresponding to each frame of facial image according to the movement speed;
screening out, from the multiple frames of facial images according to the attitude parameter and the clarity, a facial image that meets a preset condition, to obtain a target facial image;
performing face recognition on the target facial image.
2. The facial image recognition method according to claim 1, characterized in that the step of determining the face region to be identified according to the multiple frames of facial images comprises:
obtaining a degree of overlap of the face region between every two adjacent frames of facial images in the multiple frames of facial images, to obtain multiple degrees of overlap;
screening out, from the multiple frames of facial images according to the multiple degrees of overlap, the face region corresponding to the highest degree of overlap, to obtain the face region to be identified.
3. The facial image recognition method according to claim 2, characterized in that the step of obtaining the degree of overlap of the face region between every two adjacent frames of facial images in the multiple frames of images, to obtain multiple degrees of overlap, comprises:
obtaining an intersection area of the face region between every two adjacent frames of facial images in the multiple frames of facial images;
obtaining a union area of the face region between every two adjacent frames of facial images in the multiple frames of facial images;
calculating the degree of overlap of the face region between every two adjacent frames of facial images according to the intersection area and the union area, to obtain the multiple degrees of overlap.
4. The facial image recognition method according to claim 1, characterized in that the step of performing pose detection on the face in each frame of facial image according to the face region, to obtain the attitude parameter, comprises:
obtaining an area score value of the face region in each frame of facial image;
screening out, from the multiple frames of facial images, the facial image whose area score value is greater than a preset value, to obtain screened facial images;
performing pose detection on the face in the screened facial images, to obtain the attitude parameter.
5. The facial image recognition method according to claim 4, characterized in that the step of obtaining the area score value of the face region in each frame of facial image comprises:
obtaining an area of the face region in each frame of facial image;
obtaining a first mapping relation between areas and area score values;
determining the area score value of the face region in each frame of facial image according to the first mapping relation.
6. The facial image recognition method according to claim 4, characterized in that the step of performing pose detection on the face in the screened facial images, to obtain the attitude parameter, comprises:
obtaining a deflection parameter of the face in the face region of each frame of facial image in the screened facial images;
obtaining a second mapping relation between deflection parameters and posture score values;
determining the posture score value of the face region in each frame of facial image according to the second mapping relation, and setting the posture score value as the attitude parameter.
7. The face image recognition method according to claim 6, wherein the step of obtaining the deflection parameter of the face in the face region of each frame of the screened face images comprises:
obtaining a first projective parameter of the face in the face region of each frame of the screened face images projected onto a two-dimensional plane;
obtaining a second projective parameter of a default face model projected onto the two-dimensional plane; and
obtaining the deflection parameter of the face in the face region according to the first projective parameter and the second projective parameter.
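Claim 7 compares the 2D projection of the detected face against the projection of a default (frontal) face model. One simple illustration of that idea, using assumed eye and nose landmarks rather than anything specified in the patent: for a frontal model the nose projects midway between the eyes, so the normalized horizontal offset of the nose yields an approximate yaw deflection.

```python
import math

def yaw_deflection(left_eye, right_eye, nose):
    """Approximate yaw (degrees) from 2D landmark positions.

    The frontal default model predicts the nose midway between the eyes;
    the normalized offset grows as the head turns, reaching ~0.5 of the
    eye span near a profile view."""
    eye_span = right_eye[0] - left_eye[0]
    if eye_span <= 0:
        raise ValueError("expected left_eye to the left of right_eye")
    midpoint = (left_eye[0] + right_eye[0]) / 2.0
    offset = (nose[0] - midpoint) / eye_span  # ~0 for frontal
    return math.degrees(math.asin(max(-1.0, min(1.0, 2.0 * offset))))
```

A production system would more likely fit a full 3D model projection (e.g. a PnP solve against model landmarks); this sketch only shows the projection-comparison principle of the claim.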
8. The face image recognition method according to claim 1, wherein the step of obtaining the movement speed of the face image between every two adjacent frames in the multiple frames of face images, and determining the clarity corresponding to each frame of face image according to the movement speed comprises:
obtaining the distance and the time interval between every two adjacent frames of face images in the multiple frames of face images;
calculating the movement speed of each frame of face image according to the distance and the time interval;
obtaining a third mapping relation between movement speed and clarity score; and
determining the clarity score of each frame of face image according to the third mapping relation, and determining the clarity corresponding to each frame of face image according to the clarity score.
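Claim 8 can be sketched in two steps: speed from the displacement of the face-region center between adjacent frames, then a speed-to-clarity mapping (the "third mapping relation") reflecting that faster motion means more motion blur. The speed thresholds below are illustrative assumptions.

```python
import math

def movement_speed(center_prev, center_curr, dt):
    """Speed of the face-region center between two frames, in pixels/second."""
    dx = center_curr[0] - center_prev[0]
    dy = center_curr[1] - center_prev[1]
    return math.hypot(dx, dy) / dt

def clarity_score(speed, full_clarity_speed=50.0, zero_clarity_speed=400.0):
    """Linearly decreasing clarity: 1.0 below the slow threshold,
    0.0 above the fast threshold."""
    if speed <= full_clarity_speed:
        return 1.0
    if speed >= zero_clarity_speed:
        return 0.0
    return 1.0 - (speed - full_clarity_speed) / (zero_clarity_speed - full_clarity_speed)
```

Any monotonically decreasing mapping would satisfy the claim; the linear ramp is just the simplest concrete instance.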
9. The face image recognition method according to any one of claims 1 to 8, wherein the step of selecting, from the multiple frames of face images according to the pose parameter and the clarity, the face images that satisfy a preset condition, to obtain a target face image comprises:
calculating a face quality score of each frame of face image according to the pose parameter and the clarity; and
selecting, from the multiple frames of face images, the face image whose face quality score is greater than a preset threshold, to obtain the target face image.
10. The face image recognition method according to claim 9, wherein the step of calculating the face quality score of each frame of face image according to the pose parameter and the clarity comprises:
determining the area score, the pose score and the clarity score of each frame of face image according to the pose parameter and the clarity;
setting corresponding weights for the area score, the pose score and the clarity score respectively; and
calculating the face quality score corresponding to each frame of face image according to the area score, the pose score, the clarity score and their corresponding weights.
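Claim 10 amounts to a weighted sum of the three per-frame sub-scores. A minimal sketch, with weights that are illustrative assumptions (the patent does not fix their values):

```python
def face_quality(area_s, pose_s, clarity_s, weights=(0.3, 0.4, 0.3)):
    """Face quality score as a weighted combination of the area, pose and
    clarity sub-scores, each assumed to lie in [0, 1]."""
    w_area, w_pose, w_clarity = weights
    return w_area * area_s + w_pose * pose_s + w_clarity * clarity_s
```

If the weights sum to 1 and each sub-score lies in [0, 1], the quality score also lies in [0, 1], which makes the preset threshold of claim 9 easy to interpret.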
11. The face image recognition method according to claim 9, wherein after the step of calculating the face quality score of each frame of face image according to the pose parameter and the clarity, the method further comprises:
when no face quality score is greater than the preset threshold, selecting, from the multiple frames of face images, the face image with the largest face quality score, to obtain the target face image.
12. The face image recognition method according to claim 9, wherein the step of selecting, from the multiple frames of face images, the face image whose face quality score is greater than the preset threshold, to obtain the target face image comprises:
selecting, from the face quality scores, the face quality scores greater than the preset threshold, to obtain candidate face quality scores;
when there are multiple candidate face quality scores, selecting one of the candidate face quality scores according to a preset algorithm, to obtain a target face quality score; and
determining, from the multiple frames of face images, the face image corresponding to the target face quality score, to obtain the target face image.
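Claims 9, 11 and 12 combine naturally: keep frames whose quality score clears the threshold, pick one candidate via a "preset algorithm" (here, taking the maximum, which is one plausible choice the patent leaves open), and fall back to the best frame overall when nothing clears the threshold. A sketch under those assumptions:

```python
def select_target(frames_with_scores, threshold=0.7):
    """Pick a target frame from (frame_id, quality_score) pairs.

    Frames above the threshold are candidates (claims 9 and 12); if none
    qualifies, fall back to the highest-scoring frame (claim 11)."""
    candidates = [fs for fs in frames_with_scores if fs[1] > threshold]
    pool = candidates if candidates else frames_with_scores
    return max(pool, key=lambda fs: fs[1])[0]
```

The threshold 0.7 is an illustrative value; the patent only requires some preset threshold.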
13. The face image recognition method according to claim 9, wherein the step of selecting, from the multiple frames of face images, the face image whose face quality score is greater than the preset threshold, to obtain the target face image comprises:
taking the first frame of face image in the multiple frames of face images as the current face image;
comparing the face quality score of the current face image with the preset threshold; and
if the face quality score of the current face image is less than the preset threshold, taking the second frame of face image in the multiple frames of face images as the current face image, and returning to the step of comparing the face quality score of the current face image with the preset threshold, until the face quality score of the current face image is greater than the preset threshold, to obtain the target face image.
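Claim 13 describes a sequential scan rather than a global selection: frames are checked in order and the first one whose quality score exceeds the threshold becomes the target. A minimal sketch (the `None` fallback is an assumption for the case, not addressed by the claim, where no frame ever qualifies):

```python
def first_above_threshold(frames_with_scores, threshold=0.7):
    """Return the id of the first frame whose score exceeds the threshold,
    scanning in frame order as claim 13 describes; None if none qualifies."""
    for frame_id, score in frames_with_scores:
        if score > threshold:
            return frame_id
    return None
```

Compared with the max-based selection of claim 12, this early-exit scan avoids scoring frames past the first acceptable one, which suits streaming video input.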
14. A face image recognition apparatus, comprising:
a determination unit, configured to acquire multiple frames of face images and determine a face region to be recognized according to the multiple frames of face images;
a detection unit, configured to perform pose detection on the face in each frame of face image according to the face region, to obtain a pose parameter;
an acquiring unit, configured to obtain the movement speed of the face image between every two adjacent frames in the multiple frames of face images, and determine the clarity corresponding to each frame of face image according to the movement speed;
a screening unit, configured to select, from the multiple frames of face images according to the pose parameter and the clarity, the face images that satisfy a preset condition, to obtain a target face image; and
a recognition unit, configured to perform face recognition on the target face image.
15. A storage medium storing a plurality of instructions, wherein the instructions are adapted to be loaded by a processor to perform the steps in the face image recognition method according to any one of claims 1 to 13.
CN201810750438.XA 2018-07-10 2018-07-10 Face image recognition method, device and storage medium Active CN109034013B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810750438.XA CN109034013B (en) 2018-07-10 2018-07-10 Face image recognition method, device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810750438.XA CN109034013B (en) 2018-07-10 2018-07-10 Face image recognition method, device and storage medium

Publications (2)

Publication Number Publication Date
CN109034013A true CN109034013A (en) 2018-12-18
CN109034013B CN109034013B (en) 2023-06-13

Family

ID=64641054

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810750438.XA Active CN109034013B (en) 2018-07-10 2018-07-10 Face image recognition method, device and storage medium

Country Status (1)

Country Link
CN (1) CN109034013B (en)


Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102799877A (en) * 2012-09-11 2012-11-28 上海中原电子技术工程有限公司 Method and system for screening face images
CN103942525A (en) * 2013-12-27 2014-07-23 高新兴科技集团股份有限公司 Real-time face optimal selection method based on video sequence
CN105550637A (en) * 2015-12-04 2016-05-04 小米科技有限责任公司 Contour point positioning method and contour point positioning device
CN106682619A (en) * 2016-12-28 2017-05-17 上海木爷机器人技术有限公司 Object tracking method and device
CN108062791A (en) * 2018-01-12 2018-05-22 北京奇虎科技有限公司 Method and apparatus for reconstructing a three-dimensional face model
CN108256459A (en) * 2018-01-10 2018-07-06 北京博睿视科技有限责任公司 Face recognition at security gates and automatic face database construction algorithm based on multi-camera fusion


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Jia Zhengfeng et al., "Image evaluation in piston ring defect detection experiments", Machine Design & Manufacturing Engineering, vol. 43, no. 10, 31 October 2014 (2014-10-31), pages 59-61 *

Cited By (26)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109871243A (en) * 2019-02-22 2019-06-11 苏州迈荣祥信息科技有限公司 Intelligent terminal multi-application software control method and system
CN109871243B (en) * 2019-02-22 2021-12-21 山东诺蓝信息科技有限公司 Intelligent terminal multi-application software control method and system
CN110008673A (en) * 2019-03-06 2019-07-12 阿里巴巴集团控股有限公司 Identity authentication method and apparatus based on face recognition
CN110008673B (en) * 2019-03-06 2022-02-18 创新先进技术有限公司 Identity authentication method and device based on face recognition
CN110232323A (en) * 2019-05-13 2019-09-13 特斯联(北京)科技有限公司 Parallel rapid multi-face recognition method and apparatus for crowds
CN110263680A (en) * 2019-06-03 2019-09-20 北京旷视科技有限公司 Image processing method, device and system and storage medium
CN110263680B (en) * 2019-06-03 2022-01-28 北京旷视科技有限公司 Image processing method, device and system and storage medium
CN110363126A (en) * 2019-07-04 2019-10-22 杭州视洞科技有限公司 Real-time multi-face tracking and optimal face selection method
CN112307817A (en) * 2019-07-29 2021-02-02 中国移动通信集团浙江有限公司 Face living body detection method and device, computing equipment and computer storage medium
CN112307817B (en) * 2019-07-29 2024-03-19 中国移动通信集团浙江有限公司 Face living body detection method, device, computing equipment and computer storage medium
CN110472567A (en) * 2019-08-14 2019-11-19 旭辉卓越健康信息科技有限公司 Face recognition method and system for non-cooperative scenarios
CN110532957A (en) * 2019-08-30 2019-12-03 北京市商汤科技开发有限公司 Face recognition method and device, electronic device and storage medium
CN110740256A (en) * 2019-09-27 2020-01-31 深圳市大拿科技有限公司 Doorbell camera cooperation method and related product
CN110740256B (en) * 2019-09-27 2021-07-20 深圳市海雀科技有限公司 Doorbell camera cooperation method and related product
CN110796108A (en) * 2019-11-04 2020-02-14 北京锐安科技有限公司 Method, device and equipment for detecting face quality and storage medium
CN110740315A (en) * 2019-11-07 2020-01-31 杭州宇泛智能科技有限公司 Camera correction method and device, electronic equipment and storage medium
CN110740315B (en) * 2019-11-07 2021-07-16 杭州宇泛智能科技有限公司 Camera correction method and device, electronic equipment and storage medium
CN113592874A (en) * 2020-04-30 2021-11-02 杭州海康威视数字技术股份有限公司 Image display method and device and computer equipment
CN111814613A (en) * 2020-06-24 2020-10-23 浙江大华技术股份有限公司 Face recognition method, face recognition equipment and computer readable storage medium
CN112070739A (en) * 2020-09-03 2020-12-11 Oppo广东移动通信有限公司 Image processing method, image processing device, electronic equipment and storage medium
CN112329638A (en) * 2020-11-06 2021-02-05 上海优扬新媒信息技术有限公司 Image scoring method, device and system
CN112560775A (en) * 2020-12-25 2021-03-26 深圳市商汤科技有限公司 Switch control method and device, computer equipment and storage medium
CN113283305A (en) * 2021-04-29 2021-08-20 百度在线网络技术(北京)有限公司 Face recognition method and device, electronic equipment and computer readable storage medium
CN113283305B (en) * 2021-04-29 2024-03-26 百度在线网络技术(北京)有限公司 Face recognition method, device, electronic equipment and computer readable storage medium
CN113297423A (en) * 2021-05-24 2021-08-24 深圳市优必选科技股份有限公司 Pushing method, pushing device and electronic equipment
CN113313009A (en) * 2021-05-26 2021-08-27 Oppo广东移动通信有限公司 Method, device and terminal for continuously shooting output image and readable storage medium

Also Published As

Publication number Publication date
CN109034013B (en) 2023-06-13

Similar Documents

Publication Publication Date Title
CN109034013A (en) A kind of facial image recognition method, device and storage medium
USRE45768E1 (en) Method and system for enhancing three dimensional face modeling using demographic classification
CN106030661B Field-of-view-independent 3D scene texture background
DE602004005984T2 (en) FACE IDENTIFICATION VERIFICATION USING FRONT AND SIDE VIEWS
CN105518582B Living body detection method and device
CN108475433A (en) Method and system for determining RGBD camera postures on a large scale
Boulay et al. Applying 3d human model in a posture recognition system
CN109461003A Multi-view-based risk prevention and control method and device for face-scan payment in multi-face scenes
Ancheta et al. FEDSecurity: implementation of computer vision thru face and eye detection
CN108090422A Hairstyle recommendation method, smart mirror and storage medium
WO2022237026A1 (en) Plane information detection method and system
CN109409962A (en) Image processing method, device, electronic equipment, computer readable storage medium
CN109948439A Living body detection method, system and terminal device
US10791321B2 (en) Constructing a user's face model using particle filters
Makris et al. Robust 3d human pose estimation guided by filtered subsets of body keypoints
Boulay et al. Posture recognition with a 3d human model
Ba et al. Probabilistic head pose tracking evaluation in single and multiple camera setups
Leelasawassuk et al. 3D from looking: Using wearable gaze tracking for hands-free and feedback-free object modelling
Zhao Camera planning and fusion in a heterogeneous camera network
Swadzba et al. Categorizing perceptions of indoor rooms using 3D features
Farenzena et al. Towards a subject-centered analysis for automated video surveillance
CN111277745B (en) Target person tracking method and device, electronic equipment and readable storage medium
CN107403133A Determination device and determination method
Lanz et al. Joint bayesian tracking of head location and pose from low-resolution video
Montenegro et al. Space carving with a hand-held camera

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant