CN109241908A - Face identification method and relevant apparatus - Google Patents

Face identification method and relevant apparatus

Info

Publication number
CN109241908A
CN109241908A
Authority
CN
China
Prior art keywords
target
image
facial image
face
obtains
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201811026785.4A
Other languages
Chinese (zh)
Inventor
付妍文
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Yu Mo Technology Co Ltd
Original Assignee
Shenzhen Yu Mo Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Yu Mo Technology Co Ltd filed Critical Shenzhen Yu Mo Technology Co Ltd
Priority to CN201811026785.4A
Publication of CN109241908A
Legal status: Pending


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 - Detection; Localisation; Normalisation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00 - Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/30 - Authentication, i.e. establishing the identity or authorisation of security principals
    • G06F21/31 - User authentication
    • G06F21/32 - User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 - Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75 - Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/751 - Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Security & Cryptography (AREA)
  • Multimedia (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Computation (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Human Computer Interaction (AREA)
  • Medical Informatics (AREA)
  • Artificial Intelligence (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

Embodiments of the present application provide a face recognition method and a related apparatus. The method includes: in a night-vision environment, obtaining an infrared face image of a target user through an infrared camera; analyzing the infrared face image to obtain a target face region and a target environmental parameter; determining a focusing position according to the target face region, and determining target shooting parameters according to the target environmental parameter; controlling a visible-light camera to shoot according to the focusing position and the target shooting parameters, to obtain a target face image; matching the target face image against a preset face template; and performing an unlock operation when the target face image successfully matches the preset face template. By adjusting the shooting parameters of the visible-light camera according to the infrared camera, the embodiments of the present application obtain a visible-light face image of better quality, which helps improve the face recognition rate.

Description

Face identification method and relevant apparatus
Technical field
This application relates to the technical field of image processing, and in particular to a face recognition method and a related apparatus.
Background technique
With the rapid development of electronic technology, electronic devices (such as mobile phones and tablet computers) have increasingly permeated users' lives, bringing convenience to both life and work. Face recognition technology in particular has attracted growing attention from industry, for example in face unlocking and face payment; face recognition has become a part of everyday life. However, in complex environments (for example, night-vision or over-exposed environments), face recognition efficiency drops. How to improve the face recognition rate in a night-vision environment is therefore a problem to be solved urgently.
Summary of the invention
Embodiments of the present application provide a face recognition method and a related apparatus, which can improve the face recognition rate in a night-vision environment.
In a first aspect, an embodiment of the present application provides a face recognition method, comprising:
in a night-vision environment, obtaining an infrared face image of a target user through an infrared camera;
analyzing the infrared face image to obtain a target face region and a target environmental parameter;
determining a focusing position according to the target face region, and determining target shooting parameters according to the target environmental parameter;
controlling a visible-light camera to shoot according to the focusing position and the target shooting parameters, to obtain a target face image;
matching the target face image against a preset face template;
performing an unlock operation when the target face image successfully matches the preset face template.
In a second aspect, an embodiment of the present application provides a face recognition apparatus, comprising:
an acquiring unit, configured to obtain an infrared face image of a target user through an infrared camera in a night-vision environment;
an analyzing unit, configured to analyze the infrared face image to obtain a target face region and a target environmental parameter;
a determining unit, configured to determine a focusing position according to the target face region, and determine target shooting parameters according to the target environmental parameter;
a shooting unit, configured to control a visible-light camera to shoot according to the focusing position and the target shooting parameters, to obtain a target face image;
a matching unit, configured to match the target face image against a preset face template;
an unlocking unit, configured to perform an unlock operation when the target face image successfully matches the preset face template.
In a third aspect, an embodiment of the present application provides a face recognition apparatus, including a processor, a memory, and one or more programs, where the one or more programs are stored in the memory and configured to be executed by the processor, and the programs include instructions for performing the steps of the first aspect of the embodiments of the present application.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium, where the computer-readable storage medium stores a computer program for electronic data interchange, and the computer program causes a computer to perform some or all of the steps described in the first aspect of the embodiments of the present application.
In a fifth aspect, an embodiment of the present application provides a computer program product, where the computer program product includes a non-transitory computer-readable storage medium storing a computer program, and the computer program is operable to cause a computer to perform some or all of the steps described in the first aspect of the embodiments of the present application. The computer program product may be a software installation package.
Implementing the embodiments of the present application has the following beneficial effects:
With the face recognition method and related apparatus described in the embodiments of the present application, in a night-vision environment, an infrared face image of a target user is obtained through an infrared camera; the infrared face image is analyzed to obtain a target face region and a target environmental parameter; a focusing position is determined according to the target face region, and target shooting parameters are determined according to the target environmental parameter; a visible-light camera is controlled to shoot according to the focusing position and the target shooting parameters, to obtain a target face image; the target face image is matched against a preset face template; and an unlock operation is performed when the target face image successfully matches the preset face template. In this way, the shooting parameters of the visible-light camera are adjusted according to the infrared camera, a visible-light face image of better quality is obtained, and the face recognition rate is improved.
Detailed description of the invention
To describe the technical solutions in the embodiments of the present application more clearly, the accompanying drawings required for describing the embodiments are briefly introduced below. Apparently, the accompanying drawings in the following description show only some embodiments of the present application, and a person of ordinary skill in the art may derive other drawings from these drawings without creative effort.
Fig. 1 is a schematic flowchart of an embodiment of a face recognition method provided by an embodiment of the present application;
Fig. 2 is a schematic flowchart of another embodiment of a face recognition method provided by an embodiment of the present application;
Fig. 3A is a schematic structural diagram of an embodiment of a face recognition apparatus provided by an embodiment of the present application;
Fig. 3B is a schematic structural diagram of another embodiment of a face recognition apparatus provided by an embodiment of the present application;
Fig. 4 is a schematic structural diagram of an embodiment of an electronic device provided by an embodiment of the present application;
Fig. 5 is a schematic structural diagram of an embodiment of an electronic device provided by an embodiment of the present application.
Specific embodiment
The technical solutions in the embodiments of the present application are described below clearly and completely with reference to the accompanying drawings in the embodiments of the present application. Apparently, the described embodiments are some rather than all of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present application without creative effort shall fall within the protection scope of the present application.
The terms "first", "second", "third", "fourth", and so on in the specification, claims, and accompanying drawings of the present application are used to distinguish different objects, not to describe a particular order. In addition, the terms "include" and "have" and any variations thereof are intended to cover non-exclusive inclusion. For example, a process, method, system, product, or device that includes a series of steps or units is not limited to the listed steps or units, but optionally further includes steps or units that are not listed, or optionally further includes other steps or units inherent to the process, method, product, or device.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the present application. The appearances of the phrase in various places in the specification do not necessarily all refer to the same embodiment, nor are they separate or alternative embodiments mutually exclusive with other embodiments. It is explicitly and implicitly understood by those skilled in the art that the embodiments described herein may be combined with other embodiments.
The face recognition apparatus described in the embodiments of the present application may include a smartphone (such as an Android phone, an iOS phone, or a Windows Phone), a tablet computer, a palmtop computer, a laptop, a mobile Internet device (MID, Mobile Internet Devices), or a wearable device (for example, a Bluetooth headset, a VR device, or an IR device). The foregoing is merely an example rather than an exhaustive list, and the face recognition apparatus is not limited thereto.
Referring to Fig. 1, Fig. 1 is a schematic flowchart of an embodiment of a face recognition method provided by an embodiment of the present application. The face recognition method described in this embodiment comprises the following steps:
101. In a night-vision environment, obtain an infrared face image of a target user through an infrared camera.
In the embodiments of the present application, a night-vision environment refers to an environment whose ambient brightness is below a preset threshold; the preset threshold may be a system default or set by the user. In a night-vision environment, because ambient light is weak, the electronic device can obtain an infrared face image of the target user through an infrared camera. The infrared face image is an image that contains a face. Because the infrared camera images by temperature, imaging can still be achieved even in the absence of light.
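The night-vision decision described above can be sketched as a simple threshold test. The concrete threshold is left open by the application (system default or user-set), so the value below is only an assumed placeholder:

```python
def is_night_vision(ambient_lux, threshold_lux=10.0):
    """Return True when ambient brightness falls below the preset threshold.

    The application only states that the threshold may be a system default
    or user-configured; 10.0 lux here is a hypothetical placeholder.
    """
    return ambient_lux < threshold_lux
```

When this test passes, the device would switch to the infrared camera for the initial capture.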
102. Analyze the infrared face image to obtain a target face region and a target environmental parameter.
In the embodiments of the present application, the electronic device may analyze the infrared face image to obtain a target face region and a target environmental parameter, where the environmental parameter may be at least one of the following: ambient light brightness, ambient color temperature, ambient temperature, ambient humidity, ambient electromagnetic interference strength, and so on, which is not limited herein.
Optionally, in step 102, analyzing the infrared face image to obtain the target face region and the target environmental parameter may include the following steps:
21. Perform image segmentation on the infrared face image to obtain a face region;
22. Obtain a target contrast of the face region;
23. Determine, according to a preset mapping relationship between contrast and environmental parameters, a target environmental parameter corresponding to the target contrast.
The electronic device may perform image segmentation on the infrared face image to obtain a face region. Specifically, the face segmentation algorithm may be a region-based image segmentation algorithm or an edge-based image segmentation algorithm, which is not limited herein. The electronic device may also calculate the contrast of the face region to obtain the target contrast. Different contrasts correspond to different environmental parameters; therefore, a mapping relationship between contrast and environmental parameters may be stored in advance in the electronic device, and the target environmental parameter corresponding to the target contrast is determined according to that mapping relationship. In this way, the environmental parameter can be obtained by analyzing the infrared face image.
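Steps 21 to 23 can be sketched as a contrast computation followed by a table lookup. Both the contrast measure (Michelson contrast here) and the mapping entries are assumptions, since the application does not fix them:

```python
def face_contrast(region):
    """Michelson contrast of a grayscale face region (nested list of pixel rows)."""
    pixels = [p for row in region for p in row]
    hi, lo = max(pixels), min(pixels)
    return (hi - lo) / (hi + lo) if (hi + lo) else 0.0

# Hypothetical preset mapping: (contrast upper bound, ambient brightness in lux).
CONTRAST_TO_ENV = [(0.2, 1.0), (0.5, 5.0), (1.01, 20.0)]

def env_param_from_contrast(contrast):
    """Look up the environmental parameter for a measured contrast value."""
    for upper_bound, env_value in CONTRAST_TO_ENV:
        if contrast < upper_bound:
            return env_value
    return CONTRAST_TO_ENV[-1][1]
```

A real implementation would populate the mapping table during calibration rather than hard-code it.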
103. Determine a focusing position according to the target face region, and determine target shooting parameters according to the target environmental parameter.
The electronic device may determine the focusing position according to the target face region; for example, the geometric center of the target face region may be used as the focusing position. The shooting parameters may include at least one of the following: focal length, exposure time, sensitivity, aperture size, fill-light parameters of a fill light, and so on, which is not limited herein. In the embodiments of the present application, the electronic device may further include a fill light, and the fill-light parameters may be at least one of the following: operating current, operating voltage, operating power, fill-light duration, fill-light direction, and so on, which is not limited herein. A mapping relationship between environmental parameters and shooting parameters may be stored in advance in the electronic device; the target shooting parameters corresponding to the target environmental parameter may then be determined according to that mapping relationship, and shooting is performed according to the target shooting parameters. An example of the mapping relationship between environmental parameters and shooting parameters is provided as follows:
Environmental parameter      Shooting parameters
Environmental parameter 1    Shooting parameters 1
Environmental parameter 2    Shooting parameters 2
...                          ...
Environmental parameter n    Shooting parameters n
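A minimal sketch of such a preset mapping table, with entirely hypothetical keys and values (the application leaves the concrete parameters unspecified):

```python
# Hypothetical preset mapping from environmental parameter to shooting
# parameters; real entries would come from device calibration.
ENV_TO_SHOOTING = {
    "low_light": {"exposure_ms": 120, "iso": 1600, "fill_light": True},
    "dim_light": {"exposure_ms": 60, "iso": 800, "fill_light": True},
    "normal":    {"exposure_ms": 20, "iso": 200, "fill_light": False},
}

def target_shooting_params(env_key):
    """Look up the target shooting parameters for an environmental parameter."""
    return ENV_TO_SHOOTING[env_key]
```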
104. Control a visible-light camera to shoot according to the focusing position and the target shooting parameters, to obtain a target face image.
The electronic device may control the visible-light camera to shoot according to the focusing position and the target shooting parameters, to obtain the target face image. In the embodiments of the present application, the electronic device may include an infrared camera and a visible-light camera, and the infrared camera may be registered with the visible-light camera, i.e., the infrared camera and the visible-light camera share the same field of view.
105. Match the target face image against a preset face template.
The electronic device may match the target face image against the preset face template. When the two match successfully, the unlock operation may be performed; if the unlock fails, the face image may be re-acquired.
Optionally, in step 105, matching the target face image against the preset face template may include the following steps:
51. Perform image quality evaluation on the target face image to obtain a target image quality evaluation value;
52. Determine, according to a preset mapping relationship between image quality evaluation values and matching thresholds, a target matching threshold corresponding to the target image quality evaluation value;
53. Perform contour extraction on the target face image to obtain a first contour;
54. Perform feature point extraction on the target face image to obtain a first feature point set;
55. Match the first contour against a second contour of the preset face template to obtain a first matching value;
56. Match the first feature point set against a second feature point set of the preset face template to obtain a second matching value;
57. Determine a target matching value according to the first matching value and the second matching value;
58. When the target matching value is greater than the target matching threshold, confirm that the target face image successfully matches the preset face template.
A preset face template may be stored in advance in the electronic device. In the face recognition process, success largely depends on the quality of the face image; therefore, the embodiments of the present application adopt a dynamic matching threshold: when image quality is high, the matching threshold can be raised, and when image quality is poor, the matching threshold can be lowered. Because an image shot in a night-vision environment does not necessarily have good quality, the matching threshold can be adjusted appropriately. A mapping relationship between image quality evaluation values and matching thresholds may be stored in the electronic device, and the target matching threshold corresponding to the target image quality evaluation value is determined according to that mapping relationship. On this basis, the electronic device may perform contour extraction on the target face image to obtain the first contour, and perform feature point extraction on the target face image to obtain the first feature point set; match the first contour against the second contour of the preset face template to obtain the first matching value; and match the first feature point set against the second feature point set of the preset face template to obtain the second matching value. The target matching value is then determined according to the first matching value and the second matching value. For example, a mapping relationship between environmental parameters and weight-value pairs may be stored in advance in the electronic device, yielding a first weight coefficient corresponding to the first matching value and a second weight coefficient corresponding to the second matching value, so that target matching value = first matching value * first weight coefficient + second matching value * second weight coefficient. Finally, when the target matching value is greater than the target matching threshold, it is confirmed that the target face image successfully matches the preset face template; otherwise, face recognition fails. Dynamically adjusting the face matching process for the specific environment in this way helps improve face recognition efficiency.
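The weighted combination of the two matching values against a dynamic threshold can be sketched as follows; the weight coefficients are hypothetical, since the application derives them from environmental parameters:

```python
def target_matching_value(contour_match, feature_match,
                          w_contour=0.4, w_feature=0.6):
    """Combine contour and feature-point matching values with weight
    coefficients (hypothetical values; the application maps environmental
    parameters to weight pairs)."""
    return contour_match * w_contour + feature_match * w_feature

def is_match(contour_match, feature_match, threshold):
    """Step 58: the match succeeds when the target matching value exceeds
    the dynamically chosen target matching threshold."""
    return target_matching_value(contour_match, feature_match) > threshold
```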
In addition, the contour extraction algorithm may be at least one of the following: Hough transform, Canny operator, and so on, which is not limited herein; the feature point extraction algorithm may be at least one of the following: Harris corner detection, scale-invariant feature transform (SIFT), and so on, which is not limited herein.
Optionally, in step 51, performing image quality evaluation on the target face image to obtain the target image quality evaluation value may be implemented as follows:
Perform image quality evaluation on the target face image using at least one image quality evaluation index to obtain the target image quality evaluation value.
The image quality evaluation indexes may include, but are not limited to: mean grayscale, mean square deviation, entropy, edge preservation, signal-to-noise ratio, and so on. It may be defined that the larger the image quality evaluation value, the better the image quality.
It should be noted that, because evaluating image quality with a single evaluation index has certain limitations, multiple image quality evaluation indexes may be used. Of course, when evaluating image quality, more indexes are not always better: the more indexes are used, the higher the computational complexity of the evaluation process, and the evaluation effect is not necessarily better. Therefore, in situations with high requirements on image quality evaluation, 2 to 10 image quality evaluation indexes may be used. The specific number of indexes and which indexes are chosen depend on the specific implementation. The indexes may also be selected according to the specific scene: the image quality indexes selected for evaluation in a dark environment and in a bright environment may differ.
Optionally, when the accuracy requirement on image quality evaluation is not high, a single image quality evaluation index may be used, for example, evaluating the image to be processed by its entropy: the larger the entropy, the better the image quality is considered to be; conversely, the smaller the entropy, the poorer the image quality.
Optionally, when the accuracy requirement on image quality evaluation is high, multiple image quality evaluation indexes may be used to evaluate the image. In that case, a weight may be set for each index, yielding multiple image quality evaluation values, and a final image quality evaluation value may be obtained from these values and their corresponding weights. For example, suppose there are three image quality evaluation indexes A, B, and C, whose weights are a1, a2, and a3 respectively. If evaluating an image with A, B, and C yields image quality evaluation values b1, b2, and b3 respectively, then the final image quality evaluation value = a1*b1 + a2*b2 + a3*b3. In general, the larger the image quality evaluation value, the better the image quality.
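The weighted evaluation in the worked example (final value = a1*b1 + a2*b2 + a3*b3) can be expressed directly:

```python
def quality_score(index_values, weights):
    """Weighted sum of per-index image quality evaluation values, matching
    the worked example a1*b1 + a2*b2 + a3*b3. Weights are assumed to be
    normalized to sum to 1."""
    assert abs(sum(weights) - 1.0) < 1e-9, "weights should sum to 1"
    return sum(w * v for w, v in zip(weights, index_values))
```

With hypothetical weights (0.5, 0.3, 0.2) and per-index values (0.9, 0.8, 0.7), the score is 0.45 + 0.24 + 0.14 = 0.83.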
106. Perform an unlock operation when the target face image successfully matches the preset face template.
The unlock operation may be at least one of the following: entering the home page, entering a specified page, launching a specified application, a payment operation, and so on, which is not limited herein. When the target face image successfully matches the preset face template, the electronic device performs the unlock operation; otherwise, it prompts that the unlock has failed.
Between step 104 and step 105, the method may further include the following steps:
A1. Determine a feature point distribution density of the target face image;
A2. When the feature point distribution density is less than a preset density threshold, perform image fusion on the target face image and the infrared face image to obtain a fused face image.
In this case, step 105 of matching the target face image against the preset face template may be implemented as follows:
Match the fused face image against the preset face template.
The electronic device may perform feature point extraction on the target face image to obtain a feature point set, and calculate the total number of feature points in the feature point set and the face area of the target face image, where feature point distribution density = total number of feature points / face area. The preset density threshold may be set by the user or be a system default. When the feature point distribution density is less than the preset density threshold, the target face image has few features and poor image quality; the target face image may then be fused with the infrared face image to obtain a fused face image, which is matched against the preset face template.
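Steps A1 and A2 reduce to a density computation and a threshold test. The density threshold below is a hypothetical placeholder (user-set or system default per the description above):

```python
def feature_density(num_keypoints, face_area_px):
    """Feature point distribution density = total feature points / face area."""
    return num_keypoints / face_area_px

def should_fuse(num_keypoints, face_area_px, density_threshold=0.002):
    """Return True when the visible-light face image is too feature-poor and
    should be fused with the infrared image; the threshold is hypothetical."""
    return feature_density(num_keypoints, face_area_px) < density_threshold
```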
Optionally, in step A2, performing image fusion on the target face image and the infrared face image to obtain the fused face image may include the following steps:
A21. Scale the infrared face image to obtain a target infrared face image whose size is consistent with the size of the target face image;
A22. Perform multiscale decomposition on the target infrared face image to obtain a first low-frequency component image and a first high-frequency component image;
A23. Perform the multiscale decomposition on the target face image to obtain a second low-frequency component image and a second high-frequency component image;
A24. Determine, according to a preset mapping relationship between environmental parameters and first weights, a first target weight value corresponding to the target infrared face image;
A25. Calculate a second target weight value corresponding to the target face image according to the first target weight value;
A26. Perform a weighted operation on the first low-frequency component image and the second low-frequency component image using the first target weight value and the second target weight value, to obtain a target low-frequency component image;
A27. Perform an operation on the first high-frequency component image and the second high-frequency component image according to the rule of taking the larger absolute value, to obtain a target high-frequency component image;
A28. Perform, on the target low-frequency component image and the target high-frequency component image, the inverse transform corresponding to the multiscale decomposition, to obtain the fused face image.
Wherein, since the infrared face image may differ in size from the target face image, the infrared face image may be scaled to obtain a target infrared face image whose size is consistent with that of the target face image. The multi-scale decomposition may use at least one of the following algorithms: contourlet transform, non-subsampled contourlet transform, shearlet transform, pyramid transform, and so on, which is not limited here. In a specific implementation, the electronic device may perform multi-scale decomposition on the target infrared face image to obtain a first low-frequency component image and a first high-frequency component image, and perform the same multi-scale decomposition on the target face image to obtain a second low-frequency component image and a second high-frequency component image. In addition, the electronic device may store in advance a mapping between preset environment parameters and a first weight, and then determine, according to this mapping, the first target weight value corresponding to the target infrared face image; the second target weight value corresponding to the target face image is calculated from the first target weight value as: second target weight value = 1 - first target weight value. The fused low-frequency component is then: target low-frequency component image = first low-frequency component image × first target weight value + second low-frequency component image × second target weight value. For the target high-frequency component image, the first high-frequency component image and the second high-frequency component image may be combined by the take-the-larger-absolute-value rule. Finally, the inverse transform corresponding to the multi-scale decomposition is applied to the target low-frequency component image and the target high-frequency component image to obtain the fused face image.
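The weighted low-frequency / max-absolute high-frequency fusion described above can be sketched as follows. This is a minimal single-level illustration using a plain box filter as the low-pass step; the patent's actual decompositions (contourlet, shearlet, pyramid, etc.), the weight mapping, and the filter size here are placeholder assumptions, not the claimed implementation.

```python
import numpy as np

def box_blur(img, k=5):
    """Separable box filter: a simple stand-in for the decomposition's low-pass step."""
    pad = k // 2
    kernel = np.ones(k) / k
    padded = np.pad(img, pad, mode="edge")
    # horizontal then vertical running mean
    tmp = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="valid"), 1, padded)
    return np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="valid"), 0, tmp)

def fuse(ir_img, vis_img, w_ir):
    """Single-level fusion of same-sized infrared and visible-light face images."""
    lo_ir, lo_vis = box_blur(ir_img), box_blur(vis_img)
    hi_ir, hi_vis = ir_img - lo_ir, vis_img - lo_vis
    # weighted low-frequency average; the second weight is 1 - first weight
    lo = w_ir * lo_ir + (1.0 - w_ir) * lo_vis
    # take-the-larger-absolute-value rule for the high-frequency component
    hi = np.where(np.abs(hi_ir) >= np.abs(hi_vis), hi_ir, hi_vis)
    return lo + hi  # inverse of this decomposition is simple addition
```

With a genuine multi-scale transform, the same weighting/selection rules would be applied per decomposition level before the inverse transform.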
With the face identification method described in this embodiment of the present application, an infrared face image of a target user is acquired by an infrared camera in a night-vision environment; the infrared face image is analyzed to obtain a target face region and a target environment parameter; a focusing position is determined according to the target face region, and target shooting parameters are determined according to the target environment parameter; a visible-light camera is controlled to shoot according to the focusing position and the target shooting parameters, obtaining a target face image; the target face image is matched against a preset face template; and when the target face image successfully matches the preset face template, an unlock operation is performed. In this way, the shooting parameters of the visible-light camera are adjusted according to the infrared camera, a visible-light face image of better quality is obtained, and the face recognition rate is improved.
Consistent with the above, referring to Fig. 2, it is a flow diagram of a second embodiment of the face identification method provided by the embodiments of the present application. The face identification method described in this embodiment comprises the following steps:
201, in a night-vision environment, acquiring an infrared face image of a target user by an infrared camera.
202, analyzing the infrared face image to obtain a target face region and a target environment parameter.
203, determining a focusing position according to the target face region, and determining target shooting parameters according to the target environment parameter.
204, controlling a visible-light camera to shoot according to the focusing position and the target shooting parameters, obtaining a target face image.
205, determining the feature-point distribution density of the target face image.
206, when the feature-point distribution density is less than a preset density threshold, performing image fusion on the target face image and the infrared face image to obtain a fused face image.
207, matching the fused face image against a preset face template.
208, when the fused face image successfully matches the preset face template, performing an unlock operation.
Wherein, for the specific description of steps 201-208, refer to the corresponding description of the face identification method in Fig. 1; it is not repeated here.
With the face identification method described in this embodiment of the present application, an infrared face image of a target user is acquired by an infrared camera in a night-vision environment; the infrared face image is analyzed to obtain a target face region and a target environment parameter; a focusing position is determined according to the target face region, and target shooting parameters are determined according to the target environment parameter; a visible-light camera is controlled to shoot according to the focusing position and the target shooting parameters, obtaining a target face image; the feature-point distribution density of the target face image is determined; when the feature-point distribution density is less than a preset density threshold, the target face image and the infrared face image are fused to obtain a fused face image; the fused face image is matched against a preset face template; and when the fused face image successfully matches the preset face template, an unlock operation is performed. In this way, the shooting parameters of the visible-light camera are adjusted according to the infrared camera, a visible-light face image of better quality is obtained, and the face recognition rate is improved.
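Steps 205-206 gate the fusion on how feature-rich the visible-light image is. A hedged sketch of that decision follows; the density definition (feature points per pixel of the face region) and the threshold value are invented for illustration, since the patent specifies neither.

```python
def feature_point_density(num_points: int, face_area_px: int) -> float:
    """Assumed density measure: feature points per pixel of the detected face region."""
    return num_points / face_area_px

def should_fuse(num_points: int, face_area_px: int,
                density_threshold: float = 0.002) -> bool:
    """Fuse visible and infrared images only when the visible image is feature-poor."""
    return feature_point_density(num_points, face_area_px) < density_threshold
```

When `should_fuse` returns False, the target face image would be matched against the template directly, skipping the fusion step.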
Consistent with the above, the following describes a device for implementing the above face identification method, specifically as follows:
Referring to Fig. 3A, it is a schematic structural diagram of an embodiment of a face identification device provided by the embodiments of the present application. The face identification device described in this embodiment comprises: an acquiring unit 301, an analyzing unit 302, a determining unit 303, a shooting unit 304, a matching unit 305 and an unlocking unit 306, specifically as follows:
the acquiring unit 301 is configured to acquire, in a night-vision environment, an infrared face image of a target user by an infrared camera;
the analyzing unit 302 is configured to analyze the infrared face image to obtain a target face region and a target environment parameter;
the determining unit 303 is configured to determine a focusing position according to the target face region, and determine target shooting parameters according to the target environment parameter;
the shooting unit 304 is configured to control a visible-light camera to shoot according to the focusing position and the target shooting parameters, obtaining a target face image;
the matching unit 305 is configured to match the target face image against a preset face template;
the unlocking unit 306 is configured to perform an unlock operation when the target face image successfully matches the preset face template.
Optionally, in the aspect of analyzing the infrared face image to obtain the target face region and the target environment parameter, the analyzing unit 302 is specifically configured to:
perform image segmentation on the infrared face image to obtain a face region;
obtain a target contrast of the face region;
determine, according to a preset mapping between contrast and environment parameters, the target environment parameter corresponding to the target contrast.
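The preset contrast-to-environment-parameter mapping could be held as a simple banded lookup table, as sketched below. The band boundaries and parameter values are invented placeholders; the patent only requires that some preset mapping exist.

```python
# Contrast band upper bounds -> hypothetical shooting-environment parameters.
CONTRAST_TO_ENV = [
    (0.2, {"iso": 1600, "exposure_ev": 2.0}),   # very low contrast: very dark scene
    (0.5, {"iso": 800,  "exposure_ev": 1.0}),
    (0.8, {"iso": 400,  "exposure_ev": 0.5}),
    (1.0, {"iso": 200,  "exposure_ev": 0.0}),
]

def env_params_for_contrast(contrast: float) -> dict:
    """Return the parameters of the first band whose upper bound exceeds the contrast."""
    for upper_bound, params in CONTRAST_TO_ENV:
        if contrast < upper_bound:
            return params
    return CONTRAST_TO_ENV[-1][1]  # clamp to the brightest band
```

The returned parameters would then feed into the target shooting parameters for the visible-light camera.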
Optionally, in the aspect of matching the target face image against the preset face template, the matching unit 305 is specifically configured to:
perform image quality evaluation on the target face image to obtain a target image quality evaluation value;
determine, according to a preset mapping between image quality evaluation values and matching thresholds, the target matching threshold corresponding to the target image quality evaluation value;
perform contour extraction on the target face image to obtain a first peripheral contour;
perform feature point extraction on the target face image to obtain a first feature point set;
match the first peripheral contour against a second peripheral contour of the preset face template to obtain a first matching value;
match the first feature point set against a second feature point set of the preset face template to obtain a second matching value;
determine a target matching value according to the first matching value and the second matching value;
when the target matching value is greater than the target matching threshold, confirm that the target face image successfully matches the preset face template.
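The quality-dependent threshold and the combination of the two partial matching values can be sketched as follows. The 0.4/0.6 weights and the quality-to-threshold bands are assumptions made for the sketch; the patent leaves both mappings unspecified.

```python
# Hypothetical image-quality-score -> matching-threshold mapping (ascending bands).
QUALITY_TO_THRESHOLD = [(0.3, 0.55), (0.7, 0.65), (1.0, 0.75)]

def matching_threshold(quality_score: float) -> float:
    """Pick the threshold of the first quality band containing the score."""
    for upper_bound, threshold in QUALITY_TO_THRESHOLD:
        if quality_score <= upper_bound:
            return threshold
    return QUALITY_TO_THRESHOLD[-1][1]

def target_matching_value(contour_match: float, feature_match: float,
                          w_contour: float = 0.4) -> float:
    """Weighted combination of the contour and feature-point matching values."""
    return w_contour * contour_match + (1.0 - w_contour) * feature_match

def is_template_match(contour_match: float, feature_match: float,
                      quality_score: float) -> bool:
    """Match succeeds when the combined value exceeds the quality-dependent threshold."""
    return target_matching_value(contour_match, feature_match) > matching_threshold(quality_score)
```

Raising the threshold for high-quality captures, as here, reflects the idea that a sharp image should be held to a stricter match than a degraded one.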
Optionally, such as Fig. 3 B, Fig. 3 B is the another modification structures of face identification device described in Fig. 3 A, with Fig. 3 A phase Compare, can also include: image fusion unit 307, specific as follows:
The determination unit 303, also particularly useful for the characteristic point distribution density of the determination target facial image;
Described image integrated unit 307 is used for when the characteristic point distribution density is less than pre-set density threshold value, will be described Target facial image and the infrared face image carry out image co-registration, obtain fusion facial image;
It is described the target facial image is matched with default face template in terms of, the matching unit 305 has Body is used for:
The fusion facial image is matched with the default face template.
Optionally, in the aspect of performing image fusion on the target face image and the infrared face image to obtain the fused face image, the matching unit 305 is specifically configured to:
scale the infrared face image to obtain a target infrared face image whose size is consistent with that of the target face image;
perform multi-scale decomposition on the target infrared face image to obtain a first low-frequency component image and a first high-frequency component image;
perform the multi-scale decomposition on the target face image to obtain a second low-frequency component image and a second high-frequency component image;
determine, according to a preset mapping between environment parameters and a first weight, the first target weight value corresponding to the target infrared face image;
calculate, according to the first target weight value, the second target weight value corresponding to the target face image;
perform weighting on the first low-frequency component image and the second low-frequency component image with the first target weight value and the second target weight value, obtaining a target low-frequency component image;
apply the take-the-larger-absolute-value rule to the first high-frequency component image and the second high-frequency component image, obtaining a target high-frequency component image;
perform, on the target low-frequency component image and the target high-frequency component image, the inverse transform corresponding to the multi-scale decomposition, obtaining the fused face image.
With the face identification device described in this embodiment of the present application, an infrared face image of a target user is acquired by an infrared camera in a night-vision environment; the infrared face image is analyzed to obtain a target face region and a target environment parameter; a focusing position is determined according to the target face region, and target shooting parameters are determined according to the target environment parameter; a visible-light camera is controlled to shoot according to the focusing position and the target shooting parameters, obtaining a target face image; the target face image is matched against a preset face template; and when the target face image successfully matches the preset face template, an unlock operation is performed. In this way, the shooting parameters of the visible-light camera are adjusted according to the infrared camera, a visible-light face image of better quality is obtained, and the face recognition rate is improved.
Consistent with the above, referring to Fig. 4, it is a schematic structural diagram of an embodiment of an electronic device provided by the embodiments of the present application. The electronic device described in this embodiment comprises: at least one input device 1000; at least one output device 2000; at least one processor 3000, such as a CPU; and a memory 4000. The input device 1000, output device 2000, processor 3000 and memory 4000 are connected by a bus 5000.
The input device 1000 may specifically be a touch panel, a physical button, or a mouse.
The output device 2000 may specifically be a display screen.
The memory 4000 may be a high-speed RAM memory, or may be a non-volatile memory, such as a disk memory. The memory 4000 is used to store a set of program codes, and the input device 1000, the output device 2000 and the processor 3000 are used to call the program codes stored in the memory 4000 to perform the following operations:
The processor 3000 is configured to:
in a night-vision environment, acquire an infrared face image of a target user by an infrared camera;
analyze the infrared face image to obtain a target face region and a target environment parameter;
determine a focusing position according to the target face region, and determine target shooting parameters according to the target environment parameter;
control a visible-light camera to shoot according to the focusing position and the target shooting parameters, obtaining a target face image;
match the target face image against a preset face template;
when the target face image successfully matches the preset face template, perform an unlock operation.
Optionally, in the aspect of analyzing the infrared face image to obtain the target face region and the target environment parameter, the processor 3000 is specifically configured to:
perform image segmentation on the infrared face image to obtain a face region;
obtain a target contrast of the face region;
determine, according to a preset mapping between contrast and environment parameters, the target environment parameter corresponding to the target contrast.
Optionally, in the aspect of matching the target face image against the preset face template, the processor 3000 is specifically configured to:
perform image quality evaluation on the target face image to obtain a target image quality evaluation value;
determine, according to a preset mapping between image quality evaluation values and matching thresholds, the target matching threshold corresponding to the target image quality evaluation value;
perform contour extraction on the target face image to obtain a first peripheral contour;
perform feature point extraction on the target face image to obtain a first feature point set;
match the first peripheral contour against a second peripheral contour of the preset face template to obtain a first matching value;
match the first feature point set against a second feature point set of the preset face template to obtain a second matching value;
determine a target matching value according to the first matching value and the second matching value;
when the target matching value is greater than the target matching threshold, confirm that the target face image successfully matches the preset face template.
Optionally, the processor 3000 is further specifically configured to:
determine the feature-point distribution density of the target face image;
when the feature-point distribution density is less than a preset density threshold, perform image fusion on the target face image and the infrared face image to obtain a fused face image;
in which case matching the target face image against the preset face template comprises:
matching the fused face image against the preset face template.
Optionally, in the aspect of performing image fusion on the target face image and the infrared face image to obtain the fused face image, the processor 3000 is specifically configured to:
scale the infrared face image to obtain a target infrared face image whose size is consistent with that of the target face image;
perform multi-scale decomposition on the target infrared face image to obtain a first low-frequency component image and a first high-frequency component image;
perform the multi-scale decomposition on the target face image to obtain a second low-frequency component image and a second high-frequency component image;
determine, according to a preset mapping between environment parameters and a first weight, the first target weight value corresponding to the target infrared face image;
calculate, according to the first target weight value, the second target weight value corresponding to the target face image;
perform weighting on the first low-frequency component image and the second low-frequency component image with the first target weight value and the second target weight value, obtaining a target low-frequency component image;
apply the take-the-larger-absolute-value rule to the first high-frequency component image and the second high-frequency component image, obtaining a target high-frequency component image;
perform, on the target low-frequency component image and the target high-frequency component image, the inverse transform corresponding to the multi-scale decomposition, obtaining the fused face image.
The embodiments of the present application further provide another electronic device. As shown in Fig. 5, for ease of description, only the parts relevant to the embodiments of the present application are illustrated; for specific technical details not disclosed, refer to the method parts of the embodiments of the present application. The electronic device may be any terminal device, including a mobile phone, a tablet computer, a PDA (personal digital assistant), a POS (point-of-sale) terminal, an in-vehicle computer, and the like. The following takes a mobile phone as an example:
Fig. 5 shows a block diagram of a partial structure of a mobile phone related to the electronic device provided by the embodiments of the present application. Referring to Fig. 5, the mobile phone includes: a radio frequency (RF) circuit 910, a memory 920, an input unit 930, a sensor 950, an audio circuit 960, a Wireless Fidelity (WiFi) module 970, a processor 980, a power supply 990 and other components. Those skilled in the art will understand that the mobile phone structure shown in Fig. 5 does not constitute a limitation on the mobile phone, which may include more or fewer components than illustrated, combine certain components, or use a different component arrangement.
The components of the mobile phone are specifically introduced below with reference to Fig. 5:
The input unit 930 may be used to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the mobile phone. Specifically, the input unit 930 may include a touch display screen 933, a face identification device 931 and other input devices 932. The face identification device 931 may be a camera, for example an infrared camera, a visible-light camera, or a dual camera. The other input devices 932 may include, but are not limited to, one or more of a physical button, function keys (such as volume control keys and a power key), a trackball, a mouse, a joystick, and the like.
Wherein, the processor 980 is configured to execute the following steps:
in a night-vision environment, acquiring an infrared face image of a target user by an infrared camera;
analyzing the infrared face image to obtain a target face region and a target environment parameter;
determining a focusing position according to the target face region, and determining target shooting parameters according to the target environment parameter;
controlling a visible-light camera to shoot according to the focusing position and the target shooting parameters, obtaining a target face image;
matching the target face image against a preset face template;
when the target face image successfully matches the preset face template, performing an unlock operation.
The processor 980 is the control center of the mobile phone: it connects all parts of the whole phone through various interfaces and lines, and performs the phone's various functions and processes data by running or executing the software programs and/or modules stored in the memory 920 and calling the data stored in the memory 920, thereby monitoring the phone as a whole. Optionally, the processor 980 may include one or more processing units; preferably, the processor 980 may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, user interfaces, application programs and the like, and the modem processor mainly handles wireless communication. It will be understood that the modem processor may also not be integrated into the processor 980.
In addition, the memory 920 may include a high-speed random access memory, and may further include a non-volatile memory, such as at least one disk memory, a flash memory device, or another solid-state storage component.
The RF circuit 910 may be used for transmitting and receiving information. In general, the RF circuit 910 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier (LNA), a duplexer, and the like. In addition, the RF circuit 910 may also communicate with networks and other devices by wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to the Global System for Mobile communications (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Long Term Evolution (LTE), e-mail, Short Messaging Service (SMS), and the like.
The mobile phone may further include at least one sensor 950, such as a light sensor, a motion sensor and other sensors. Specifically, the light sensor may include an ambient light sensor and a proximity sensor: the ambient light sensor may adjust the brightness of the touch display screen according to the brightness of the ambient light, and the proximity sensor may turn off the touch display screen and/or the backlight when the phone is moved to the ear. As one kind of motion sensor, an accelerometer sensor may detect the magnitude of acceleration in all directions (generally on three axes), and when stationary may detect the magnitude and direction of gravity; it may be used for applications that recognize phone posture (such as landscape/portrait switching, related games, magnetometer pose calibration), vibration-recognition functions (such as a pedometer or tap detection), and the like. The phone may also be configured with a gyroscope, a barometer, a hygrometer, a thermometer, an infrared sensor and other sensors, which are not described here.
The audio circuit 960, a speaker 961 and a microphone 962 may provide an audio interface between the user and the phone. The audio circuit 960 may transmit the electrical signal converted from received audio data to the speaker 961, which converts it into a sound signal for playback; on the other hand, the microphone 962 converts a collected sound signal into an electrical signal, which the audio circuit 960 receives and converts into audio data; the audio data is then processed by the processor 980 and either sent via the RF circuit 910 to, for example, another phone, or output to the memory 920 for further processing.
WiFi is a short-range wireless transmission technology. Through the WiFi module 970, the mobile phone can help the user send and receive e-mail, browse web pages, access streaming media and so on; it provides the user with wireless broadband Internet access. Although Fig. 5 shows the WiFi module 970, it will be understood that it is not an essential component of the mobile phone and may be omitted as needed without changing the essence of the invention.
The mobile phone further includes the power supply 990 (such as a battery) that powers the components. Preferably, the power supply may be logically connected to the processor 980 through a power management system, so that functions such as charging management, discharging management and power consumption management are realized through the power management system.
Although not shown, the mobile phone may further include a camera, a Bluetooth module and the like, which are not described here.
In the embodiments shown in Figs. 1 and 2 above, each step of the method flow can be implemented based on the structure of this mobile phone.
In the embodiments shown in Figs. 3A, 3B and 4 above, each unit function can be implemented based on the structure of this mobile phone.
The embodiments of the present application further provide a computer storage medium, wherein the computer storage medium stores a computer program for electronic data exchange, and the computer program causes a computer to execute some or all of the steps of any method recorded in the above method embodiments.
The embodiments of the present application further provide a computer program product, which includes a non-transitory computer-readable storage medium storing a computer program, the computer program being operable to cause a computer to execute some or all of the steps of any method recorded in the above method embodiments. The computer program product may be a software installation package.
Although the application is described herein in conjunction with various embodiments, in the course of implementing the claimed application, those skilled in the art can, by studying the drawings, the disclosure and the appended claims, understand and achieve other variations of the disclosed embodiments. In the claims, the word "comprising" does not exclude other components or steps, and "a" or "an" does not exclude a plurality. A single processor or other unit may fulfil several functions recited in the claims. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage.
Those skilled in the art will understand that the embodiments of the present application may be provided as a method, a device (apparatus), or a computer program product. Accordingly, the application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Moreover, the application may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk memory, CD-ROM, optical memory, etc.) containing computer-usable program code. The computer program may be stored/distributed in a suitable medium, supplied together with other hardware or as a part of hardware, or may use other distribution forms, such as via the Internet or other wired or wireless telecommunication systems.
The application is described with reference to flowcharts and/or block diagrams of the method, device (apparatus) and computer program product of the embodiments of the present application. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks in the flowcharts and/or block diagrams, can be realized by computer program instructions. These computer program instructions may be provided to the processor of a general-purpose computer, a special-purpose computer, an embedded processor or other programmable data processing device to produce a machine, so that the instructions executed by the processor of the computer or other programmable data processing device produce an apparatus for realizing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory capable of guiding a computer or other programmable data processing device to work in a particular manner, so that the instructions stored in the computer-readable memory produce a manufactured article including an instruction apparatus, the instruction apparatus realizing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer or other programmable data processing device, so that a series of operation steps are executed on the computer or other programmable device to produce computer-implemented processing; the instructions executed on the computer or other programmable device thus provide steps for realizing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
Although the application is described in conjunction with specific features and embodiments, it is clear that various modifications and combinations can be made without departing from the spirit and scope of the application. Accordingly, the specification and drawings are merely exemplary illustrations of the application as defined by the appended claims, and are deemed to cover any and all modifications, changes, combinations or equivalents within the scope of the application. Obviously, those skilled in the art can make various modifications and variations to the application without departing from its spirit and scope. If these modifications and variations fall within the scope of the claims of the application and their equivalent technologies, the application is also intended to include them.

Claims (10)

1. A face identification method, characterized by comprising:
in a night-vision environment, acquiring an infrared face image of a target user by an infrared camera;
analyzing the infrared face image to obtain a target face region and a target environment parameter;
determining a focusing position according to the target face region, and determining target shooting parameters according to the target environment parameter;
controlling a visible-light camera to shoot according to the focusing position and the target shooting parameters, obtaining a target face image;
matching the target face image against a preset face template;
when the target face image successfully matches the preset face template, performing an unlock operation.
2. The method according to claim 1, characterized in that analyzing the infrared face image to obtain the target face region and the target environment parameter comprises:
performing image segmentation on the infrared face image to obtain a face region;
obtaining a target contrast of the face region;
determining, according to a preset mapping between contrast and environment parameters, the target environment parameter corresponding to the target contrast.
3. The method according to claim 1 or 2, characterized in that matching the target face image against the preset face template comprises:
performing image quality evaluation on the target face image to obtain a target image quality evaluation value;
determining, according to a preset mapping between image quality evaluation values and matching thresholds, the target matching threshold corresponding to the target image quality evaluation value;
performing contour extraction on the target face image to obtain a first peripheral contour;
performing feature point extraction on the target face image to obtain a first feature point set;
matching the first peripheral contour against a second peripheral contour of the preset face template to obtain a first matching value;
matching the first feature point set against a second feature point set of the preset face template to obtain a second matching value;
determining a target matching value according to the first matching value and the second matching value;
when the target matching value is greater than the target matching threshold, confirming that the target face image successfully matches the preset face template.
4. The method according to any one of claims 1-3, wherein the method further comprises:
determining a feature point distribution density of the target facial image;
when the feature point distribution density is less than a preset density threshold, performing image fusion on the target facial image and the infrared facial image to obtain a fused facial image;
wherein matching the target facial image with the preset face template comprises:
matching the fused facial image with the preset face template.
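The density test of claim 4 can be made concrete as follows. The claim does not define "feature point distribution density", so points-per-pixel is purely an assumed definition:

```python
def should_fuse(feature_points, image_shape, density_threshold):
    """Decide whether to fall back to infrared/visible fusion (claim 4).

    feature_points: iterable of (x, y) feature locations in the target facial image.
    Density is assumed to be points per pixel; fusion triggers when the
    visible-light image is too feature-poor (e.g. underexposed at night).
    """
    height, width = image_shape
    density = len(list(feature_points)) / float(height * width)
    return density < density_threshold
```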
5. The method according to claim 4, wherein performing image fusion on the target facial image and the infrared facial image to obtain the fused facial image comprises:
scaling the infrared facial image to obtain a target infrared facial image, a size of the target infrared facial image being consistent with a size of the target facial image;
performing multi-scale decomposition on the target infrared facial image to obtain a first low-frequency component image and a first high-frequency component image;
performing the multi-scale decomposition on the target facial image to obtain a second low-frequency component image and a second high-frequency component image;
determining, according to a preset mapping between environment parameters and first weights, a first target weight value corresponding to the target infrared facial image;
calculating, according to the first target weight value, a second target weight value corresponding to the target facial image;
weighting the first low-frequency component image and the second low-frequency component image with the first target weight value and the second target weight value to obtain a target low-frequency component image;
combining the first high-frequency component image and the second high-frequency component image according to a maximum-absolute-value rule to obtain a target high-frequency component image;
performing, on the target low-frequency component image and the target high-frequency component image, an inverse transform corresponding to the multi-scale decomposition, so as to obtain the fused facial image.
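As an illustration of claim 5 (the claims name no particular transform), the sketch below uses a single-level decomposition, with a box blur standing in for the low-pass stage of a real multi-scale transform such as a wavelet or contourlet; the kernel size and the weight derivation are assumptions:

```python
import numpy as np

def box_blur(image, k=5):
    """Crude low-pass filter standing in for the multi-scale decomposition."""
    image = np.asarray(image, dtype=float)
    pad = k // 2
    padded = np.pad(image, pad, mode="edge")
    h, w = image.shape
    out = np.zeros((h, w))
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + h, dx:dx + w]
    return out / (k * k)

def fuse(infrared, visible, infrared_weight):
    """Fuse the (already size-matched) infrared and visible facial images."""
    # Decompose each image into a low- and a high-frequency component.
    low_ir, low_vis = box_blur(infrared), box_blur(visible)
    high_ir, high_vis = infrared - low_ir, visible - low_vis
    # Low frequencies: weighted average; the second target weight is derived
    # from the first (here simply 1 - infrared_weight).
    low_fused = infrared_weight * low_ir + (1.0 - infrared_weight) * low_vis
    # High frequencies: keep the coefficient with the larger magnitude.
    high_fused = np.where(np.abs(high_ir) >= np.abs(high_vis), high_ir, high_vis)
    # The inverse of the single-level decomposition is just the sum.
    return low_fused + high_fused
```

On two constant images the result is exactly their weighted average, since both high-frequency components vanish.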
6. A face recognition device, comprising:
an acquiring unit, configured to obtain an infrared facial image of a target user through an infrared camera in a night-vision environment;
an analyzing unit, configured to analyze the infrared facial image to obtain a target face region and a target environment parameter;
a determining unit, configured to determine a focusing position according to the target face region, and to determine target shooting parameters according to the target environment parameter;
a shooting unit, configured to control, according to the focusing position and the target shooting parameters, a visible-light camera to shoot, so as to obtain a target facial image;
a matching unit, configured to match the target facial image with a preset face template;
an unlocking unit, configured to perform an unlocking operation when the target facial image successfully matches the preset face template.
7. The device according to claim 6, wherein, in analyzing the infrared facial image to obtain the target face region and the target environment parameter, the analyzing unit is specifically configured to:
perform image segmentation on the infrared facial image to obtain a face region;
obtain a target contrast of the face region;
determine, according to a preset mapping between contrast and environment parameters, the target environment parameter corresponding to the target contrast.
8. The device according to claim 6 or 7, wherein, in matching the target facial image with the preset face template, the matching unit is specifically configured to:
perform image quality evaluation on the target facial image to obtain a target image quality evaluation value;
determine, according to a preset mapping between image quality evaluation values and matching thresholds, a target matching threshold corresponding to the target image quality evaluation value;
perform contour extraction on the target facial image to obtain a first peripheral contour;
perform feature point extraction on the target facial image to obtain a first feature point set;
match the first peripheral contour with a second peripheral contour of the preset face template to obtain a first matching value;
match the first feature point set with a second feature point set of the preset face template to obtain a second matching value;
determine a target matching value according to the first matching value and the second matching value;
confirm that the target facial image successfully matches the preset face template when the target matching value is greater than the target matching threshold.
9. An electronic device, comprising: a processor and a memory; and one or more programs, the one or more programs being stored in the memory and configured to be executed by the processor, the programs including instructions for performing the method of any one of claims 1-5.
10. A computer-readable storage medium, configured to store a computer program, wherein the computer program causes a computer to execute the method according to any one of claims 1-5.
CN201811026785.4A 2018-09-04 2018-09-04 Face identification method and relevant apparatus Pending CN109241908A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811026785.4A CN109241908A (en) 2018-09-04 2018-09-04 Face identification method and relevant apparatus


Publications (1)

Publication Number Publication Date
CN109241908A 2019-01-18

Family

ID=65067245

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811026785.4A Pending CN109241908A (en) 2018-09-04 2018-09-04 Face identification method and relevant apparatus

Country Status (1)

Country Link
CN (1) CN109241908A (en)

Cited By (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110209245A (en) * 2019-06-17 2019-09-06 Oppo广东移动通信有限公司 Face recognition method and related product
CN110418064A (en) * 2019-09-03 2019-11-05 北京字节跳动网络技术有限公司 Focusing method, device, electronic equipment and storage medium
CN110765502A (en) * 2019-10-30 2020-02-07 Oppo广东移动通信有限公司 Information processing method and related product
CN110837821A (en) * 2019-12-05 2020-02-25 深圳市亚略特生物识别科技有限公司 Identity recognition method, equipment and electronic system based on biological characteristics
CN111160175A (en) * 2019-12-19 2020-05-15 中科寒武纪科技股份有限公司 Intelligent pedestrian violation behavior management method and related product
CN111160186A (en) * 2019-12-20 2020-05-15 上海寒武纪信息科技有限公司 Intelligent garbage classification processing method and related products
CN111178297A (en) * 2019-12-31 2020-05-19 上海联影医疗科技有限公司 Image processing method and device, electronic equipment and medium
CN111653012A (en) * 2020-05-29 2020-09-11 浙江大华技术股份有限公司 Gate control method, gate and device with storage function
CN111649749A (en) * 2020-06-24 2020-09-11 万翼科技有限公司 Navigation method based on BIM (building information modeling), electronic equipment and related product
CN111783561A (en) * 2020-06-12 2020-10-16 万翼科技有限公司 Picture examination result correction method, electronic equipment and related products
CN111832437A (en) * 2020-06-24 2020-10-27 万翼科技有限公司 Building drawing identification method, electronic equipment and related product
CN111831992A (en) * 2020-06-29 2020-10-27 万翼科技有限公司 Platform authority management method, electronic equipment and related products
CN111832750A (en) * 2019-04-11 2020-10-27 深圳市家家分类科技有限公司 User registration method of garbage classification platform and related products
CN111851341A (en) * 2020-06-29 2020-10-30 广东荣文科技集团有限公司 Congestion early warning method, intelligent indicator and related products
CN111865369A (en) * 2020-08-14 2020-10-30 Oppo(重庆)智能科技有限公司 Antenna control method, antenna control device and storage medium
CN111937497A (en) * 2020-07-07 2020-11-13 深圳市锐明技术股份有限公司 Control method, control device and infrared camera
CN112037732A (en) * 2020-09-11 2020-12-04 广州小鹏自动驾驶科技有限公司 Apparatus, system, and storage medium for brightness control of vehicle display based on infrared camera
CN112102623A (en) * 2020-08-24 2020-12-18 深圳云天励飞技术股份有限公司 Traffic violation identification method and device and intelligent wearable device
CN112269853A (en) * 2020-11-16 2021-01-26 Oppo广东移动通信有限公司 Search processing method, search processing device and storage medium
CN112840374A (en) * 2020-06-30 2021-05-25 深圳市大疆创新科技有限公司 Image processing method, image acquisition device, unmanned aerial vehicle system and storage medium
CN113434773A (en) * 2021-07-13 2021-09-24 富途网络科技(深圳)有限公司 Target recommendation method, device and storage medium
CN116895094A (en) * 2023-09-11 2023-10-17 杭州魔点科技有限公司 Dark environment imaging method, system, device and medium based on binocular fusion
CN117058738A (en) * 2023-08-07 2023-11-14 深圳市华谕电子科技信息有限公司 Remote face detection and recognition method and system for mobile law enforcement equipment
US20230377192A1 (en) * 2022-05-23 2023-11-23 Dell Products, L.P. System and method for detecting postures of a user of an information handling system (ihs) during extreme lighting conditions

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103136516A (en) * 2013-02-08 2013-06-05 上海交通大学 Face recognition method and system fusing visible light and near-infrared information
CN105975908A (en) * 2016-04-26 2016-09-28 汉柏科技有限公司 Face recognition method and device
CN107230196A (en) * 2017-04-17 2017-10-03 江南大学 Infrared and visible light image fusion method based on non-subsampled contourlet and target confidence level
CN107566753A (en) * 2017-09-29 2018-01-09 努比亚技术有限公司 Photographing method and mobile terminal
CN107679481A (en) * 2017-09-27 2018-02-09 广东欧珀移动通信有限公司 Unlocking control method and related product
CN107679473A (en) * 2017-09-22 2018-02-09 广东欧珀移动通信有限公司 Unlocking control method and related product



Similar Documents

Publication Publication Date Title
CN109241908A (en) Face identification method and relevant apparatus
CN109117725A (en) Face recognition method and device
CN107590461B (en) Face recognition method and related product
CN104135609B (en) Auxiliary photographing method, apparatus and terminal
CN107480496B (en) Unlocking control method and related product
CN107832675A (en) Photographing processing method and related product
CN107679482A (en) Unlocking control method and related product
CN107423699B (en) Living body detection method and related product
CN107657218B (en) Face recognition method and related product
CN108985212A (en) Face recognition method and device
CN107403147B (en) Iris living body detection method and related product
CN107862265A (en) Image processing method and related product
CN107292285A (en) Iris living body detection method and related product
CN109146498A (en) Face payment method and related apparatus
CN108093134A (en) Anti-interference method for electronic equipment and related product
CN107451446B (en) Unlocking control method and related product
CN107590463A (en) Face recognition method and related product
CN107480488B (en) Unlocking control method and related product
CN107679481A (en) Unlocking control method and related product
CN107580114A (en) Biometric identification method, mobile terminal and computer-readable storage medium
CN107451454B (en) Unlocking control method and related product
CN107633499A (en) Image processing method and related product
CN107463818A (en) Unlocking control method and related product
CN109190448A (en) Face recognition method and device
CN107633235A (en) Unlocking control method and related product

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20190118