CN112650379A - Activation method and device, electronic equipment and computer readable storage medium - Google Patents


Info

Publication number
CN112650379A
CN112650379A
Authority
CN
China
Prior art keywords
face recognition
camera
face
activating
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011608327.9A
Other languages
Chinese (zh)
Inventor
张金凤
林佩材
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Sensetime Technology Co Ltd
Original Assignee
Shenzhen Sensetime Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Sensetime Technology Co Ltd filed Critical Shenzhen Sensetime Technology Co Ltd
Priority to CN202011608327.9A
Publication of CN112650379A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F1/00 Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
    • G06F1/26 Power supply means, e.g. regulation thereof
    • G06F1/32 Means for saving power
    • G06F1/3203 Power management, i.e. event-based initiation of a power-saving mode
    • G06F1/3206 Monitoring of events, devices or parameters that trigger a change in power modality
    • G06F1/3231 Monitoring the presence, absence or movement of users
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F1/00 Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
    • G06F1/26 Power supply means, e.g. regulation thereof
    • G06F1/32 Means for saving power
    • G06F1/3203 Power management, i.e. event-based initiation of a power-saving mode
    • G06F1/3234 Power saving characterised by the action undertaken
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/96 Management of image or video recognition tasks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • General Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Image Analysis (AREA)

Abstract

The application discloses an activation method and apparatus, an electronic device, and a computer-readable storage medium. The activation method is applied to a face recognition device that includes a camera and an infrared sensor, and comprises the following steps: acquiring the current ambient brightness; and applying a corresponding activation strategy to the face recognition device according to the relationship between the ambient brightness and a brightness threshold, wherein the activation strategy includes activating the face recognition device with the camera and activating the face recognition device with the infrared sensor.

Description

Activation method and device, electronic equipment and computer readable storage medium
Technical Field
The present application relates to the field of security technologies, and in particular, to an activation method and apparatus, an electronic device, and a computer-readable storage medium.
Background
As face recognition technology matures, more and more scenarios apply it to identity verification. Specifically, a face recognition device performs identity verification using face recognition technology, and the verification can be completed without human participation, saving labor cost.
To save energy, the face recognition device can remain in a standby state when no one needs to perform identity verification through it, and be activated when it is detected that someone needs to perform identity verification through it.
Disclosure of Invention
The application provides an activation method and device, electronic equipment and a computer readable storage medium.
The application provides an activation method applied to a face recognition device, wherein the face recognition device includes a camera and an infrared sensor, and the method includes the following steps:
acquiring the current environment brightness;
and applying a corresponding activation strategy to the face recognition device according to the relationship between the ambient brightness and a brightness threshold, wherein the activation strategy includes activating the face recognition device with the camera and activating the face recognition device with the infrared sensor.
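The branch taken by this strategy can be sketched as a small selection function. This is an illustrative sketch only; the function name and the numeric values are assumptions, not part of the claimed method:

```python
# Illustrative sketch of the activation-strategy selection (the
# function name and sample values are assumptions for illustration).

def choose_activation_strategy(ambient_brightness: float,
                               brightness_threshold: float) -> str:
    """Return which component should drive activation of the device.

    Bright scenes (>= threshold) use the visible-light camera, since
    strong ambient light degrades infrared detection; dim scenes use
    the infrared sensor and power the camera down to save energy.
    """
    if ambient_brightness >= brightness_threshold:
        return "camera"      # keep the camera on, turn the IR sensor off
    return "ir_sensor"       # turn the camera off, turn the IR sensor on

print(choose_activation_strategy(800.0, 500.0))  # camera
print(choose_activation_strategy(120.0, 500.0))  # ir_sensor
```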
With reference to any embodiment of the present application, applying a corresponding activation strategy to the face recognition device according to the relationship between the ambient brightness and the brightness threshold includes:
and under the condition that the ambient brightness is greater than or equal to the brightness threshold, controlling the camera to keep an on state and closing the infrared sensor, and activating the face recognition equipment by using the camera.
In combination with any embodiment of the present application, activating the face recognition device using the camera includes:
and detecting a target object in the recognition area by using the camera, and activating the face recognition device.
With reference to any embodiment of the present application, applying a corresponding activation strategy to the face recognition device according to the relationship between the ambient brightness and the brightness threshold includes:
and under the condition that the ambient brightness is smaller than the brightness threshold, closing the camera and starting the infrared sensor, and activating the face recognition equipment by using the infrared sensor.
In combination with any embodiment of the present application, the activating the face recognition device by using the infrared sensor includes:
and in response to the infrared sensor detecting that a target object exists in the recognition area, activating the face recognition device.
With reference to any embodiment of the present application, the detecting, by using the camera, a target object in a recognition area and activating the face recognition device includes:
acquiring a first image to be processed by using the camera, wherein an acquisition area of the camera is the identification area;
and in a case that an activation condition is met, determining that the target object in the recognition area is detected, and activating the face recognition device, wherein the activation condition includes that the first image to be processed contains a face to be detected.
With reference to any embodiment of the present application, before the activating the face recognition device, the method further includes:
acquiring a second image to be processed by using the camera;
the activation condition further includes that the second image to be processed contains the face to be detected.
With reference to any embodiment of the present application, the first image to be processed is acquired at a first time, and before the second image to be processed is acquired by using the camera, the method further includes:
acquiring an acquisition time interval;
the using the camera to acquire a second image to be processed includes:
and acquiring the second image to be processed by using the camera at a second time, wherein the time interval between the second time and the first time is the acquisition time interval.
With reference to any embodiment of the present application, before the activating the face recognition device, the method further includes:
acquiring an activation direction range;
obtaining the moving direction of the target object according to the first image to be processed and the second image to be processed;
the activation condition further comprises that the movement direction is in the activation direction range.
With reference to any embodiment of the present application, the activating the face recognition device in response to the infrared sensor detecting that there is a target object in the recognition area includes:
acquiring a first infrared light quantity detected by the infrared sensor;
and under the condition that the first infrared light quantity is larger than a light quantity threshold value, determining that the target object exists in the recognition area, and activating the face recognition device.
With reference to any embodiment of the present application, after determining that the first infrared light amount is greater than the light amount threshold and before determining that the target object is in the recognition area, the method further includes:
starting the camera, and acquiring a third image to be processed by using the camera;
obtaining infrared light compensation quantity according to the third image to be processed;
correcting the infrared light quantity detected by the infrared sensor according to the infrared light compensation quantity to obtain a second infrared light quantity;
the determining that the target object is in the identification area includes:
and determining that the target object exists in the identification area when the second infrared light quantity is smaller than or equal to the light quantity threshold value.
With reference to any one of the embodiments of the present application, the correcting the amount of infrared light detected by the infrared sensor according to the infrared light compensation amount to obtain a second amount of infrared light includes:
and determining the sum of the infrared light compensation amount and the first infrared light amount to obtain the second infrared light amount.
In combination with any one of the embodiments of the present application, the face recognition device further includes an infrared lamp, and correcting the infrared light amount detected by the infrared sensor according to the infrared light compensation amount to obtain the second infrared light amount includes:
determining the driving current of the infrared lamp according to the infrared light compensation amount;
and driving the infrared lamp with the driving current, and acquiring the infrared light amount then detected by the infrared sensor as the second infrared light amount.
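A minimal sketch of mapping the compensation amount to a lamp drive current follows; the linear gain and the current clamp are assumed values, since the text does not specify the mapping:

```python
def lamp_drive_current_ma(compensation: float,
                          gain_ma_per_unit: float = 0.5,
                          max_current_ma: float = 100.0) -> float:
    """Map an infrared light compensation amount to a clamped drive
    current (mA) for the device's infrared lamp. The linear gain and
    the clamp are illustrative assumptions."""
    current = compensation * gain_ma_per_unit
    return max(0.0, min(current, max_current_ma))

print(lamp_drive_current_ma(80))   # 40.0
print(lamp_drive_current_ma(500))  # 100.0 (clamped)
```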
With reference to any embodiment of the present application, before obtaining the infrared light compensation amount according to the third image to be processed, the method further includes:
acquiring a mapping relation between color and infrared light compensation;
obtaining an infrared light compensation amount according to the third image to be processed, including:
performing feature extraction processing on the target object in the third image to be processed to obtain feature data;
obtaining the color of the target object according to the characteristic data;
and obtaining the infrared light compensation amount according to the mapping relation and the color of the target object.
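The color-based correction path can be sketched as below. The color categories, compensation values, and threshold are illustrative assumptions; per the text, the second infrared light amount is the sum of the first amount and the compensation amount:

```python
# Assumed color-to-compensation mapping (darker surfaces reflect less
# infrared light, so they receive a larger compensation amount).
COLOR_TO_COMPENSATION = {
    "black": 30.0,
    "gray": 15.0,
    "white": 5.0,
}

def corrected_infrared_amount(first_amount: float, color: str) -> float:
    """Second infrared amount = first amount + color-based compensation."""
    return first_amount + COLOR_TO_COMPENSATION.get(color, 0.0)

def target_in_area(second_amount: float, threshold: float) -> bool:
    # Per the correction step, the target is confirmed when the
    # corrected amount is at most the light amount threshold.
    return second_amount <= threshold

print(corrected_infrared_amount(60.0, "black"))  # 90.0
print(target_in_area(90.0, 100.0))               # True
```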
The application provides a face recognition device, which includes a camera and an infrared sensor, and further includes:
the acquisition unit is used for acquiring the current environment brightness;
and the activation unit is used for adopting a corresponding activation strategy for the face recognition equipment according to the relation between the ambient brightness and the brightness threshold, wherein the activation strategy comprises the steps of activating the face recognition equipment by using the camera and activating the face recognition equipment by using the infrared sensor.
In combination with any embodiment of the present application, the activation unit is specifically configured to:
and under the condition that the ambient brightness is greater than or equal to the brightness threshold, controlling the camera to keep an on state and closing the infrared sensor, and activating the face recognition equipment by using the camera.
In combination with any embodiment of the present application, the activation unit is specifically configured to:
and detecting a target object in the recognition area by using the camera, and activating the face recognition device.
In combination with any embodiment of the present application, the activation unit is specifically configured to:
and under the condition that the ambient brightness is smaller than the brightness threshold, closing the camera and starting the infrared sensor, and activating the face recognition equipment by using the infrared sensor.
In combination with any embodiment of the present application, the activation unit is specifically configured to:
and in response to the infrared sensor detecting that a target object exists in the recognition area, activating the face recognition device.
In combination with any embodiment of the present application, the activation unit is specifically configured to:
acquiring a first image to be processed by using the camera, wherein an acquisition area of the camera is the identification area;
and under the condition that an activation condition is met, determining that the target object in the recognition area is detected, and activating the face recognition equipment, wherein the activation condition comprises that the first image to be processed contains a face to be detected.
In combination with any embodiment of the present application, the activation unit is specifically configured to:
acquiring a second image to be processed by using the camera;
the activating condition further includes determining that the second image to be processed includes the face to be detected.
With reference to any embodiment of the present application, the acquiring unit is further configured to acquire an acquiring time interval before the camera is used to acquire a second image to be processed;
the activation unit is specifically configured to:
and acquiring the second image to be processed by using the camera at a second time, wherein the time interval between the second time and the first time is the acquisition time interval.
With reference to any embodiment of the present application, the obtaining unit is further configured to obtain an activation direction range before the face recognition device is activated;
the face recognition apparatus further includes:
the first processing unit is used for obtaining the moving direction of the target object according to the first image to be processed and the second image to be processed;
the activation condition further comprises that the movement direction is in the activation direction range.
In combination with any embodiment of the present application, the activation unit is specifically configured to:
acquiring a first infrared light quantity detected by the infrared sensor;
and under the condition that the first infrared light quantity is larger than a light quantity threshold value, determining that the target object exists in the recognition area, and activating the face recognition device.
With reference to any embodiment of the present application, after determining that the first infrared light amount is greater than the light amount threshold and before determining that the target object exists in the recognition area, the face recognition device starts the camera and acquires a third image to be processed by using the camera;
the face recognition apparatus further includes:
the second processing unit is used for obtaining the infrared light compensation amount according to the third image to be processed;
the third processing unit is used for correcting the infrared light quantity detected by the infrared sensor according to the infrared light compensation quantity to obtain a second infrared light quantity;
the activation unit is specifically configured to:
and determining that the target object exists in the identification area when the second infrared light quantity is smaller than or equal to the light quantity threshold value.
With reference to any embodiment of the present application, the third processing unit is specifically configured to:
and determining the sum of the infrared light compensation amount and the first infrared light amount to obtain the second infrared light amount.
In combination with any embodiment of the present application, the face recognition device further includes an infrared lamp, and the third processing unit is specifically configured to:
determining the driving current of the infrared lamp according to the infrared light compensation amount;
and driving the infrared lamp by the driving current, and acquiring a second infrared light quantity detected by the infrared sensor as the second infrared light quantity.
With reference to any embodiment of the present application, the obtaining unit is further configured to obtain a mapping relationship between a color and infrared light compensation before obtaining the infrared light compensation amount according to the third image to be processed;
the third processing unit is specifically configured to:
performing feature extraction processing on the target object in the third image to be processed to obtain feature data;
obtaining the color of the target object according to the characteristic data;
and obtaining the infrared light compensation amount according to the mapping relation and the color of the target object.
In some embodiments, an electronic device is provided, comprising: a processor and a memory for storing computer program code, the computer program code comprising computer instructions which, when executed by the processor, cause the electronic device to perform the activation method described above and any one of its possible implementations.
In some embodiments, another electronic device is provided, comprising: a processor, a transmitting means, an input means, an output means, and a memory for storing computer program code, the computer program code comprising computer instructions which, when executed by the processor, cause the electronic device to perform the activation method described above and any one of its possible implementations.
In some embodiments, a computer-readable storage medium is provided, in which a computer program is stored, the computer program comprising program instructions which, when executed by a processor, cause the processor to perform the activation method described above and any one of its possible implementations.
In some embodiments, a computer program product is provided, comprising a computer program or instructions which, when run on a computer, cause the computer to perform the activation method described above and any one of its possible implementations.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments or the background art of the present application, the drawings required to be used in the embodiments or the background art of the present application will be described below.
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and, together with the description, serve to explain the principles of the application.
Fig. 1 is a schematic flowchart of an activation method according to an embodiment of the present application;
fig. 2 is a schematic structural diagram of an activation device according to an embodiment of the present disclosure;
fig. 3 is a schematic diagram of a hardware structure of a face recognition device according to an embodiment of the present application.
Detailed Description
In order to make the technical solutions of the present application better understood, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms "first," "second," and the like in the description and claims of the present application and in the above-described drawings are used for distinguishing between different objects and not for describing a particular order. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
It should be understood that in the present application, "at least one" means one or more and "a plurality" means two or more. "And/or" describes an association relationship between associated objects and indicates that three relationships may exist; for example, "A and/or B" may mean: only A, only B, or both A and B, where A and B may be singular or plural. The character "/" may indicate an "or" relationship between the associated objects, meaning any combination of the items, including a single item or multiple items. For example, "at least one of a, b, or c" may represent: a, b, c, "a and b", "a and c", "b and c", or "a and b and c", where a, b, and c may be singular or plural. The character "/" may also represent division in a mathematical operation; for example, a/b means a divided by b, and 6/3 = 2.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
As face recognition technology matures, more and more scenarios apply it to identity verification. Specifically, a face recognition device performs identity verification using face recognition technology, and the verification can be completed without human participation, saving labor cost.
To save energy, the face recognition device can remain in a standby or dormant state when no one needs to perform identity verification through it, and be activated when it is detected that someone needs to perform identity verification through it.
In the conventional technology, the face recognition device uses an infrared sensor to detect whether a person is in the recognition area, and thereby determines whether someone needs to perform identity verification through the device. However, when the illumination of the environment where the device is located is strong, or the device is outdoors, the infrared light emitted by the infrared sensor is easily attenuated, the detection accuracy of the infrared sensor drops, and the recognition efficiency of the face recognition device suffers as a result.
Based on this, the embodiments of the present application provide a technical scheme to improve the recognition efficiency of the face recognition device. The execution subject of the embodiments of the present application is a face recognition device that includes a camera and an infrared sensor, where the camera is a visible-light camera.
The face recognition device can be any electronic device capable of executing the technical scheme disclosed in the method embodiments. Optionally, the face recognition device may be one of the following: a mobile phone, a computer, a server, or a tablet computer. The embodiments of the present application are described below with reference to the drawings.
Referring to fig. 1, fig. 1 is a schematic flowchart illustrating an activation method according to an embodiment of the present disclosure.
101. Acquire the current ambient brightness.
In the embodiment of the application, the current ambient brightness is the brightness of the environment where the face recognition device is located. In one possible implementation, the face recognition device further includes a brightness sensor and uses it to detect the brightness of its environment, obtaining the current ambient brightness.
In another implementation, the face recognition device receives the current ambient brightness input by the user through an input component. The input component includes a keyboard, a mouse, a touch screen, a touch pad, an audio input device, and the like.
In another implementation, the face recognition device receives the current ambient brightness sent by a terminal. Optionally, the terminal may be any one of the following: a mobile phone, a computer, a tablet computer, a server, or a wearable device.
102. Apply a corresponding activation strategy to the face recognition device according to the relationship between the ambient brightness and a brightness threshold, wherein the activation strategy includes activating the face recognition device with the camera and activating the face recognition device with the infrared sensor. After being activated from the standby or dormant state, the face recognition device enters the working state, that is, it starts the face recognition process.
In the embodiment of the application, the brightness threshold is used to judge whether the brightness of the environment where the face recognition device is located is too high. In one implementation of obtaining the brightness threshold, the face recognition device receives the brightness threshold input by the user through the input component.
In another implementation manner of obtaining the brightness threshold, the face recognition device receives the brightness threshold sent by the terminal to obtain the brightness threshold.
In an embodiment of the present application, the relationship between the ambient brightness and the brightness threshold includes at least one of: the ambient brightness is greater than the brightness threshold, the ambient brightness is equal to the brightness threshold, and the ambient brightness is less than the brightness threshold.
An ambient brightness greater than or equal to the brightness threshold indicates that the environment where the face recognition device is located is too bright; in this case, using the infrared sensor to detect whether a person is in the recognition area would produce a large error. An ambient brightness less than the brightness threshold indicates that the environment is relatively dark, and the infrared sensor can then be used to detect whether a person is in the recognition area.
In the embodiment of the application, the face recognition device has a non-working state and a working state. Call the at least one component used to detect whether a person enters the recognition area the detection component, and the at least one remaining component the non-detection component. When the face recognition device is in the non-working state, the detection component is working and the non-detection component is not; when the device is in the working state, both the detection component and the non-detection component are working. The non-working state of a component (whether detection or non-detection) may be a standby state, a power-off state, or a dormant state.
Keeping the face recognition device in the non-working state when no one is in the recognition area, and activating it into the working state when someone is, saves energy and improves the efficiency of the device. Therefore, whether to activate the face recognition device into the working state depends on whether a person is in the recognition area.
In the embodiment of the application, the activation strategy comprises two implementation modes of activating the face recognition device by using a camera and activating the face recognition device by using an infrared sensor.
Under strong illumination, the accuracy of detecting whether a person is in the recognition area with the infrared sensor is low, while the accuracy of detection with the camera is high. Under weak illumination, the accuracy of detection with the infrared sensor is high, while the accuracy of detection with the camera is low. The face recognition device therefore determines the activation strategy according to the relationship between the ambient brightness and the brightness threshold, improving detection accuracy and thereby the recognition efficiency of the device.
As an alternative implementation, the face recognition device performs the following steps in the process of performing step 102:
1. When the ambient brightness is greater than or equal to the brightness threshold, keep the camera on, turn off the infrared sensor, and activate the face recognition device by using the camera.
When the ambient brightness is greater than or equal to the brightness threshold, activating the face recognition device with the camera improves recognition efficiency, and turning off the infrared sensor saves energy consumption of the face recognition device.
As an alternative embodiment, the face recognition device executes the following steps in the process of activating the face recognition device by using the camera:
2. Activate the face recognition device upon detecting a target object in the recognition area by using the camera.
In this step, the target object includes a person. The face recognition device can acquire a visible light image of the recognition area by using the camera, and determine whether the visible light image contains a face by performing face detection on it. If the visible light image contains a face, it is determined that a person is in the recognition area; if not, it is determined that no person is in the recognition area.
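A minimal sketch of this camera-based activation check, assuming a face detector is supplied as a callable (the detector itself, e.g. an SSD or YOLO model, is outside the sketch; all names are illustrative):

```python
def camera_activation(visible_image, detect_faces):
    """Return True (activate the device) when face detection finds at least
    one face in the visible-light image of the recognition area."""
    faces = detect_faces(visible_image)  # list of detected face boxes
    return len(faces) > 0
```

Passing the detector as a parameter keeps the activation logic independent of which face detection algorithm is used.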
As an alternative implementation, the face recognition device performs the following steps in the process of performing step 102:
3. When the ambient brightness is less than the brightness threshold, turn off the camera, turn on the infrared sensor, and activate the face recognition device by using the infrared sensor.
An ambient brightness below the brightness threshold indicates that the environment of the face recognition device is dark, in which case the infrared sensor detects whether a person is in the recognition area with high accuracy. Accordingly, the face recognition device may use the infrared sensor to detect whether a person is in the recognition area. While the infrared sensor performs this detection, the camera remains off, which reduces the energy consumption and wear of the face recognition device, prolongs its service life, and saves cost.
As an alternative embodiment, the face recognition device performs the following steps in the process of activating the face recognition device by using the infrared sensor:
4. Activate the face recognition device in response to the infrared sensor detecting that a target object is in the recognition area.
In this step, the target object includes a person. The face recognition device is activated when the infrared sensor detects that a person is in the recognition area.
In the embodiment of the application, the face recognition device determines, from the current ambient brightness and the brightness threshold, whether the illumination intensity of its environment is high. Because the detection accuracy of the infrared sensor is low in strongly illuminated environments and high in weakly illuminated environments, the face recognition device uses the camera to detect whether a person enters the recognition area when the illumination intensity is high, and uses the infrared sensor when the illumination intensity is low. This improves detection accuracy and thereby the recognition efficiency of the face recognition device.
As an alternative implementation, the face recognition device performs the following steps in the process of performing step 2:
5. Acquire a first image to be processed by using the camera, where the acquisition area of the camera is the recognition area.
6. When an activation condition is met, determine that a target object in the recognition area is detected and activate the face recognition device, where the activation condition includes that the first image to be processed contains a face to be detected.
In the embodiment of the application, the face to be detected is a face in the first image to be processed. When the first image to be processed contains at least two faces, the face to be detected is any one of them.
In this step, the first image to be processed containing a face to be detected indicates that a person appears in the first image to be processed, that is, a target object (i.e., a person) exists in the recognition area. Therefore, the face recognition device should be activated into the working state.
The face recognition device can determine whether the first image to be processed contains a face by carrying out face detection processing on the first image to be processed.
In a possible implementation manner, the face recognition device processes the first image to be processed through a face detection algorithm, so as to implement the face detection processing on the first image to be processed. The face detection algorithm may be one of the following: the you only look once (YOLO) algorithm, the deformable parts model (DPM) algorithm, the single shot multibox detector (SSD) algorithm, the Faster-RCNN algorithm, and the like.
In another possible implementation manner, the face detection processing on the first image to be processed may be implemented by a convolutional neural network. The image with the labeling information is used as training data to train the convolutional neural network, so that the trained convolutional neural network can finish the face detection processing of the image. The labeling information of the image in the training data is the position information of the face frame, and the face frame comprises the face.
As a possible implementation manner, before activating the face recognition device, the face recognition device further performs the following steps:
7. Acquire a second image to be processed by using the camera.
In the embodiment of the application, the second image to be processed is different from the first image to be processed. Specifically, the acquisition time of the first image to be processed and the acquisition time of the second image to be processed are different.
For example, the face recognition device may first acquire the first image to be processed with the camera and then acquire the second image to be processed; or it may acquire the second image to be processed first and then the first image to be processed.
8. The activation condition further includes that the second image to be processed contains the face to be detected.
If the first image to be processed and the second image to be processed both contain the face to be detected, the person corresponding to the face to be detected was in the recognition area at at least two different times; the probability that this person entered the recognition area by mistake is therefore low.
Therefore, activating the face recognition device into the working state only when both the first image to be processed and the second image to be processed contain the face to be detected improves recognition efficiency, reduces the consumption of the face recognition device, and prolongs its service life.
For example, Zhang San does not want to be verified by the face recognition device, but enters its recognition area by mistake, and the face recognition device acquires the first image to be processed, which contains Zhang San's face, while Zhang San is in the area. Realizing the mistake, Zhang San immediately leaves the recognition area. After Zhang San leaves, the face recognition device acquires the second image to be processed.
Because the second image to be processed does not contain Zhang San's face, the face recognition device is not activated, which improves recognition efficiency.
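The two-image confirmation of steps 5 to 8 reduces to requiring a detected face in both frames. A hypothetical sketch, operating on the outputs of a face detector:

```python
def two_frame_activation(faces_in_first, faces_in_second):
    """Activate only if a face is found in both the first and the second
    image to be processed; a face seen in only one of the two frames is
    treated as an accidental entry into the recognition area."""
    return bool(faces_in_first) and bool(faces_in_second)
```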
As an alternative embodiment, the acquisition time of the first image to be processed is a first time, and before performing step 7, the face recognition device further performs the following steps:
9. Acquire an acquisition time interval.
In one implementation of obtaining the acquisition time interval, the face recognition device receives the acquisition time interval input by the user through the input component to obtain the acquisition time interval.
In another implementation manner of acquiring the acquisition time interval, the face recognition device receives the acquisition time interval sent by the terminal to acquire the acquisition time interval.
In yet another implementation of acquiring the acquisition time interval, the face recognition device stores the acquisition time interval. The face recognition device acquires the acquisition time interval by reading the acquisition time interval from the storage medium.
After step 9, the face recognition device performs the following steps in the process of performing step 7:
10. Acquire the second image to be processed by using the camera at a second time, where the time interval between the second time and the first time is the acquisition time interval.
In this step, the time interval between the second time and the first time is the acquisition time interval, that is, the acquisition time interval between the first image to be processed and the second image to be processed is the time interval acquired in step 9.
For example, the face recognition device acquires the first image to be processed at 17:04:03 on December 23, 2020, that is, the first time is 17:04:03 on December 23, 2020. Assuming an acquisition time interval of 1 second, the second time is 17:04:04 on December 23, 2020; that is, the face recognition device acquires the second image to be processed at 17:04:04 on December 23, 2020.
If the interval between the first time and the second time is too short, using "both the first image to be processed and the second image to be processed contain the face to be detected" as the activation condition easily causes false activation. For example, Zhang San enters the recognition area by mistake at 17:04:04 on December 23, 2020, and the face recognition device acquires the first image to be processed at that moment; the first image to be processed contains Zhang San's face. Zhang San waits 3 seconds in the recognition area and then leaves. The face recognition device acquires the second image to be processed at 17:04:05 on December 23, 2020, and at this time the second image to be processed also contains Zhang San's face.
Since the first image to be processed and the second image to be processed both contain Zhang San's face, the face recognition device will be activated, although Zhang San clearly does not want to be verified by it. The face recognition device is thus falsely activated.
If the interval between the first time and the second time is too long, the same activation condition easily causes untimely activation. For example, Zhang San enters the recognition area at 17:04:04 on December 23, 2020, walks up to the face recognition device by 17:04:07 on December 23, 2020, and waits for verification.
Assuming an acquisition time interval of 5 seconds, Zhang San has to wait at least 2 more seconds in front of the face recognition device before face recognition can begin. Clearly, an overly long acquisition time interval easily causes untimely activation.
In steps 9 and 10, choosing an appropriate value for the acquisition time interval reduces the probability of both false activation and untimely activation, thereby reducing the consumption of the face recognition device and improving user experience.
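The timing of steps 9 and 10 can be expressed directly: the second capture time is the first capture time plus the acquisition time interval. A sketch using Python's datetime, reusing the December 23, 2020 example above:

```python
from datetime import datetime, timedelta

def second_capture_time(first_time, acquisition_interval_seconds):
    """Step 10: the second image is captured exactly one acquisition
    time interval after the first image."""
    return first_time + timedelta(seconds=acquisition_interval_seconds)

# First capture at 17:04:03; with a 1-second interval the second
# capture falls at 17:04:04, as in the example.
t1 = datetime(2020, 12, 23, 17, 4, 3)
t2 = second_capture_time(t1, 1)
```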
As an alternative embodiment, before the face recognition device is activated, the following steps are further performed:
11. Acquire an activation direction range.
In the embodiment of the application, the activation direction range is used for judging whether the face to be detected moves towards the face recognition device. In one possible implementation, the activation direction range may be an angular range. For example, the activation direction range is [ -30 °, 30 ° ]. For convenience of description, in the embodiments of the present application, [ α, β ] denotes a value range of α or more and β or less.
In one implementation of obtaining the activation direction range, the face recognition device receives the activation direction range input by the user through the input component to obtain the activation direction range.
In another implementation manner of acquiring the activation direction range, the face recognition device receives the activation direction range sent by the terminal to acquire the activation direction range.
In yet another implementation of acquiring the activation direction range, the activation direction range is stored in the face recognition device. The face recognition device obtains the activation direction range by reading it from the storage medium.
12. Obtain the moving direction of the person corresponding to the face to be detected according to the first image to be processed and the second image to be processed.
In this embodiment of the application, the moving direction of the target object (i.e., the person corresponding to the face to be detected) may be expressed as the included angle between the target object's direction of movement and a baseline of the face recognition device, where the baseline is a straight line parallel to the optical axis of the camera.
In one possible implementation, the face recognition device obtains a movement direction detection model before performing step 12. The face recognition equipment processes the first image to be processed and the second image to be processed by using the moving direction detection model to obtain the moving direction of the figure corresponding to the face to be detected.
The moving direction detection model is obtained by training a deep learning model with moving direction training data. The moving direction training data includes at least one image pair, where each pair consists of two different images that both contain the same person. For example (example 1), image pair A includes image a and image b; assume both contain person b, and the position of person b in image a differs from that in image b. In the training process, the deep learning model processes image pair A to obtain the moving direction of person b.
Each image pair in the moving direction training data corresponds to one piece of labeling information, namely the moving direction of the person in the image pair. The moving direction output by the deep learning model is supervised with the labeling information, the parameters of the deep learning model are updated, and the training of the deep learning model is completed.
For example, the electronic device processes image pair A in example 1 using the deep learning model to obtain the moving direction of person b. The parameters of the deep learning model are then updated based on the loss computed from the labeling information of image pair A and the obtained moving direction of person b, completing the training and yielding the moving direction detection model.
In another possible implementation manner, the face recognition device performs face detection processing on the first image to be processed to obtain a first position of the face to be detected in the first image to be processed. And the face recognition equipment carries out face detection processing on the second image to be processed to obtain a second position of the face to be detected in the second image to be processed. Because the first image to be processed and the second image to be processed are acquired by the same camera, the pixel coordinate system of the first image to be processed is the same as the pixel coordinate system of the second image to be processed. A point determined by a first position in the pixel coordinate system is referred to as a first point, and a point determined by a second position in the pixel coordinate system is referred to as a second point. And determining an included angle between a straight line of the first point and the second point and a longitudinal axis of the pixel coordinate system as the moving direction of the target object.
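The geometric implementation in the preceding paragraph can be sketched as follows: the moving direction is the angle between the line through the two face positions and the longitudinal (y) axis of the shared pixel coordinate system. The sign convention of the angle is an assumption for illustration:

```python
import math

def moving_direction(first_point, second_point):
    """Angle in degrees between the displacement from the first point to
    the second point and the longitudinal (y) axis of the pixel
    coordinate system shared by both images."""
    dx = second_point[0] - first_point[0]
    dy = second_point[1] - first_point[1]
    # atan2(dx, dy) is 0 when the face moves straight along the y axis,
    # positive when it drifts toward +x, negative toward -x.
    return math.degrees(math.atan2(dx, dy))
```

Because both images come from the same camera, no coordinate transformation between the two face positions is needed before computing the angle.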
13. The activation condition further includes that the moving direction is in the activation direction range.
In the embodiment of the application, the moving direction of the target object is within the range of the activation direction, which indicates that the target object moves towards the face recognition device, that is, the probability that the target object needs to be verified by the face recognition device is higher. At this time, the face recognition device is activated, so that the recognition efficiency can be improved.
A moving direction outside the activation direction range indicates that the target object is within the recognition area but is not moving toward the face recognition device, i.e., the probability that the target object needs to be verified by the face recognition device is low. Activating the face recognition device at this time would, with high probability, be a false activation.
Therefore, in this step, using "the moving direction is within the activation direction range" as an activation condition improves the recognition efficiency of the face recognition device.
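The direction check of step 13 is a simple range-membership test; the default range of [-30°, 30°] below follows the example given earlier and is otherwise an assumption:

```python
def direction_condition_met(moving_direction_deg, activation_range=(-30.0, 30.0)):
    """Step 13: the additional activation condition holds only when the
    moving direction falls inside the activation direction range [α, β]."""
    low, high = activation_range
    return low <= moving_direction_deg <= high
```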
As an alternative implementation, the face recognition device performs the following steps in the process of performing step 4:
14. Acquire a first infrared light amount detected by the infrared sensor.
In this embodiment, the first infrared light amount is an infrared light amount detected by the infrared sensor in an operating state. In one possible implementation, the face recognition device periodically obtains the amount of infrared light detected by the infrared sensor from the infrared sensor.
In another possible implementation, the infrared sensor transmits the detected amount of infrared light to a processor of the face recognition device in real time.
15. When the first infrared light amount is greater than a light amount threshold, determine that a target object is in the recognition area and activate the face recognition device.
In the embodiment of the application, a first infrared light amount greater than the light amount threshold indicates that the infrared sensor has detected a person in the recognition area. Therefore, when the face recognition device determines that the first infrared light amount is greater than the light amount threshold, it determines that a target object (i.e., a person) is in the recognition area and activates itself. When the first infrared light amount is less than or equal to the light amount threshold, it determines that no target object is in the recognition area, and the face recognition device is not activated.
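Steps 14 and 15 amount to a threshold comparison on the detected infrared light amount. A minimal sketch with illustrative names:

```python
def infrared_activation(first_infrared_amount, light_amount_threshold):
    """Activate only when the infrared light amount detected by the sensor
    strictly exceeds the threshold; amounts at or below the threshold are
    treated as "no person in the recognition area"."""
    return first_infrared_amount > light_amount_threshold
```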
As an alternative embodiment, after determining that the first infrared light amount is greater than the light amount threshold value, before determining that the target object exists in the recognition area, the face recognition device further performs the following steps:
16. Turn on the camera and acquire a third image to be processed by using the camera.
In the embodiment of the application, the third image to be processed is different from the first image to be processed, and the third image to be processed is different from the second image to be processed.
17. Obtain an infrared light compensation amount according to the third image to be processed.
Because different colors absorb infrared light differently, the color of a person's clothing may affect the detection accuracy of the infrared sensor. For example, black absorbs infrared light more strongly than white. Suppose Zhang San and Li Si are both in the recognition area, Zhang San wearing black clothes and Li Si wearing white clothes. Because black absorbs infrared light more strongly than white, the amount of infrared light emitted by the infrared sensor and reflected back by Zhang San is lower than that reflected back by Li Si. This may cause the infrared sensor to detect Li Si but fail to detect Zhang San.
In this step, the infrared light compensation amount is positively correlated with the infrared absorption capability of the target object's color. The face recognition device obtains the color of the target object from the third image to be processed and corrects the detection result of the infrared sensor according to that color. Specifically, the face recognition device obtains the infrared light compensation amount according to the third image to be processed and uses it to correct the detection result of the infrared sensor in subsequent processing.
In a possible implementation manner, before executing step 17, the face recognition device acquires an infrared light compensation amount detection network. The face recognition device processes the third image to be processed using this network to obtain the infrared light compensation amount.
The infrared light compensation amount detection network can be a convolutional neural network, and the convolutional neural network is trained by taking a plurality of images as training data, so that the trained convolutional neural network can obtain the infrared light compensation amount matched with the images. The labeling information of the training data includes an infrared light compensation amount. And monitoring the result output by the convolutional neural network through the labeling information, updating the parameters of the convolutional neural network, and finishing the training of the convolutional neural network.
18. Correct the infrared light amount detected by the infrared sensor according to the infrared light compensation amount to obtain a second infrared light amount.
In one possible implementation manner, the face recognition device takes the sum of the infrared light compensation amount and the first infrared light amount as the second infrared light amount. For example, if the infrared light compensation amount is 10 lux and the first infrared light amount is 30 lux, the face recognition device uses 10 + 30 = 40 lux as the second infrared light amount.
In another possible implementation manner, the face recognition device includes an infrared lamp for infrared supplementary lighting. The face recognition device determines the driving current of the infrared lamp according to the infrared light compensation amount, drives the infrared lamp with that current, and takes the infrared light amount then detected by the infrared sensor as the second infrared light amount.
By determining the driving current of the infrared lamp according to the infrared light compensation amount and driving the infrared lamp with that current, the face recognition device achieves infrared supplementary lighting, further reducing the influence of the target object's clothing color on the detection accuracy of the infrared sensor.
After the second infrared light amount is obtained, the face recognition device determines that a target object is in the recognition area, i.e., that a person is in the recognition area, when the second infrared light amount is greater than the light amount threshold; when the second infrared light amount is less than or equal to the light amount threshold, it determines that no target object is in the recognition area.
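Steps 17 and 18 and the corrected decision can be sketched together, using an additive correction as in the lux example above (all names and values are illustrative):

```python
def corrected_activation(first_infrared_amount, infrared_compensation,
                         light_amount_threshold):
    """Step 18 plus the decision: add the clothing-color compensation to
    the raw sensor reading, then apply the same threshold test as before,
    activating only when the corrected amount exceeds the threshold."""
    second_infrared_amount = first_infrared_amount + infrared_compensation
    return second_infrared_amount > light_amount_threshold
```

With a compensation of 10 lux, a raw reading of 30 lux clears a 35 lux threshold that it would otherwise fail, mirroring how a dark-clothed person is recovered by the correction.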
In the embodiment of the application, the face recognition device can improve the detection accuracy of the infrared sensor by executing the steps 16 to 18, so that the recognition efficiency of the face recognition device is improved.
As an alternative embodiment, before executing step 17, the face recognition apparatus further executes the following steps:
19. Acquire a mapping relationship between color and infrared light compensation.
In the embodiment of the present application, the mapping relationship represents the mapping between color and infrared light compensation. In this mapping relationship, colors with stronger infrared absorption capability correspond to larger infrared light compensation amounts.
In one implementation of obtaining the mapping relationship between the color and the infrared light compensation, the face recognition device receives the mapping relationship between the color and the infrared light compensation input by the user through the input component to obtain the mapping relationship between the color and the infrared light compensation.
In another implementation manner of obtaining the mapping relationship between the color and the infrared light compensation, the face recognition device receives the mapping relationship between the color and the infrared light compensation sent by the terminal, and obtains the mapping relationship between the color and the infrared light compensation.
In yet another implementation of obtaining the mapping relationship between the color and the infrared light compensation, the mapping relationship between the color and the infrared light compensation is stored in the face recognition device. The face recognition device obtains the mapping relation between the color and the infrared light compensation by reading the mapping relation between the color and the infrared light compensation from the storage medium.
After the mapping relationship is obtained, the face recognition device executes the following steps in the process of executing step 17:
20. Perform feature extraction processing on the target object in the third image to be processed to obtain feature data.
In the embodiment of the application, the information carried by the feature data includes the color information of the target object. The feature extraction processing can be realized through a deep learning model, which is trained with labeled images as training data so that the trained model can complete feature extraction processing on images. The labeling information of an image in the training data includes the color information of the person in the image. During training, the deep learning model extracts feature data from an image and determines the color information of the person in the image from that feature data. Using the labeling information as supervision, the results produced by the deep learning model during training are supervised, its parameters are updated, and the training is completed. In this way, the face recognition device may perform feature extraction processing on the third image to be processed using the trained deep learning model to obtain the feature data of the target object.
21. Obtain the color of the target object according to the feature data.
22. Obtain the infrared light compensation amount according to the mapping relationship and the color of the target object.
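Steps 19 to 22 reduce to a table lookup once the target object's color is known. The colors and compensation values below are illustrative assumptions, not from the patent:

```python
# Hypothetical mapping relationship between color and infrared light
# compensation: stronger infrared absorption -> larger compensation.
COLOR_TO_COMPENSATION = {
    "white": 0,   # weak absorption, little compensation needed
    "gray": 5,
    "black": 10,  # strong absorption, largest compensation
}

def infrared_compensation_for(color):
    """Step 22: look up the infrared light compensation amount for the
    detected clothing color (unknown colors get no compensation)."""
    return COLOR_TO_COMPENSATION.get(color, 0)
```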
Based on the technical scheme provided by the embodiment of the application, the embodiment of the application also provides a possible application scene. With the development of face recognition technology, face recognition technology has been widely applied to different application scenarios, wherein confirming the identity of a person through face recognition is an important application scenario, for example, real-name authentication, identity authentication, and the like are performed through face recognition technology.
The face recognition technology obtains face feature data by performing feature extraction processing on a face image obtained by collecting a face region of a person. And comparing the extracted face feature data with the face feature data in the database to determine the identity of the person in the face image.
However, attacks on face recognition technology using "non-living" face images have occurred with increasing frequency. "Non-living" face images include paper photos, electronic images, and the like. Attacking face recognition technology with a non-living face image means substituting the non-living face image for a person's actual face region, thereby deceiving the face recognition technology. For example, Zhang San places a photo of Li Si in front of Li Si's mobile phone to perform face recognition unlocking. The phone photographs the picture of Li Si through its camera, obtains a face image containing Li Si's face region, determines the identity to be Li Si, and unlocks. Zhang San thus unlocks Li Si's phone by successfully deceiving the phone's face recognition technology with Li Si's photo. Effectively defending against attacks on face recognition technology by "non-living" face images (hereinafter referred to as two-dimensional attacks) is therefore of great importance.
In the prior art, the face recognition device can realize living-body detection based on an RGB camera and an infrared camera, reducing the probability that a non-living face image successfully attacks the face recognition technology.
Obviously, when no one needs to perform identity verification through a face recognition technology, the infrared camera and the RGB camera do not need to work. Therefore, based on the technical scheme provided by the embodiment of the application, when the fact that someone needs to perform identity authentication (namely, in the identification area of the face identification device) is detected, the face identification device is activated, so that the energy consumption and the consumption of the face identification device can be reduced, the service life of the face identification device is prolonged, and the identification efficiency is improved.
It will be understood by those skilled in the art that, in the method of the present invention, the order in which the steps are written does not imply a strict order of execution or impose any limitation on the implementation; the specific order of execution of the steps should be determined by their functions and possible inherent logic.
The method of the embodiments of the present application is set forth in detail above; the apparatus of the embodiments of the present application is provided below.
Referring to fig. 2, fig. 2 is a schematic structural diagram of a face recognition device according to an embodiment of the present disclosure. The face recognition device comprises a camera 11, an infrared sensor 12, an acquisition unit 13, an activation unit 14, a first processing unit 15, a second processing unit 16 and a third processing unit 17, wherein:
an acquisition unit 13, configured to acquire the current ambient brightness;
an activation unit 14, configured to apply a corresponding activation policy to the face recognition device according to the relationship between the ambient brightness and a brightness threshold, where the activation policy includes activating the face recognition device with the camera and activating the face recognition device with the infrared sensor.
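The dispatch between the two activation strategies can be sketched as follows. The device objects, their `turn_on`/`turn_off` methods, and the return labels are illustrative assumptions, not interfaces from the disclosure:

```python
def apply_activation_strategy(ambient_brightness, brightness_threshold,
                              camera, infrared_sensor):
    """Select an activation strategy from the ambient brightness."""
    if ambient_brightness >= brightness_threshold:
        # Bright environment: the camera image is usable, so keep the
        # camera on, turn the infrared sensor off, and let the camera
        # detect a target object in the recognition area.
        camera.turn_on()
        infrared_sensor.turn_off()
        return "activate_with_camera"
    # Dark environment: camera imaging is unreliable, so turn the
    # camera off and let the infrared sensor detect the target object.
    camera.turn_off()
    infrared_sensor.turn_on()
    return "activate_with_infrared_sensor"
```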
In combination with any embodiment of the present application, the activation unit 14 is specifically configured to:
keep the camera in an on state and turn off the infrared sensor when the ambient brightness is greater than or equal to the brightness threshold, and activate the face recognition device with the camera.
In combination with any embodiment of the present application, the activation unit 14 is specifically configured to:
detect a target object in the recognition area with the camera, and activate the face recognition device.
In combination with any embodiment of the present application, the activation unit 14 is specifically configured to:
turn off the camera and turn on the infrared sensor when the ambient brightness is less than the brightness threshold, and activate the face recognition device with the infrared sensor.
In combination with any embodiment of the present application, the activation unit 14 is specifically configured to:
activate the face recognition device in response to the infrared sensor detecting that a target object exists in the recognition area.
In combination with any embodiment of the present application, the activation unit 14 is specifically configured to:
acquire a first image to be processed with the camera, where the acquisition area of the camera is the recognition area;
determine, when an activation condition is met, that the target object in the recognition area has been detected, and activate the face recognition device, where the activation condition includes that the first image to be processed contains a face to be detected.
In combination with any embodiment of the present application, the activation unit 14 is specifically configured to:
acquire a second image to be processed with the camera;
the activation condition further includes that the second image to be processed contains the face to be detected.
With reference to any embodiment of the present application, the acquisition time of the first image to be processed is a first time, and the acquisition unit 13 is further configured to obtain an acquisition time interval before the second image to be processed is acquired with the camera;
the activation unit 14 is specifically configured to:
acquire the second image to be processed with the camera at a second time, where the time interval between the second time and the first time is the acquisition time interval.
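The two-frame activation condition above can be sketched as follows; the frame-capture and face-detection callables and the interval value are illustrative assumptions:

```python
import time

def two_frame_activation(capture_frame, detect_faces, interval_s):
    """Require a face to be detected in both a first image and a second
    image acquired one acquisition time interval later."""
    first = capture_frame()            # first image, acquired at time t1
    if not detect_faces(first):
        return False                   # condition on the first image fails
    time.sleep(interval_s)             # wait for the acquisition time interval
    second = capture_frame()           # second image, at t2 = t1 + interval
    return bool(detect_faces(second))  # condition on the second image
```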
With reference to any embodiment of the present application, the acquisition unit 13 is further configured to obtain an activation direction range before the face recognition device is activated;
the face recognition apparatus 1 further includes:
a first processing unit 15, configured to obtain a movement direction of the target object according to the first image to be processed and the second image to be processed;
the activation condition further includes that the movement direction is within the activation direction range.
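One possible realization of the movement-direction check uses the displacement of the face-box centroids between the two images; representing the activation direction range as an angle interval in degrees is an assumption for illustration:

```python
import math

def movement_direction(box1, box2):
    """Direction (degrees in [0, 360)) of the displacement between the
    face-box centroids of the first and second images, in image
    coordinates (y grows downward)."""
    (x1, y1, w1, h1), (x2, y2, w2, h2) = box1, box2
    cx1, cy1 = x1 + w1 / 2, y1 + h1 / 2
    cx2, cy2 = x2 + w2 / 2, y2 + h2 / 2
    return math.degrees(math.atan2(cy2 - cy1, cx2 - cx1)) % 360

def in_activation_range(direction, direction_range):
    """True when the movement direction lies in the activation direction
    range, given as a (low, high) angle interval in degrees."""
    low, high = direction_range
    return low <= direction <= high
```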
In combination with any embodiment of the present application, the activation unit 14 is specifically configured to:
acquire a first infrared light amount detected by the infrared sensor;
determine, when the first infrared light amount is greater than a light amount threshold, that the target object exists in the recognition area, and activate the face recognition device.
With reference to any embodiment of the present application, after the face recognition device 1 determines that the first infrared light amount is greater than the light amount threshold and before it determines that the target object exists in the recognition area, the camera 11 is started and a third image to be processed is acquired with the camera 11;
the face recognition apparatus 1 further includes:
a second processing unit 16, configured to obtain an infrared light compensation amount according to the third image to be processed;
a third processing unit 17, configured to correct the infrared light amount detected by the infrared sensor according to the infrared light compensation amount, to obtain a second infrared light amount;
the activation unit 14 is specifically configured to:
determine that the target object exists in the recognition area when the second infrared light amount is less than or equal to the light amount threshold.
With reference to any embodiment of the present application, the third processing unit 17 is specifically configured to:
determine the sum of the infrared light compensation amount and the first infrared light amount to obtain the second infrared light amount.
In combination with any embodiment of the present application, the face recognition device further includes an infrared lamp, and the third processing unit is specifically configured to:
determine the driving current of the infrared lamp according to the infrared light compensation amount;
drive the infrared lamp with the driving current, and take the infrared light amount then detected by the infrared sensor as the second infrared light amount.
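A linear mapping from compensation amount to lamp driving current is one straightforward choice for this step; the coefficient and the safety cap below are placeholder values, not values from the disclosure:

```python
def infrared_lamp_drive_current(ir_compensation, ma_per_unit=2.0, max_ma=100.0):
    """Map an infrared light compensation amount to an infrared-lamp
    driving current in milliamps: a larger compensation amount asks for
    stronger supplementary infrared light, clamped to a safe maximum."""
    return min(max(ir_compensation, 0.0) * ma_per_unit, max_ma)
```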
With reference to any embodiment of the present application, the acquisition unit 13 is further configured to obtain a mapping relationship between color and infrared light compensation before the infrared light compensation amount is obtained according to the third image to be processed;
the third processing unit 17 is specifically configured to:
perform feature extraction processing on the target object in the third image to be processed to obtain feature data;
obtain the color of the target object according to the feature data;
obtain the infrared light compensation amount according to the mapping relationship and the color of the target object.
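The mapping-based lookup can be sketched as a table indexed by the classified color; the color categories, the compensation values, and the feature-extraction and color-classification callables are placeholder assumptions, not values from the disclosure:

```python
# Illustrative mapping between a target-object color and an infrared
# light compensation amount; these categories and values are
# placeholders.
COLOR_TO_IR_COMPENSATION = {
    "light": 0.0,    # reflects most infrared light, little compensation
    "medium": 5.0,
    "dark": 12.0,    # absorbs more infrared light, larger compensation
}

def infrared_compensation(third_image, extract_features, classify_color):
    """Feature-extract the target object in the third image to be
    processed, classify its color, and look up the compensation."""
    feature_data = extract_features(third_image)
    color = classify_color(feature_data)
    return COLOR_TO_IR_COMPENSATION.get(color, 0.0)
```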
In some embodiments, the functions of, or the modules included in, the apparatus provided in the embodiments of the present application may be used to execute the methods described in the above method embodiments; for specific implementation, reference may be made to the descriptions of the above method embodiments, which are not repeated here for brevity.
Fig. 3 is a schematic diagram of the hardware structure of a face recognition device according to an embodiment of the present application. The face recognition device 2 comprises a processor 21, a memory 22, an input device 23 and an output device 24. The processor 21, the memory 22, the input device 23 and the output device 24 are coupled by connectors, which include various interfaces, transmission lines, buses, and the like; the embodiment of the present application is not limited in this respect. It should be appreciated that, in the various embodiments of the present application, "coupled" refers to being interconnected in a particular manner, including being directly connected or indirectly connected through other devices, for example through various interfaces, transmission lines, buses, and the like.
The processor 21 may include one or more processors, for example one or more central processing units (CPUs). Where the processor 21 is a CPU, the CPU may be a single-core CPU or a multi-core CPU.
The processor 21 is configured to call the program code and data in the memory and execute the steps in the above method embodiments. For details, reference may be made to the description of the method embodiments, which is not repeated here.
The memory 22 is used to store the program code and data of the device.
The memory 22 includes, but is not limited to, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM), or a compact disc read-only memory (CD-ROM), and is used for storing related instructions and data.
The input device 23 is used for inputting data and/or signals, and the output device 24 is used for outputting data and/or signals. The output device 24 and the input device 23 may be separate devices or an integral device.
It is understood that, in the embodiments of the present application, the memory 22 may be used to store not only related instructions but also related data; for example, the memory 22 may store the current ambient brightness acquired through the input device 23. The embodiments of the present application do not limit the data specifically stored in the memory.
It will be appreciated that fig. 3 only shows a simplified design of the face recognition device. In practical applications, the face recognition device may further include other necessary components, including but not limited to any number of input/output devices, processors and memories, and all face recognition devices that can implement the embodiments of the present application fall within the protection scope of the present application.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again. It is also clear to those skilled in the art that the descriptions of the various embodiments of the present application have different emphasis, and for convenience and brevity of description, the same or similar parts may not be repeated in different embodiments, so that the parts that are not described or not described in detail in a certain embodiment may refer to the descriptions of other embodiments.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
In the above embodiments, the implementation may be realized in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, it may be realized in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions described in the embodiments of the present application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device. The computer instructions may be stored in a computer-readable storage medium or transmitted via a computer-readable storage medium. The computer instructions may be transmitted from one website, computer, server or data center to another website, computer, server or data center by wire (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wirelessly (e.g., infrared, radio, microwave, etc.). The computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server or data center that integrates one or more available media. The available medium may be a magnetic medium (e.g., a floppy disk, hard disk or magnetic tape), an optical medium (e.g., a digital versatile disc (DVD)), or a semiconductor medium (e.g., a solid state disk (SSD)), among others.
One of ordinary skill in the art will appreciate that all or part of the processes in the methods of the above embodiments may be implemented by a computer program instructing related hardware; the program may be stored in a computer-readable storage medium and, when executed, may include the processes of the above method embodiments. The aforementioned storage medium includes various media that can store program code, such as a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.

Claims (10)

1. An activation method, applied to a face recognition device, the face recognition device comprising a camera and an infrared sensor, the method comprising:
acquiring the current ambient brightness;
applying a corresponding activation policy to the face recognition device according to the relationship between the ambient brightness and a brightness threshold, wherein the activation policy comprises activating the face recognition device with the camera and activating the face recognition device with the infrared sensor.
2. The method according to claim 1, wherein the applying a corresponding activation policy to the face recognition device according to the relationship between the ambient brightness and the brightness threshold comprises:
keeping the camera in an on state and turning off the infrared sensor when the ambient brightness is greater than or equal to the brightness threshold, and activating the face recognition device with the camera.
3. The method according to claim 1 or 2, wherein the activating the face recognition device with the camera comprises:
detecting a target object in the recognition area with the camera, and activating the face recognition device.
4. The method according to claim 1, wherein the applying a corresponding activation policy to the face recognition device according to the relationship between the ambient brightness and the brightness threshold comprises:
turning off the camera and turning on the infrared sensor when the ambient brightness is less than the brightness threshold, and activating the face recognition device with the infrared sensor.
5. The method according to claim 1 or 4, wherein the activating the face recognition device with the infrared sensor comprises:
activating the face recognition device in response to the infrared sensor detecting that a target object exists in the recognition area.
6. The method according to any one of claims 3 to 5, wherein the detecting a target object in the recognition area with the camera and activating the face recognition device comprises:
acquiring a first image to be processed with the camera, wherein the acquisition area of the camera is the recognition area;
determining, when an activation condition is met, that the target object in the recognition area has been detected, and activating the face recognition device, wherein the activation condition comprises that the first image to be processed contains a face to be detected.
7. The method of claim 6, wherein prior to said activating the face recognition device, the method further comprises:
acquiring a second image to be processed with the camera;
wherein the activation condition further comprises that the second image to be processed contains the face to be detected.
8. A face recognition device, characterized in that the face recognition device comprises a camera and an infrared sensor, and further comprises:
an acquisition unit, configured to acquire the current ambient brightness;
an activation unit, configured to apply a corresponding activation policy to the face recognition device according to the relationship between the ambient brightness and a brightness threshold, wherein the activation policy comprises activating the face recognition device with the camera and activating the face recognition device with the infrared sensor.
9. An electronic device, comprising: a processor and a memory for storing computer program code, the computer program code comprising computer instructions which, when executed by the processor, cause the electronic device to perform the method of any one of claims 1 to 7.
10. A computer-readable storage medium, in which a computer program is stored, the computer program comprising program instructions which, when executed by a processor, cause the processor to carry out the method of any one of claims 1 to 7.
CN202011608327.9A 2020-12-29 2020-12-29 Activation method and device, electronic equipment and computer readable storage medium Pending CN112650379A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011608327.9A CN112650379A (en) 2020-12-29 2020-12-29 Activation method and device, electronic equipment and computer readable storage medium

Publications (1)

Publication Number Publication Date
CN112650379A true CN112650379A (en) 2021-04-13

Family

ID=75364249

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011608327.9A Pending CN112650379A (en) 2020-12-29 2020-12-29 Activation method and device, electronic equipment and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN112650379A (en)

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106951833A (en) * 2017-03-02 2017-07-14 北京旷视科技有限公司 Human face image collecting device and man face image acquiring method
CN107220621A (en) * 2017-05-27 2017-09-29 北京小米移动软件有限公司 Terminal carries out the method and device of recognition of face
CN207037676U (en) * 2017-03-02 2018-02-23 北京旷视科技有限公司 Human face image collecting device
CN109151390A (en) * 2018-09-21 2019-01-04 深圳市九洲电器有限公司 A kind of ultra-low illumination night vision method, system and high definition camera device
CN109635760A (en) * 2018-12-18 2019-04-16 深圳市捷顺科技实业股份有限公司 A kind of face identification method and relevant device
CN110532992A (en) * 2019-09-04 2019-12-03 深圳市捷顺科技实业股份有限公司 A kind of face identification method based on visible light and near-infrared
CN110532957A (en) * 2019-08-30 2019-12-03 北京市商汤科技开发有限公司 Face identification method and device, electronic equipment and storage medium
CN209746637U (en) * 2019-01-14 2019-12-06 上海理工大学 Vehicle lock control system based on face recognition and gesture recognition
WO2020172991A1 (en) * 2019-02-25 2020-09-03 深圳传音通讯有限公司 Terminal unlocking method and terminal
CN111652131A (en) * 2020-06-02 2020-09-11 浙江大华技术股份有限公司 Face recognition device, light supplementing method thereof and readable storage medium
CN111814561A (en) * 2020-06-11 2020-10-23 浙江大华技术股份有限公司 Face recognition method, face recognition equipment and access control system


Similar Documents

Publication Publication Date Title
CN105893920B (en) Face living body detection method and device
WO2021218180A1 (en) Method and apparatus for controlling unlocking of vehicle door, and vehicle, device, medium and program
KR101598771B1 (en) Method and apparatus for authenticating biometric by using face recognizing
WO2019080797A1 (en) Living body detection method, terminal, and storage medium
WO2018176399A1 (en) Image collection method and device
US20160379042A1 (en) Apparatuses, systems, and methods for confirming identity
US11328044B2 (en) Dynamic recognition method and terminal device
CN107527021B (en) Biometric pattern opening method and related product
CN104933344A (en) Mobile terminal user identity authentication device and method based on multiple biological feature modals
US9930525B2 (en) Method and system for eyeprint recognition unlocking based on environment-filtering frames
CN204791017U (en) Mobile terminal users authentication device based on many biological characteristics mode
CN106934269A (en) Iris unlocking method, iris recognition display methods and mobile terminal
US20120320181A1 (en) Apparatus and method for security using authentication of face
US9449217B1 (en) Image authentication
CN108090340B (en) Face recognition processing method, face recognition processing device and intelligent terminal
US11100891B2 (en) Electronic device using under-display fingerprint identification technology and waking method thereof
WO2020216091A1 (en) Image processing method and related apparatus
CN103856614A (en) Method and device for avoiding error hibernation of mobile terminal
CN109684993B (en) Face recognition method, system and equipment based on nostril information
CN111183431A (en) Fingerprint identification method and terminal equipment
US20140369553A1 (en) Method for triggering signal and in-vehicle electronic apparatus
CN111104917A (en) Face-based living body detection method and device, electronic equipment and medium
CN107729833B (en) Face detection method and related product
CN107291238B (en) Data processing method and device
CN113591526A (en) Face living body detection method, device, equipment and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination