CN108345845B - Image sensor, lens module, mobile terminal, face recognition method and device - Google Patents

Info

Publication number: CN108345845B
Authority: CN (China)
Prior art keywords: image, visible light, light, face, infrared light
Legal status: Active (an assumption, not a legal conclusion)
Application number: CN201810083033.5A
Other languages: Chinese (zh)
Other versions: CN108345845A
Inventor: 孟彦强
Current assignee: Vivo Mobile Communication Co Ltd
Original assignee: Vivo Mobile Communication Co Ltd
Application filed by Vivo Mobile Communication Co Ltd
Priority to CN201810083033.5A
Publication of application CN108345845A
Priority to PCT/CN2019/073403 (WO2019144956A1)
Application granted
Publication of granted patent CN108345845B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/10 Image acquisition
    • G06V10/17 Image acquisition using hand-held instruments
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G06V40/166 Detection; Localisation; Normalisation using acquisition arrangements
    • G06V40/168 Feature extraction; Face representation
    • G06V40/172 Classification, e.g. identification

Abstract

The invention discloses an image sensor, a lens module, a mobile terminal, and a face recognition method and device. The image sensor comprises a micro-lens layer, a photosensitive element layer, and a filter layer arranged between the micro-lens layer and the photosensitive element layer. Incident light passes through the micro-lens layer and the filter layer in sequence before reaching the photosensitive element layer, which comprises a first photosensitive region corresponding to visible light and a second photosensitive region corresponding to infrared light. By matching the image features of an infrared light image captured from the face to be verified, the invention can determine whether the face in front of the camera is a real face or a face photograph, improving the security and accuracy of face recognition.

Description

Image sensor, lens module, mobile terminal, face recognition method and device
Technical Field
The invention relates to the field of image processing, and in particular to an image sensor, a lens module, a mobile terminal, and a face recognition method and device.
Background
With the application of face recognition technology to smart mobile terminals, the technology is attracting more and more attention. As an important branch of biometric identification, face recognition has become a popular research area in computer vision and pattern recognition.
Face recognition identifies a person by the distribution of facial features, such as the facial organs and the facial contour. Because these distributions differ from person to person, face recognition is non-invasive compared with other biometric technologies: a good recognition result can be achieved without interfering with a person's normal behavior.
At present, face recognition technology applied to mobile terminals cannot accurately judge whether the image captured by the camera is a real face; a malicious user can present a face photograph in place of a real face to pass face recognition, so the security of face recognition is poor.
Disclosure of Invention
The invention provides an image sensor, a lens module, a mobile terminal, and a face recognition method and device, aiming to solve the problem that face recognition security is low because it cannot be accurately judged whether the image captured by the camera is a real face.
In a first aspect, an embodiment of the present invention provides an image sensor, including:
a micro-lens layer, a photosensitive element layer, and a filter layer arranged between the micro-lens layer and the photosensitive element layer;
wherein incident light passes through the micro-lens layer and the filter layer in sequence and then reaches the photosensitive element layer, and the photosensitive element layer comprises a first photosensitive region corresponding to visible light and a second photosensitive region corresponding to infrared light.
In a second aspect, an embodiment of the invention provides a lens module including the image sensor.
In a third aspect, an embodiment of the present invention provides a mobile terminal, including the lens module.
In a fourth aspect, an embodiment of the present invention provides a face recognition method, including:
acquiring a visible light image and an infrared light image captured from a face to be verified;
determining whether the image features of the visible light image match the image features of a pre-stored visible light facial image, and whether the image features of the infrared light image match the image features of a pre-stored infrared light facial image;
and if the image features of the visible light image match the image features of the pre-stored visible light facial image and the image features of the infrared light image match the image features of the pre-stored infrared light facial image, determining that the face to be verified is verified successfully.
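The fourth aspect requires both modalities to match before verification succeeds. A minimal sketch of that decision logic follows; the function names, the cosine-similarity metric, and the threshold values are illustrative assumptions, not taken from the patent:

```python
# Hypothetical sketch of the dual-channel verification flow: a face passes
# only when BOTH the visible-light and infrared-light features match the
# stored templates. All names and thresholds here are assumptions.

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(x * x for x in b) ** 0.5
    return dot / (na * nb) if na and nb else 0.0

def verify_face(visible_feat, ir_feat, stored_visible, stored_ir,
                visible_threshold=0.9, ir_threshold=0.9):
    """Return True only if both modalities match their stored templates."""
    visible_ok = cosine_similarity(visible_feat, stored_visible) >= visible_threshold
    ir_ok = cosine_similarity(ir_feat, stored_ir) >= ir_threshold
    return visible_ok and ir_ok

# A printed photo may reproduce visible-light features but not the live-face
# infrared response, so the IR check rejects it:
print(verify_face([1.0, 0.0], [0.0, 0.0], [1.0, 0.0], [0.0, 1.0]))  # photo-like: False
print(verify_face([1.0, 0.0], [0.0, 1.0], [1.0, 0.0], [0.0, 1.0]))  # live-like: True
```

The conjunction (`and`, not `or`) is the point: matching only the visible channel, as a photograph could, is not sufficient.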
In a fifth aspect, an embodiment of the present invention further provides a face recognition apparatus, including:
an acquisition module, configured to acquire a visible light image and an infrared light image captured from a face to be verified;
a first determining module, configured to determine whether the image features of the visible light image match the image features of a pre-stored visible light facial image, and whether the image features of the infrared light image match the image features of a pre-stored infrared light facial image;
and a second determining module, configured to determine that the face to be verified is verified successfully if the image features of the visible light image match those of the pre-stored visible light facial image and the image features of the infrared light image match those of the pre-stored infrared light facial image.
In a sixth aspect, an embodiment of the present invention further provides a mobile terminal, comprising a processor, a memory, and a computer program stored in the memory and executable on the processor, wherein the computer program, when executed by the processor, implements the steps of the face recognition method described above.
Therefore, in the embodiments of the invention, matching the image features of the infrared light image captured from the face to be verified makes it possible to determine whether the face in front of the camera is a real face or a face photograph, improving the security and accuracy of face recognition. Moreover, the face to be verified is determined to be verified successfully only after both the visible light image features and the infrared light image features are matched successfully, which further improves the accuracy of face recognition.
Drawings
To illustrate the technical solutions of the embodiments of the invention more clearly, the drawings used in the description of the embodiments are briefly introduced below. The drawings described below show only some embodiments of the invention; those skilled in the art can derive other drawings from them without inventive effort.
FIG. 1 is a first schematic structural diagram of an embodiment of an image sensor of the present invention;
FIG. 2 is a second schematic structural diagram of an embodiment of an image sensor of the present invention;
FIG. 3 is a third schematic structural diagram of an embodiment of an image sensor of the present invention;
FIG. 4 shows a flow chart of an embodiment of a face recognition method of the present invention;
FIG. 5 is a schematic structural diagram of an embodiment of a face recognition apparatus according to the present invention;
FIG. 6 is a second schematic structural diagram of an embodiment of a face recognition apparatus according to the present invention;
fig. 7 shows a block diagram of an embodiment of a mobile terminal of the present invention.
Detailed Description
The technical solutions in the embodiments of the invention are described below clearly and completely with reference to the drawings. The described embodiments are only some, not all, of the embodiments of the invention. All other embodiments derived by those skilled in the art from these embodiments without inventive effort fall within the protection scope of the invention.
Referring to fig. 1 to 3, an embodiment of the present invention provides an image sensor usable with the face recognition method described herein, including:
the micro-lens structure comprises a micro-lens layer 11, a photosensitive element layer and a filter layer arranged between the micro-lens layer 11 and the photosensitive element layer;
the incident light sequentially passes through the microlens layer 11 and the filter layer and then is transmitted to the photosensitive device layer, and the photosensitive device layer forms a first photosensitive region 131 including visible light and a second photosensitive region 132 corresponding to infrared light. A visible light image may be obtained by collecting the photosensitive signals in the first photosensitive area 131, and an infrared image may be obtained by collecting the photosensitive signals in the second photosensitive area 132.
Specifically, the incident light ray mentioned above refers to a natural light ray.
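The separate collection of the two photosensitive regions' signals can be sketched as follows. This is an illustrative model, not the patent's readout circuitry: the mask layout and the use of `None` as a placeholder for positions to be interpolated later are assumptions.

```python
# Hypothetical sketch: pixels flagged as belonging to the second (infrared)
# photosensitive region form the infrared image; the remaining pixels form
# the visible-light image.

def split_regions(raw, ir_mask):
    """Split a raw sensor readout into (visible, infrared) planes.

    raw     -- 2D list of pixel values from the photosensitive element layer
    ir_mask -- 2D list of booleans, True where the pixel sits in the second
               (infrared) photosensitive region
    Non-selected positions are set to None as placeholders for interpolation.
    """
    visible = [[None if ir else v for v, ir in zip(rrow, mrow)]
               for rrow, mrow in zip(raw, ir_mask)]
    infrared = [[v if ir else None for v, ir in zip(rrow, mrow)]
                for rrow, mrow in zip(raw, ir_mask)]
    return visible, infrared

raw = [[10, 20], [30, 40]]
mask = [[False, True], [False, False]]   # one IR pixel in this unit (hypothetical layout)
vis, ir = split_regions(raw, mask)
print(vis)  # [[10, None], [30, 40]]
print(ir)   # [[None, 20], [None, None]]
```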
The image sensor in the embodiments of the invention thus enables both visible light imaging and infrared light imaging. By matching the image features of the infrared light image captured from the face to be verified, it can be determined whether the face in front of the camera is a real face or a face photograph, improving the security and accuracy of face recognition. Moreover, the face to be verified is determined to be verified successfully only after both the visible light image features and the infrared light image features are matched successfully, which further improves the accuracy of face recognition.
Specifically, depending on the design of the filter layer and the photosensitive element layer, the image sensor has three implementations. The microlens layer 11 is the same in all three, so only the filter layer and the photosensitive element layer of each implementation are described below.
Referring to fig. 1, the first implementation of the image sensor is described below. In the first implementation, an infrared light sensing diode with a photosensitive wavelength range of 780 nm to 1 mm is disposed in the second photosensitive region 132, and the filter layer includes: a first filter layer comprising an invisible-light filter region 1211 that passes visible light and a first passing region 1212a that passes natural light; and a second filter layer comprising a color filter region 1221 that passes visible light and a second passing region 1222a that passes natural light. The invisible-light filter region 1211, the color filter region 1221, and the first photosensitive region 131 correspond to one another, and the first passing region 1212a, the second passing region 1222a, and the second photosensitive region 132 correspond to one another.
The invisible-light filter region 1211 is a filter region that blocks invisible light and allows visible light to pass. Natural light passes through the first passing region 1212a of the first filter layer, where no filtering structure or filtering substance is disposed. That is, the light reaching the color filter region 1221 is visible light, and the light reaching the second passing region 1222a of the second filter layer is natural light.
The color filter region 1221 comprises monochromatic passing regions for red, green, and blue light, and the first photosensitive region 131 correspondingly comprises a red, a green, and a blue photosensitive sub-region: the red passing region of the color filter region 1221 corresponds to the red photosensitive sub-region of the first photosensitive region 131, the green passing region to the green photosensitive sub-region, and the blue passing region to the blue photosensitive sub-region. When visible light reaches any monochromatic passing region of the color filter region 1221, only the light of the corresponding color passes through to the matching position on the first photosensitive region 131.
Natural light likewise passes through the second passing region 1222a of the second filter layer, where no filtering structure or filtering substance is disposed. That is, the light reaching the second photosensitive region 132 is natural light.
In this first implementation, the first photosensitive region 131 is provided with visible-light sensing photodiodes: a red light sensing diode in the red photosensitive sub-region, a green light sensing diode in the green photosensitive sub-region, and a blue light sensing diode in the blue photosensitive sub-region. The photosensitive wavelength range of the red light sensing diode is 640 nm to 780 nm, that of the green light sensing diode is 505 nm to 525 nm, and that of the blue light sensing diode is 475 nm to 505 nm. The monochromatic light reaching each of the three photosensitive sub-regions of the first photosensitive region 131 is converted from an optical signal into an electrical signal by the corresponding photodiode, and a visible light image is then generated from these electrical signals.
Since the photosensitive wavelength range of the infrared light sensing diode in the second photosensitive region 132 is 780 nm to 1 mm, it converts only the infrared component of the natural light into an electrical signal; the natural light reaching the second photosensitive region 132 therefore yields an infrared light image.
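The photosensitive wavelength ranges stated above can be expressed as a simple lookup. How a shared boundary wavelength (such as 505 nm between blue and green) is attributed is an assumption; the patent does not specify it:

```python
# Wavelength ranges from the description, in nanometers. 1 mm = 1,000,000 nm
# is the stated upper bound of the infrared diode's sensitivity.
RANGES_NM = {
    "blue":     (475, 505),
    "green":    (505, 525),
    "red":      (640, 780),
    "infrared": (780, 1_000_000),
}

def responding_diodes(wavelength_nm):
    """Return the diode channels whose stated range covers the wavelength."""
    return [name for name, (lo, hi) in RANGES_NM.items()
            if lo <= wavelength_nm <= hi]

print(responding_diodes(650))   # ['red']
print(responding_diodes(850))   # ['infrared']
print(responding_diodes(600))   # [] -- a gap between the stated green and red ranges
```

Note the stated ranges leave gaps (for example 525 nm to 640 nm); real photodiode response curves are continuous, so this lookup is only a reading aid for the patent's figures.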
Referring to fig. 2, the second implementation of the image sensor is described below. In the second implementation, the filter layer includes: a first filter layer comprising an invisible-light filter region 1211 that passes visible light and a third passing region 1212b that passes natural light; and a second filter layer comprising a color filter region 1221 that passes visible light and a first filter region 1222b that passes infrared light. The invisible-light filter region 1211, the color filter region 1221, and the first photosensitive region 131 correspond to one another, and the third passing region 1212b, the first filter region 1222b, and the second photosensitive region 132 correspond to one another.
The invisible-light filter region 1211 blocks invisible light and allows visible light to pass. Natural light passes through the third passing region 1212b of the first filter layer, where no filtering structure or filtering substance is disposed. That is, the light reaching the color filter region 1221 is visible light, and the light reaching the first filter region 1222b is natural light.
The color filter region 1221 and its correspondence to the red, green, and blue photosensitive sub-regions of the first photosensitive region 131 are the same as in the first implementation: each monochromatic passing region passes only the light of its color to the matching sub-region.
The first filter region 1222b blocks other light and allows only infrared light to pass. Natural light passes through the third passing region 1212b of the first filter layer to reach the first filter region 1222b, where the visible component is filtered out; the infrared light enters the second photosensitive region 132.
In the second implementation, visible-light sensing diodes are disposed in the first photosensitive region 131 and an infrared light sensing diode is disposed in the second photosensitive region 132; their photosensitive wavelengths differ. The visible-light sensing diodes and their wavelength ranges are the same as in the first implementation: red 640 nm to 780 nm, green 505 nm to 525 nm, and blue 475 nm to 505 nm, each converting the monochromatic light reaching its sub-region into an electrical signal from which the visible light image is generated. The photosensitive wavelength range of the infrared light sensing diode in the second photosensitive region 132 is that of infrared light, 780 nm to 1 mm; the infrared light reaching the region is converted into an electrical signal from which the infrared light image is generated.
Alternatively, in the second implementation, photodiodes with the same photosensitive wavelength range may be disposed in both the first photosensitive region 131 and the second photosensitive region 132.
In this second implementation, no region for filtering visible light is provided on the first filter layer, which simplifies manufacturing of the first and second filter layers. Since the light reaching the second photosensitive region 132 is infrared light only, the generated infrared light image has better image quality than in the first implementation.
Referring to fig. 3, the third implementation of the image sensor is described below. In the third implementation, the filter layer includes: a first filter layer comprising an invisible-light filter region 1211 that passes visible light and a second filter region 1212c that passes infrared light; and a second filter layer comprising a color filter region 1221 that passes visible light and a fourth passing region 1222c that passes infrared light. The invisible-light filter region 1211, the color filter region 1221, and the first photosensitive region 131 correspond to one another, and the second filter region 1212c, the fourth passing region 1222c, and the second photosensitive region 132 correspond to one another.
The invisible-light filter region 1211 blocks invisible light and allows visible light to pass; the second filter region 1212c blocks other light and allows only infrared light to pass. That is, the light reaching the color filter region 1221 is visible light, and the light reaching the fourth passing region 1222c of the second filter layer is infrared light.
The fourth passing region 1222c of the second filter layer allows this infrared light to pass; no filtering structure or filtering substance is disposed there.
The color filter region 1221 and its correspondence to the red, green, and blue photosensitive sub-regions of the first photosensitive region 131 are the same as in the first implementation: each monochromatic passing region passes only the light of its color to the matching sub-region.
In the third implementation, visible-light sensing diodes are disposed in the first photosensitive region 131 and an infrared light sensing diode is disposed in the second photosensitive region 132; their photosensitive wavelengths differ. The diodes and their wavelength ranges are the same as in the second implementation, and the visible light image and the infrared light image are generated in the same way.
Alternatively, in the third implementation, photodiodes with the same photosensitive wavelength range may be disposed in both the first photosensitive region 131 and the second photosensitive region 132.
Since the light reaching the second photosensitive region 132 is infrared light only, the generated infrared light image has better quality than in the first implementation. The third implementation differs from the second only in the position of the region that filters out visible light; it likewise enables both visible light imaging and infrared light imaging.
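The three implementations differ only in the light path over the second photosensitive region 132 (the path over the first region 131 is the same in all three: invisible-light filter region 1211, then color filter region 1221). A hypothetical data summary, with illustrative field names:

```python
# Summary of the three filter-layer configurations over region 132, as
# described in the text above. Field names are assumptions for readability.
CONFIGS = {
    1: {"first_layer": "pass-through (1212a)",
        "second_layer": "pass-through (1222a)",
        "light_at_132": "natural light",
        "ir_selectivity": "IR-only photodiode"},
    2: {"first_layer": "pass-through (1212b)",
        "second_layer": "IR-pass filter (1222b)",
        "light_at_132": "infrared light",
        "ir_selectivity": "filter layer"},
    3: {"first_layer": "IR-pass filter (1212c)",
        "second_layer": "pass-through (1222c)",
        "light_at_132": "infrared light",
        "ir_selectivity": "filter layer"},
}

for n, cfg in CONFIGS.items():
    print(n, cfg["light_at_132"], "/", cfg["ir_selectivity"])
```

The table makes the stated quality comparison concrete: in implementations 2 and 3 only infrared light reaches region 132, whereas implementation 1 relies on the photodiode's own wavelength selectivity.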
The image sensor in each of the three implementations above may further satisfy the following conditions.
The first condition is that the invisible-light filter region 1211 is an infrared-cut coating applied on the color filter region 1221, or a separately disposed infrared-cut filter.
In either case, whether the infrared-cut layer is coated on the color filter region 1221 or separately disposed, the first passing region 1212a and the second passing region 1222a in the first implementation, the third passing region 1212b in the second implementation, and the fourth passing region 1222c in the third implementation are each an empty region, or a lens without a filtering structure or filtering substance.
The image sensor may further satisfy one of a second condition and a third condition. The second condition is that the photosensitive element layer comprises a plurality of pixel units, each of which includes both the first photosensitive region 131 and the second photosensitive region 132.
In this case, within one pixel unit the first photosensitive region 131 consists of three monochromatic pixel sub-units, namely a red light pixel sub-unit, a green light pixel sub-unit, and a blue light pixel sub-unit, and the second photosensitive region 132 is an infrared light pixel sub-unit.
The third condition is that the photosensitive element layer comprises a plurality of pixel units, of which one portion includes both the first photosensitive region 131 and the second photosensitive region 132, and the other portion includes only the first photosensitive region 131.
In this case, for the pixel units that include both regions, the first photosensitive region 131 of each such unit includes a red light pixel sub-unit, a green light pixel sub-unit, and a blue light pixel sub-unit, and the second photosensitive region 132 is an infrared light pixel sub-unit.
For the pixel units that include only the first photosensitive region 131, the first photosensitive region 131 of each such unit includes two red light pixel sub-units, one green light pixel sub-unit, and one blue light pixel sub-unit.
In addition, under the third condition, the number of pixel units that include both the first photosensitive region 131 and the second photosensitive region 132 is smaller than the number of pixel units that include only the first photosensitive region 131.
A photosensitive element layer satisfying the third condition yields a better visible light image than one satisfying the second condition, since fewer pixels are devoted to infrared sensing.
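The two pixel-unit layouts can be sketched as 2x2 mosaics. The sub-pixel ordering within each unit is an assumption; the patent only lists the sub-unit counts per unit:

```python
# Hypothetical 2x2 pixel-unit layouts. "R"/"G"/"B" are the monochromatic
# pixel sub-units, "I" the infrared pixel sub-unit.
UNIT_WITH_IR = [["R", "G"], ["B", "I"]]   # second condition (and part of the third)
UNIT_NO_IR   = [["R", "G"], ["B", "R"]]   # third condition: two red sub-units

def tile(unit, units_x, units_y):
    """Tile a 2x2 pixel unit into a (units_y*2) by (units_x*2) mosaic."""
    rows = []
    for _ in range(units_y):
        for r in unit:
            rows.append(r * units_x)   # repeat the row pattern horizontally
    return rows

mosaic = tile(UNIT_WITH_IR, 2, 1)
print(mosaic)  # [['R', 'G', 'R', 'G'], ['B', 'I', 'B', 'I']]
```

Under the third condition a full sensor would mix the two unit types, with `UNIT_WITH_IR` appearing less often than `UNIT_NO_IR`, which is why the visible-light image improves.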
With the image sensor provided by the embodiments of the invention, the image features of the visible light image and of the infrared light image captured from the face to be verified are both matched, so that it can be determined whether the face in front of the camera is a real face, improving the security and accuracy of face recognition. Moreover, the face to be verified is determined to be verified successfully only after both the visible light image features and the infrared light image features are matched successfully, which further improves the accuracy of face recognition.
The embodiment of the invention also provides a lens module which comprises the image sensor.
The lens group of the lens module is arranged adjacent to the image sensor, and an infrared filter is omitted.
The embodiment of the invention also provides a mobile terminal which comprises the lens module.
Referring to fig. 4, an embodiment of the present invention provides a face recognition method, including:
step 201, acquiring a visible light image and an infrared light image after image acquisition is performed on a face to be verified.
When face recognition is needed, the mobile terminal controls the camera to start and performs face detection in the photographing area of the camera; when a face is detected, the camera photographs the photographing area.
Specifically, the manner of controlling the camera to photograph differs according to the cameras installed on the mobile terminal, and this is described in detail later.
Step 202, it is determined whether the image features of the visible light image match the image features of the pre-stored visible light facial image, and whether the image features of the infrared light image match the image features of the pre-stored infrared light facial image.
In step 202, determining whether the image features of the visible light image match the image features of the pre-stored visible light facial image includes:
step 2021, extracting a first image feature from the visible light image, and extracting a second image feature from the pre-stored visible light facial image;
step 2022, comparing the first image feature with the second image feature;
step 2023, if the comparison result of the first image feature and the second image feature is greater than or equal to the first predetermined threshold, determining that the image feature of the visible light image matches the image feature of the pre-stored visible light facial image;
in step 2024, if the comparison result of the first image feature and the second image feature is smaller than the first predetermined threshold, it is determined that the image feature of the visible light image does not match the image feature of the pre-stored visible light facial image.
In step 202, the step of determining whether the image features of the infrared light image match the image features of the pre-stored infrared light face image includes:
step 2025, extracting a third image feature from the infrared light image, and extracting a fourth image feature from the pre-stored infrared light facial image;
step 2026, comparing the third image feature with the fourth image feature;
step 2027, if the comparison result of the third image feature and the fourth image feature is greater than or equal to a second predetermined threshold, determining that the image feature of the infrared light image matches the image feature of the pre-stored infrared light facial image;
step 2028, if the comparison result of the third image feature and the fourth image feature is smaller than the second predetermined threshold, determining that the image feature of the infrared light image does not match the image feature of the pre-stored infrared light facial image.
In step 2021, the first image feature and the second image feature refer to the features of the facial features of the face to be verified in the visible light image.
In step 2025, the third image feature and the fourth image feature refer to the features of the facial features of the face to be verified in the infrared light image.
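Steps 2021-2028 amount to extracting a feature vector from each image and thresholding a similarity score. A minimal sketch, assuming cosine similarity as the comparison metric and 0.8 as the predetermined threshold (neither is fixed by the text, and a real extractor would localize facial features rather than use raw pixels):

```python
import math

def extract_features(image):
    """Stand-in extractor for steps 2021/2025: flatten the image and
    L2-normalize it. A real implementation would compute facial-feature
    descriptors instead of raw pixel values."""
    flat = [float(p) for row in image for p in row]
    norm = math.sqrt(sum(p * p for p in flat))
    return [p / norm for p in flat] if norm else flat

def compare(feat_a, feat_b):
    """Steps 2022/2026: similarity of two feature vectors (cosine
    similarity of the normalized vectors; the metric is an assumption)."""
    return sum(a * b for a, b in zip(feat_a, feat_b))

def features_match(image, stored_image, threshold=0.8):
    """Steps 2023/2024 (visible light) and 2027/2028 (infrared light):
    a match is declared iff the comparison result reaches the
    predetermined threshold."""
    return compare(extract_features(image),
                   extract_features(stored_image)) >= threshold
```

The same logic serves both the visible light check (first predetermined threshold) and the infrared light check (second predetermined threshold); only the input images and threshold values differ.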
In step 203, if the image characteristics of the visible light image are matched with the image characteristics of the pre-stored visible light facial image, and the image characteristics of the infrared light image are matched with the image characteristics of the pre-stored infrared light facial image, it is determined that the verification of the face to be verified is successful.
If the image features of the visible light image do not match the image features of the pre-stored visible light face image, and/or the image features of the infrared light image do not match the image features of the pre-stored infrared light face image, it is determined that verification of the face to be verified fails.
Specifically, a plurality of pre-stored visible light facial images and a plurality of pre-stored infrared light facial images are stored in the mobile terminal in advance. When the image features of the visible light image match the image features of any one of the plurality of pre-stored visible light facial images, the image features of the visible light image are considered to match the image features of the pre-stored visible light facial image; similarly, when the image features of the infrared light image match the image features of any one of the plurality of pre-stored infrared light facial images, the image features of the infrared light image are considered to match the image features of the pre-stored infrared light facial image.
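The decision rule of step 203 combined with the multi-image galleries above can be sketched as follows. For illustration, feature comparison is reduced to a similarity on plain numbers; the similarity function, the gallery contents, and the default thresholds are assumptions:

```python
def similarity(a, b):
    """Toy stand-in for feature comparison (real code would compare
    facial feature vectors); returns a score in [0, 1] for inputs in [0, 1]."""
    return 1.0 - abs(a - b)

def matches_gallery(feature, gallery, threshold):
    """A probe matches when it matches ANY one of the pre-stored images."""
    return any(similarity(feature, stored) >= threshold for stored in gallery)

def verify_face(visible_feat, infrared_feat,
                visible_gallery, infrared_gallery,
                first_threshold=0.9, second_threshold=0.9):
    """Verification succeeds only when BOTH the visible light check and
    the infrared light check succeed; if either fails, verification fails."""
    return (matches_gallery(visible_feat, visible_gallery, first_threshold)
            and matches_gallery(infrared_feat, infrared_gallery, second_threshold))
```

A flat photograph of an enrolled user would typically pass the visible light check yet produce an infrared image unlike a live face, so `verify_face` would return False for it, which is the anti-spoofing point of the dual check.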
According to the face recognition method provided by the embodiment of the invention, the image features of the infrared light image and the image features of the visible light image acquired from the face to be verified are matched, so that whether the face in front of the camera is a real face or a photograph of a face can be determined, improving the security and accuracy of face recognition; moreover, the face to be verified is determined to be successfully verified only after both the image features of the visible light image and the image features of the infrared light image are successfully matched, which further improves the accuracy of face recognition.
Furthermore, in the embodiment of the present invention, there are two ways of acquiring the visible light image and the infrared light image of the face to be verified, depending on the cameras installed on the mobile terminal.
When a visible light camera and an infrared light camera are installed on the mobile terminal, the step of acquiring the visible light image and the infrared light image after image acquisition is performed on the face to be verified includes:
photographing through the visible light camera to obtain the visible light image;
and photographing through the infrared light camera to obtain the infrared light image.
Here, the photosensitive element layer in the image sensor of the visible light camera is sensitive only to visible light, and the photosensitive element layer in the image sensor of the infrared light camera is sensitive only to infrared light. That is, all pixel units of the photosensitive element layer in the image sensor of the visible light camera include only visible-light-sensing photodiodes, and all pixel units of the photosensitive element layer in the image sensor of the infrared light camera include only infrared-light-sensing photodiodes.
Since the two cameras do not interfere with each other, the finally imaged visible light image and infrared light image have a better imaging effect.
In another implementation of the embodiment of the present invention, the step of acquiring the visible light image and the infrared light image after image acquisition is performed on the face to be verified includes:
when the mobile terminal is in a visible light photographing mode, photographing through a camera to obtain a visible light image photographed by the camera;
when the mobile terminal is in the infrared light photographing mode, photographing is carried out through the camera, and an infrared light image photographed by the camera is obtained.
Specifically, in this implementation, the visible light image and the infrared light image are both acquired by one camera module. The photosensitive element layer of the image sensor of the camera module includes a first photosensitive region corresponding to visible light and a second photosensitive region corresponding to infrared light; visible-light-sensing photodiodes are disposed in the first photosensitive region, and an infrared-light-sensing photodiode is disposed in the second photosensitive region.
Taking one pixel unit of the photosensitive element layer as an example: the visible-light-sensing photodiodes disposed in the first photosensitive region of the pixel unit specifically include a red-light-sensing photodiode, a green-light-sensing photodiode and a blue-light-sensing photodiode, while the infrared-light-sensing photodiode is disposed in the second photosensitive region of the pixel unit; these four photodiodes form one pixel unit.
Each of the four photodiodes is connected to a control circuit, which specifically includes: a first switching tube, whose base is connected to the controller and whose collector is connected to the photodiode; a second switching tube, whose collector is connected to the emitter of the first switching tube and whose base is connected to the controller; a third switching tube, whose base is connected to the emitter of the first switching tube and the collector of the second switching tube, and whose emitter is connected to the emitter of the second switching tube; and a fourth switching tube, whose base is connected to the controller, whose emitter is connected to the collector of the third switching tube, and whose collector is connected to the output end. The second switching tube, the third switching tube and the fourth switching tube are in a conducting state.
When in the visible light photographing mode, the controller sends a signal to the first switching tube connected to each visible-light-sensing photodiode, so that the first switching tube conducts the connection between the visible-light-sensing photodiode and the third switching tube; at this time, the optical signal incident on the visible-light-sensing photodiode is converted into an electrical signal by the photodiode and then output through the first switching tube, the third switching tube and the fourth switching tube in sequence. While in the visible light photographing mode, the controller does not turn on the first switching tube connected to the infrared-light-sensing photodiode, so the infrared-light-sensing photodiode cannot be conducted with its corresponding third switching tube; the purpose of imaging only visible light is thereby achieved.
When in the infrared light photographing mode, the controller sends a signal to the first switching tube connected to the infrared-light-sensing photodiode, so that the first switching tube conducts the connection between the infrared-light-sensing photodiode and the third switching tube; at this time, the optical signal incident on the infrared-light-sensing photodiode is converted into an electrical signal by the photodiode and then output through the first switching tube, the third switching tube and the fourth switching tube in sequence. While in the infrared light photographing mode, the controller does not turn on the first switching tubes connected to the visible-light-sensing photodiodes, so the visible-light-sensing photodiodes cannot be conducted with their corresponding third switching tubes; the purpose of imaging only infrared light is thereby achieved.
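The mode-dependent gating described above can be modeled with a small sketch. The transistor-level circuit is reduced to one boolean per photodiode ("first switching tube conducting"), since the second, third and fourth switching tubes are described as always conducting; the class and diode names are assumptions made for illustration:

```python
class PixelUnit:
    """Toy model of one pixel unit: three visible-light photodiodes plus
    one infrared photodiode, each gated by its own first switching tube."""
    DIODES = ("red", "green", "blue", "infrared")

    def __init__(self):
        self.first_switch_on = {d: False for d in self.DIODES}

    def set_mode(self, mode):
        """Controller behavior: in the visible light photographing mode,
        only the first switching tubes of the R/G/B photodiodes conduct;
        in the infrared mode, only that of the infrared photodiode."""
        conducting = {"visible": {"red", "green", "blue"},
                      "infrared": {"infrared"}}[mode]
        for d in self.DIODES:
            self.first_switch_on[d] = d in conducting

    def read_out(self, incident_signals):
        """Only photodiodes whose first switching tube conducts pass
        their electrical signal on to the output end."""
        return {d: v for d, v in incident_signals.items()
                if self.first_switch_on[d]}

unit = PixelUnit()
light = {"red": 0.4, "green": 0.5, "blue": 0.3, "infrared": 0.9}
unit.set_mode("visible")
visible_out = unit.read_out(light)   # only the R/G/B signals reach the output
unit.set_mode("infrared")
infrared_out = unit.read_out(light)  # only the infrared signal reaches the output
```

This mirrors why one sensor can serve both modes: the photodiodes for the unused band are never connected through to the output, so their incident light does not contribute to the image.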
Realizing both visible light imaging and infrared light imaging with a single camera increases the arrangement space for other components in the mobile terminal and reduces the manufacturing cost of the mobile terminal.
Specifically, the face recognition method in the embodiment of the invention can be used for unlocking the screen of the mobile terminal; in such an implementation, the screen is unlocked when the face to be verified is successfully verified.
Alternatively, the face recognition method in the embodiment of the present invention may also be applied to payment on the mobile terminal; in such an implementation, face recognition is performed only when the mobile terminal enters the corresponding payment interface, and payment proceeds when the face recognition succeeds.
In the embodiment of the invention, the image features of the infrared light image and the image features of the visible light image acquired from the face to be verified are matched, so that whether the face in front of the camera is a real face or a photograph of a face can be determined, improving the security and accuracy of face recognition; moreover, the face to be verified is determined to be successfully verified only after both the image features of the visible light image and the image features of the infrared light image are successfully matched, which further improves the accuracy of face recognition.
Referring to fig. 5 to 7, an embodiment of the present invention further provides a face recognition apparatus 300, including:
the acquisition module 301 is configured to acquire a visible light image and an infrared light image after image acquisition is performed on a face to be verified;
a first determining module 302, configured to determine whether an image feature of the visible light image matches an image feature of a pre-stored visible light facial image, and whether an image feature of the infrared light image matches an image feature of a pre-stored infrared light facial image;
the second determining module 303 is configured to determine that the verification of the face to be verified is successful if the image feature of the visible light image matches the image feature of the pre-stored visible light face image, and the image feature of the infrared light image matches the image feature of the pre-stored infrared light face image.
Referring to fig. 6, the acquisition module 301 includes:
the first obtaining unit 3011, configured to take a picture through a camera when the mobile terminal is in a visible light photographing mode, and obtain a visible light image taken by the camera;
the second obtaining unit 3012 is configured to take a photo through the camera when the mobile terminal is in the infrared light photographing mode, and obtain an infrared light image obtained after the photo is taken by the camera.
Referring to fig. 6, the first determination module 302 includes:
a first extraction unit 3021 configured to extract a first image feature from the visible light image and extract a second image feature from the pre-stored visible light face image;
a first comparing unit 3022, configured to compare the first image feature with the second image feature;
a first determining unit 3023, configured to determine that the image feature of the visible light image matches the image feature of the pre-stored visible light facial image if the comparison result of the first image feature and the second image feature is greater than or equal to a first predetermined threshold;
a second determining unit 3024, configured to determine that the image feature of the visible light image does not match the image feature of the pre-stored visible light face image if the comparison result of the first image feature and the second image feature is smaller than the first predetermined threshold.
Referring to fig. 6, the first determination module 302 further includes:
a second extraction unit 3025 for extracting a third image feature from the infrared light image and a fourth image feature from a pre-stored infrared light face image;
a second comparing unit 3026, configured to compare the third image feature with the fourth image feature;
a third determining unit 3027 configured to determine that the image feature of the infrared light image matches the image feature of the pre-stored infrared light facial image if the comparison result of the third image feature and the fourth image feature is greater than or equal to a second predetermined threshold;
a fourth determining unit 3028, configured to determine that the image feature of the infrared light image does not match the image feature of the pre-stored infrared light face image if the comparison result of the third image feature and the fourth image feature is smaller than the second predetermined threshold.
The face recognition apparatus provided in the embodiment of the present invention can implement each process implemented by the mobile terminal in the method embodiment of fig. 4, and the details are not repeated here to avoid repetition. By matching the image features of the visible light image and the image features of the infrared light image acquired from the face to be verified, whether the face in front of the camera is a real face can be determined, improving the security and accuracy of face recognition; moreover, the face to be verified is determined to be successfully verified only after both sets of image features are successfully matched, which further improves the accuracy of face recognition.
Fig. 7 is a schematic diagram of a hardware structure of a mobile terminal implementing various embodiments of the present invention.
the mobile terminal 400 includes, but is not limited to: radio frequency unit 401, network module 402, audio output unit 403, input unit 404, sensor 405, display unit 406, user input unit 407, interface unit 408, memory 409, processor 410, and power supply 411. Those skilled in the art will appreciate that the mobile terminal architecture shown in fig. 7 is not intended to be limiting of mobile terminals, and that a mobile terminal may include more or fewer components than shown, or some components may be combined, or a different arrangement of components. In the embodiment of the present invention, the mobile terminal includes, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted terminal, a wearable device, a pedometer, and the like.
The radio frequency unit 401 is configured to receive and transmit data under the control of the processor 410;
the processor 410 is used for acquiring a visible light image and an infrared light image after image acquisition is carried out on a face to be verified; and if the image characteristics of the visible light image and the image characteristics of the prestored visible light facial image are matched, and the image characteristics of the infrared light image and the image characteristics of the prestored infrared light facial image are matched, determining that the verification of the face to be verified is successful.
The image features of the visible light image and the image features of the infrared light image acquired from the face to be verified are matched, and the face to be verified is successfully verified only after both sets of image features are successfully matched, thereby improving the accuracy of face recognition.
It should be understood that, in the embodiment of the present invention, the radio frequency unit 401 may be used for receiving and sending signals during a message transceiving process or a call process; specifically, it receives downlink data from a base station and forwards it to the processor 410 for processing, and it sends uplink data to the base station. Typically, the radio frequency unit 401 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 401 can also communicate with a network and other devices through a wireless communication system.
The mobile terminal provides the user with wireless broadband internet access through the network module 402, such as helping the user send and receive e-mails, browse web pages, and access streaming media.
The audio output unit 403 may convert audio data received by the radio frequency unit 401 or the network module 402 or stored in the memory 409 into an audio signal and output as sound. Also, the audio output unit 403 may also provide audio output related to a specific function performed by the mobile terminal 400 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output unit 403 includes a speaker, a buzzer, a receiver, and the like.
The input unit 404 is used to receive audio or video signals. The input unit 404 may include a Graphics Processing Unit (GPU) 4041 and a microphone 4042; the graphics processor 4041 processes image data of still pictures or video obtained by an image capturing apparatus (such as a camera) in a video capturing mode or an image capturing mode. The processed image frames may be displayed on the display unit 406. The image frames processed by the graphics processor 4041 may be stored in the memory 409 (or other storage medium) or transmitted via the radio frequency unit 401 or the network module 402. The microphone 4042 can receive sound and process it into audio data; in the case of the phone call mode, the processed audio data may be converted into a format transmittable to a mobile communication base station and output via the radio frequency unit 401.
The mobile terminal 400 also includes at least one sensor 405, such as a light sensor, motion sensor, and other sensors. Specifically, the light sensor includes an ambient light sensor that can adjust the brightness of the display panel 4061 according to the brightness of ambient light, and a proximity sensor that can turn off the display panel 4061 and/or the backlight when the mobile terminal 400 is moved to the ear. As one of the motion sensors, the accelerometer sensor can detect the magnitude of acceleration in each direction (generally three axes), detect the magnitude and direction of gravity when stationary, and can be used to identify the posture of the mobile terminal (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), and vibration identification related functions (such as pedometer, tapping); the sensors 405 may also include a fingerprint sensor, a pressure sensor, an iris sensor, a molecular sensor, a gyroscope, a barometer, a hygrometer, a thermometer, an infrared sensor, etc., which will not be described in detail herein.
The display unit 406 is used to display information input by the user or information provided to the user. The Display unit 406 may include a Display panel 4061, and the Display panel 4061 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like.
The user input unit 407 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the mobile terminal. Specifically, the user input unit 407 includes a touch panel 4071 and other input devices 4072. Touch panel 4071, also referred to as a touch screen, may collect touch operations by a user on or near it (e.g., operations by a user on or near touch panel 4071 using a finger, a stylus, or any suitable object or attachment). The touch panel 4071 may include two parts, a touch detection device and a touch controller. The touch detection device detects the touch direction of a user, detects a signal brought by touch operation and transmits the signal to the touch controller; the touch controller receives touch information from the touch sensing device, converts the touch information into touch point coordinates, sends the touch point coordinates to the processor 410, receives a command from the processor 410, and executes the command. In addition, the touch panel 4071 can be implemented by using various types such as a resistive type, a capacitive type, an infrared ray, and a surface acoustic wave. In addition to the touch panel 4071, the user input unit 407 may include other input devices 4072. Specifically, the other input devices 4072 may include, but are not limited to, a physical keyboard, function keys (such as volume control keys, switch keys, etc.), a track ball, a mouse, and a joystick, which are not described herein again.
Further, the touch panel 4071 can be overlaid on the display panel 4061, and when the touch panel 4071 detects a touch operation thereon or nearby, the touch operation is transmitted to the processor 410 to determine the type of the touch event, and then the processor 410 provides a corresponding visual output on the display panel 4061 according to the type of the touch event. Although in fig. 7, the touch panel 4071 and the display panel 4061 are two separate components to implement the input and output functions of the mobile terminal, in some embodiments, the touch panel 4071 and the display panel 4061 may be integrated to implement the input and output functions of the mobile terminal, which is not limited herein.
The interface unit 408 is an interface through which an external device is connected to the mobile terminal 400. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 408 may be used to receive input (e.g., data information, power, etc.) from external devices and transmit the received input to one or more elements within the mobile terminal 400 or may be used to transmit data between the mobile terminal 400 and external devices.
The memory 409 may be used to store software programs as well as various data. The memory 409 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data (such as audio data, a phonebook, etc.) created according to the use of the cellular phone, and the like. Further, the memory 409 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device.
The processor 410 is a control center of the mobile terminal, connects various parts of the entire mobile terminal using various interfaces and lines, and performs various functions of the mobile terminal and processes data by operating or executing software programs and/or modules stored in the memory 409 and calling data stored in the memory 409, thereby integrally monitoring the mobile terminal. Processor 410 may include one or more processing units; preferably, the processor 410 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 410.
The mobile terminal 400 may further include a power supply 411 (e.g., a battery) for supplying power to various components, and preferably, the power supply 411 may be logically connected to the processor 410 through a power management system, so as to implement functions of managing charging, discharging, and power consumption through the power management system.
In addition, the mobile terminal 400 includes some functional modules that are not shown, and thus, are not described in detail herein.
Preferably, an embodiment of the present invention further provides a mobile terminal, which includes a processor 410, a memory 409, and a computer program that is stored in the memory 409 and can be run on the processor 410, and when being executed by the processor 410, the computer program implements each process of the above-mentioned embodiment of the face recognition method, and can achieve the same technical effect, and in order to avoid repetition, details are not described here again.
The embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the computer program implements each process of the above-mentioned embodiment of the face recognition method, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here. The computer-readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
While the present invention has been described with reference to the embodiments shown in the drawings, the embodiments are illustrative and not restrictive, and it will be apparent to those skilled in the art that various changes and modifications can be made therein without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (11)

1. An image sensor, comprising a microlens layer, a photosensitive element layer, and a filter layer disposed between the microlens layer and the photosensitive element layer;
incident light sequentially passes through the micro-lens layer and the filter layer and then is transmitted to the photosensitive element layer, and the photosensitive element layer comprises a first photosensitive area corresponding to visible light and a second photosensitive area corresponding to infrared light; the first photosensitive area and the second photosensitive area are provided with photosensitive diodes with the same photosensitive wavelength range;
the filter layer includes:
a first filter layer including a non-visible light filtering region for passing visible light and a third passing region for passing natural light;
a second filter layer including a color filtering region for passing visible light and a first filtering region for passing infrared light;
the non-visible light filtering region, the color filtering region and the first photosensitive area correspond to one another, and the third passing region, the first filtering region and the second photosensitive area correspond to one another;
alternatively, the filter layer includes:
a first filter layer including a non-visible light filtering region for passing visible light and a second filtering region for passing infrared light;
a second filter layer including a color filtering region for passing visible light and a fourth passing region for passing infrared light;
the non-visible light filtering region, the color filtering region and the first photosensitive area correspond to one another, and the second filtering region, the fourth passing region and the second photosensitive area correspond to one another;
the photosensitive element layer comprises a plurality of pixel units; each pixel unit in one part of the pixel units comprises both the first photosensitive area and the second photosensitive area, and each pixel unit in the other part of the pixel units comprises only the first photosensitive area;
the number of pixel units comprising both the first photosensitive area and the second photosensitive area is less than the number of pixel units comprising only the first photosensitive area.
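The pixel-unit arrangement recited in claim 1 can be pictured with a short sketch. This is an illustration only, not part of the patent: the class names, the grid size, and the stride at which infrared-capable units are placed are all assumptions made for demonstration.

```python
# Illustrative model of claim 1's layout: every pixel unit has a
# visible-light (first) photosensitive area, and only a sparse minority
# also has an infrared (second) photosensitive area.
from dataclasses import dataclass


@dataclass
class PixelUnit:
    has_visible: bool = True    # first photosensitive area (all units)
    has_infrared: bool = False  # second photosensitive area (minority only)


def build_layout(rows: int, cols: int, ir_stride: int = 4):
    """Place an infrared-capable pixel unit every `ir_stride` rows and
    columns (the stride value is an assumption for illustration)."""
    return [
        [PixelUnit(has_infrared=(r % ir_stride == 0 and c % ir_stride == 0))
         for c in range(cols)]
        for r in range(rows)
    ]


layout = build_layout(8, 8)
ir_units = sum(p.has_infrared for row in layout for p in row)
visible_only_units = sum(not p.has_infrared for row in layout for p in row)
# The claim requires strictly fewer dual-area units than visible-only units.
assert ir_units < visible_only_units
```

With an 8x8 grid and a stride of 4, only 4 of the 64 pixel units carry the second photosensitive area, satisfying the claim's quantity constraint while leaving most of the sensor devoted to visible-light capture.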
2. A lens module comprising the image sensor of claim 1.
3. The lens module as recited in claim 2, wherein the lens group of the lens module is disposed adjacent to the image sensor.
4. A mobile terminal, characterized by comprising the lens module of any one of claims 2-3.
5. A face recognition method applied to the mobile terminal according to claim 4, the face recognition method comprising:
acquiring a visible light image and an infrared light image obtained by performing image acquisition on a face to be verified;
determining whether the image features of the visible light image match the image features of a pre-stored visible light facial image and whether the image features of the infrared light image match the image features of a pre-stored infrared light facial image;
and if the image features of the visible light image match the image features of the pre-stored visible light facial image and the image features of the infrared light image match the image features of the pre-stored infrared light facial image, determining that verification of the face to be verified is successful.
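The decision rule of claim 5 reduces to a conjunction of the two comparisons. A minimal sketch, with illustrative names that are not taken from the patent:

```python
# Hedged sketch of claim 5's decision rule: verification succeeds only
# when BOTH the visible-light and the infrared-light feature comparisons
# report a match.
def verify_face(visible_matches: bool, infrared_matches: bool) -> bool:
    # A flat printed photo may reproduce visible-light features yet lack a
    # live infrared signature, so requiring both comparisons to succeed
    # hardens the check against that kind of spoof.
    return visible_matches and infrared_matches


# Example: visible features match but infrared features do not,
# so verification fails.
result = verify_face(visible_matches=True, infrared_matches=False)
assert result is False
```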
6. The face recognition method of claim 5, wherein the step of acquiring the visible light image and the infrared light image obtained by performing image acquisition on the face to be verified comprises:
when the mobile terminal is in a visible light photographing mode, photographing through a camera to obtain the visible light image captured by the camera;
when the mobile terminal is in an infrared light photographing mode, photographing through the camera to obtain the infrared light image captured by the camera.
7. The face recognition method of claim 5, wherein the step of determining whether the image features of the visible light image match the image features of a pre-stored visible light facial image comprises:
extracting a first image feature from the visible light image, and extracting a second image feature from the pre-stored visible light facial image;
comparing the first image feature with the second image feature;
if the comparison result of the first image feature and the second image feature is greater than or equal to a first predetermined threshold, determining that the image features of the visible light image match the image features of the pre-stored visible light facial image;
and if the comparison result of the first image feature and the second image feature is smaller than the first predetermined threshold, determining that the image features of the visible light image do not match the image features of the pre-stored visible light facial image.
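The comparison in claim 7 can be sketched with cosine similarity as the comparison function. This is an assumption for illustration: the claim fixes neither the similarity measure nor the threshold value, and the names below are hypothetical.

```python
import math

# The claim's "first predetermined threshold"; 0.9 is an assumed value.
FIRST_PREDETERMINED_THRESHOLD = 0.9


def cosine_similarity(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)


def visible_features_match(first_feature, second_feature) -> bool:
    """Compare the feature extracted from the captured visible light image
    against the feature from the pre-stored facial image, and report a
    match when the result meets the threshold."""
    score = cosine_similarity(first_feature, second_feature)
    return score >= FIRST_PREDETERMINED_THRESHOLD
```

Identical feature vectors yield a similarity of 1.0 and therefore match; orthogonal vectors yield 0.0 and fall below any reasonable threshold, so they do not.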
8. A face recognition apparatus applied to the mobile terminal according to claim 4, the face recognition apparatus comprising:
an acquisition module, configured to acquire a visible light image and an infrared light image obtained by performing image acquisition on a face to be verified;
a first determining module, configured to determine whether the image features of the visible light image match the image features of a pre-stored visible light facial image and whether the image features of the infrared light image match the image features of a pre-stored infrared light facial image;
and a second determining module, configured to determine that verification of the face to be verified is successful if the image features of the visible light image match the image features of the pre-stored visible light facial image and the image features of the infrared light image match the image features of the pre-stored infrared light facial image.
9. The face recognition apparatus of claim 8, wherein the obtaining module comprises:
a first acquisition unit, configured to photograph through a camera when the mobile terminal is in a visible light photographing mode and to acquire the visible light image captured by the camera;
and a second acquisition unit, configured to photograph through the camera when the mobile terminal is in an infrared light photographing mode and to acquire the infrared light image captured by the camera.
10. The face recognition apparatus of claim 8, wherein the first determination module comprises:
a first extraction unit, configured to extract a first image feature from the visible light image, and extract a second image feature from the pre-stored visible light facial image;
the first comparison unit is used for comparing the first image characteristic with the second image characteristic;
a first determining unit, configured to determine that the image features of the visible light image match the image features of the pre-stored visible light facial image if the comparison result of the first image feature and the second image feature is greater than or equal to a first predetermined threshold;
a second determining unit, configured to determine that the image features of the visible light image do not match the image features of the pre-stored visible light facial image if the comparison result of the first image feature and the second image feature is smaller than the first predetermined threshold.
11. A mobile terminal, characterized in that it comprises a processor, a memory and a computer program stored on the memory and executable on the processor, which computer program, when executed by the processor, implements the steps of the face recognition method according to any one of claims 5 to 7.
CN201810083033.5A 2018-01-29 2018-01-29 Image sensor, lens module, mobile terminal, face recognition method and device Active CN108345845B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201810083033.5A CN108345845B (en) 2018-01-29 2018-01-29 Image sensor, lens module, mobile terminal, face recognition method and device
PCT/CN2019/073403 WO2019144956A1 (en) 2018-01-29 2019-01-28 Image sensor, camera module, mobile terminal, and facial recognition method and apparatus


Publications (2)

Publication Number Publication Date
CN108345845A CN108345845A (en) 2018-07-31
CN108345845B (en) 2020-09-25

Family

ID=62961693

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810083033.5A Active CN108345845B (en) 2018-01-29 2018-01-29 Image sensor, lens module, mobile terminal, face recognition method and device

Country Status (2)

Country Link
CN (1) CN108345845B (en)
WO (1) WO2019144956A1 (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108345845B (en) * 2018-01-29 2020-09-25 维沃移动通信有限公司 Image sensor, lens module, mobile terminal, face recognition method and device
CN108897178A (en) * 2018-08-31 2018-11-27 武汉华星光电技术有限公司 Colored filter substrate and display panel
CN109143704B (en) * 2018-09-13 2021-03-12 合肥京东方光电科技有限公司 Display panel and terminal equipment
CN112131906A (en) 2019-06-24 2020-12-25 Oppo广东移动通信有限公司 Optical fingerprint sensor and electronic equipment with same
CN110532992B (en) * 2019-09-04 2023-01-10 深圳市捷顺科技实业股份有限公司 Human face recognition method based on visible light and near infrared
CN111447423A (en) * 2020-03-25 2020-07-24 浙江大华技术股份有限公司 Image sensor, imaging apparatus, and image processing method
CN114143427A (en) * 2021-11-23 2022-03-04 歌尔科技有限公司 Camera assembly, mobile terminal and camera-based body temperature measuring method

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101504721A (en) * 2009-03-13 2009-08-12 北京中星微电子有限公司 Method and apparatus for identity authentication based on face image
CN103839952A (en) * 2012-11-20 2014-06-04 采钰科技股份有限公司 Image-sensing apparatus
CN105023005A (en) * 2015-08-05 2015-11-04 王丽婷 Face recognition apparatus and recognition method thereof
CN106982329A (en) * 2017-04-28 2017-07-25 广东欧珀移动通信有限公司 Imaging sensor, focusing control method, imaging device and mobile terminal
CN206370880U (en) * 2017-01-25 2017-08-01 徐鹤菲 Dual-camera imaging system and mobile terminal

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9531961B2 (en) * 2015-05-01 2016-12-27 Duelight Llc Systems and methods for generating a digital image using separate color and intensity data
US10148936B2 (en) * 2013-07-01 2018-12-04 Omnivision Technologies, Inc. Multi-band image sensor for providing three-dimensional color images
CN104394306B (en) * 2014-11-24 2018-02-27 北京中科虹霸科技有限公司 Camera module and equipment for the multichannel multizone plated film of iris recognition
CN205666883U (en) * 2016-03-23 2016-10-26 徐鹤菲 Composite imaging system supporting near-infrared and visible-light imaging, and mobile terminal
CN108345845B (en) * 2018-01-29 2020-09-25 维沃移动通信有限公司 Image sensor, lens module, mobile terminal, face recognition method and device


Also Published As

Publication number Publication date
WO2019144956A1 (en) 2019-08-01
CN108345845A (en) 2018-07-31

Similar Documents

Publication Publication Date Title
CN108345845B (en) Image sensor, lens module, mobile terminal, face recognition method and device
CN109190509B (en) Identity recognition method, device and computer readable storage medium
CN107463883A (en) Biometric discrimination method and Related product
CN108600712B (en) Image sensor, mobile terminal and image shooting method
WO2015003522A1 (en) Face recognition method, apparatus, and mobile terminal
CN109788174B (en) Light supplementing method and terminal
CN110062171B (en) Shooting method and terminal
CN109246360A Reminding method and mobile terminal
CN108965666B (en) Mobile terminal and image shooting method
CN108206892B (en) Method and device for protecting privacy of contact person, mobile terminal and storage medium
CN109525837B (en) Image generation method and mobile terminal
CN108769410B (en) Information sending method and mobile terminal
CN108229420B (en) Face recognition method and mobile terminal
CN109819166A Image processing method and electronic device
CN109814799A (en) Screen response control mehtod and terminal device
WO2022247762A1 (en) Electronic device, and fingerprint unlocking method and fingerprint unlocking apparatus therefor
CN108616687A Photographing method, device and mobile terminal
CN110933307A (en) Electronic equipment and image processing method
CN110855897B (en) Image shooting method and device, electronic equipment and storage medium
CN108345657B (en) Picture screening method and mobile terminal
CN108960097B (en) Method and device for obtaining face depth information
CN110929540A (en) Scanning code identification method and device
CN108600623B (en) Refocusing display method and terminal device
CN109104522B (en) Face recognition method and mobile terminal
CN107895108B (en) Operation management method and mobile terminal

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant